Alterslash

the unofficial Slashdot digest
 

Contents

  1. US Government Warns of Severe CopyFail Bug Affecting Major Versions of Linux
  2. Oscars Bans AI Actors and Writing From Awards
  3. VS Code Update Added Copilot As Default Co-Author To Git Commits
  4. ‘Notepad++ For Mac’ Release Is Disavowed By the Creator of the Original
  5. How Microplastics Are Likely Helping To Heat Up the Planet
  6. Astronomers May Have Detected an Atmosphere Around a Tiny, Icy World Past Pluto
  7. OpenAI President Discloses His Stake In the Company Is Worth $30 Billion
  8. White House Considers Vetting AI Models Before They Are Released
  9. OpenAI, Google, and Microsoft Back Bill To Fund ‘AI Literacy’ In Schools
  10. The Pixel 11 Could Be the Next Victim of the RAM Shortage
  11. Expanded AMD HDMI 2.1 Support Is Coming To Linux
  12. The Audio Industry Is Grappling With the Rise of ‘Podslop’
  13. Anthropic Nears $1.5 Billion AI Joint Venture With Wall Street Firms
  14. GameStop Offers to Buy eBay for $56 Billion
  15. Scientists Discover 27 Potential New Planets That Orbit Two Stars

Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.

US Government Warns of Severe CopyFail Bug Affecting Major Versions of Linux

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from TechCrunch:
A severe security vulnerability affecting almost every version of the Linux operating system has caught defenders off-guard and scrambling to patch after security researchers publicly released exploit code that allows attackers to take complete control of vulnerable systems. The U.S. government says the bug, dubbed “CopyFail,” is now being exploited in the wild, meaning it’s being actively used in malicious hacking campaigns. […] Given the risk to the federal enterprise network, U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch any affected systems by May 15.

Oscars Bans AI Actors and Writing From Awards

Posted by BeauHD View on SlashDot Skip
The Academy has clarified that only human-performed acting and human-authored writing are eligible for Oscar nominations. The Oscars will not ban AI tools broadly, but says it will judge films based on the degree to which humans remain central to the creative work. The BBC reports:
The Academy of Motion Picture Arts and Sciences […], which controls the US film industry’s most prestigious award, on Friday issued updated rules for what kind of work in movies and documentaries would be considered eligible for an Oscar as the use of artificial intelligence (AI) technology grows. In updated eligibility requirements, the Academy specified that only acting “demonstrably performed by humans” and that writing “must be human-authored” in order to be nominated for an award. The Academy called the requirements a “substantive” change to the rules for the Oscars.

The need to specify awards can only go to acting and writing done by “humans” is new for the academy. […] However, the academy did not issue a ban on AI use in films more broadly. Outside of acting and writing, if a filmmaker used AI tools in their work, such “tools neither help nor harm the chances of achieving a nomination,” the academy wrote. “The Academy and each branch will judge the achievement, taking into account the degree to which a human was at the heart of the creative authorship when choosing which movie to award,” the group added. “If questions arise regarding the aforementioned use of generative artificial intelligence, the Academy reserves the right to request more information about the nature of the use and human authorship.”

Using AI actors or writing is a misuse of the tech

By MpVpRb • Score: 3 Thread

We already have actors and writers who do what they do perfectly.
We need AI to do stuff we can’t do

Why? If it’s slop?

By Larry_Dillon • Score: 3 Thread

If everything AI produces is crap or slop content, why would you need to ban it from receiving awards? I see this as a tacit admission that Hollywood is worried about AI quality matching, or eventually surpassing, human work in certain situations.

The real solution would be separate AI categories.

VS Code Update Added Copilot As Default Co-Author To Git Commits

Posted by BeauHD View on SlashDot Skip
Longtime Slashdot reader UnknowingFool writes:
On April 15, 2026, a Microsoft employee made a change to Visual Studio Code and pushed it within 8 hours without review, notification, or documentation. The change appended “Co-authored-by: Copilot” to the end of Git commit messages by default when Copilot was used in creating the code. However, the implementation was bugged, and the message was added to every commit regardless of whether Copilot was used or disabled. Because the message was appended automatically and the UI does not show the addition when making commits, users were not aware of it. The change has been reverted as of May 3, but not before 1.4 million commits were made. Unfortunately, those messages cannot be cleansed and are permanent.
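For readers who want to check their own repositories, here is a minimal sketch of trailer detection and removal. The function names and the prefix-match approach are illustrative assumptions, not VS Code's implementation; in practice you would feed these functions the output of `git log --format=%B`.

```python
# Hypothetical helper for scanning commit messages for the auto-added
# trailer described in the story and producing a cleaned message.

TRAILER_PREFIX = "Co-authored-by: Copilot"

def has_copilot_trailer(message: str) -> bool:
    """Return True if any line of the commit message is the Copilot trailer."""
    return any(line.strip().startswith(TRAILER_PREFIX)
               for line in message.splitlines())

def strip_copilot_trailer(message: str) -> str:
    """Return the commit message with any Copilot trailer lines removed."""
    kept = [line for line in message.splitlines()
            if not line.strip().startswith(TRAILER_PREFIX)]
    return "\n".join(kept).rstrip() + "\n"
```

Of course, applying such a cleanup to already-pushed history would require a history rewrite (e.g. with a tool like `git filter-repo`), which is exactly why the summary calls the pushed messages permanent.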

Re: Isn’t this fraud?

By reanjr • Score: 4, Insightful Thread

It’s even worse. LLM generated code can’t be copyrighted.

Re:Isn’t this fraud?

By drinkypoo • Score: 4, Interesting Thread

I think jail time for corporate employees doing shit like this should be a last resort but at this point I don’t really see any other good options.

Let them go free, but jail literally everyone above them on the org chart.

I actually propose that every executive salary be capped at a percentage of the sum of their direct reports, and that they share responsibility for any act they take.

I’ll say it again

By 93 Escort Wagon • Score: 3 Thread

If there’s one thing that comes to my mind when I think about Microsoft developers, it’s quality software.

‘Notepad++ For Mac’ Release Is Disavowed By the Creator of the Original

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Ars Technica, written by Andrew Cunningham:
As its name implies, the venerable Notepad++ text editor began as a more capable version of the classic Windows Notepad, with features such as line numbering and syntax highlighting. It was created in 2003 by Don Ho, who continues to be its primary author and maintainer, and it has been a Windows-exclusive app throughout its existence (older Notepad++ versions support OSes as old as Windows 95; the current version officially supports everything going back to Windows 7). I’m not a devoted user of the app, but I was aware of its history, which is why I was surprised to see news of a “Notepad++ for Mac” port making the rounds last week, as though it were a port of the original available from the Notepad++ website.

Apparently, this news surprised Ho as well, who claims that the Mac version and its author, Andrey Letov, are “using the Notepad++ trademark (the name) without permission.” “This is misleading, inappropriate, and frankly disrespectful to both the project and its users,” Ho wrote. “It has already fooled people — including tech media — into believing this is an official release. To be crystal clear: Notepad++ has never released a macOS version. Anyone claiming otherwise is simply riding on the Notepad++ name.”
Ho repeatedly asked the developer to stop using the brand and eventually reported the trademark use to Cloudflare, the CDN of the Notepad++ for Mac site. “Every day that website remains active, you are in further violation of the law,” Ho wrote. “I cannot authorize a ‘week or two’ of continued trademark infringement.”
Letov has since begun rebranding the app as “NextPad++,” though the old branding and URL reportedly remained available. The name change is “an homage to NeXT Computer,” notes Ars, “and uses a frog icon rather than the Notepad++ lizard.”

BBedit

By Malc • Score: 5, Interesting Thread

Does the job. Been doing the job on Macs for decades (since 1992). Sometimes called Text Wrangler (it was the free cut-down version), until BBedit got a free version too. Please support Bare Bones by using BBedit.

And there’s always VIM.

BBedit and Beyond Compare are my two must-have utilities on my Macs. Both companies have served Mac users for a long time: great products, great support, and none of the bullshit and enshittification of so many recent software companies.

Trademark in GPU source

By michaelmalak • Score: 4, Interesting Thread
It strikes me that putting a product name inside source code under GPL license — which explicitly encourages modification and distribution of source code — should constitute abandonment of U.S. trademark. However, a California District Court ruled against that logic in Neo4j v. PureThink. It seems GPL needs to explicitly address trademarks, such as right to say “fork of X” — akin to how it had to address the patent issue.

TextEdit++

By reanjr • Score: 4, Interesting Thread

The name TextEdit++ was right there.

Re:Trademark

By giesen • Score: 4, Informative Thread

Don Ho is correct about his trademark (yes, it’s registered), and Andrey Letov appears to be showing the proper respect to Don by renaming and rebranding the port.

That’s a charitable interpretation of what’s been going on. If you read the GitHub issue, Don Ho had been asking Andrey Letov for days to rebrand, and Andrey kept stalling and deferring; even after he started the rename, he was implying some sort of coordination between the projects or official support where none existed. Don was initially very polite with Andrey, giving him the benefit of the doubt, but it’s become clear from his tactics that Andrey has been trying to ride the Notepad++ trademark into the launch of his vibe-coded macOS port.

Re:Takes two to tango

By drinkypoo • Score: 4, Insightful Thread

That’s called spam, whether or not I or anyone else agrees with it.

Spam is messages you have not agreed to receive. If he’s sending them out to you then they’re spam. If he’s posting announcements about them, they’re arguably spam. If he’s including them in other messages then it’s just offtopic content.

How Microplastics Are Likely Helping To Heat Up the Planet

Posted by BeauHD View on SlashDot Skip
A new Nature Climate Change study suggests airborne microplastics — especially darker and colored particles — are likely contributing to atmospheric warming by absorbing more heat than they reflect. Researchers estimate the effect could be roughly one-sixth that of black carbon, though outside experts say the uncertainties remain large and more study is needed before drawing firm policy conclusions. “We can say with confidence that overall they are warming agents,” said Drew Shindell, a Duke University earth science professor and co-author of the study. “To me, that’s the big advance.” The Washington Post reports:
To undertake their study, a group led by researchers at Fudan University in China examined how different colors and sizes of microplastics interact with light across the spectrum, while combining that information with simulations of how particles get dispersed in the air across the planet. “Black, yellow, blue and red [particles] absorb sunlight much more strongly than the white particles,” Yu Liu, a Fudan professor and study co-author, said in a call with reporters. In fact, the study details how black and colored particles showed “absorption levels nearly 75 times higher than pristine, non-pigmented plastics.” The scientists also found that different sizes of particles absorb light at different intensities — and that how they absorb light can change as they age.

The authors estimate that microplastics suspended in the atmosphere could be contributing to global warming at about one-sixth the amount of black carbon, also known as soot, a pollutant generated largely from burning fossil fuels. If the latest estimates are right, Shindell said, microplastics might not be an enormous source of atmospheric warming, compared with massive contributors such as cars and trucks, belching industrial plants or even burping cows. “But not a trivial one, either,” he said.

By his calculation, the effect of one year’s microplastic emissions globally is approximately equivalent to 200 coal-fired power plants running for that year. But that rough estimate does not factor the longer-term repercussions of microplastics decaying and persisting in the environment for decades to come. Whatever the exact impact, the topic deserves further study, the authors say, because current climate modeling does not account for any additional warming that these tiny particles might be causing.

especially darker and colored particles

By rossdee • Score: 5, Funny Thread

Obviously we need to release more white coloured microplastics

How does this compare

By wakeboarder • Score: 3 Thread

With sand and dirt?

Astronomers May Have Detected an Atmosphere Around a Tiny, Icy World Past Pluto

Posted by BeauHD View on SlashDot Skip
“The Associated Press is reporting on a new study in Nature Astronomy suggesting that a tiny, icy world beyond Pluto harbors a thin, delicate atmosphere that may have been created by volcanic eruptions or a comet strike,” writes longtime Slashdot reader fahrbot-bot. From the report:
Just 300 miles (500 kilometers) or so across, this mini Pluto is thought to be the solar system’s smallest object yet with a clearly detected global atmosphere bound by gravity, said lead researcher Ko Arimatsu of the National Astronomical Observatory of Japan. This so-called minor planet — formally known as (612533) 2002 XV93 — is considered a plutino, circling the sun twice in the time it takes Neptune to complete three solar orbits. At the time of the study, it was more than 3.4 billion miles (5.5 billion kilometers) away, farther than even Pluto, the only other object in the Kuiper Belt with an observed atmosphere. This cosmic iceball’s atmosphere is believed to be 5 million to 10 million times thinner than Earth’s protective atmosphere, according to the study […].

It’s 50 to 100 times thinner than even Pluto’s tenuous atmosphere. The likeliest atmospheric chemicals are methane, nitrogen or carbon monoxide, any of which could reproduce the observed dimming as the object passed before the star, according to Arimatsu. Further observations, especially by NASA’s Webb Space Telescope, could verify the makeup of the atmosphere, according to Arimatsu.
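As a sanity check, the two thinness figures above imply the same surface pressure. The reference pressures used below (Earth ~101,325 Pa; Pluto ~1 Pa) are assumed textbook values, not numbers from the article:

```python
# Assumed reference values (not from the article).
EARTH_SURFACE_PA = 101_325.0   # Earth's mean sea-level pressure
PLUTO_SURFACE_PA = 1.0         # Pluto's surface pressure, order of magnitude

# "5 million to 10 million times thinner than Earth's"
from_earth = (EARTH_SURFACE_PA / 10e6, EARTH_SURFACE_PA / 5e6)  # ~0.010 to ~0.020 Pa

# "50 to 100 times thinner than even Pluto's"
from_pluto = (PLUTO_SURFACE_PA / 100, PLUTO_SURFACE_PA / 50)    # 0.01 to 0.02 Pa

# Both independent comparisons land on the same ~0.01-0.02 Pa range.
```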

Non-paywalled source

By Tx • Score: 5, Informative Thread

A preprint of the article is available on arXiv (https://arxiv.org/pdf/2605.02243) for those that don’t have access behind Nature’s paywall.

Re:Similar to that of Pluto, but let’s sensational

By Anonymous Coward • Score: 4, Interesting Thread
It’s interesting to people who understand orbital dynamics

The 2:3 orbital resonance is the primary reason that Pluto and other plutinos (like 2002 XV93) can exist in stable orbits despite crossing Neptune’s path. This specific ratio provides several critical evolutionary and mechanical benefits:

1. Collision Avoidance (“Phase Protection”)

Even though many plutinos have highly elliptical orbits that technically cross inside Neptune’s orbit, they never actually collide or even come close to the planet. The 2:3 resonance ensures that whenever a plutino reaches its perihelion (the point closest to the Sun, where it crosses Neptune’s path), Neptune is consistently a quarter of an orbit away. This “phase protection” keeps them at a safe minimum distance of billions of kilometres at all times.
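The resonance arithmetic in the comment can be sketched in a few lines. Neptune's orbital period (~164.8 yr) and semi-major axis (~30.07 AU) are assumed textbook values, not taken from the comment:

```python
# Assumed textbook values for Neptune (not from the comment above).
NEPTUNE_PERIOD_YR = 164.8
NEPTUNE_A_AU = 30.07

# A plutino completes 2 orbits while Neptune completes 3,
# so its period is 3/2 of Neptune's.
plutino_period_yr = NEPTUNE_PERIOD_YR * 3 / 2     # ~247 yr (Pluto's is ~248 yr)

# Kepler's third law, T^2 proportional to a^3, then gives the semi-major axis.
plutino_a_au = NEPTUNE_A_AU * (3 / 2) ** (2 / 3)  # ~39.4 AU, squarely in the Kuiper Belt
```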

OpenAI President Discloses His Stake In the Company Is Worth $30 Billion

Posted by BeauHD View on SlashDot Skip
OpenAI president Greg Brockman’s testimony dominated the fifth day of the trial for Elon Musk’s lawsuit against the AI company. Brockman took the witness stand on Monday, disclosing that his stake in OpenAI is worth nearly $30 billion, despite not personally investing money in OpenAI. The judge also declined to admit a pretrial text in which Musk allegedly warned Brockman that he and Altman would become “the most hated men in America.” From a report:
Brockman’s disclosure would put him on the Forbes list of the world’s richest people, with wealth comparable to Melinda French Gates. […] Late Sunday, OpenAI lawyers tried to admit as evidence a text message Musk sent to Brockman two days before the trial began. According to a court filing — which did not include the actual text exchange — Musk sent a message to Brockman to gauge interest in settlement.

When Brockman replied that both sides should drop their respective claims, Musk shot back, according to the filing, “By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be.” Judge Yvonne Gonzalez Rogers, who is overseeing the trial, did not admit the text exchange as evidence.
Brockman acknowledged that he had promised to personally donate $100,000 to OpenAI’s charity but never did. In explaining the delay, Brockman put the onus on Altman: “I asked Sam when I should donate this, and he said he would let me know,” reports Business Insider.

The first witness to testify on Monday was Stuart Russell, an artificial intelligence expert who teaches computer science at the University of California, Berkeley. “The most memorable part of Russell’s testimony was when he talked about how much Musk’s legal team paid him,” notes Business Insider. “He received an eye-popping $5,000 per hour for 40 hours of preparatory work. Expert witnesses in high-profile cases typically make between $500 to $1,000 per hour.”

Recap:
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company’s Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

Now, this might strike some regulars as harsh,

By Mr. Dollar Ton • Score: 5, Insightful Thread

but I would not mind if everyone involved in this case as litigants and their reps dies an unpleasant death.

Won’t solve the basic problem of modern capitalism’s complete subversion of democracy, but it might slow it down a bit.

Sorry, elmo

By T34L • Score: 3 Thread

Musk allegedly warned Brockman that he and Altman would become “the most hated men in America.”

Somebody needs to explain to him that’s not a title he can just pass onto somebody else. And don’t get me wrong. I wouldn’t piss on either if they were on fire, but as far as I’m concerned, they’re not even in the running for the top.

Re:If the asset tax passes, he’ll owe 1.5B

By dfghjk • Score: 5, Insightful Thread

“The asset tax is dumb. How is he supposed to pay that tax without diluting his ownership stake?”

That’s the goal. The government exists for the benefit of the people, the people suffer when all wealth accumulates at the top. Massive ownership stake is not only NOT a goal of the government, it is the problem to solve.

“When he announces he’s selling shares, the value of OpenAI will drop just by that.”

The very existence of this phenomenon IS the problem.

“So does he pay tax on the new or old valuation?”

Yes.

“I mean, if you had $30 billion and someone pisses you off beyond anything by taking what you put your heart and soul into you’d do every legal means to makes sure whoever done that to you pays.”

Slave owners were pissed too. And heart and soul? Fuck off, he got that by exploiting people.

Re:Sorry, elmo

By dfghjk • Score: 4, Interesting Thread

You cannot explain anything to Musk, he believes that anything he says becomes the truth. Same as trump. The power of positive thinking combined with uncontrolled greed and criminal sociopathy. Envision owning all of humanity and it will become true. Tony Robbins coaching Hitler.

You dare criticize my cave submarine? Well you’re a pedophile. How do people not see this? It has been plain as day for a decade, yet people have only wised up in the last year.

Most hated man?

By thegarbz • Score: 5, Informative Thread

Simply having money due to AI doesn’t make someone hated. On the other hand, destroying the government, aligning with extremists, and acting like a persecuted crybaby while being the richest man alive certainly does tick a lot of boxes for people hating you the most.

White House Considers Vetting AI Models Before They Are Released

Posted by BeauHD View on SlashDot Skip
The Trump administration is reportedly considering an executive order to create a working group that could review advanced AI models before public release. The shift follows concerns over Anthropic’s powerful Mythos model and its cyber capabilities, with officials weighing whether the government should get early access to frontier models without necessarily blocking their release. The New York Times reports:
In meetings last week, White House officials told executives from Anthropic, Google and OpenAI about some of those plans, people briefed on the conversations said. The working group is likely to consider a number of oversight approaches, officials said. But a review process could be similar to one being developed in Britain, which has assigned several government bodies to ensure that A.I. models meet certain safety standards, people in the tech industry and the administration said.

The discussions signal a stark reversal in the Trump administration’s approach to A.I. Since returning to office last year, Mr. Trump has been a major booster of the technology, which he has said is vital to winning the geopolitical contest against China. Among other moves, he swiftly rolled back a Biden administration regulatory process that asked A.I. developers to perform safety evaluations and report on A.I. models with potential military applications. “We’re going to make this industry absolutely the top, because right now it’s a beautiful baby that’s born,” Mr. Trump said of A.I. at an event in July. “We have to grow that baby and let that baby thrive. We can’t stop it. We can’t stop it with politics. We can’t stop it with foolish rules and even stupid rules.” Mr. Trump left room for some rules, but he added that “they have to be more brilliant than even the technology itself.”

The White House wants to avoid any political repercussions if a devastating A.I.-enabled cyberattack were to occur, people in the tech industry and the administration said. The administration is also evaluating whether new A.I. models could yield cyber-capabilities that could be useful to the Pentagon and U.S. intelligence agencies, they said. To get ahead of models like Mythos, some officials are pushing for a review system that would give the government first access to A.I. models, but that would not block their release, people briefed on the talks said.

The expertise

By Valgrus Thunderaxe • Score: 5, Insightful Thread
to “vet” these models is all in the White House. I’m sure of that.

Re:A rigorous test plan, no doubt…

By Powercntrl • Score: 5, Interesting Thread

It’d be hilarious if they pulled a Volkswagen and had the AI recognize when it is being vetted, so it provides answers the current administration wants to hear, and then goes super woke after the model is actually deployed.

xAI’s Grok wouldn’t need to cheat, obviously. That thing is biased so far to the right it makes Fox News almost look sane.

Set the precedent

By backslashdot • Score: 5, Interesting Thread

When the Democrats come in, they’ll vet the AI models properly.

Re:On what authority?

By dfghjk • Score: 5, Interesting Thread

Apparently the same authority that allows ICE to murder citizens in the streets.

Also, what does it mean to “release a model”? Is ChatGPT a model? No, it is not. If making a model available becomes a problem, then keep the model private and only release tools that use it.

And how is a model dangerous? It’s the tool that uses it that might be. How does the government know what any cloud service does behind the scenes?

It’s all complete bullshit from the most incompetent administration ever.

“small” government

By zeiche • Score: 5, Interesting Thread

is this the small government that the “conservatives” keep banging on about?

what, exactly, is small about white-house review of products offered to the public?

please, MAGAts, clue me in.

OpenAI, Google, and Microsoft Back Bill To Fund ‘AI Literacy’ In Schools

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from 404 Media:
A new, bipartisan bill introduced (PDF) by Democratic Senator of California Adam Schiff and endorsed by the biggest AI developers in the world — including OpenAI, Google, and Microsoft — would change the K-12 curriculum to shoehorn in “AI literacy,” something that young people and teachers alike already hate in schools. The Literacy in Future Technologies Artificial Intelligence, or LIFT AI Act, would empower the new director of the National Science Foundation (NSF) to make grant awards “on a merit-reviewed, competitive basis to institutions of higher education or nonprofit organizations (or a consortium thereof) to support research activities to develop educational curricula, instructional material, teacher professional development, and evaluation methods for AI literacy at the K-12 level,” the bill says.

It defines AI literacy as using AI; specifically, “having the age-appropriate knowledge and ability to use artificial intelligence effectively, to critically interpret outputs, to solve problems in an AI-enabled world, and to mitigate potential risks.” The bill is endorsed by the American Federation of Teachers, Google, OpenAI, Information Technology Industry Council, Software & Information Industry Association, Microsoft, and HP Inc. […] The grant would support “AI literacy evaluation tools and resources for educators assessing proficiency in AI literacy,” according to the bill. It would also fund “professional development courses and experiences in AI literacy,” and the development of “hands-on learning tools to assist in developing and improving AI literacy.” Most importantly for real-world implications, it would fund changing the existing curriculum “to incorporate AI literacy where appropriate, including responsible use of AI in learning.”

Use our products!

By locater16 • Score: 5, Insightful Thread
“Indoctrinate, indoctrinate, indoctrinate!” - Tech Company CEO’s

Start with regular literacy, eh?

By MIPSPro • Score: 5, Insightful Thread
21% of US adults are illiterate. 56% read below a 6th-grade level. Two-thirds of 4th and 8th graders are not proficient in reading, with further declines since 2019. We don’t need smartphones and AI in schools at all. What schools need is to go back to chalkboards, physical textbooks, and homework. The only thing that needs tweaking is to add AI detection and resistance to their assignments (i.e., do more work in class, in person). Schools that do this get consistently better results than the ones that focus on technology.

Just great

By fahrbot-bot • Score: 5, Insightful Thread

… Fund ‘AI Literacy’ In Schools.

“Learn to Code” becomes “Learn to Prompt” /s

Fuck the techbros

By sinkskinkshrieks • Score: 4, Insightful Thread
First, we need history, English, math, and critical thinking skills literacy before AI claptrap.

Re:A Positive Slant

By Junta • Score: 4, Interesting Thread

In the 90s, the school systems were kind of left to fend for themselves. The vast majority of the computers in my schools were systems that area companies were scrapping and donated on the way out. A decent part of my programming class was trying to salvage 20 out of 24 non-booting systems that a business had donated. They spent what budget they could on a handful of computers capable of running Encarta for the library.

In the 2000s, things started shifting a bit: in a college course we were handed ‘donated’ copies of Visual Studio, though the teacher said those were for us and weren’t going to be used for class at all.

Since 2010, things have gotten a bit worrisome as a lot of the big tech companies have started getting awfully opinionated and wanting to ‘help’ kids learn to code. Education is all well and good, but when the big corporate interests get actively involved and prescriptive, things drift toward indoctrination more than education.

At least with ‘learn to code’, a skill that needed significant development was theoretically being served, though there was a lot to be worried about there. With the LLM scenario, it’s pretty much just indoctrination. Whether an LLM works or not is not something that takes a significant amount of time to sort out.

As an example, my kid was asked to write a brief thing on what excitingly awesome thing they are looking forward to using AI to do, as part of an “AI challenge” at school sponsored by a local tech company. Not to take a critical assessment of things, not to evaluate the nuance of benefits and drawbacks, nothing on helping them understand how to best use it; just to blatantly write a puff piece about how awesome AI is or would be for something. Basically soliciting marketing fodder and awarding three kids a couple hundred bucks. It was going to be a grade, so they had to do it and take it seriously.

The Pixel 11 Could Be the Next Victim of the RAM Shortage

Posted by BeauHD View on SlashDot Skip
Google’s Pixel 11 lineup could see RAM cuts or lower starting configurations because of the global memory shortage, with leaks suggesting the base model may drop from 12GB to 8GB while Pro models could add 12GB versions below the current 16GB tier. The Verge reports:
There will be 16GB configurations available for each, but adding a lower-spec model could mean the 16GB version is getting a price hike. However, the silver lining is that the specs from MysticLeaks also include camera upgrades and brighter displays for the Pro models. The RAM shortage is pushing other phone makers, including Samsung, to raise prices, too.

Hey, Google! Here’s an alternate idea

By fahrbot-bot • Score: 3 Thread

I know it would be against your business model, but how about reinstating support for your older phones so people can keep the ones they have longer? My Pixel 5a still works great. And while it doesn’t get OS/security updates anymore, I’m planning to keep it as long as it’s working and supported on my network (Ting/T-Mobile) and the Play Store - like I did with my previous phone, a Kyocera HydroVibe (2015 to 2021).

On the bright side

By turb • Score: 4, Insightful Thread

Pressure to reduce RAM in a device due to cost could drive engineering to make phones be more efficient and utilize LESS RAM. Linux/Android does NOT need to be a pig. It’s a pig because device vendors/OS engineers/app makers get lazy.

Sadly, running AI natively on your phone could as a result be less useful, but that’s a good thing, isn’t it?

Re:Hey, Google! Here’s an alternate idea

By rta • Score: 5, Insightful Thread

no no… please buy a new pixel 9/10/11.

it takes slightly better pictures and it only weighs 50% more than your current one. oh did we mention it gets a full day of battery… yeah, apparently that’s notable again as it was 15 years ago.

the in-screen fingerprint sensor kinda sucks, but don’t worry, you’ll eventually forget the rear sensor was flawless for years.

Re:How much RAM?

By 93 Escort Wagon • Score: 4, Funny Thread

F**k it, we’re doing 5 layers!

Re: On the bright side

By sodul • Score: 5, Insightful Thread

My first computer had 64kB, second one 512kB, and that was what we now call ‘unified memory’, shared between the cpu and the video encoding chip (no gpu back then). These machines did not have virtual memory either to overflow to disk. I had to be very mindful of memory usage when writing code or we would simply crash by running out of memory, no forgiveness.

I’ve worked in Silicon Valley for my entire career and I was surprised at how little attention most ‘software engineers’ paid to RAM and CPU optimization. You would expect that from scripters, folks that write Bash or Python code, but I saw that a lot with Java developers as well. It was especially bad when the devs’ first instinct was to ask the OPS team for a machine with more RAM rather than consider any optimizations; after all, it would come from some other team’s budget, and that same OPS team would get blamed for going over budget, not the devs.

So yeah in a way, a good RAM shortage for a while might help bring back some discipline. Unfortunately the vast majority of AI training is done on code that does not care about optimizing RAM consumption.
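The kind of low-effort optimization the comment says often gets skipped can be shown in miniature. A minimal sketch (using Python’s standard `tracemalloc` module; the functions and figures here are illustrative, not from any real codebase) comparing peak memory when a sequence is materialized all at once versus streamed:

```python
import tracemalloc

def squares_list(n):
    # Materializes every value up front: O(n) memory.
    return [i * i for i in range(n)]

def squares_gen(n):
    # Yields one value at a time: O(1) memory.
    return (i * i for i in range(n))

def sum_with_peak(fn, n):
    """Run sum(fn(n)) and report peak traced memory in KiB."""
    tracemalloc.start()
    total = sum(fn(n))
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return total, peak / 1024

total_l, peak_l = sum_with_peak(squares_list, 100_000)
total_g, peak_g = sum_with_peak(squares_gen, 100_000)

# Same answer either way; only the peak footprint differs.
print(f"list peak:      {peak_l:,.0f} KiB")
print(f"generator peak: {peak_g:,.0f} KiB")
```

The one-line change costs nothing to write; the difference only shows up if someone bothers to measure, which is the discipline the commenter is arguing a RAM shortage might bring back.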

It is not just RAM consumption, but storage as well. My son got a second hand Nintendo Switch OLED yesterday, the prior owner had 2 games installed leaving 6GB out of the 64GB free. One game was downloaded, the other one still required the cartridge and used 26GB of storage. That’s rather insane.

Meanwhile you can get a generic retro gaming device for $50 with thousands of classic games on a 64 GB SDCard. I’m pretty sure a lot of that space could be better optimized, but there is little incentive for that these days.

Expanded AMD HDMI 2.1 Support Is Coming To Linux

Posted by BeauHD View on SlashDot Skip
AMD is preparing expanded HDMI 2.1 support for Linux, following earlier delays after the HDMI Forum rejected an open source implementation of HDMI 2.1 as proprietary technology. As GamingOnLinux reports, AMD developer Harry Wentland submitted a patch series to the Linux kernel mailing list, noting that it brings “HDMI FRL support to the amdgpu display driver” and that “DSC is still being tested and will be sent out later.”

A forum post on Phoronix from an AMD driver developer also said “a full implementation will ultimately be available once the patches are ready and have completed compliance testing.”

Re:Proprietary irony.

By drnb • Score: 5, Insightful Thread

I still don’t know why people are using that shitty port when DisplayPort is royalty-free, has better specs, is supported by everyone, and can be used over other cables that don’t have all the licensing attachment (USB-C for example).

HDMI is the new VGA. It allows the use of TVs, older monitors, etc. I have an ancient 1080p monitor plugged into a PC as a secondary display. The overhead projector used for presentations is often HDMI.

Re:Proprietary irony.

By thegarbz • Score: 4, Insightful Thread

I still don’t know why people are using that shitty port when DisplayPort is royalty-free, has better specs, is supported by everyone

Not only is there plenty of hardware out there without DisplayPort, but if you at all want to connect to a TV your choice is either HDMI or go pound sand. DP is supported by precisely 0% of the AV industry. As for why the AV industry uses HDMI, it’s because it has features specific to the AV industry that DP lacks, such as an Audio Return Channel, which is critical if using an audio system that relies on input switching. It also has CEC baked into the standard, and supports longer cable runs without resorting to active amplification or conversion.

and can be used over other cables that don’t have all the licensing attachment (USB-C for example).

Using USB-C for DisplayPort relies on the implementation of DP Alternate Mode on the USB host, which not all devices have and which is really only common on mobile devices. This isn’t some magic solution. Interestingly, HDMI Alternate Mode existed for a solid 7 years before vendors gave up because no one used it. But more relevant: there simply isn’t a need to support HDMI alt mode directly when HDMI can piggyback off the DP signal with an active adapter, especially when the active circuitry is small enough to fit in the HDMI connector. (Yes, I connect my laptop to TVs via USB-C; it’s a normal thing you can do through the same port that supports DP.)

Re:Proprietary irony.

By thegarbz • Score: 4, Insightful Thread

HDMI is the new VGA.

No, not really. HDMI is still very much the active and dominant standard in the entire AV industry; it just isn’t in the computer monitor industry. Saying it’s VGA implies it’s some kind of legacy tech, but the reality is that right now in 2026, if you go to your local Walmart and buy the latest fancy TV, you’ll find 100% of them use HDMI not as a legacy connector, but as the current latest hot tech.

Re:Proprietary irony.

By jsonn • Score: 4, Insightful Thread
The only reason the AV industry prefers HDMI is the DRM bit. Everything else is a gimmick.

Re:Proprietary irony.

By bn-7bc • Score: 4, Insightful Thread
I think the OP might have meant that HDMI is what replaced VGA; it was just imprecisely worded, so you misunderstood the intent and gave correct info based on the misunderstanding. No harm done; you were both more or less correct.

The Audio Industry Is Grappling With the Rise of ‘Podslop’

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Bloomberg’s Ashley Carman:
Welcome to the modern era of podcasting in which thousands of new shows are released into the world every day with a sizable portion likely being AI-generated. Figuring out exactly which ones fall into that growing category is becoming more difficult just as the industry is starting to take this issue seriously. In only the past month or so, Amazon launched a feature that explains a product by generating a quasi-podcast, complete with co-hosts talking to each other and taking questions from users. Shout out to Business Insider reporter Katie Notopoulos for spotting this (and, naturally, demoing it with an adult diaper rash-cream). Not long ago, Nicholas Thompson, chief executive officer of the Atlantic, noted “podslop” dominated his Spotify search results when he typed in the word “Sora.” This was around the time that OpenAI shut down its user-generated, AI-content-only app.

[…] All of which raises some big, difficult questions. For one, what should the listening platforms do about this incursion? As of right now, Apple Podcasts requires creators who generated a “material portion” of their show using AI to disclose it. The platform also bans misleading or deceptive content. Spotify hasn’t published any specific guidelines around AI, though it maintains general rules around dangerous and misleading content. Where this conversation gets even trickier is when it comes to money. Many of these podcasts are hosted on at least one free service that allows programs to opt into their ad marketplace with zero barrier to entry, meaning these shows (and the hosting service) profit off every listen or download. Spreaker, a company owned by iHeartMedia, is the primary one to watch here. Though it tells users to disclose when they rely on AI, it still allows those shows to opt into its programmatic ad marketplace, which pays creators 60% of the revenue generated by the ads placed in their shows. It stands to reason that most of these thousands of shows don’t reach many people. But in the aggregate, the ears and dollars could add up. Are the advertisers on board with being next to AI-generated content, some of which might be deemed “slop?”
There’s also the question of how to define “slop.” Jackson of the Podcast Index and his co-host Adam Curry treat it as something listeners simply know when they hear it, while Alberto Betella, co-founder of RSS.com, defines it as “fully automated content with no human review.”
Jeanine Wright, co-founder of Inception Point, rejects the debate altogether: “The people still talking about slop are still making 6-7 jokes,” she said. “It’s still yesterday’s conversation.”

The answer is simple

By MpVpRb • Score: 5, Interesting Thread

All AI content should be accurately labeled as AI and credit given to the model used

YouTube Too

By XopherMV • Score: 5, Insightful Thread
The same thing is occurring on YouTube too. Someone posts a video with a clickbait title. It’s an AI voice reading an AI script over video that’s only tangentially related to the script. Overall, the videos aren’t outright bad, but they’re not particularly good either. They’re just poor quality. They all just seem to ramble on for a pre-determined amount of time and then stop.

The problem is that the sheer number of these videos and channels is unreal. Someone’s automated the creation of these channels and videos. This someone is pumping out these videos faster than you can block whole channels.

Further, it’s impossible to tell which channel has human-generated content and which is all-AI. YouTube doesn’t help at all since Google is promoting the usage of AI. So, the service is getting flooded with poor-quality AI content. As a YouTube user, you either deal with this AI enshittification or you stop using YouTube.

Industrialized Content Subsumes Industries

By nightflameauto • Score: 5, Interesting Thread

Industrialized content creation was bound to eventually find a way to industrialize content itself. Creators have always annoyed the business side of the content creation business. Whether it be music and real musicians, books and real authors, videos and real videographers (and writers, and lighting experts and all the rest), or audio-form news/stories and all the producers required to make them well, including the researchers helping gather the information behind the scenes, all the humans involved have always been seen as a cost center, and an obstacle to pure, unfiltered profit possibilities. Now that AI is good enough to generate slop in all of these realms, the industrialized version of each of them is of course obsessing over how quickly it can rid itself of the human involvement in creating any of these forms of content.

This obsession has led very quickly to creating so much automated content that it’s beginning to swamp traditional content creators, who simply will not be able to keep up with the automated creation.

And perhaps in the end, the “industrial” part of the content creation industries will falter and fail under the flood of slop that they are creating. And maybe we can get back to a point where the content itself becomes important again, rather than the quantity of slop that can be generated for clicks. At least, that’s what the tiny little hopeful part of my brain is wishing for. More likely, we’ll just watch traditional and even modern distribution methods for content choke on the tsunami of slop until there are no distribution methods left, and we’ll be back to passing things around on tapes, CDs, or notebooks.

Slop =

By Bahbus • Score: 5, Informative Thread

Low quality + low effort. It doesn’t matter if it was created by AI or humans.

Suspicious

By OzJimbob • Score: 5, Insightful Thread

Jeanine Wright, co-founder of Inception Point, rejects the debate altogether: “The people still talking about slop are still making 6-7 jokes,” she said. “It’s still yesterday’s conversation.”

That sounds like something someone profiting off slop might say. Reminding people that people hate AI slop always angers the people who generate slop.

Anthropic Nears $1.5 Billion AI Joint Venture With Wall Street Firms

Posted by BeauHD View on SlashDot Skip
Anthropic is reportedly nearing a roughly $1.5 billion joint venture with Blackstone, Goldman Sachs, Hellman & Friedman, and other Wall Street firms to sell AI tools to private-equity-backed companies. “The investors aim to create a company that acts as a consulting arm for Anthropic and helps teach businesses — including the private-equity firms’ portfolio companies — how to incorporate AI across their operations,” reports the Wall Street Journal. Anthropic, Blackstone, and Hellman & Friedman would each invest about $300 million, while Goldman would contribute around $150 million.

Interesting

By hdyoung • Score: 5, Insightful Thread
It might be that getting “Trumped” has caused Anthropic to turn away from government work and look for other business. Becoming the go-to AI company for Wall Street would hardly be a consolation prize. It’s probably an even bigger market than defense.

It’s all a ploy by AI companies

By Albinoman • Score: 3 Thread
Get everyone using and depending on AI; that way, when it finally comes time to pay the massive bill they can’t cover from operating at huge losses, they can get the government to bail them out too! Meanwhile they’ll gobble up all the electricity and water, and we get to foot the bill.

“joint” venture is right

By Pseudonymous Powers • Score: 3 Thread

What exactly is being proposed here? Is this saying that venture capitalists will force the companies they back to use AI tools? And the venture capitalists will get a cut of these forced sales?

I told you, we should never have left the economy sitting out overnight. That’s how you get oligarchs!

This seems right up PE’s alley

By wakeboarder • Score: 3 Thread

Fire all the workers and hire AI; it’s a perfect world for them. But every time they try to get rid of workers, they find out there are consequences. I doubt this time will be different.

GameStop Offers to Buy eBay for $56 Billion

Posted by BeauHD View on SlashDot Skip
GameStop has made an unsolicited $56 billion cash-and-stock offer to buy eBay (paywalled; alternative source), with CEO Ryan Cohen arguing he can turn the marketplace into a far larger Amazon competitor. “EBay should be worth — and will be worth — a lot more money,” Cohen said in an interview. “I’m thinking about turning eBay into something worth hundreds of billions of dollars.” The Wall Street Journal reports:
Cohen said GameStop has a commitment letter from TD Bank to provide up to $20 billion in debt financing to help make a deal possible. GameStop delivered an offer letter to eBay on Sunday and released a copy of it following the Journal’s report on the details of the bid. Cohen wrote in the letter to eBay Chairman Paul Pressler that GameStop started building its eBay position on Feb. 4. It said its offer consists of 50% cash and 50% GameStop shares.

EBay said Monday morning its board and financial advisers would review GameStop’s unsolicited proposal. It said there were no discussions with or outreach from GameStop before receiving the offer. Ebay added that it will review the offer “with a focus on the value to be delivered to eBay shareholders, including the value of the GameStop stock consideration and the ability of GameStop to deliver a binding, actionable proposal.”

If eBay isn’t receptive, Cohen said he was prepared to run a proxy fight and take the offer directly to its shareholders. The window for shareholders to nominate director candidates at eBay ahead of an annual meeting scheduled for this June has already closed, according to the company’s proxy materials. Cohen told the Journal that putting his videogame retailer and eBay under one roof could create opportunities to cut costs and improve earnings. The two companies have some overlap already, including a focus on selling collectibles such as trading cards. “There is nobody who is more qualified, based on my experience, to run the eBay business,” Cohen said, referencing his time at GameStop and previously Chewy, the online pet-products marketplace he co-founded.

But…

By LordHighExecutioner • Score: 5, Funny Thread
…is the offer on Ebay ?!?

Re:Here we go again

By Anonymous Coward • Score: 5, Insightful Thread

This is like when K-Mart (which was failing) bought Sears (which was not failing) and then they both went down.

There is some logic to this

By blastard • Score: 5, Interesting Thread

Gamestop made a business model of taking used products and selling them on. Unlike the storefronts that tried to make a living by helping people sell on eBay, Gamestop already has those locations in place, with a compatible business. Adding grading and verification, and maybe even packaging and shipping, makes sense. How many more people would sell their unwanted items if all they had to do was drop them off and someone else handled the listing, selling, packaging, and shipping?

Almost a Dupe

By Vlad_the_Inhaler • Score: 5, Informative Thread

The story “GameStop is preparing an offer for eBay” is from two days ago, and some of the comments there were actually appropriate.

Because: CEO Ryan Cohen has signalled interest in large-scale acquisitions to grow GameStop into a significantly larger business. Cohen’s compensation package includes incentives tied to achieving a $100 billion market valuation.

(posted by an A/C)

Re:Where the other $36bn come from?

By misnohmer • Score: 5, Insightful Thread
How are they going to more than double their own market cap by just issuing new stock? Why not issue a lot more stock and buy Apple, Google, Microsoft, and a few other companies while they’re at it? They could offer $10 trillion each in GameStop stock, since they can issue any amount of stock they want, right?
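The commenter’s skepticism comes down to dilution arithmetic. Here’s a rough sketch of it; the offer terms ($56B, half in stock) are from the article, but the acquirer’s market cap and share count below are made-up round numbers, not GameStop’s actual figures:

```python
# Hypothetical acquirer, purely to illustrate the dilution arithmetic.
acquirer_cap = 12e9   # pre-deal market cap, dollars (assumed)
shares_out = 450e6    # shares outstanding (assumed)
price = acquirer_cap / shares_out

# Half of a $56B offer paid in newly issued shares.
stock_consideration = 0.5 * 56e9
new_shares = stock_consideration / price

# Fraction of the combined company handed to the target's shareholders.
dilution = new_shares / (shares_out + new_shares)
print(f"new shares issued: {new_shares / 1e6:,.0f}M "
      f"(vs. {shares_out / 1e6:,.0f}M existing)")
print(f"existing holders' stake falls to {(1 - dilution) * 100:.0f}%")
```

With these assumed numbers, the acquirer would have to issue more than twice its existing share count, leaving its current shareholders a minority of the combined company, which is exactly why “just issue more stock” doesn’t scale to buying arbitrarily large targets.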

Scientists Discover 27 Potential New Planets That Orbit Two Stars

Posted by BeauHD View on SlashDot
Astronomers have identified 27 potential new circumbinary planets — worlds that orbit two stars, like Star Wars’ Tatooine. “To date, only about 18 circumbinary planets … had been identified in the universe,” reports the Guardian. “More than 6,000 planets have been discovered that orbit single stars, like Earth does around the sun.” The Guardian reports:
In a timely publication for May 4, also known as Star Wars Day, scientists have identified nearly 30 more candidate planets, whose distances range from 650 to 18,000 light years away from Earth. […] More than half of the stars in the universe exist in binary or multiple star systems. The researchers instead used a method known as “apsidal precession,” searching for a wobble between stars that orbit around and eclipse each other.

“If we monitor the exact timing of these eclipses … that can tell us that there’s something else going on in the system,” said Margo Thornton, the study’s lead author and a PhD candidate at UNSW. After eliminating other factors such as the rotation and gravitational pull of the two stars, the team identified 36 star systems out of 1,590 whose behavior could only be explained by a third body. For “27 of those objects, it is possible that they are planet mass,” Thornton said.

More research into their spectra — the light they emit — was needed to formally confirm them as circumbinary planets, she said. “It’s just a matter of: what is the mass of it? Is it a planet? Is it a brown dwarf? Is it a star?” The team discovered the potential planets — which likely range from Neptune-sized to ten times heavier than Jupiter — using data from Nasa’s Transiting Exoplanet Survey Satellite, a planet-hunting space telescope that launched in 2018.
The research was published in the Monthly Notices of the Royal Astronomical Society.
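The detection idea described above, monitoring the exact timing of eclipses for deviations caused by a third body, can be sketched with a toy model. This is not the study’s actual pipeline (the paper models apsidal precession; the toy below uses a simple sinusoidal timing wobble, as produced by e.g. the light-travel-time effect), and every number is invented for illustration:

```python
import numpy as np

# Toy binary: eclipses every 2.5 days, observed over 200 cycles.
P = 2.5                      # binary eclipse period, days (made up)
n = np.arange(200)           # eclipse cycle numbers
t0 = 0.0

# A third body perturbs the eclipse times sinusoidally.
amp_days = 30.0 / 86400.0    # 30-second timing wobble (made up)
P_outer = 100.0              # outer companion period, days (made up)
true_times = t0 + n * P + amp_days * np.sin(2 * np.pi * n * P / P_outer)

# Add 5 seconds of measurement noise.
rng = np.random.default_rng(0)
observed = true_times + rng.normal(0.0, 5.0 / 86400.0, n.size)

# Fit the best linear ephemeris (constant period, no companion)...
coef = np.polyfit(n, observed, 1)
residuals = observed - np.polyval(coef, n)

# ...and a periodic signal left in the residuals betrays a third body:
# the scatter is far larger than the 5 s measurement noise alone.
print(f"residual scatter: {residuals.std() * 86400:.1f} s")
```

The real analysis must also rule out the non-planet explanations the article mentions (stellar rotation and the mutual gravity of the two stars) before a third body can be claimed.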

Re:About time…

By HiThere • Score: 4, Interesting Thread

There are lots of “special case” solutions to the three-body problem. The general case, however, remains unsolved.