Alterslash

the unofficial Slashdot digest
 

Contents

  1. Apple Agrees To Pay iPhone Owners $250 Million For Not Delivering AI Siri
  2. Coinbase Lays Off Nearly 700 Workers In ‘AI-Native’ Restructuring
  3. Google DeepMind Workers Vote To Unionize Over Military AI Deals
  4. Moving To Mainframe Can Be Cheaper Than Sticking With VMware
  5. Kids Bypass Age Verification With Fake Moustaches
  6. US Government Warns of Severe CopyFail Bug Affecting Major Versions of Linux
  7. Oscars Bans AI Actors and Writing From Awards
  8. VS Code Update Added Copilot As Default Co-Author To Git Commits
  9. ‘Notepad++ For Mac’ Release Is Disavowed By the Creator of the Original
  10. How Microplastics Are Likely Helping To Heat Up the Planet
  11. Astronomers May Have Detected an Atmosphere Around a Tiny, Icy World Past Pluto
  12. OpenAI President Discloses His Stake In the Company Is Worth $30 Billion
  13. White House Considers Vetting AI Models Before They Are Released
  14. OpenAI, Google, and Microsoft Back Bill To Fund ‘AI Literacy’ In Schools
  15. The Pixel 11 Could Be the Next Victim of the RAM Shortage

Alterslash picks up to five of the best comments from each of the day’s Slashdot stories and presents them on a single page for easy reading.

Apple Agrees To Pay iPhone Owners $250 Million For Not Delivering AI Siri

Posted by BeauHD View on SlashDot Skip
Apple has agreed to a proposed $250 million settlement over claims that it misled iPhone buyers about the availability of Apple Intelligence and its upgraded Siri features. The settlement would cover U.S. buyers of the iPhone 16 lineup and iPhone 15 Pro models between June 10, 2024, and March 29, 2025. The Verge reports:
The settlement will resolve a 2025 lawsuit alleging Apple’s advertisements created a “clear and reasonable consumer expectation” that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple’s products “offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance.”

Apple brought certain AI-powered features to the iPhone 16 in the weeks after its release, and delayed the launch of its more personalized Siri, which is now expected to arrive later this year. Last April, the National Advertising Division recommended that Apple “discontinue or modify” its “available now” claim for Apple Intelligence. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.

No AI?

By Valgrus Thunderaxe • Score: 5, Funny Thread
That’s a feature. These people should be paying Apple.

Kill all the lawyers

By boxless • Score: 3 Thread

What do the aggrieved parties get? A $10 coupon to the Apple Store?

And the lawyers? Millions. This one case made the careers of several of them. Never have to work again.

Crazy.

Coinbase Lays Off Nearly 700 Workers In ‘AI-Native’ Restructuring

Posted by BeauHD View on SlashDot Skip
Coinbase is laying off about 700 workers, or 14% of its workforce, as CEO Brian Armstrong says the company is restructuring to become “lean, fast, and AI-native.” Engadget reports:
Armstrong claimed he’d seen engineers “use AI to ship in days what used to take a team weeks” and that non-technical teams in the company are “shipping production code,” while Coinbase is automating many of its workflows. “All of this has led us to an inflection point, not just for Coinbase, but for every company,” Armstrong wrote. “The biggest risk now is not taking action. We are adjusting early and deliberately to rebuild Coinbase to be lean, fast and AI-native. We need to return to the speed and focus of our startup founding, with AI at our core.”

An AI-driven restructuring is only one half of the equation for Coinbase, though. Armstrong wrote that while the company “is well-capitalized, has diversified revenue streams and is well-positioned to weather any storm,” the crypto market is down. As such, Coinbase is attempting to become leaner and faster ahead of the next crypto cycle. The company is eliminating some management layers and organizing the business around “AI-native talent who can manage fleets of agents to drive outsized impact,” Armstrong wrote. “We’ll also be experimenting with reduced pod sizes, including ‘one person teams’ with engineers, designers and product managers all in one role.” That sure sounds like an attempt to get workers to take on more responsibilities.

Really? Wow!

By oldgraybeard • Score: 3 Thread
“non-technical teams in the company are ‘shipping production code’”

Everything you hate in one company

By Rosco P. Coltrane • Score: 5, Insightful Thread

Crypto grift, AI bubble and psychopathic billionaire CEO.

Google DeepMind Workers Vote To Unionize Over Military AI Deals

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Wired:
Employees at Google DeepMind in London have voted to unionize as part of a bid to block the AI lab from providing its technology to the US and Israeli militaries. In a letter addressed to Google’s managing director for the UK and Ireland, Debbie Weinstein, the workers asked the company to recognize the Communication Workers Union and Unite the Union as joint representatives for DeepMind employees. “Fundamentally, the push for unionization is about holding Google to its own ethical standards on AI, how they monetize it, what the products do, and who they work with,” John Chadfield, national officer for technology at the CWU, tells WIRED. “Through the process of unionization, workers are collectively in a much stronger place to put [demands] to an increasingly deaf management.”

[…] The DeepMind employee tells WIRED that if the staff succeeds in unionizing in the UK, they will likely demand that Google pulls out of its long-standing contract with the Israeli military, and seek greater transparency over how its AI products will be used, and some sort of assurance relating to layoffs made possible by automation. If Google does not engage, the letter states, the employees will ask an arbitration committee to compel the company to recognize the unions. Since the turn of the year, both Anthropic and OpenAI have announced large-scale expansions of their operations in London. CWU hopes the unionization effort at DeepMind will spur workers at those labs into similar action. “These conversations are happening,” claims Chadfield. “The workers at other frontier labs have seen what Google DeepMind workers have done. They’ve come to us asking for help as well.”
The unionization push began in February 2025 after Alphabet removed a pledge from its AI ethics guidelines that had barred uses such as weapons development and surveillance. “A lot of people here bought into the Google DeepMind tagline ‘to build AI responsibly to benefit humanity,’” the DeepMind employee told WIRED. “The direction of travel is toward further militarization of the AI models we’re building here.”

Go Google Employees!

By machineghost • Score: 4 Thread

Go Google employees: best of luck to you!

It won’t work: Google is a for-profit company, and there are A LOT of profits to be made from the military. They will stop operating in the UK before they give up that much money.

It’s almost like there’s some sort of complex set up between the military and industrial sides of the equation, designed to drain US taxpayer money from citizens into that military-industrial complex …

But still, keep fighting the good fight!

Re:Unions are for employee protections.

By kwelch007 • Score: 4, Informative Thread

It’s not going anywhere. Unions can bargain all they want over this issue, and they may even have a valid societal value in doing so. But unless they can show that it will cost Google more in legal fees than it will in lost revenue or lack of willing employees, the only real option the employees will have is to quit in protest.

Moving To Mainframe Can Be Cheaper Than Sticking With VMware

Posted by BeauHD View on SlashDot Skip
Gartner says some VMware customers may find it cheaper to move certain Linux VM workloads to IBM mainframes than to adopt Broadcom’s new VMware licensing, especially for fleets of hundreds of Linux VMs and mission-critical apps needing long-term stability. The Register reports:
Speaking to The Register to discuss the analyst firm’s mid-April publication, “The State of the IBM Mainframe in 2026,” [Gartner Vice President Analyst Alessandro Galimberti] said some buyers in many fields are comparing mainframes to modern environments and deciding Big Blue’s big iron comes out ahead. “I can build a multi-region cloud application, but things like data synchronization and high availability are things I need to build into application logic,” he said. “The mainframe has that in the platform, which shields developers from complexity.” He also thinks mainframes are ideally suited to workloads that need many years of transactional consistency and backward-compatibility.

That said, Galimberti doesn’t recommend the mainframe for all applications. He said mission-critical applications that are unlikely to change much for a decade are best-suited to the machines, as are Linux applications because the open source OS runs on IBM’s hardware. IBM also offers the z/VM hypervisor, which he says can make Linux “even better and more enterprise-ready.” Which is why Galimberti thinks IBM’s ecosystem is attractive to VMware users, especially those who operate a fleet of 500 to 700 Linux VMs. […]

Committing to mainframes therefore means planning “to spend time negotiating price and renewal protections, rather than prioritizing the business value these solutions can deliver.” Another downside is that mainframes pose clear lock-in risk, so users may hold back on useful customizations out of fear they make it harder to extricate themselves from the platform. Access to skills remains an issue, too, as kids these days mostly don’t contemplate a career working with big iron. Galimberti sees more service providers investing in their mainframe programs, which might help. So does the availability of Linux.

Cheaper options

By GeekWithAKnife • Score: 5, Insightful Thread
I know many smaller businesses that opted for Hyper-V, but if you don’t need the high-end features you might as well run Proxmox. It’ll do your basic HA and replication just fine. VCF can be nice for providing a virtual slice of resources for Development to mismanage as they see fit, BUT it’s still cheaper to use legacy hardware to run your dev/test VMs on Proxmox etc.

Broadcom have shot themselves in the foot with their new pricing ambitions. Why do I need to pay a 300-500% increase to run the same stuff on my own hardware?!

Proxmox doesn’t have the 24/7 support, but for what Broadcom charge you might as well pay a 3rd party to provide the cover. You’ll still be better off.

Not happening much. Proxmox & Nutanix better

By MIPSPro • Score: 4, Interesting Thread
Few want to be stuck with the requirement to keep IBM mainframe tooling and expertise attached to their business unless they are already there (banks mostly). One of the sister companies to ours under the same ownership actually does this kinda stuff for people and it’s still a pretty hard sell. It’s mostly folks who already have mainframes who will even listen to that sales pitch.

Proxmox and (especially) Nutanix have a much better sales pitch. They can support ESXi natively and provide the management layer. When they want to abandon the last VMware server they just V2V migrate the machines from ESXi (works pretty seamlessly in Nutanix AHV and there are some good orchestration bits for Proxmox that do it, too).

Oh come on

By jrnvk • Score: 3 Thread

Look, the VMWare debacle was one thing, but you should not aim to replace any already modern systems with IBM products in 2026.

If not for the obvious technological reasons, just look at how IBM has been run the last few years.

Gartner: Advertising Posing as Research

By nightflameauto • Score: 4, Insightful Thread

This is IBM trying to advertise that they’re still viable, when in reality, nobody is going to move from Linux in VMWare to an IBM mainframe.

Now, it’s not *COMPLETELY* outside the realm of possibility that Gartner is simply too unaware to understand that VMWare is/was not the only platform available for virtualizing Linux. They are, after all, notoriously unidimensional in their thinking on tech, and often seem to present information as if they were forced to wear blinders when doing their research. But it’s really hard to believe they’ve remained *COMPLETELY* ignorant of the other possibilities available that are anything other than, “Spend a fortune on VMWare licensing” or “Spend almost as much on IBM licensing + hardware.”

One would almost think their goal was to promote spending ridiculous amounts of money to accomplish a business goal.

That Raspberry Pi is enterprise grade!

By LondoMollari • Score: 3 Thread

IBM’s mainframes have powered the world’s largest banks, airlines, and retail giants for decades with bulletproof reliability, built-in high availability, seamless data synchronization, and ironclad transactional integrity that keeps multi-billion-dollar operations running flawlessly—exactly the kind of rock-solid fit Gartner flagged for those big fleets of stable Linux VMs that don’t change much. When trouble hits, IBM’s elite engineers are literally on-call 24/7 and will parachute in to fix your crisis in under an hour, no GitHub tickets or crossed fingers required.

Contrast that with the chorus chanting for Raspberry Pi clusters and open-source stacks that come with zero paid support of the class IBM provides, and zero professional engineers on standby. Open source has its place in hobby labs and scrappy startups, sure, but big business, where millions or billions of US dollars count, isn’t running a charity experiment where volunteer heroes might answer a forum post before the next ice age. So keep mocking the “big iron” dinosaurs while the grown-ups at IBM quietly keep the global economy from imploding.

Bring on the hate. IBM won’t be paying attention.

Kids Bypass Age Verification With Fake Moustaches

Posted by BeauHD View on SlashDot Skip
A new Internet Matters survey suggests the UK’s Online Safety Act age checks are easy for many children to bypass. Reported workarounds include fake birthdays, borrowed IDs, video game characters, and even drawing on a fake mustache. The Register reports:
The group surveyed over 1,000 UK children and their parents, and while it did report some positive effects from changes made under the OSA, many children saw age verification as an easy-to-bypass hurdle rather than something that kept them genuinely safe. A full 46 percent of children even said that age checks were easy to bypass, while just 17 percent said that they were difficult to fool. The methods kids use to fool age gates vary, but most are pretty simple: There’s the classic use of a video game character to fool video selfie systems, while in other instances, children reported just entering a fake birthday or using someone else’s ID card when that was required.

The report even cites cases of children drawing a mustache on their faces to fool age detection filters. Seriously. While nearly half of UK kids say it’s easy to bypass online age checks (and another 17 percent say it’s neither hard nor easy), only 32 percent say they’ve actually bypassed them, according to Internet Matters. Like scoring some booze from “cool” parents, keeping age-gated content out of the hands of kids under the OSA is only as effective as parents let it be, and a quarter of them enable their kids’ online delinquency. More specifically, Internet Matters found that a full 17 percent of parents admitted to actively helping their kids evade age checks, while an additional 9 percent simply turned a blind eye to it.

Dupes with fake moustaches fool slashdot editors

By caseih • Score: 3 Thread

Come on, editors. You can do better than that.

Maybe Age Verification is Backwards

By databasecowgirl • Score: 3 Thread
It might be smarter to ban parents from social media so they aren’t parenting while distracted.

Age restrictions turn access into a game

By MpVpRb • Score: 5, Insightful Thread

Kids are good at games

US Government Warns of Severe CopyFail Bug Affecting Major Versions of Linux

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from TechCrunch:
A severe security vulnerability affecting almost every version of the Linux operating system has caught defenders off-guard and scrambling to patch after security researchers publicly released exploit code that allows attackers to take complete control of vulnerable systems. The U.S. government says the bug, dubbed “CopyFail,” is now being exploited in the wild, meaning it’s being actively used in malicious hacking campaigns. […] Given the risk to the federal enterprise network, U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch any affected systems by May 15.

None of my machines has the module loaded.

By John Allsup • Score: 4, Informative Thread
You can check (as pages on this say):

grep -qE '^algif_aead ' /proc/modules && echo "Affected module is loaded" || echo "Affected module is NOT loaded"

And none of my machines has that module loaded, happily.

Re:None of my machines has the module loaded.

By kriston • Score: 5, Informative Thread

You can still be affected if it’s not a module, which is true for many Linux distributions that have it compiled in.

I have tested them and they were vulnerable even though that “grep” command said it was not loaded (because it’s not a module in many distros).
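One way to cover the compiled-in case is to check the kernel build configuration rather than /proc/modules. This is a sketch only: it assumes algif_aead is built by the kernel’s CONFIG_CRYPTO_USER_API_AEAD option, and config file locations vary by distro.

```shell
# Check the kernel build config rather than /proc/modules:
# =y means algif_aead is compiled in (always present),
# =m means it is a loadable module, "unset" means the code is absent.
cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ]; then
    setting=$(grep '^CONFIG_CRYPTO_USER_API_AEAD=' "$cfg" || echo "unset")
elif [ -r /proc/config.gz ]; then
    setting=$(zcat /proc/config.gz | grep '^CONFIG_CRYPTO_USER_API_AEAD=' || echo "unset")
else
    setting="unknown (no readable kernel config)"
fi
echo "CONFIG_CRYPTO_USER_API_AEAD: $setting"
```

Note that even with =m, the module can be auto-loaded on demand, so “not loaded right now” is not the same as “not loadable.”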

Bias: Expect the current regime

By hwstar • Score: 3 Thread

to publicize Linux security breaches more vigorously than iOS or Microsoft security breaches. Closed source OS providers have historically had more vulnerabilities, but the US government tends to look the other way.

Why would they do this?

They want closed source solutions to be adopted over open source solutions.

The future the government wants is to ensure each user of a personal computer can be ID’d and tracked. Age verification is the wedge to force this onto every PC. Open source operating systems get in the way of this.

Distributions with compiled-in module:

By thegarbz • Score: 4, Informative Thread

Don’t be so self-assured. For the following distributions you can’t unload the module, as it is compiled into the kernel and would not show up in /proc/modules either. These distributions cover a FUCKING HUGE market share for Linux:

Distributions with algif_aead compiled in (vulnerable as of early May 2026):
Ubuntu: 20.04 LTS, 22.04 LTS, 24.04 LTS.
RHEL-family: Red Hat Enterprise Linux 10.1 (and earlier), AlmaLinux, Rocky Linux, Oracle Linux, CloudLinux.
Amazon Linux: Amazon Linux 2023.
SUSE: SUSE Linux Enterprise 16 and earlier.
Others: Debian (all active releases), Arch Linux, and Fedora.
Embedded: Many Yocto BSPs, NVIDIA Jetson, and Ubuntu Core.

Is yours among them?

Re:Distributions with compiled-in module:

By 93 Escort Wagon • Score: 4, Interesting Thread

FWIW AlmaLinux didn’t wait for Red Hat - they tested their own fixes and have now released new kernels to address this.

https://almalinux.org/blog/202…

Oscars Bans AI Actors and Writing From Awards

Posted by BeauHD View on SlashDot Skip
The Academy has clarified that only human-performed acting and human-authored writing are eligible for Oscar nominations. The Oscars will not ban AI tools broadly, but says it will judge films based on the degree to which humans remain central to the creative work. The BBC reports:
The Academy of Motion Picture Arts and Sciences […], which controls the US film industry’s most prestigious award, on Friday issued updated rules for what kind of work in movies and documentaries would be considered eligible for an Oscar as the use of artificial intelligence (AI) technology grows. In updated eligibility requirements, the Academy specified that only acting “demonstrably performed by humans” and that writing “must be human-authored” in order to be nominated for an award. The Academy called the requirements a “substantive” change to the rules for the Oscars.

The need to specify awards can only go to acting and writing done by “humans” is new for the academy. […] However, the academy did not issue a ban on AI use in films more broadly. Outside of acting and writing, if a filmmaker used AI tools in their work, such “tools neither help nor harm the chances of achieving a nomination,” the academy wrote. “The Academy and each branch will judge the achievement, taking into account the degree to which a human was at the heart of the creative authorship when choosing which movie to award,” the group added. “If questions arise regarding the aforementioned use of generative artificial intelligence, the Academy reserves the right to request more information about the nature of the use and human authorship.”

Re:AI is the master of DEI. DEI cheats with AI any

By whitroth • Score: 5, Insightful Thread

Got it. You’re in your basement, unable to get a job because you’re too stupid and incompetent, and imagine they’d hire you instead of “girl bosses” - you can’t get along with anyone, and dream that you’re better than any woman.

And, fucking lying POS, FASCISM IS RIGHT WING. The fascists fought the communists - in Spain, in Germany, in Italy. Your bullshit that you are the good guys… yeah, why haven’t you risen up to protect us from fascism (like ICE)? Because you’re a racist misogynist.

Sorry, movies used to suck even worse.

By Somervillain • Score: 4 Thread

Serious question: Do we actually prefer current screen writing to be something worth protecting? It’s really not that dissimilar to much of software, where the entire production process has been so corporatized and dumbed/mellowed down that you might replace any individual contributor with AI without anyone noticing. Or all of them for what I care.

You’re not asking a serious and sincere question. You’re stating your opinion and your agenda. If you think screenwriting is terrible today, you’re forgetting how badly it sucked before. Aliens may be my favorite movie, definitely a great movie, few would disagree, but remember how many shitty movies were released in 1986? Howard the Duck and Cobra were no masterpieces. 2026 is an intellectual utopia compared to 1986. Regarding corporatization? I assume you’re talking about Marvel? Well, Top Gun is a literal ad for the US Navy…massive hit in 1986 as well as 2022. I found it entertaining, but it was a fucking ad. Most children’s programs were toy ads. The Super Mario Brothers movie from 2023 was FAAAR superior to the one from 1993. I am pretty confident the Street Fighter movie coming out this year will be superior to the one from the 90s. Mortal Kombat?…OK, that was a downgrade…because the original was stupid, shitty, silly fun....and the newest one tried to be high quality…a mistake from not understanding your audience. However, it’s fair to say they’re closer to commercials than

Regarding AI. If you think that will make no difference?…no, you don’t understand AI. It’s a pattern matching tool. All movies will look the same, dialog will be awful unless heavily doctored. AI can write a decent short story, but will fall down writing a large piece. There will be TONS of errors and bad and confusing sentences and weird hallucinations. The best case scenario for LLM-based AIs is just averaging a bunch of screenplays....it will be noticeably more uniform and corporate and stale and tame....lacking in originality or creativity.

I think the Oscars committee made the right call. It has always been a celebration of human accomplishment. I don’t think AI accomplishments belong in the same awards criteria.

History repeats itself

By hcs_$reboot • Score: 4, Interesting Thread
In the 19th century, photography was seen as “mechanical” not true art (like paintings).
Synthesized music, CGI… all initially rejected.
But AI is somewhat different in that it directly threatens the income of the entire film industry.
Once AI has advanced further, no one will want these “physical” actors who perform more or less well in films with questionable scripts.

Re:AI is the master of DEI. DEI cheats with AI any

By Gravis Zero • Score: 4, Funny Thread

Hey, don’t lump us stupid and incompetent basement dwellers in with those fascist assholes!

Near term / long term

By Tschaine • Score: 3 Thread

For the near term, you can look at this rule as a statement: “AI actors and scripts suck, so don’t even bother trying.” And they’re probably right. They don’t want to get deluged with crap submissions any more than open-source repo maintainers want to get deluged with vibe-coded garbage pull requests.

But it is entirely possible that AI movies will not always suck. There may (probably will) be a day when people start to really enjoy AI-scripted movies with AI-rendered actors. (Iran’s Lego-world propaganda music videos are kind of amazing. As is the fact that a repressive regime is producing cutting-edge media. But that’s another topic for another time.)

At that point, this rule will just be an artifact of a clique of artisans who want to defend their prestige against a disruptive technology. Like horse-drawn-chariot race officials declaring that motorcar builders are not welcome to enter their races.

Another organizing body will spring up, and it will cater to the desires of producers and consumers who appreciate the new technology.

VS Code Update Added Copilot As Default Co-Author To Git Commits

Posted by BeauHD View on SlashDot Skip
Longtime Slashdot reader UnknowingFool writes:
On April 15, 2026, a Microsoft employee made a change to Visual Studio Code and pushed it within 8 hours without review, notification, or documentation. The change appended “Co-authored-by: Copilot” by default to the end of Git commit messages when Copilot was used in creating the code. However, the implementation was bugged, and the message was added to every commit regardless of whether Copilot was used or disabled. Since the message was automatically appended to commit messages, users were not aware of it, as the UI does not show the addition when making commits. The change has been reverted as of May 3, but not before 1.4 million commits were made. Unfortunately, those messages cannot be cleansed and are permanent.
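For anyone auditing their own repositories, Git can search commit messages for the trailer directly. A minimal sketch using a throwaway repo (the repo and commit text here are illustrative, not from the incident):

```shell
# Create a throwaway repo with one commit carrying the trailer,
# then count how many commits in history contain it.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q \
    --allow-empty -m "Fix login bug" -m "Co-authored-by: Copilot"
git log --grep='^Co-authored-by: Copilot' --format=%H | wc -l
```

Rewriting pushed, shared history is what makes the messages effectively permanent; locally, `git commit --amend` or an interactive rebase could still strip the trailer before pushing.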

I want to be a co-author

By Anonymous Coward • Score: 3, Insightful Thread

Since Copilot was trained using my code, I want to be added as co-author to all code done using Copilot. Thank you.

Re: Isn’t this fraud?

By reanjr • Score: 4, Insightful Thread

It’s even worse. LLM generated code can’t be copyrighted.

Re:Isn’t this fraud?

By drinkypoo • Score: 4, Interesting Thread

I think jail time for corporate employees doing shit like this should be a last resort but at this point I don’t really see any other good options.

Let them go free, but jail literally everyone above them on the org chart.

I actually propose that every executive salary be capped at a percentage of the sum of their direct reports, and that they share responsibility for any act they take.

I’ll say it again

By 93 Escort Wagon • Score: 5, Funny Thread

If there’s one thing that comes to my mind when I think about Microsoft developers, it’s quality software.

‘Notepad++ For Mac’ Release Is Disavowed By the Creator of the Original

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Ars Technica, written by Andrew Cunningham:
As its name implies, the venerable Notepad++ text editor began as a more capable version of the classic Windows Notepad, with features such as line numbering and syntax highlighting. It was created in 2003 by Don Ho, who continues to be its primary author and maintainer, and it has been a Windows-exclusive app throughout its existence (older Notepad++ versions support OSes as old as Windows 95; the current version officially supports everything going back to Windows 7). I’m not a devoted user of the app, but I was aware of its history, which is why I was surprised to see news of a “Notepad++ for Mac” port making the rounds last week, as though it were a port of the original available from the Notepad++ website.

Apparently, this news surprised Ho as well; he claims that the Mac version and its author, Andrey Letov, are “using the Notepad++ trademark (the name) without permission.” “This is misleading, inappropriate, and frankly disrespectful to both the project and its users,” Ho wrote. “It has already fooled people — including tech media — into believing this is an official release. To be crystal clear: Notepad++ has never released a macOS version. Anyone claiming otherwise is simply riding on the Notepad++ name.”
Ho repeatedly asked the developer to stop using the brand and eventually reported the trademark use to Cloudflare, the CDN of the Notepad++ for Mac site. “Every day that website remains active, you are in further violation of the law,” Ho wrote. “I cannot authorize a ‘week or two’ of continued trademark infringement.”
Letov has since begun rebranding the app as “NextPad++,” though the old branding and URL reportedly remained available. The name change is “an homage to NeXT Computer,” notes Ars, “and uses a frog icon rather than the Notepad++ lizard.”

BBedit

By Malc • Score: 5, Interesting Thread

Does the job. Been doing the job on Macs for decades (since 1992). Sometimes called Text Wrangler (it was the free cut-down version), until BBedit got a free version too. Please support Bare Bones by using BBedit.

And there’s always VIM.

BBedit and Beyond Compare are my two must-have utilities on my Macs. Both companies have served Mac users for a long time, great products, great support and none of this bullshit and enshitification like so many recent software companies.

Re:Trademark

By giesen • Score: 5, Informative Thread

Don Ho is correct about his trademark (yes, it’s registered), and Andrey Letov appears to be showing the proper respect to Don by renaming and rebranding the port.

That’s a charitable interpretation of what’s been going on. If you read the GitHub issue, Don Ho had been asking Andrey Letov for days to rebrand, and Andrey kept stalling and deferring; even after he started the rename, he was implying some sort of coordination between the projects or official support where none existed. Don was initially very polite with Andrey, giving him the benefit of the doubt, but it became clear through his tactics that Andrey was trying to ride the Notepad++ trademark to launch his vibe-coded macOS port.

Re:Takes two to tango

By _merlin • Score: 5, Informative Thread

I don’t know what the AC you replied to is referring to, but Don Ho has occasionally used the release notes (opened automatically after installing an update) and the web site to express anti-PRC opinions. I wouldn’t call them “tirades”, but some people apparently get very upset if software developers express opinions.

Re:Takes two to tango

By drinkypoo • Score: 5, Insightful Thread

That’s called spam, whether or not I or anyone else agrees with it.

Spam is messages you have not agreed to receive. If he’s sending them out to you then they’re spam. If he’s posting announcements about them, they’re arguably spam. If he’s including them in other messages then it’s just offtopic content.

Re:Is there a reason for not accepting?

By StormReaver • Score: 4, Informative Thread

My understanding is that he doesn’t object to the porting of the source code, but rather the unauthorized use of his trademark and the misleading association of the port with his work.

How Microplastics Are Likely Helping To Heat Up the Planet

Posted by BeauHD View on SlashDot Skip
A new Nature Climate Change study suggests airborne microplastics — especially darker and colored particles — are likely contributing to atmospheric warming by absorbing more heat than they reflect. Researchers estimate the effect could be roughly one-sixth that of black carbon, though outside experts say the uncertainties remain large and more study is needed before drawing firm policy conclusions. “We can say with confidence that overall they are warming agents,” said Drew Shindell, a Duke University earth science professor and co-author of the study. “To me, that’s the big advance.” The Washington Post reports:
To undertake their study, a group led by researchers at Fudan University in China examined how different colors and sizes of microplastics interact with light across the spectrum, while combining that information with simulations of how particles get dispersed in the air across the planet. “Black, yellow, blue and red [particles] absorb sunlight much more strongly than the white particles,” Yu Liu, a Fudan professor and study co-author, said in a call with reporters. In fact, the study details how black and colored particles showed “absorption levels nearly 75 times higher than pristine, non-pigmented plastics.” The scientists also found that different sizes of particles absorb light at different intensities — and that how they absorb light can change as they age.

The authors estimate that microplastics suspended in the atmosphere could be contributing to global warming at about one-sixth the amount of black carbon, also known as soot, a pollutant generated largely from burning fossil fuels. If the latest estimates are right, Shindell said, microplastics might not be an enormous source of atmospheric warming, compared with massive contributors such as cars and trucks, belching industrial plants or even burping cows. “But not a trivial one, either,” he said.

By his calculation, the effect of one year’s microplastic emissions globally is approximately equivalent to 200 coal-fired power plants running for that year. But that rough estimate does not factor the longer-term repercussions of microplastics decaying and persisting in the environment for decades to come. Whatever the exact impact, the topic deserves further study, the authors say, because current climate modeling does not account for any additional warming that these tiny particles might be causing.

especially darker and colored particles

By rossdee • Score: 5, Funny Thread

Obviously we need to release more white-coloured microplastics

How does this compare

By wakeboarder • Score: 4, Interesting Thread

With sand and dirt?

Likely

By devslash0 • Score: 3 Thread

Ping me back when you prove it. Until then, I’m not interested in “maybes”. Not that I don’t believe you that it’s a plausible theory. I just don’t have time for the barrage of unconfirmed shocking science news every day anymore.

Astronomers May Have Detected an Atmosphere Around a Tiny, Icy World Past Pluto

Posted by BeauHD View on SlashDot Skip
“The Associated Press is reporting on a new study in Nature Astronomy suggesting that a tiny, icy world beyond Pluto harbors a thin, delicate atmosphere that may have been created by volcanic eruptions or a comet strike,” writes longtime Slashdot reader fahrbot-bot. From the report:
Just 300 miles (500 kilometers) or so across, this mini Pluto is thought to be the solar system’s smallest object yet with a clearly detected global atmosphere bound by gravity, said lead researcher Ko Arimatsu of the National Astronomical Observatory of Japan. This so-called minor planet — formally known as (612533) 2002 XV93 — is considered a plutino, circling the sun twice in the time it takes Neptune to complete three solar orbits. At the time of the study, it was more than 3.4 billion miles (5.5 billion kilometers) away, farther than even Pluto, the only other object in the Kuiper Belt with an observed atmosphere. This cosmic iceball’s atmosphere is believed to be 5 million to 10 million times thinner than Earth’s protective atmosphere, according to the study […].

It’s 50 to 100 times thinner than even Pluto’s tenuous atmosphere. The likeliest atmospheric chemicals are methane, nitrogen or carbon monoxide, any of which could reproduce the observed dimming as the object passed before the star, according to Arimatsu. Further observations, especially by NASA’s Webb Space Telescope, could verify the makeup of the atmosphere, according to Arimatsu.
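The two thinness figures quoted above can be cross-checked with rough arithmetic. A minimal sketch, assuming reference surface pressures of ~101 kPa for Earth and roughly 1 Pa for Pluto (these reference values are my assumptions, not from the article):

```python
# Rough consistency check: is "50 to 100 times thinner than Pluto's"
# compatible with "5 million to 10 million times thinner than Earth's"?
EARTH_PRESSURE_PA = 101_325  # Earth's surface pressure (assumed reference)
PLUTO_PRESSURE_PA = 1.0      # Pluto's surface pressure, roughly (assumed)

earth_to_pluto = EARTH_PRESSURE_PA / PLUTO_PRESSURE_PA  # ~100,000

# Scale "50 to 100 times thinner than Pluto's" up to an Earth comparison.
low = earth_to_pluto * 50    # ~5 million
high = earth_to_pluto * 100  # ~10 million
print(f"{low:.1e} to {high:.1e} times thinner than Earth's atmosphere")
```

With those assumed pressures, the two ranges in the report line up: 50–100 times thinner than Pluto’s atmosphere works out to roughly 5–10 million times thinner than Earth’s.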

Non-paywalled source

By Tx • Score: 5, Informative Thread

A preprint of the article is available on arXiv (https://arxiv.org/pdf/2605.02243), for those who don’t have access behind Nature’s paywall.

Re:Similar to that of Pluto, but let’s sensational

By Anonymous Coward • Score: 5, Interesting Thread
It’s interesting to people who understand orbital dynamics

The 2:3 orbital resonance is the primary reason that Pluto and other plutinos (like 2002 XV93) can exist in stable orbits despite crossing Neptune’s path. This specific ratio provides several critical evolutionary and mechanical benefits:

1. Collision Avoidance (“Phase Protection”): Even though many plutinos have highly elliptical orbits that technically cross inside Neptune’s orbit, they never actually collide or even come close to the planet. The 2:3 resonance ensures that whenever a plutino reaches its perihelion (the point closest to the Sun where it crosses Neptune’s path), Neptune is consistently a quarter of an orbit away. This “phase protection” keeps them at a safe minimum distance of billions of kilometres at all times.
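The 2:3 resonance described above also pins down the plutino orbit itself. A minimal sketch, assuming Neptune’s sidereal period of ~164.8 years and semi-major axis of ~30.1 AU (standard values, supplied here as assumptions), and applying Kepler’s third law:

```python
# 2:3 mean-motion resonance: a plutino completes 2 orbits
# while Neptune completes 3, so its period is 3/2 of Neptune's.
NEPTUNE_PERIOD_YR = 164.8   # Neptune's sidereal period (years, assumed)
NEPTUNE_A_AU = 30.1         # Neptune's semi-major axis (AU, assumed)

plutino_period = NEPTUNE_PERIOD_YR * 3 / 2  # ~247 years

# Kepler's third law: a^3 is proportional to T^2,
# so a = a_N * (T / T_N)^(2/3).
plutino_a = NEPTUNE_A_AU * (plutino_period / NEPTUNE_PERIOD_YR) ** (2 / 3)

print(f"Plutino orbital period: {plutino_period:.1f} years")
print(f"Plutino semi-major axis: {plutino_a:.1f} AU")  # ~39 AU, like Pluto
```

The ~39 AU semi-major axis that falls out matches Pluto’s, which is exactly why these objects are called plutinos.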

OpenAI President Discloses His Stake In the Company Is Worth $30 Billion

Posted by BeauHD View on SlashDot Skip
OpenAI president Greg Brockman’s testimony dominated the fifth day of the trial for Elon Musk’s lawsuit against the AI company. Brockman took the witness stand on Monday, disclosing that his stake in OpenAI is worth nearly $30 billion, despite not personally investing money in OpenAI. The judge also declined to admit a pretrial text in which Musk allegedly warned Brockman that he and Altman would become “the most hated men in America.” From a report:
Brockman’s disclosure would put him on the Forbes list of the world’s richest people, with wealth comparable to Melinda French Gates. […] Late Sunday, OpenAI lawyers tried to admit as evidence a text message Musk sent to Brockman two days before the trial began. According to a court filing — which did not include the actual text exchange — Musk sent a message to Brockman to gauge interest in settlement.

When Brockman replied that both sides should drop their respective claims, Musk shot back, according to the filing, “By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be.” Judge Yvonne Gonzalez Rogers, who is overseeing the trial, did not admit the text exchange as evidence.
Brockman acknowledged that he had promised to personally donate $100,000 to OpenAI’s charity but never did. In explaining the delay, Brockman put the onus on Altman: “I asked Sam when I should donate this, and he said he would let me know,” reports Business Insider.

The first witness to testify on Monday was Stuart Russell, an artificial intelligence expert who teaches computer science at the University of California, Berkeley. “The most memorable part of Russell’s testimony was when he talked about how much Musk’s legal team paid him,” notes Business Insider. “He received an eye-popping $5,000 per hour for 40 hours of preparatory work. Expert witnesses in high-profile cases typically make between $500 to $1,000 per hour.”

Recap:
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company’s Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

Now, this might strike some regulars as harsh,

By Mr. Dollar Ton • Score: 5, Insightful Thread

but I would not mind if everyone involved in this case as litigants and their reps dies an unpleasant death.

Won’t solve the basic problem of modern capitalism’s complete subversion of democracy, but it might slow it down a bit.

Re:If the asset tax passes, he’ll owe 1.5B

By dfghjk • Score: 5, Insightful Thread

“The asset tax is dumb. How is he supposed to pay that tax without diluting his ownership stake?”

That’s the goal. The government exists for the benefit of the people, the people suffer when all wealth accumulates at the top. Massive ownership stake is not only NOT a goal of the government, it is the problem to solve.

“When he announces he’s selling shares, the value of OpenAI will drop just by that.”

The very existence of this phenomenon IS the problem.

“So does he pay tax on the new or old valuation?”

Yes.

“I mean, if you had $30 billion and someone pisses you off beyond anything by taking what you put your heart and soul into, you’d do every legal means to make sure whoever done that to you pays.”

Slave owners were pissed too. And heart and soul? Fuck off, he got that by exploiting people.

Re:Sorry, elmo

By dfghjk • Score: 4, Interesting Thread

You cannot explain anything to Musk, he believes that anything he says becomes the truth. Same as trump. The power of positive thinking combined with uncontrolled greed and criminal sociopathy. Envision owning all of humanity and it will become true. Tony Robbins coaching Hitler.

You dare criticize my cave submarine? Well you’re a pedophile. How do people not see this? It has been plain as day for a decade, yet people have only wised up in the last year.

Most hated man?

By thegarbz • Score: 4, Informative Thread

Simply having money due to AI doesn’t make someone hated. On the other hand, destroying the government, aligning with extremists, and acting like a persecuted crybaby while being the richest man alive certainly does tick a lot of boxes for people hating you the most.

Lots of consternation here…

By kwelch007 • Score: 3 Thread

Lots of folks here upset about Musk vs Altman or whoever, rich vs poor, all that.

Is nobody bothered that the CEO of a non-profit, who has invested none of their own money, has equity in said non-profit that is supposedly valued at $30B? Seems that speaks to what Musk is claiming as much as anything. I personally don’t care if Musk gets anything out of this lawsuit, but his point about keeping non-profits from being gamed seems valid.

White House Considers Vetting AI Models Before They Are Released

Posted by BeauHD View on SlashDot Skip
The Trump administration is reportedly considering an executive order to create a working group that could review advanced AI models before public release. The shift follows concerns over Anthropic’s powerful Mythos model and its cyber capabilities, with officials weighing whether the government should get early access to frontier models without necessarily blocking their release. The New York Times reports:
In meetings last week, White House officials told executives from Anthropic, Google and OpenAI about some of those plans, people briefed on the conversations said. The working group is likely to consider a number of oversight approaches, officials said. But a review process could be similar to one being developed in Britain, which has assigned several government bodies to ensure that A.I. models meet certain safety standards, people in the tech industry and the administration said.

The discussions signal a stark reversal in the Trump administration’s approach to A.I. Since returning to office last year, Mr. Trump has been a major booster of the technology, which he has said is vital to winning the geopolitical contest against China. Among other moves, he swiftly rolled back a Biden administration regulatory process that asked A.I. developers to perform safety evaluations and report on A.I. models with potential military applications. “We’re going to make this industry absolutely the top, because right now it’s a beautiful baby that’s born,” Mr. Trump said of A.I. at an event in July. “We have to grow that baby and let that baby thrive. We can’t stop it. We can’t stop it with politics. We can’t stop it with foolish rules and even stupid rules.” Mr. Trump left room for some rules, but he added that “they have to be more brilliant than even the technology itself.”

The White House wants to avoid any political repercussions if a devastating A.I.-enabled cyberattack were to occur, people in the tech industry and the administration said. The administration is also evaluating whether new A.I. models could yield cyber-capabilities that could be useful to the Pentagon and U.S. intelligence agencies, they said. To get ahead of models like Mythos, some officials are pushing for a review system that would give the government first access to A.I. models, but that would not block their release, people briefed on the talks said.

Re:A rigorous test plan, no doubt…

By Powercntrl • Score: 5, Interesting Thread

It’d be hilarious if they pulled a Volkswagen and had the AI recognize when it is being vetted, so it provides answers the current administration wants to hear, and then goes super woke after the model is actually deployed.

xAI’s Grok wouldn’t need to cheat, obviously. That thing is biased so far to the right it makes Fox News almost look sane.

Set the precedent

By backslashdot • Score: 5, Interesting Thread

When the Democrats come in, they’ll vet the AI models properly.

Re:On what authority?

By dfghjk • Score: 5, Interesting Thread

Apparently the same authority that allows ICE to murder citizens in the streets.

Also, what does it mean to “release a model”? Is ChatGPT a model? No, it is not. If making a model available becomes a problem, then keep the model private and only release tools that use it.

And how is a model dangerous? It’s the tool that uses it that might be. How does the government know what any cloud service does behind the scenes?

It’s all complete bullshit from the most incompetent administration ever.

“small” government

By zeiche • Score: 5, Interesting Thread

is this the small government that the “conservatives” keep banging on about?

what, exactly, is small about white-house review of products offered to the public?

please, MAGAts, clue me in.

Re:A rigorous test plan, no doubt…

By Mr. Dollar Ton • Score: 5, Insightful Thread

This isn’t about “testing” AI models, this is just the testing of another grift model.

OpenAI, Google, and Microsoft Back Bill To Fund ‘AI Literacy’ In Schools

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from 404 Media:
A new, bipartisan bill introduced (PDF) by Democratic Senator of California Adam Schiff and endorsed by the biggest AI developers in the world — including OpenAI, Google, and Microsoft — would change the K-12 curriculum to shoehorn in “AI literacy,” something that young people and teachers alike already hate in schools. The Literacy in Future Technologies Artificial Intelligence, or LIFT AI Act, would empower the new director of the National Science Foundation (NSF) to make grant awards “on a merit-reviewed, competitive basis to institutions of higher education or nonprofit organizations (or a consortium thereof) to support research activities to develop educational curricula, instructional material, teacher professional development, and evaluation methods for AI literacy at the K-12 level,” the bill says.

It defines AI literacy as using AI; specifically, “having the age-appropriate knowledge and ability to use artificial intelligence effectively, to critically interpret outputs, to solve problems in an AI-enabled world, and to mitigate potential risks.” The bill is endorsed by the American Federation of Teachers, Google, OpenAI, Information Technology Industry Council, Software & Information Industry Association, Microsoft, and HP Inc. […] The grant would support “AI literacy evaluation tools and resources for educators assessing proficiency in AI literacy,” according to the bill. It would also fund “professional development courses and experiences in AI literacy,” and the development of “hands-on learning tools to assist in developing and improving AI literacy.” Most importantly for real-world implications, it would fund changing the existing curriculum “to incorporate AI literacy where appropriate, including responsible use of AI in learning.”

Use our products!

By locater16 • Score: 5, Insightful Thread
“Indoctrinate, indoctrinate, indoctrinate!” - Tech Company CEOs

Start with regular literacy, eh?

By MIPSPro • Score: 5, Insightful Thread
21% of US adults are illiterate. 56% read below a 6th grade level. Two-thirds of 4th/8th graders are not proficient in reading, with recent declines post-2019/2022. We don’t need smartphones and AI in schools at all. What they need is to go back to chalkboards, physical textbooks, and homework. The only thing that needs tweaking is to add AI detection and resistance to their assignments (i.e., do more in class, in person). Schools that do this get consistently better results than the ones that focus on technology.

Just great

By fahrbot-bot • Score: 5, Insightful Thread

… Fund ‘AI Literacy’ In Schools.

“Learn to Code” becomes “Learn to Prompt” /s

Fuck the techbros

By sinkskinkshrieks • Score: 4, Insightful Thread
First, we need history, English, math, and critical thinking skills literacy before AI claptrap.

Re:A Positive Slant

By Junta • Score: 4, Interesting Thread

In the 90s, the school systems were kind of left to fend for themselves. The vast majority of the computers in my schools were systems the area companies were scrapping but donated on the way out. A decent part of my programming class was spent trying to salvage 20 out of 24 systems that a business donated that wouldn’t boot. They spent what budget they could on a handful of computers capable of running Encarta for the library.

In the 2000s, things started shifting a bit, in a college course we were handed out ‘donated’ copies of Visual Studio, but the teacher said that’s for us but wasn’t going to be used for class at all.

Since 2010, things have gotten a bit worrisome as a lot of the big tech have started getting awfully opinionated and wanting to ‘help’ kids learn to code. Education is all well and good, but when the big corporate interests get actively involved and prescriptive, things drift toward indoctrination more than education.

At least with ‘learn to code’, a skill that needed significant development was theoretically being served, though there was a lot to be worried about there; with the LLM scenario, it’s pretty much just indoctrination. Whether an LLM works or does not work is not something that takes a significant amount of time to sort out.

As an example, my kid was asked to write a brief thing on what excitingly awesome thing they are looking forward to using AI to do as part of an “AI challenge” at school sponsored by a local tech company. Not to take a critical assessment of things, of evaluating the nuance of benefits and drawbacks, nothing on helping them understand how to best use it, just to blatantly write a puff piece about how awesome AI is/would be for something. Basically soliciting marketing fodder and awarding three kids a couple hundred bucks. It was going to be a grade and so they had to do it and take it seriously..

The Pixel 11 Could Be the Next Victim of the RAM Shortage

Posted by BeauHD View on SlashDot
Google’s Pixel 11 lineup could see RAM cuts or lower starting configurations because of the global memory shortage, with leaks suggesting the base model may drop from 12GB to 8GB while Pro models could add 12GB versions below the current 16GB tier. The Verge reports:
There will be 16GB configurations available for each, but adding a lower-spec model could mean the 16GB version is getting a price hike. However, the silver lining is that the specs from MysticLeaks also include camera upgrades and brighter displays for the Pro models. The RAM shortage is pushing other phone makers, including Samsung, to raise prices, too.

Hey, Google! Here’s an alternate idea

By fahrbot-bot • Score: 3 Thread

I know it would be against your business model, but how about reinstating support for your older phones so people can keep the ones they have longer? My Pixel 5a still works great. And while it doesn’t get OS/Security updates anymore, I’m planning to keep it as long as it’s working and supported on my network (Ting/T-Mobile) and the Play Store - like I did with my previous phone, a Kyocera HydroVibe (2015 to 2021).

On the bright side

By turb • Score: 4, Insightful Thread

Pressure to reduce RAM in a device due to cost could drive engineering to make phones be more efficient and utilize LESS RAM. Linux/Android does NOT need to be a pig. It’s a pig because device vendors/OS engineers/app makers get lazy.

Sadly, running AI natively on your phone could as a result be less than useful, but that’s a good thing, isn’t it?

Re:Hey, Google! Here’s an alternate idea

By rta • Score: 5, Insightful Thread

no no… please buy a new pixel 9/10/11.

it takes slightly better pictures and it only weighs 50% more than your current one. oh did we mention it gets a full day of battery… yeah, apparently that’s notable again as it was 15 years ago.

the in-screen fingerprint sensor kinda sucks, but don’t worry, you’ll eventually forget the rear sensor was flawless for years.

Re:How much RAM?

By 93 Escort Wagon • Score: 4, Funny Thread

F**k it, we’re doing 5 layers!

Re: On the bright side

By sodul • Score: 5, Insightful Thread

My first computer had 64kB, second one 512kB, and that was what we now call ‘unified memory’, shared between the cpu and the video encoding chip (no gpu back then). These machines did not have virtual memory either to overflow to disk. I had to be very mindful of memory usage when writing code or we would simply crash by running out of memory, no forgiveness.

I’ve worked in Silicon Valley for my entire career and I was surprised at how little attention most ‘software engineers’ paid to ram and cpu optimization. You would expect that from scripters, folks that write Bash or Python code, but I saw that a lot with Java developers as well. It was especially bad when the devs could ask the OPS team for a machine with more RAM as their first instinct rather than consider any optimizations, after all it would come from some other team budget, and that same OPS team would get blamed for going over budget, not the Devs.

So yeah in a way, a good RAM shortage for a while might help bring back some discipline. Unfortunately the vast majority of AI training is done on code that does not care about optimizing RAM consumption.

It is not just RAM consumption, but storage as well. My son got a second hand Nintendo Switch OLED yesterday, the prior owner had 2 games installed leaving 6GB out of the 64GB free. One game was downloaded, the other one still required the cartridge and used 26GB of storage. That’s rather insane.

Meanwhile you can get a generic retro gaming device for $50 with thousands of classic games on a 64 GB SDCard. I’m pretty sure a lot of that space could be better optimized, but there is little incentive for that these days.