Alterslash

the unofficial Slashdot digest
 

Contents

  1. Google Announces Gemma 4 Open AI Models, Switches To Apache 2.0 License
  2. Artemis II Astronauts Have ‘Two Microsoft Outlooks’ and Neither Work
  3. Nvidia Rolls Out Its Fix For PC Gaming’s ‘Compiling Shaders’ Wait Times
  4. Steam On Linux Use Skyrocketed Above 5% In March
  5. Group Pushing Age Verification Requirements For AI Sneakily Backed By OpenAI
  6. Rapid Snow Melt-Off In American West Stuns Scientists
  7. SpaceX Files To Go Public
  8. NASA Launches Artemis II Astronauts Around the Moon
  9. UFC-Que Choisir Takes Ubisoft To French Court Over the Crew Shutdown
  10. AI Can Clone Open-Source Software In Minutes
  11. Cloudflare Announces EmDash As Open-Source ‘Spiritual Successor’ To WordPress
  12. Sweden Swaps Screens For Books In the Classroom
  13. OnlyOffice Suspends Nextcloud Partnership For Forking Its Project Without Approval
  14. Anthropic Issues Copyright Takedown Requests To Remove 8,000+ Copies of Claude Code Source Code
  15. CEO of America’s Largest Public Hospital System Says He’s Ready To Replace Radiologists With AI

Alterslash picks up to five of the best comments from each of the day’s Slashdot stories and presents them on a single page for easy reading.

Google Announces Gemma 4 Open AI Models, Switches To Apache 2.0 License

Posted by BeauHD View on SlashDot
An anonymous reader quotes a report from Ars Technica:
Google’s Gemini AI models have improved by leaps and bounds over the past year, but you can only use Gemini on Google’s terms. The company’s Gemma open-weight models have provided more freedom, but Gemma 3, which launched over a year ago, is getting a bit long in the tooth. Starting today, developers can work with Gemma 4, which comes in four sizes optimized for local usage. Google has also acknowledged developer frustrations with AI licensing, so it’s dumping the custom Gemma license.

Like past versions of Google’s open-weight models, Gemma 4 is designed to be usable on local machines. That can mean plenty of things, of course. The two large Gemma variants, 26B Mixture of Experts and 31B Dense, are designed to run unquantized in bfloat16 format on a single 80GB Nvidia H100 GPU. Granted, that’s a $20,000 AI accelerator, but it’s still local hardware. If quantized to run at lower precision, these big models will fit on consumer GPUs. Google also claims it has focused on reducing latency to really take advantage of Gemma’s local processing. The 26B Mixture of Experts model activates only 3.8 billion of its 26 billion parameters during inference, giving it much higher tokens-per-second throughput than similarly sized models. Meanwhile, 31B Dense is more about quality than speed, but Google expects developers to fine-tune it for specific uses.
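Those hardware claims are easy to sanity-check with back-of-the-envelope arithmetic. This is a rough sketch, not an official figure: it counts weights only and ignores the KV cache and activations, which need their own headroom.

```python
# Rough bf16 memory math for the two large Gemma 4 variants described above.
BYTES_PER_PARAM_BF16 = 2  # bfloat16 stores one parameter in two bytes
GIB = 1024**3

def weights_gib(num_params: float) -> float:
    """Approximate unquantized bf16 weight footprint in GiB (weights only)."""
    return num_params * BYTES_PER_PARAM_BF16 / GIB

moe_total = 26e9    # 26B Mixture of Experts: total parameters
moe_active = 3.8e9  # parameters activated per token during inference
dense_total = 31e9  # 31B Dense

print(f"26B MoE weights:   {weights_gib(moe_total):.1f} GiB")    # ~48 GiB
print(f"31B Dense weights: {weights_gib(dense_total):.1f} GiB")  # ~58 GiB
print(f"MoE active fraction per token: {moe_active / moe_total:.1%}")  # ~14.6%
```

Both weight footprints come in well under an H100’s 80 GB, consistent with the single-GPU claim, and the roughly 15% active fraction is where the MoE model’s tokens-per-second advantage comes from.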

The other two Gemma 4 models, Effective 2B (E2B) and Effective 4B (E4B), are aimed at mobile devices. These options were designed to maintain low memory usage during inference, running at an effective 2 billion or 4 billion parameters. Google says the Pixel team worked closely with Qualcomm and MediaTek to optimize these models for devices like smartphones, Raspberry Pi, and Jetson Nano. Not only do they use less memory and battery than Gemma 3, but Google also touts “near-zero latency” this time around.
The Apache 2.0 license is far more permissive about commercial use, “granting you complete control over your data, infrastructure, and models,” says Google.

Clement Delangue, co-founder and CEO of Hugging Face, called it “a huge milestone” that will help developers use Gemma for more projects and expand what Google calls the “Gemmaverse.”

Artemis II Astronauts Have ‘Two Microsoft Outlooks’ and Neither Work

Posted by BeauHD View on SlashDot
Even on NASA’s Artemis II mission around the moon, astronauts apparently still have to deal with broken Microsoft Outlook. One of the crew members, Reid Wiseman, jokingly reported that he had “two Microsoft Outlooks” and neither worked. 404 Media reports:
On April 1, four astronauts from the U.S. and Canada embarked on a 10-day flight to loop around the moon. Around 2 a.m. ET, in a moment spotted by VGBees podcast host Niki Grayson on the NASA livestream, mission control acknowledges an issue with a process control system and offers to remote in — yes, like how your office IT guy would pause his CoD campaign to log into Okta for you because you used the wrong password too many times.

One of the astronauts, Reid Wiseman, says that’s chill, but while they’re in there: “I also see that I have two Microsoft Outlooks, and neither one of those are working.” Astronauts are trained for decades in some of the most physically and mentally grueling environments of any career. They’re some of the smartest people on the planet, and they have to be, before we strap them to 3.2 million pounds of jet fuel and make them do complex experiments and high-stakes decisions for days on end. And yet, once they get up there, fucking Outlook is borked.

Just another day at the office

By LondoMollari • Score: 3 Thread

We’re trying to make sure that the astronauts feel comfortable in space and not out of sorts so we made it like just another day at the office.

Orion computers

By Valgrus Thunderaxe • Score: 3 Thread
Maybe Pine would be something more compatible with that hardware.

Nvidia Rolls Out Its Fix For PC Gaming’s ‘Compiling Shaders’ Wait Times

Posted by BeauHD View on SlashDot
Nvidia has begun rolling out a beta feature that automatically compiles game shaders while a PC is idle. It won’t eliminate shader compilation the first time a game runs, but Ars Technica reports it could help reduce those repeated wait times. From the report:
Nvidia’s new Auto Shader Compilation system promises to “reduc[e] the frequency of game runtime compilation after driver updates” for users running Nvidia’s GeForce Game Ready Driver 595.97 WHQL or later. When the feature is active and your machine is idle, the app will automatically start rebuilding DirectX shaders for your games so they’re all set to roll the next time they launch.

While the feature defaults to being turned off when the Nvidia App is first downloaded, users can activate it by going to the Graphics Tab > Global Settings > Shader Cache. There, they can set aside disk space for precompiled shaders and decide how many system resources the compilation process should use. App users can also manually force shader recompilation through the app rather than waiting for the machine to go idle.

Unfortunately, Nvidia warns that users will still have to generate shaders in-game after downloading a title for the first time. The Auto Shader Compiler system only generates the new shaders needed after subsequent driver updates following that first run of a new title.

um ok, but…

By drinkypoo • Score: 3 Thread

Steam does this already, and most of my games are delivered via Steam, so most of my games have this already.

I think Steam does run these processes at a slightly higher nice level, but I don’t think they change the ioprio, so it can still have a negative impact on systems without fast storage. (I have mirrored NVMe SSDs, so this is only a problem to any degree when it’s done for infrequently played games, which are stored on HDD. That’s a 3-way mirror too, though.)

BitTorrent

By reanjr • Score: 3 Thread

They need to implement BitTorrent or something. There’s no reason everyone has to compile this shit themselves.

Steam On Linux Use Skyrocketed Above 5% In March

Posted by BeauHD View on SlashDot
Valve’s March 2026 Steam Survey shows Linux gaming usage jumping to a record 5.33% share — more than double macOS’s 2.35%. Phoronix reports:
Steam on Linux had never before been above 5%, and this is easily an all-time high for Linux gaming marketshare, especially in absolute numbers. It was a massive 3.1-point spike in March, while macOS also jumped surprisingly by 1.19 points to 2.35%. The Steam Survey numbers show Windows losing 4.28 points, down to 92.33%.

Part of the jump, at least, appears to be explained by Valve once again correcting the Steam China numbers. Month over month, they report a 31.85-point drop in Simplified Chinese language use and a 16.82-point increase in English use, to 39.09%. Other languages also showed gains amid the massive decline in Simplified Chinese use.

The latest numbers for March show around a quarter of Linux gamers running SteamOS. Due in part to the Steam Deck APU being a custom AMD product and the popularity of AMD hardware on Linux for its open-source nature, AMD CPU use by Steam on Linux gamers remains just under 70%.
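As a quick consistency check on the figures quoted above (a sketch that assumes Windows, Linux, and macOS together make up essentially the whole survey), the month-over-month deltas can be backed out to the implied February shares:

```python
# Reported March shares and month-over-month changes, in percentage points.
reported = {
    "Linux":   (5.33, +3.10),
    "macOS":   (2.35, +1.19),
    "Windows": (92.33, -4.28),
}

# Back out the implied February shares from (share_after - change).
previous = {os: after - change for os, (after, change) in reported.items()}
for os, share in previous.items():
    print(f"{os}: {share:.2f}% in February")

# If the figures are internally consistent, the implied February shares
# should also sum to ~100% (small drift is just rounding).
print(f"February total: {sum(previous.values()):.2f}%")
```

The implied February shares — 2.23% Linux, 1.16% macOS, 96.61% Windows — sum to 100.00%, so the reported absolutes and deltas line up.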

Skyrocketed and 5%?

By nightflameauto • Score: 5, Insightful Thread

Maybe escalated. I don’t think any trajectory that ends under 6% can really be called skyrocketing.

I’m not saying it’s a bad thing, and it’s a positive trend overall, certainly. But I’d hardly call it skyrocketing unless the climb continues to build. Though, to be quite frank, with Microsoft doing everything they can to make Windows less appealing as time goes on, I don’t see this trend stopping anytime soon. I just think “skyrocketed” is a little premature here.

Works pretty well.

By Qbertino • Score: 5, Insightful Thread

I’m part of that 5%+. The thing about gaming on Linux is that I have no time or mood for fussing around with compatibility issues. Steam’s Proton layer handles quite a few games without trouble. I used to be a GOG-only person, but since their requirements for Linux versions are very specific and cause trouble on newer versions of Linux, I finally installed Steam on Linux a few weeks back. Sure, it’s quite a performance hog, and it keeps you in the dark about whether it’s taking so long to launch because it’s running some background update thingie (you have to use top to see what’s going on), but other than that, the games listed as playable on protondb launch with a simple click. Which is good.

Guess I’m a Steam customer now. After, what, 25 years? I remember when Half-Life 2 came out and they tied it to Steam to push the first big digital game distribution platform. Guess that was/is a huge success. Provide good value, get my money. I don’t mind.

Re:Works pretty well.

By wiggles • Score: 4, Informative Thread

There are a lot of community fixes for that kind of thing, but for most people who ain’t got time for fiddling, gaming-oriented distributions like Nobara ship with baked-in patches for things like SMP scheduling issues, Wine bugs, driver gotchas, etc.

I’ve been running Nobara on my PC exclusively for the past couple of years. It’s been great - like a fixed version of Fedora that just works. I hear great things about Bazzite and CachyOS too.

The most fiddling I really have to do to get my games to run is to check protondb and look for game specific launch options (there’s a place to put that in Steam), and select a compatible version of Proton (a simple drop-down in the Steam console). Most stuff just works out of the box though.
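For anyone curious what those launch options look like, here are a few common, game-agnostic examples. These particular wrappers and variables are conventions of Proton, MangoHud, and GameMode rather than anything tied to a specific game; Steam substitutes `%command%` with the game’s actual launch command.

```shell
# Typed into Steam's "Launch Options" box (right-click a game -> Properties).

# Write a Proton debug log to ~/steam-<appid>.log for troubleshooting:
PROTON_LOG=1 %command%

# Overlay an FPS/frametime HUD (requires the MangoHud package):
mangohud %command%

# Ask GameMode to apply CPU/IO priority tweaks while the game runs:
gamemoderun %command%
```

Options can be combined, e.g. `PROTON_LOG=1 mangohud %command%`.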

Fall is Coming

By Spinlock_1977 • Score: 3 Thread

Microsoft extended my Windows 10 license for a year - it expires this fall. They did this for a large mess of people. Those Linux numbers are going to go higher in the fall when I, and I’m sure many others, take the leap to safety and leave Windows behind. For me, it’s goodbye to 36 years of Windows.

Group Pushing Age Verification Requirements For AI Sneakily Backed By OpenAI

Posted by BeauHD View on SlashDot
An anonymous reader quotes a report from Gizmodo:
OpenAI hasn’t been shy about spending money lobbying for favorable laws and regulations. But when it comes to its involvement with child safety advocacy groups, the company has apparently decided it’s best to stay in the shadows — even if it means hiding from the people actually pushing for policy changes. According to a report from the San Francisco Standard, a number of people involved in the California-based Parents and Kids Safe AI Coalition were blindsided to learn their efforts were secretly being funded by OpenAI. Per the Standard, the Parents and Kids Safe AI Coalition was a group formed to push the Parents and Kids Safe AI Act, a piece of California legislation proposed earlier this year that would require AI firms to implement age verification and additional safeguards for users under the age of 18. That bill was backed by OpenAI in partnership with Common Sense Media, which proposed the legislation as a compromise after the two groups had pushed dueling ballot initiatives last year.

But when the coalition started to reach out to child safety groups and other advocacy organizations to try to get them to lend support to the bill, OpenAI was apparently conveniently left off the messaging. The AI giant was also left out of the marketing on the coalition’s website, according to the Standard. That reportedly led to a number of groups and individuals lending their support to the Parents and Kids Safe AI Coalition without realizing that they were aligning themselves with OpenAI. As it turns out, OpenAI isn’t just one of the members of the coalition; it is the group’s biggest funder. In fact, the Standard characterized the Parents and Kids Safe AI Coalition as being “entirely funded” by OpenAI. While it’s not clear exactly how much the company has funneled to this particular group, a Wall Street Journal report from January said OpenAI pledged $10 million to push the Parents and Kids Safe AI Act.
Gizmodo notes that OpenAI’s backing of the Parents and Kids Safe AI Act “could be self-serving for CEO Sam Altman,” who just so happens to head a company called World that provides age verification services.

Two changes

By Hentes • Score: 5, Funny Thread

I could accept Worldcoin based authentication with two minor changes: instead of an iris scan, it should use the more modern, IgNobel winning rectal print technology, and instead of a creepy orb Sam Altman would have to personally sketch the prints with a broken pencil.

Liability

By Dan East • Score: 5, Interesting Thread

It absolves them of liability. If there is a law they have to validate age (even if it is ineffective and easily worked around by minors), and they are doing whatever silly thing they need to do to be compliant, then they have shielded themselves from liability.

By being involved in the process they can steer things to something easy and affordable to implement on their end. Make it work the way they want to (scan an ID, have AI look at their face, DNA test, measure their height - whatever method they’re specifically wanting to do is why they are funding this and pushing for it).

Re:Liability

By DarkOx • Score: 5, Interesting Thread

All of that is true, but I think it is far more about barriers to entry. For all the talk about the need for these massive datacenters, a lot of, maybe most of, the use cases for the frontier models that are actually worth $$, like code assistants, are rapidly falling into the range where what OpenAI is selling just isn’t needed. Qwen is not as good as GPT, but it is close; a Mac Studio maybe can’t pump out tokens quite as fast as an API hosted on OpenAI’s infrastructure, but it is knocking on the door (for single-human-consumer applications).

Is there going to be a market for hosted models? Of course; not many are going to want to on-prem the LLMs running the chatbots on their websites. But a lot of companies will want to on-prem their RAG tools and anything handling data they care about protecting.

At one point, Microsoft people were saying workstations were over, that developers, engineers (not in the software sense), and architects (not in the software sense) were going to use Azure-hosted VDIs… Yeah, have not seen that. Yes, I know it’s possible, and someone here will tell us how wonderful their thin-client virtual desktop experience is, but the lion’s share of these professionals that I encounter, anyway, are still buying workstations (or near-workstation pro-line Macs). Point is, people are going to want to run their GenAI workloads locally, and they very nearly can. The free and “open” models combined with affordable, performant hardware are going to eat OpenAI’s lunch in a huge slice of the market.

Unless they could somehow make it impossible to distribute and bundle these things for compliance reasons… Then they’d have a nice little moat that would be difficult to cross.

Re:human vs slop

By alexgieg • Score: 5, Insightful Thread

The main pusher has been Meta. They want age verification everywhere because it (mostly) allows distinguishing real humans from bots, including AI bots. From what I read (no idea whether this is accurate), they want that because of ads. Bots don’t generally buy products, so showing them ads reduces click-through metrics, and thus ad revenue.

AI companies I don’t know. For Altman, World might be a driving factor, but I imagine a more important factor is regulatory capture. The more roadblocks to competition billion- and trillion-dollar incumbent companies manage to add to their markets, the less competition from new entrants unable to afford compliance.

Re:Could Be

By greytree • Score: 5, Informative Thread
The man who took the open source, not for profit company OpenAI and made it a closed source and for profit company could be a dirty, money-grubbing cunt.

Could be.

Rapid Snow Melt-Off In American West Stuns Scientists

Posted by BeauHD View on SlashDot
Scientists say extreme March heat caused an unusually rapid collapse of snowpack across the American West that’s leaving major basins at record or near-record lows. “This year is on a whole other level,” said Dr Russ Schumacher, a Colorado State University climatologist. “Seeing this year so far below any of the other years we have data for is very concerning.” The Guardian reports:
[…] The issue is extremely widespread. Data from a branch of the US Department of Agriculture (USDA), which logs averages based on levels between 1991 and 2020, shows states across the south-west and intermountain west with eye-popping lows. The Great Basin had only 16% of average on Monday and the lower Colorado region, which includes most of Arizona and parts of Nevada, was at 10%. The Rio Grande, which covers parts of New Mexico, Texas and Colorado, was at 8%. “This year has the potential of being way worse than any of the years we have analogues for in the past,” Schumacher said.

Even with near-normal precipitation across most of the west, every major river basin across the region was grappling with snow drought when March began, according to federal analysts. Roughly 91% of stations reported below-median snow water equivalent, according to the last federal snow drought update compiled on March 8. Water managers and climate experts had been hopeful for a March miracle — a strong cold storm that could set the region on the right track. Instead, a blistering heatwave unlike any recorded for this time of year baked the region and spurred a rapid melt-off. “March is often a big month for snowstorms,” Schumacher said. “Instead of getting snow we would normally expect we got this unprecedented, way-off-the-scale warmth.”

More than 1,500 monthly high temperature records were broken in March and hundreds more tied. The event was “likely among the most statistically anomalous extreme heat events ever observed in the American south-west,” climate scientist Daniel Swain said in an analysis posted this week. “Beyond the conspicuous ‘weirdness’ of it all,” Swain added, “the most consequential impact of our record-shattering March heat will likely be the decimation of the water year 2025-26 snowpack across nearly all of the American west.” Calling the toll left by the heat “nothing short of shocking,” Swain noted that California was tied for its worst mountain snowpack value on record. While the highest elevations are still coated in white, “lower slopes are now completely bare nearly statewide.”

Re:Indeed

By chthon • Score: 5, Interesting Thread

Anyone who read “Chaos” by James Gleick, and had some mathematical knowledge, knew things were going to happen this way. I probably read it between 1987 and 1990, and have had a copy somewhere since 2000.

We had rainfall in California, too warm for snow

By sarren1901 • Score: 5, Informative Thread

We had some snow for a minute up in the mountains, but this was a very mild winter, so any rainfall we actually got ended up melting the snow we had. We don’t do a lot of water catchment, which makes this lack of snowpack even more concerning.

This will be a rough year, but weather forecasters have mentioned that we could get a nice El Niño effect, which for southern California means colder and wetter conditions. El Niño for most of the world means hotter and drier.

From my vantage point, I feel like significantly more homes should have water catchment systems and water storage. Since we know temperatures are climbing, it means snow will be less reliable but that doesn’t necessarily also mean no rain. We can’t keep letting the rain runoff into the ocean anymore.

Water catchment systems aren’t all that expensive, either. For simple math, a 1,000 sq ft surface can produce about 620 gallons of water from 1 inch of rainfall. If you just used that water for toilet flushing and outdoor watering, you could save a lot of the potable water you receive from the city for bathing, dishes, and drinking. We just don’t seem to have our priorities lined up yet.

The biggest problem we have is for-profit utilities that don’t actually want us to become more sustainable, because that costs them money. People also tend to balk at up-front capital costs despite the clear and obvious long-term savings and the added resilience.

P.S. You can even buy a reverse osmosis system for your home, and ALL the water you collect can be used around the entire home. Even in the desert, where you may only get 5 inches of rain a year, that’s still 3,100 gallons of collected water. It’s not a trivial amount.
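The commenter’s arithmetic checks out. A quick sketch of the standard catchment formula: one inch of rain on one square foot is 1/12 of a cubic foot, and a cubic foot holds about 7.48 gallons.

```python
GALLONS_PER_CUBIC_FOOT = 7.48052

def catchment_gallons(area_sqft: float, rain_inches: float) -> float:
    """Gallons collected from `rain_inches` of rain on `area_sqft` of surface."""
    return area_sqft * (rain_inches / 12) * GALLONS_PER_CUBIC_FOOT

print(catchment_gallons(1000, 1))  # ~623 gal, the "620 gallons" above
print(catchment_gallons(1000, 5))  # ~3,117 gal, the desert "3,100" figure
```

In practice, a runoff coefficient of roughly 0.8–0.9 (first-flush diverters, evaporation, overflow) knocks those ideal numbers down a bit.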

Things are illusorily fabulous

By Sloppy • Score: 5, Interesting Thread

The heat wave made March feel like late spring. Things that normally bloom in May bloomed in March. And yesterday I got my first MRGCD irrigation of the year, flooding my back yard and letting the shade trees greedily suck up the water. We’re spending a lot more time outside on the patio compared to this time of year in previous years.

If I were stupid, I would be out of my mind with pleasure. Things feel wonderful right now.

But that water I just got… that is The snowpack, probably. Instead of getting it all throughout summer, this first irrigation is probably the last, or second-to-last.

This summer is going to SUCK.

Bad for us, but not “our fault”

By argStyopa • Score: 5, Informative Thread

https://medium.com/predict/thi…

“The real reason we will never be able to “fix” the drought is because the American West is not in a drought right now.
And you can’t fix something that isn’t broken. …
The West’s rapid aridification isn’t being caused by a “once-in-a-century” weather event like the flooding in Kentucky or the nearly constant hurricanes that pummel the Southeast each year.
It’s not even the direct result of climate change (although that’s definitely accelerating the process and making the effects more intense). Western states are running out of water because they are located in a desert. …
What we’re dealing with in the West is not a drought because the current lack of rainfall isn’t “abnormal” for a desert. Dry is the default setting. And you can’t call it a “drought” because you wish deserts were wetter.
The problem isn’t the so-called drought — it’s the city planners, developers, and suburbanites who built cities in a desert with no plan to provide water beyond wishful thinking and praying for rain.
The fact that we got weirdly lucky with unseasonably wet weather for a few decades has helped us ignore the reality that the American West simply doesn’t have the water to support 65 million people — and half of the country’s agriculture — at least not at anything near our current water usage levels.
And there’s really nothing we can do about it.” …
According to researcher Lynn Ingram, a professor in the Department of Earth and Planetary Science at UC Berkeley, “The 20th century was abnormally wet and rainy.” Ingram goes on to claim, “The past 150 years have been wetter than the past 2,000 years.” (cf “The California drought is helping return the weather pattern to normal” https://archive.ph/0m3BI)

In other words, what we’re experiencing now isn’t a drought. It’s a reestablishment of the norm.

Re:They were expecting what exactly?

By ArchieBunker • Score: 5, Insightful Thread

“Clean coal” anyone? Rolling back environmental regulations?

SpaceX Files To Go Public

Posted by BeauHD View on SlashDot
SpaceX has confidentially filed for a U.S. IPO, reportedly targeting a valuation above $1.75 trillion. Reuters reports:
SpaceX puts more rockets in space than any other company and promises a chance to invest in humanity’s return to the moon and attempt to colonize Mars. The company aspires to put artificial intelligence data centers in space, while running a lucrative satellite communications system that opens up much of the earth to the internet and is increasingly used in war. […]

A public listing at a potential valuation of more than $1.75 trillion comes after SpaceX merged with Musk’s artificial intelligence startup xAI in a deal that valued the rocket company at $1 trillion and the developer of the Grok chatbot at $250 billion. SpaceX is hosting an analyst day on April 21, encouraging research analysts to attend in person, […]. The company is also offering analysts an optional visit to xAI’s “Macrohard” data center site in Memphis, Tennessee, on April 23, and plans to hold a virtual session on May 4 to discuss financial models with banks’ research analysts, the source said.

Re:Go watch Patrick Boyle’s video on YouTube

By quenda • Score: 5, Interesting Thread

SpaceX doesn’t have enough launch customers to justify the valuation. There’s not enough potential satellite Internet customers to make up the difference.

Maybe. But I remember saying “Amazon’s valuation is insane. There are not enough books sold to justify that even if Amazon had a 100% market share.”
My mistake was to think of Amazon as a bookseller. (For younger readers: they were.)
Tesla’s valuation is clearly not based on projections of car sales.

And I’m hearing echoes of Thomas Watson, president of IBM, 1943.

Re:Thoughts and prayers

By jarkus4 • Score: 5, Informative Thread

Which likely means, among others, most “casual” long-term investors. I remember reading that Musk was negotiating faster inclusion of SpaceX into indexes like the S&P 500 (without the usual wait time). Once the company is included in the index, managers of all the funds tied to that index will need to buy the stock whether they like it or not, likely raising the price. A lot of retirement and other long-term savings are tied into funds like these, so A LOT of normal people will get the exposure.

Amazon doesn’t just sell books

By rsilvergun • Score: 5, Insightful Thread
They had lots of other markets they could move into.

SpaceX doesn’t have any other markets to move into. Musk is trying to push AI but he’s getting his ass kicked and he’s already lost all his good engineers.

You’re basically comparing a company that got in on the ground floor and was able to use massive amounts of anti-competitive tactics to buy up their competitors and expand rapidly to a company that has maxed out its markets and doesn’t have anything new to spread into except one sector where they have already lost.

It’s a scam. It exists to loot 401ks.

Re: Thoughts and prayers

By AmiMoJo • Score: 4, Insightful Thread

Optimus won’t do any of that stuff because Musk consistently overestimates how good AI is at vision. He can’t even get his cars to stop crashing into stationary objects. A decade ago he promised they would be fully self-driving within months, no driver oversight required, and he’s still assuming that the huge breakthrough that makes vision actually work is just around the corner and he will be the one to make it.

The method that Musk is attempting has been tried many times before, always ending in failure. You can’t just teach an AI to recognize more and more objects until it becomes competent. You can’t just teach it more and more facts until it understands the world. The kind of intelligence needed to do seemingly simple tasks like folding clothes is much more general than that.

Mind your index funds folks

By hwstar • Score: 4, Interesting Thread

SpaceX is pushing to be included in the Nasdaq 100 in short order to spread its financial risks and to tap into people holding QQQ and SPY (among others). The lobbying going on by SpaceX to make this happen is intense.

For the first few months, as long as SpaceX doesn’t crater due to problems with Starship or its money-losing xAI, things may be OK. However, around 6 months out, people in SpaceX and xAI are going to want to cash in on their gains. This could force the price of SpaceX down.

Estimates given are that SpaceX could take up 3.5% of the value of the Nasdaq 100 stocks in QQQ. Nvidia is also a significant portion of QQQ value (6%?).

Not giving any financial advice here, but it might be prudent to look at how much exposure you have and maybe look at rediversifying out of QQQ.

NASA Launches Artemis II Astronauts Around the Moon

Posted by BeauHD View on SlashDot
NASA’s Artemis II mission has launched four astronauts around the moon and back, marking humanity’s first crewed lunar voyage in 53 years and the first test flight of NASA’s Orion capsule and Space Launch System (SLS) with people on board. Five minutes into the flight, Commander Reid Wiseman saw the team’s target: “We have a beautiful moonrise, we’re headed right at it,” he said from the capsule. The Associated Press reports:
Artemis II set sail from the same Florida launch site that sent Apollo’s explorers to the moon so long ago. The handful still alive cheered this next generation’s grand adventure as the Space Launch System rocket thundered into the early evening sky, a nearly full moon beckoning some 248,000 miles (400,000 kilometers) away.

Artemis II commander Reid Wiseman led the charge into space with “Let’s go to the moon!” accompanied by pilot Victor Glover, Christina Koch and Canada’s Jeremy Hansen. It was the most diverse lunar crew ever with the first woman, person of color and non-U.S. citizen riding in NASA’s new Orion capsule.

Carrying three Americans and one Canadian, the 32-story rocket rose from NASA’s Kennedy Space Center where tens of thousands gathered to witness the dawn of this new era. Crowds also jammed the surrounding roads and beaches, reminiscent of the Apollo moonshots in the 1960s and ’70s. It is NASA’s biggest step yet toward establishing a permanent lunar presence.
Visit NASA’s Artemis II Launch Day blog for the latest updates.

Developing…

Re:Five years old

By Locke2005 • Score: 5, Funny Thread
The Apollo astronauts ate all the cheese, there was no reason to return.

Re:Five years old

By caseih • Score: 5, Interesting Thread

Back in 2019, on the 50th anniversary of Apollo 11, someone put up a fantastic website to play back the mission in real time, complete with actual radio and mission control comms and telemetry data: https://apolloinrealtime.org/. Such an amazing historical data trove. I spent several days listening in real time as the flight unfolded from launch to moon landing to splashdown. Even though I knew this was just playing back recordings from 50 years ago, and knew the outcome, it was a neat experience, and it filled me with wonder and excitement at what was being accomplished. I remember going outside and looking up at the moon and thinking about people being on it, as someone in 1969 would have done.

Fast forward now to Artemis II and I have such mixed feelings about it, and the space program in general. Anyway I wish them a safe and uneventful journey.

Re:Seems pointlessly unsafe

By Mspangler • Score: 5, Insightful Thread

Speaking as a former submariner, boredom is good.

Re:Not diversity hires

By Tyler Durden • Score: 5, Insightful Thread

Indeed. And yet Stephen Miller is still gonna have a tremendous fit. Probably more so.

Re:Not diversity hires

By AmiMoJo • Score: 5, Insightful Thread

I’m sure Miller thinks that any qualifications they have were just the result of DEI and not hard work.

You can’t reason with people like that. It’s like religion, they just invent another story to explain why anything contradictory to their belief is actually confirmation of it.

UFC-Que Choisir Takes Ubisoft To French Court Over the Crew Shutdown

Posted by BeauHD View on SlashDot Skip
Longtime Slashdot reader Elektroschock writes:
When Ubisoft pulled the plug on The Crew’s servers without warning, players were left with a worthless game they’d already paid for. Now, consumer watchdog UFC-Que Choisir is fighting back, demanding gamers’ right to play regardless of publisher whims. Supported by the “Stop Killing Games” movement, this landmark case challenges unfair terms before the Creteil Judicial Court (Val-de-Marne near Paris), and aims to protect players from disappearing games.
The lawsuit that UFC-Que Choisir filed against Ubisoft on Tuesday alleges that the video game publisher “misled consumers about the permanence of their purchase and imposed abusive contractual clauses stripping players of ownership rights,” reports Reuters.

User Licenses..

By kellin • Score: 3 Thread

Don’t they say it’s just a license and not outright ownership? Wonder how this will go down.

Re:The REAL enemy here.

By Anaerin • Score: 4, Informative Thread

The Crew was a game developed and released over ten years ago.

Is it the ten years you’ve got a problem with? Okay, how about:

“The Crew” is just the game that Stop Killing Games has focused on, as it had an extensive global release, a large player base, and a still-active community. There are plenty of other examples

AI Can Clone Open-Source Software In Minutes

Posted by BeauHD View on SlashDot Skip
ZipNada writes:
Two software researchers recently demonstrated how modern AI tools can reproduce entire open-source projects, creating proprietary versions that appear both functional and legally distinct. The partly-satirical demonstration shows how quickly artificial intelligence can blur long-standing boundaries between coding innovation, copyright law, and the open-source principles that underpin much of the modern internet.

In their presentation, Dylan Ayrey, founder of Truffle Security, and Mike Nolan, a software architect with the UN Development Program, introduced a tool they call malus.sh. For a small fee, the service can “recreate any open-source project,” generating what its website describes as “legally distinct code with corporate-friendly licensing. No attribution. No copyleft. No problems.” It’s a test case in how intellectual property law — still rooted in 19th-century precedent — collides with 21st-century automation. Since the US Supreme Court’s Baker v. Selden ruling, copyright has been understood to guard expression, not ideas.

That boundary gave rise to clean-room design, a method by which engineers reverse-engineer systems without accessing the original source code. Phoenix Technologies famously used the technique to build its version of the PC BIOS during the 1980s. Ayrey and Nolan’s experiment shows how AI can perform a clean-room process in minutes rather than months. But faster doesn’t necessarily mean fair. Traditional clean-room efforts required human teams to document and replicate functionality — a process that demanded both legal oversight and significant labor. By contrast, an AI-mediated “clean room” can be invoked through a few prompts, raising questions about whether such replication still counts as fair use or independent creation.

Never have seen OG Source Code is a pre-requisite

By williamyf • Score: 5, Insightful Thread

for clean room implementations.

If the AI model was trained using the OG software project that is being replicated, they are screwed.

That should be very easy to see: in the discovery phase, just ask for a list of all the software that was used to train the AI model. It’s a yes/no answer. If the AI saw the OG software, then there was no clean room; the room was dirty, very, very dirty.

Re:really?

By Sloppy • Score: 5, Interesting Thread

If a computer program ingests code (whether GPL or not) and then outputs some code, the big question is whether or not the resulting code is a derived work.

If it’s not a derived work, then the license of the original code is irrelevant, and it doesn’t matter if it’s GPLed, fully proprietary, or somewhere in between. The license has no say in the matter, because nobody ever needs to agree to the license; whatever they’re doing is legal under copyright law so they already had all the permission they needed, without ever needing the additional rights granted by a license.

If it is a derived work, then that’s copyright infringement unless the person who does it has permission. And the only way to get permission (i.e. cause copyright infringement to have not happened) is to agree to the license. So yes, the output would have to be GPLed.

But I don’t think we really know whether or not robots reading code and then writing code from what they “learned,” are creating derived works. Ask again in a few years, after a few court cases. This is hard. Rational people can disagree and come up with pretty good arguments no matter what side they’re on. We’ll see what the courts decide.

I think the most interesting case for determining it won’t involve a GPLed input. It’ll be if Anthropic sues this project, since they will have contributed arguments to both sides. They’ll have to argue “it is a derived work” in court, but to all their customers, they have and will continue to preach “it’s not a derived work.”

Re:Can AI clone lawyers & judges?

By homerbrew • Score: 5, Insightful Thread
Of course, I doubt I would call it a clean room design, especially if the AI was trained with that open source project. Once it has seen that original code in its training, it’s quite difficult to convince me that it didn’t rely on that code in any way.

Re:Clean room?

By Waffle Iron • Score: 5, Interesting Thread

Even if you use an AI to extract an extremely condensed specification out of the source code, it’s hardly clean room if the LLM was pre-trained on the source code anyway.

I once worked at a place that had a clean room process to create code compatible with a proprietary product. Anybody who had ever seen the original code or even loaded the original binary into a debugger was not allowed to write any code at all for the cloned product. The clone writers generally worked only off of the specifications and user documentation.

There were a handful of people who were allowed to debug the original to resolve a few questions about low-level compatibility. The only way they were allowed to communicate with the software writers was through written questions and answers that left a clear paper trail, and the answers had to be as terse as possible (usually just yes or no). Everyone knew that these memos were highly likely to be used as evidence in legal proceedings.

I highly doubt that any AI tech bros have ever been this rigorous, and I’d bet that most of these AIs have been trained on the exact same source code that they are cloning.

Re: Can AI clone lawyers & judges?

By AvitarX • Score: 5, Informative Thread

Generally if the implementors have seen the original then it’s not clean room.

Cloudflare Announces EmDash As Open-Source ‘Spiritual Successor’ To WordPress

Posted by BeauHD View on SlashDot Skip
In classic Cloudflare fashion, the CDN provider used April Fool’s Day to unveil an actual, “not a joke” product. Today, the company announced EmDash — an open-source “spiritual successor” to WordPress that aims to solve plugin security. Phoronix reports:
With the help of AI coding agents, Cloudflare engineers have been rebuilding the WordPress open-source project “from the ground up.” EmDash is written entirely in TypeScript and is a server-less design. Making plug-ins more secure than the WordPress architecture, EmDash plug-ins are sandboxed and run in their own isolate. EmDash builds upon the Astro web framework. EmDash doesn’t rely on any WordPress code but is designed to be compatible with WordPress functionality. EmDash is open-source now under the MIT license.
The EmDash code is available on GitHub.

My inner editor is incensed.

By nightflameauto • Score: 3 Thread

Making plug-ins more secure than the WordPress architecture, EmDash plug-ins are sandboxed and run in their own isolate.

While technically you can use “isolate” as a noun, the usage here is in such an awkward state that it would make a line editor do the line editor equivalent of flipping a table and throwing a bottle of expensive scotch at the writer’s head while screaming, “Fix your shit!”

Also, let’s not burden EmDash with the historical baggage of WordPress just because people are looking for an alternative. I mean, it sucks in its own special way, but it’s not *THAT* terrible. Yet. Move enough people to it and I’m sure it can get there, but no reason to start its race with all the baggage of WordPress hanging on its neck.

It went from zero to “beta preview” in 10 hours?

By Danborg • Score: 3 Thread

My hot take: This is Matt Kane (a legit Astro core team member) using Claude Code to generate an entire CMS in a day as an April Fools flex / proof of concept / “look what AI-assisted development can do now.”

Stupid stupid product name

By thegarbz • Score: 3 Thread

Stupid! Why name your product after a grammatical symbol? Is the intention here to make it impossible to find any information about it? Any search for this project will automatically be corrected to “em dash”. For those not familiar with English grammar, it’s the long dash used to conjoin sentences: you could have used it instead of this colon. Not to be confused with an “en dash” or a hyphen.

But I said stupid stupid, not just stupid. Well here’s the second one:

Stupid! It clashes with another open source project with the same name: https://www.emdash.sh/ Who came first? Who cares, they both have stupid names already.

Sweden Swaps Screens For Books In the Classroom

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Ars Technica:
In 2023, the Swedish government announced that the country’s schools would be going back to basics, emphasizing skills such as reading and writing, particularly in early grades. After mostly being sidelined, physical books are now being reintroduced into classrooms, and students are learning to write the old-fashioned way: by hand, with a pencil or pen, on sheets of paper. The Swedish government also plans to make schools cellphone-free throughout the country.

Educational authorities have been investing heavily. Last year alone, the education ministry allocated $83 million to purchase textbooks and teachers’ guides. In a country with about 11 million people, the aim is for every student to have a physical textbook for each subject. The government also put $54 million towards the purchase of fiction and non-fiction books for students.

These moves represent a dramatic pivot from previous decades, during which Sweden — and many other nations — moved away from physical books in favor of tablets and digital resources in an effort to prepare students for life in an online world. Perhaps unsurprisingly, the Nordic country’s efforts have sparked a debate on the role of digital technology in education, one that extends well beyond the country’s borders. US parents in districts that have adopted digital technology to a great extent may be wondering if educators will reverse course, too.
As for why Sweden is pivoting away from digital devices, researcher Linda Falth said the move was driven by several factors, including concerns over whether the digitization of classrooms had been evidence-based. “There was also a broader cultural reassessment,” Falth said. “Sweden had positioned itself as a frontrunner in digital education, but over time concerns emerged about screen time, distraction, reduced deep reading, and the erosion of foundational skills such as sustained attention and handwriting.”
Falth noted that proponents of reform believe that “basic skills — especially reading, writing, and numeracy — must be firmly established first, and that physical textbooks are often better suited for that purpose.”

Further reading: Digital Platforms Correlate With Cognitive Decline in Young Users

Too many distractions

By fropenn • Score: 5, Insightful Thread
The biggest problem with screen-based classrooms is that the devices themselves are not designed for that purpose. There are too many games, chats, reminders, notices, updates, etc. that make each one a fun and engaging toy but terrible for maintaining concentration and focus on specific content. It is also more difficult for the teacher to quickly look across a group of 30 and see who is doing the assigned task when the screens are all pointed away from the teacher (toward the student).

There are some features that are missing in physical books, such as the ability to long-tap on a word and get a definition, but those sorts of benefits do not outweigh the downsides.

Infinite scrolling …

By PPH • Score: 5, Funny Thread

… is broken on these book things. You reach the end of the text, then there’s this stupid number. And then what?

Re:AI can help here

By SumDog • Score: 4, Insightful Thread
Chromebooks had zero to do with education. They were 100% about Google forcing every high school student to have a Google account as early as possible. I bet less than 1% of parents said, “No, we’re not doing that. Here’s a Ubuntu laptop instead. Never sign in to Google, cause I said so.”

Everyone wants digital tracking of every human: governments (via ID programs), Google, Meta (both are pretty much governments at this point), Anthropic, OpenAI … they all want to know exactly who everyone is. They all want a Technocracy.

Re:Too many distractions

By rsilvergun • Score: 4, Interesting Thread
I don’t think distractions are the problem per se. It’s more that having a physical object whose pages you can rapidly flip through, combined with the ability to take handwritten notes and the effort involved, makes information stick a lot more. It’s a quirk of human cognition.

Screens are still infinitely better as reference material because of the ability to do rapid searches. They aren’t as good when you are trying to learn. Human beings need a lot of physical motion in order to create the mental pathways that go with learning.

Honestly we’ve known this for a long time. It’s why educational software isn’t as big as it was back when most of the people reading this were kids. As much fun as Odell Lake and Oregon Trail can be, they don’t really teach you anything except maybe how to work a computer. Which, to be fair, is a skill.

Re:Paper, iPad, Kindle Peperwhite - all of the abo

By jenningsthecat • Score: 4, Interesting Thread

For really important technical references I still prefer paper books. For the most important of these I also get the digital version on iPad, just in case I need the reference when on the road. But in a regular work office, home office, or the recliner in the lazy room just keeping up to date … I prefer paper books to iPad. Readable charts and graphics, and better highlighting and notes in the margins.

I don’t use technical references much these days, but when I do I like to have both digital and paper, especially for data books and app notes. The digital version is easier to search, but the paper version allows for using my fingers to hold two or three sections open simultaneously. Paper is the best for random access.

And that’s why when I’m reading fiction or scholarly / scientific stuff, I only use the electronic version if have no other choice. For those kinds of reading, I will frequently flip back and forth by dozens or hundreds of pages to confirm my memory, or to find what a character said. In my experience, trying to do that on a device utterly sucks.

Reading a book and reading a screen are, for me, very different experiences. If I had to choose, I’d drop e-books in favour of paper and never look back. Except in the pages of my dead-tree books, of course…

OnlyOffice Suspends Nextcloud Partnership For Forking Its Project Without Approval

Posted by BeauHD View on SlashDot Skip
darwinmac writes:
OnlyOffice has suspended its partnership with Nextcloud after the latter forked its editors into a new project called Euro-Office, according to a report from Neowin. The move comes just days after Nextcloud and partners like IONOS announced the fork as part of a broader push for European digital sovereignty. In a statement, the company accused the project of violating its licensing terms and international intellectual property law, claiming that Euro-Office uses its technology without proper compliance. OnlyOffice also pointed to missing attribution requirements and branding obligations tied to its AGPL-based licensing model.

As a result, its 8-year-old partnership, which allowed Nextcloud users to edit and collaborate on office documents right inside their own instance, has been suspended. OnlyOffice also accused Nextcloud of not behaving in a manner expected of a partner, alleging attempts to poach its employees and influence customers against the company. Nextcloud said it forked the OnlyOffice repository instead of collaborating with the company because the project is notoriously difficult to contribute to. It also pointed out that OnlyOffice is a Russian company with Russian employees who leave code comments in Russian. In addition to that, some users may feel uncomfortable using software that could be linked to the Russian government.

OnlyOffice

By fahrbot-bot • Score: 5, Funny Thread

For people who like OnlyFans and/or The Office, you’re going to be disappointed. :-)

Re:hmm

By nashv • Score: 5, Informative Thread

There isn’t one. But there IS a need to be able to edit a single document collaboratively with multiple people, and have decent reliability in changes being preserved and getting updated asynchronously.

At the moment, only Microsoft Office and Google Docs allow that. The browser is just a side-effect/perk of using web technologies to facilitate the above, and the fact that Google does everything in the browser as far as possible.

Re:hmm

By higuita • Score: 5, Informative Thread

I notice that many people still point to openoffice… they should not! :)

OpenOffice is mostly abandoned. The latest Apache OpenOffice is v4.1.16, from November 2025, but 4.1 was released in 2014! Over a decade and you only got minor bug fixes. There are almost no developers or changes in OpenOffice; everything moved to LibreOffice! Oracle killed OpenOffice by being Oracle and, when it was already dead, dumped it on the Apache Foundation, which could do little. The brand is still in the minds of many people, but everyone should really move to LibreOffice already.

check this timeline: https://pt.wikipedia.org/wiki/…

So people still on OpenOffice should migrate to LibreOffice, which I suspect will solve many of the compatibility problems they have in OpenOffice and get them a lot more new features and performance. OnlyOffice is also good, and let’s see how Euro-Office goes.

Re:hmm

By thegarbz • Score: 4, Informative Thread

At the moment, only Microsoft Office and Google Docs allow that.

False. Both of the products mentioned in TFS have this capability, as do forks of the usual darling: LibreOffice Online and Collabora.

Re:hmm

By codemachine • Score: 4, Interesting Thread

Just recently there was an editorial from a tech author who prefers OpenOffice precisely because it isn’t changing. It is stable and works, and the UI won’t get redone.

That is great if you want a desktop client that just works. Not as great for the EuroOffice folks who want it in a web browser.

LibreOffice only just recently restarted their online version, though they only provide the software and not a hosting mechanism. Perhaps that software could’ve been a base for EuroOffice, but it isn’t in production state yet. OnlyOffice is quite a bit ahead there.

I think one of the reasons LibreOffice hadn’t been working on their online version before is that Collabora is a major contributor to LibreOffice and they already have a product that does what LibreOffice online will do.

Anthropic Issues Copyright Takedown Requests To Remove 8,000+ Copies of Claude Code Source Code

Posted by BeauHD View on SlashDot Skip
Anthropic is using copyright takedown notices to try to contain an accidental leak of the underlying instructions for its Claude Code AI agent. According to the Wall Street Journal, “Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions … that developers had shared on programming platform GitHub.” From the report:
Programmers combing through the source code so far have marveled on social media at some of Anthropic’s tricks for getting its Claude AI models to operate as Claude Code. One feature asks the models to go back periodically through tasks and consolidate their memories — a process it calls dreaming. Another appears to instruct Claude Code in some cases to go “undercover” and not reveal that it is an AI when publishing code to platforms like GitHub. Others found tags in the code that appeared pointed at future product releases. The code even included a Tamagotchi-style pet called “Buddy” that users could interact with.

After Anthropic requested that GitHub remove copies of its proprietary code, another programmer used other AI tools to rewrite the Claude Code functionality in other programming languages. Writing on GitHub, the programmer said the effort was aimed at keeping the information available without risking a takedown. That new version has itself become popular on the programming platform.

They should ask the MPAA....

By Sebby • Score: 5, Insightful Thread

… how well taking down DeCSS worked out.

Re:hohoho

By SumDog • Score: 5, Interesting Thread
The article is paywalled and every other article I found was obviously LLM generated shit and didn’t link to this new implementation. It took me a bit, but I found at least one of the Rust implementations of Claude’s CLI:

https://github.com/Outcomefocu…

I want to see Anthropic choke on this so bad.

Courts still haven’t really ruled on AI generated code in any big countries yet, as far as I can tell. Courts could view AI code the same as AI generated images: non-copyrightable. Generated images can still be subject to trademark if you try to commercialize them, but code not so much. If code ever gets ruled as non-copyrightable, any generated code is open game if it gets leaked. Courts could also rule it is subject to copyright of the original training data holders.

Both of these outcomes would be equally devastating to the entire industry in entirely different ways. I’m kinda ready to see it all burn.

Re:Stupid

By nomadic • Score: 5, Insightful Thread

This is actually a smart move if they envision ever trying to go after other companies for using their code. “If it wasn’t for public use, why didn’t you even try to get the distributor to take it down?”

Oh the Irony ..

By Mirnotoriety • Score: 5, Insightful Thread
Oh the irony: a company whose business is built on other people’s works is suing for copyright infringement.

Re:April Foos!

By msauve • Score: 5, Funny Thread
OMG!!! PONIES!!1!

CEO of America’s Largest Public Hospital System Says He’s Ready To Replace Radiologists With AI

Posted by BeauHD View on SlashDot
Mitchell H. Katz, MD, president and CEO of NYC Health + Hospitals, said hospitals could already replace many radiologists with AI for some imaging tasks — if regulators allowed it. He argued the technology presents an opportunity to simultaneously cut costs and expand access. Radiology Business reports:
Katz — who has led the 11-hospital organization since 2018 — said he sees great potential for AI to increase access to breast cancer screening. Hospitals could potentially produce “major savings” by letting the technology handle first reads, with radiologists then double-checking any abnormal screenings. Fellow panelist David Lubarsky, MD, MBA, president and CEO of the Westchester Medical Center Health Network, said his system is already seeing great success in deploying such technology. The AI Westchester uses misses very few breast cancers and is “actually better than human beings,” he told the audience. “For women who aren’t considered high risk, if the test comes back negative, it’s wrong only about 3 times out of 10,000,” Lubarsky said.

Katz asked fellow hospital CEOs if there is any reason why they shouldn’t be pushing for changes to New York state regulations, allowing AI to read images “without a radiologist,” Crain’s reported. In this scenario, rads could then provide second opinions, if AI flags any images as abnormal. Sandra Scott, MD, CEO of the One Brooklyn Health, a small hospital facing tight margins, agreed with this line of thinking, according to Crain’s. “I mean, I’m in charge of a safety-net institution. It would be a game-changer,” Scott said about AI being used to replace rads.

Re:Radiologists

By geekmux • Score: 5, Insightful Thread

Shareholders are crying.

Really?

If we’re looking for an actual downside here, fire all the radiologists and put CEOs in their place to be personally liable for ALL diagnostic readings until AI gets it perfect enough to be defended 100% in every court case.

Perhaps then we’ll see how much of a loophole “AI” is with regards to dismissing a Recession.

False Positives Vs False Negatives

By gurps_npc • Score: 5, Insightful Thread

There are two distinctly different types of errors when it comes to these kind of tests:

False Positives: This is where the test in question falsely says “You have Cancer!” when in fact you do not have it.

False Negatives: This is where the test in question falsely says “You are Healthy” when in fact you have cancer.

False Positives cost money and time, but it is fairly easy to double check them as they should be uncommon.

False Negatives cost human lives and are almost impossible to double check, as most people should test negative for cancer.

For an AI test, you would rather have false positives. If the AI saves you money by not requiring humans to look everything over, then spending money and time to double check the positives is a fair trade. If it costs too much to double check, then do not use the AI.

False Negatives should be a no no. If the AI has more false negatives than human radiologists do, then do not use the AI test. No one cares how much money you are saving if people are dying.
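The false-positive/false-negative trade-off above can be sketched numerically. A minimal, hypothetical Python sketch follows; all counts and rates are illustrative assumptions, not clinical data:

```python
# Hypothetical sketch: why false negatives dominate the screening trade-off.
# All numbers below are invented for illustration, not clinical figures.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives from parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    return tp, fp, fn, tn

# A screening population where disease is rare (1%): 10,000 patients,
# the first 100 of whom actually have the disease.
y_true = [True] * 100 + [False] * 9900

# Model A: aggressive -- flags every sick patient plus 5% of healthy ones.
pred_a = [True] * 100 + [True] * 495 + [False] * 9405
# Model B: conservative -- misses 10 sick patients but raises no false alarms.
pred_b = [True] * 90 + [False] * 10 + [False] * 9900

for name, pred in (("A", pred_a), ("B", pred_b)):
    tp, fp, fn, tn = confusion_counts(y_true, pred)
    print(f"Model {name}: FP={fp} (wasted follow-ups), FN={fn} (missed cancers)")
# prints: Model A: FP=495 (wasted follow-ups), FN=0 (missed cancers)
#         Model B: FP=0 (wasted follow-ups), FN=10 (missed cancers)
```

Model A's 495 false positives cost follow-up time and money; Model B's 10 false negatives are missed cancers, which is the error the comment argues should disqualify a screening model.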

Note, with regards to jobs, this will likely be relatively flat. There are not that many humans doing this job - they take the results from radiologist exams from all over the country and send them to just a few companies. Those companies find the few people that do it best and hire them. I bet we are talking about less than a hundred people in the US, especially as the best of the best will be kept to double check the results.

All radiologists do is analyze digital images

By Locke2005 • Score: 5, Informative Thread
For years now, radiology has been a poor career choice; it only makes sense to send those digital images to the place with the cheapest doctors. It turns out one thing AI is really much better at than humans is analyzing digital images, so yes, radiology careers will soon be extinct. All the job growth is in health care, but it’s all in the jobs that require you to be in the same room as the patient. Administrators and back office staff are all getting laid off. The people who clean the rooms, body fluids and all? Hospitals can’t get enough of them. (Five of my relatives all work at OHSU. Four are in housekeeping, one is a pharmacy technician. They have pharmacy robots now…)

Hilarious timing

By Tschaine • Score: 5, Interesting Thread

Just yesterday I stumbled on this substack post about a research paper whose authors found that AI scored well on x-ray evaluations even when the AI took the test WITHOUT ACCESS to the x-ray images.

https://drjo.substack.com/p/wh…

The moral of this story is that properly evaluating AI performance in classification tasks requires very very carefully designed tests, because neural nets are very very good at picking up correlations between the desired outputs and utterly unintentional signals in the inputs.
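The shortcut-learning failure mode described above can be reproduced with a toy experiment. A hypothetical Python sketch; the data is entirely synthetic and the "scanner id" shortcut is an invented example of an unintentional signal:

```python
# Hypothetical sketch of shortcut learning: a "classifier" can score well on a
# badly designed test without ever looking at the image. All data is synthetic.
import random

random.seed(0)

# Each "study" is (scanner_id, has_finding). Suppose the portable scanner
# (id 1) was mostly used on sicker patients -- an unintentional correlation
# baked into how the test set was collected.
def make_study():
    sick = random.random() < 0.5
    on_portable = random.random() < (0.9 if sick else 0.1)
    return (1 if on_portable else 0), sick

data = [make_study() for _ in range(10_000)]

# "Classifier" that ignores the image entirely and keys on the scanner id.
predictions = [scanner == 1 for scanner, _ in data]
accuracy = sum(p == sick for p, (_, sick) in zip(predictions, data)) / len(data)
print(f"accuracy without any image: {accuracy:.1%}")  # typically around 90%
```

A neural net trained on such a set can latch onto the equivalent of the scanner id (burned-in markers, compression artifacts, padding) and post impressive benchmark numbers that collapse in deployment, which is why the evaluation design matters as much as the model.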

I’ve worked in medical imaging

By Arrogant-Bastard • Score: 5, Informative Thread
Twice, in fact — once in an academic research lab and once at a company that designed and built medical imaging equipment.

In both cases we worked on image classification using digital image processing and statistical pattern recognition. (In one of the two cases we also used syntactic pattern recognition and machine learning.) It’s very, very, very hard to make this accurate enough for clinical use even if you pour effort and time and money into it. There’s no way this technology should be deployed without humans backing it up.

As to the human mistakes: everyone can cite a case where a professional radiologist committed a false positive or false negative error. But did you stop to consider why they made a mistake? Were they 13 hours into a 14-hour shift, their third one in a row — because the hospital CEO felt that money should go into his pocket instead of into hiring another radiologist to share the load? Was it an imaging anomaly (they happen) that was ambiguous? Was it because the study that was done wasn’t the best choice? (I.e., imaging modality or location) There are all kinds of ways for this to go wrong that will result in blame being assigned to the radiologist, and only some of those assignments are fair.

AI isn’t a magic fix for this. And I certainly wouldn’t even try to use any of the general-purpose models — as Zathras would say: “This is wrong tool.” If I were to do this again today, I would return to the approach we used before with modest success, I’d take advantage of some of the improved algorithms that have come along, and obviously I’d use bigger/faster hardware, because that opens up approaches that were computationally infeasible. But I wouldn’t even consider removing humans: these are, or can be, life-and-death decisions, and a human being needs to make them.