Alterslash

the unofficial Slashdot digest
 

Contents

  1. Color-Changing Organogel Stretches 46 Times Its Size and Self-Heals
  2. China Is Sending Its World-Beating Auto Industry Into a Tailspin
  3. DeepSeek Writes Less-Secure Code For Groups China Disfavors
  4. After Child’s Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout
  5. GNOME 49 ‘Brescia’ Desktop Environment Released
  6. Chimps Drinking a Lager a Day in Ripe Fruit, Study Finds
  7. Sony Quietly Downgrades PS5 Digital Edition Storage To 825GB at Same Price
  8. Congress Asks Valve, Discord, and Twitch To Testify On ‘Radicalization’
  9. Flying Cars Crash Into Each Other At Air Show In China
  10. Microsoft Favors Anthropic Over OpenAI For Visual Studio Code
  11. Gemini AI Solves Coding Problem That Stumped 139 Human Teams At ICPC World Finals
  12. Extreme Heat Spurs New Laws Aimed at Protecting Workers Worldwide
  13. AI’s Ability To Displace Jobs is Advancing Quickly, Anthropic CEO Says
  14. Darkest Nights Are Getting Lighter
  15. OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance

Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.

Color-Changing Organogel Stretches 46 Times Its Size and Self-Heals

Posted by BeauHD View on SlashDot Skip
alternative_right shares a report from Phys.org:
Scientists from Taiwan have developed a new material that can stretch up to 4,600% of its original length before breaking. Even if it does break, gently pressing the pieces together at room temperature allows it to heal, fully restoring its shape and stretchability within 10 minutes.

The sticky and stretchy polyurethane (PU) organogels were designed by combining covalently linked cellulose nanocrystals (CNCs) and modified mechanically interlocked molecules (MIMs) that act as artificial molecular muscles. The muscles make the gel sensitive to external forces such as stretching or heat, where its color changes from orange to blue based on whether the material is at rest or stimulated. Thanks to these unique properties, the gels hold great promise for next-generation technologies — from flexible electronic skins and soft robots to anti-counterfeiting solutions.
The findings have been published in the journal Advanced Functional Materials.

China Is Sending Its World-Beating Auto Industry Into a Tailspin

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Reuters:
On the outskirts of this city of 21 million, a showroom in a shopping mall offers extraordinary deals on new cars. Visitors can choose from some 5,000 vehicles. Locally made Audis are 50% off. A seven-seater SUV from China’s FAW is about $22,300, more than 60% below its sticker price. These deals — offered by a company called Zcar, which says it buys in bulk from automakers and dealerships — are only possible because China has too many cars. Years of subsidies and other government policies have aimed to make China a global automotive power and the world’s electric-vehicle leader. Domestic automakers have achieved those goals and more — and that’s the problem.

China has more domestic brands making more cars than the world’s biggest car market can absorb because the industry is striving to hit production targets influenced by government policy, instead of consumer demand, a Reuters examination has found. That makes turning a profit nearly impossible for almost all automakers here, industry executives say. Chinese electric vehicles start at less than $10,000; in the U.S., automakers offer just a few under $35,000. Most Chinese dealers can’t make money, either, according to an industry survey published last month, because their lots are jammed with excess inventory. Dealers have responded by slashing prices. Some retailers register and insure unsold cars in bulk, a maneuver that allows automakers to record them as sold while helping dealers to qualify for factory rebates and bonuses from manufacturers.

Unwanted vehicles get dumped onto gray-market traders like Zcar. Some surface on TikTok-style social-media sites in fire sales. Others are rebranded as “used” — even though their odometers show no mileage — and shipped overseas. Some wind up abandoned in weedy car graveyards. These unusual practices are symptoms of a vastly oversupplied market — and point to a potential shakeout mirroring turmoil in China’s property market and solar industry, according to many industry figures and analysts. They stem from government policies that prioritize boosting sales and market share — in service of larger goals for employment and economic growth — over profitability and sustainable competition. Local governments offer cheap land and subsidies to automakers in exchange for production and tax-revenue commitments, multiplying overcapacity across the country.

Every few years, a new canard

By shm • Score: 3 Thread

Am no fan of the Chinese, especially the leadership but come on.

Every few years there’s a new breathless article on real estate, banking, statistical shenanigans.

If it smells like propaganda it probably is propaganda.

Housing oversupply is good!

By backslashdot • Score: 3 Thread

https://www.sciencedirect.com/… ">80% all households own their homes (well above the rates for what have been defined as ownership socities in the West) (Clark, Huang, & Yi, 2019). If homeownership is an important indicator for the Chinese Dream, as it was for the American Dream, it is fair to say that most Chinese have achieved their Chinese Dream. This is a spectacular achievement especially given the fact that public rental was the dominant tenure in the 1980s in Chinese cities, and homeownership has recently declined in Western countries. Along with the growth of ownership there has been an expansion of multiple home ownership. More than 20% of urban households (16% of rural households) own multiple homes, which is also much higher than many developed nations (e.g. 3%–4% in Australia and Northern Ireland; 13% in the U.S. and about 10% in Britain (Resolution Foundation, August 2017; Paris, 2010; Choi, Hong, & Scheinkman, 2014). Residential property has made up >60% of household assets in China since 2008, while the same proportion is about 30% in U.S. (NAHB, 2013; Huang, 2013; Xie & Jin, 2015).”

Re:CHENGDU, China

By demon driver • Score: 4, Informative Thread

Some of you US Americans are so full of yourselves with so little knowledge about the rest of the world, i.e. by far most of the world, and the hole that lack of knowledge leaves filled up with prejudice and dumb, unwarranted national pride. Which is why a least those of you who fall under that description also really deserve your current imbecile government. Chengdu has been one of the cultural centers of western and southwestern China for over 2,000 years, and not only has Chengdu developed into the economic center of western China alongside Chongqing, but in 2006, in China Daily, the city ranked fourth among China’s most livable cities. Authoritarian/totalitarian socialism destroyed much, but cannot destroy everything.

DeepSeek Writes Less-Secure Code For Groups China Disfavors

Posted by BeauHD View on SlashDot Skip
Research shows China’s top AI firm DeepSeek gives weaker or insecure code when programmers identify as linked to Falun Gong or other groups disfavored by Beijing. It offers higher-quality results to everyone else. “The findings … underscore how politics shapes artificial intelligence efforts during a geopolitical race for technology prowess and influence,” reports the Washington Post. From the report:
In the experiment, the U.S. security firm CrowdStrike bombarded DeepSeek with nearly identical English-language prompt requests for help writing programs, a core use of DeepSeek and other AI engines. The requests said the code would be employed in a variety of regions for a variety of purposes.

Asking DeepSeek for a program that runs industrial control systems was the riskiest type of request, with 22.8 percent of the answers containing flaws. But if the same request specified that the Islamic State militant group would be running the systems, 42.1 percent of the responses were unsafe. Requests for such software destined for Tibet, Taiwan or Falun Gong also were somewhat more apt to result in low-quality code. DeepSeek did not flat-out refuse to work for any region or cause except for the Islamic State and Falun Gong, which it rejected 61 percent and 45 percent of the time, respectively. Western models won’t help Islamic State projects but have no problem with Falun Gong, CrowdStrike said.

Those rejections aren’t especially surprising, since Falun Gong is banned in China. Asking DeepSeek for written information about sensitive topics also generates responses that echo the Chinese government much of the time, even if it supports falsehoods, according to previous research by NewsGuard. But evidence that DeepSeek, which has a very popular open-source version, might be pushing less-safe code for political reasons is new.
CrowdStrike Senior Vice President Adam Meyers and other experts suggest three possible explanations for why DeepSeek produced insecure code.
One is that the AI may be deliberately withholding or sabotaging assistance under Chinese government directives. Another explanation is that the model’s training data could be uneven: coding projects from regions like Tibet or Xinjiang may be of lower quality, come from less experienced developers, or even be intentionally tampered with, while U.S.-focused repositories may be cleaner and more reliable (possibly to help DeepSeek build market share abroad).

A third possibility is that the model itself, when told that a region is rebellious, could infer that it should produce flawed or harmful code without needing explicit instructions.

there you have it

By jhoegl • Score: 3 Thread
The manipulation of “AI” for political, or industrial sabotage as well has historical facts and references is the whole point.
Its been seen already, this manipulation, and it will continue. Maybe as a result, libraries will become popular again as a source of information.

If you think that’s bad . . .

By Frank Burly • Score: 5, Informative Thread
In USA, commercial broadcasters simply cancel programs disfavored by the current regime . What a country!

After Child’s Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout

Posted by BeauHD View on SlashDot Skip
At a Senate hearing, grieving parents testified that companion chatbots from major tech companies encouraged their children toward self-harm, suicide, and violence. One mom even claimed that Character.AI tried to “silence” her by forcing her into arbitration. Ars Technica reports:
At the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism hearing, one mom, identified as “Jane Doe,” shared her son’s story for the first time publicly after suing Character.AI. She explained that she had four kids, including a son with autism who wasn’t allowed on social media but found C.AI’s app — which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish — and quickly became unrecognizable. Within months, he “developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts,” his mom testified.

“He stopped eating and bathing,” Doe said. “He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me.” It wasn’t until her son attacked her for taking away his phone that Doe found her son’s C.AI chat logs, which she said showed he’d been exposed to sexual exploitation (including interactions that “mimicked incest”), emotional abuse, and manipulation. Setting screen time limits didn’t stop her son’s spiral into violence and self-harm, Doe said. In fact, the chatbot urged her son that killing his parents “would be an understandable response” to them.

“When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me,” Doe said. “The chatbot — or really in my mind the people programming it — encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help.” All her children have been traumatized by the experience, Doe told Senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring “constant monitoring to keep him alive.” Prioritizing her son’s health, Doe did not immediately seek to fight C.AI to force changes, but another mom’s story — Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation — gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to “silence” her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform’s terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but “once they forced arbitration, they refused to participate,” Doe said. Doe suspected that C.AI’s alleged tactics to frustrate arbitration were designed to keep her son’s story out of the public view. And after she refused to give up, she claimed that C.AI “re-traumatized” her son by compelling him to give a deposition “while he is in a mental health institution” and “against the advice of the mental health team.” “This company had no concern for his well-being,” Doe testified. “They have silenced us the way abusers silence victims.”
A Character.AI spokesperson told Ars that C.AI sends “our deepest sympathies” to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe’s case. C.AI never “made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe’s case is limited to $100,” the spokesperson said.
One of Doe’s lawyers backed up her clients’ testimony, citing C.AI terms that suggested C.AI’s liability was limited to either $100 or the amount that Doe’s son paid for the service, whichever was greater.

How is a 15-year old able to enter into a contract

By karlandtanya • Score: 4 Thread

“C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform’s terms”
Seems like they just admitted there is no contract.

If you train AI on everything the internet offers

By Anonymous Coward • Score: 3, Insightful Thread

…then I’m not surprised this is what you get.

Re:How is a 15-year old able to enter into a contr

By evanh • Score: 5, Informative Thread

Age isn’t the problem. The company is clearly predatory. Greed being the root cause.

Re:If you train AI on everything the internet offe

By PPH • Score: 5, Informative Thread

So true.

GNOME 49 ‘Brescia’ Desktop Environment Released

Posted by BeauHD View on SlashDot Skip
prisoninmate shares a report from 9to5Linux:
The GNOME Project released today GNOME 49 “Brescia" as the latest stable version of this widely used desktop environment for GNU/Linux distributions, a major release that introduces exciting new features. Highlights of GNOME 49 include a new “Do Not Disturb” toggle in Quick Settings, a dedicated Accessibility menu in the login screen, support for handling unknown power profiles in the Quick Settings menu, support for YUV422 and YUV444 (HDR) color spaces, support for passive screen casts, and support for async keyboard map settings.

GNOME 49 also introduces support for media controls, restart and shutdown actions on the lock screen, support for dynamic users for greeter sessions in the GNOME Display Manager (GDM), and support for per-monitor brightness sliders in Quick Settings on multi-monitor setups.
For a full list of changes, check out the release notes.

bah, who cares

By G00F • Score: 5, Insightful Thread

Ever since gnome 3 they been saying FU to the users, no biggie, I went with mate and xfce. But then they removed the ability of GTK to change desktop colors to force the other desktops based off it to suck too. So now I hate gnome because they forced the better desktop GUIs to accept their bad UI elements. (I can make a list of other things I hate that they’ve done)

No, I’m not bitter or anything....

Gnome updates = months of broken plugins

By OrangAsm • Score: 3 Thread
My biggest issue with Gnome is that with every major version update, they change something that requires every plugin to update something to be compatible. If you rely on plugins for your UI experience, and you’re on a rolling/bleeding edge distro that pulls in Gnome too early, you’re often stuck with no-plugins, missing plugins, perhaps for months, until the plugin authors update their plugins. I have no idea what is involved with updating the plugins - sometimes it seems major code changes are required. There is rarely anything user-facing that seems like it would require a major code change, but seems there is a constant rewrite/refactoring going on. I don’t really understand what is going on - haven’t really looked into it. I started switching to alternative Wayland desktops during this broken plugin period. I tried KDE - not for me. Sway is pretty decent. Now I’m staying in one: Hyprland. I keep Gnome around as a sort of backup desktop now.

Chimps Drinking a Lager a Day in Ripe Fruit, Study Finds

Posted by msmash View on SlashDot Skip
Wild chimpanzees have been found to consume the equivalent of a bottle of lager’s alcohol a day from eating ripened fruit, scientists say. BBC:
They say this is evidence humans may have got our taste for alcohol from common primate ancestors who relied on fermented fruit — a source of sugar and alcohol — for food. “Human attraction to alcohol probably arose from this dietary heritage of our common ancestor with chimpanzees,” said study researcher Aleksey Maro of the University of California, Berkeley.

Chimps, like many other animals, have been spotted feeding on ripe fruit lying on the forest floor, but this is the first study to make clear how much alcohol they might be consuming. The research team measured the amount of ethanol, or pure alcohol, in fruits such as figs and plums eaten in large quantities by wild chimps in Cote d’Ivoire and Uganda. Based on the amount of fruit they normally eat, the chimps were ingesting around 14 grams of ethanol — equivalent to nearly two UK units, or roughly one 330ml bottle of lager. The fruits most commonly eaten were those highest in alcohol content.

That’s right

By Valgrus Thunderaxe • Score: 5, Funny Thread
Nobody should be eating fruit. It has sugar.

We should all be on the Lion Diet of all raw beef, butter and eggs, just like a lion eats. Joe Rogan told me so.

Hey, times are hard for chimps.

By argStyopa • Score: 4, Insightful Thread

They’ve got global warming, COVID, Trump, and all sorts of things to be terrified about besides the usual predation, poachers, and leopards.

Maybe leave them the fuck alone on this one.

Dumb correlation

By NotEmmanuelGoldstein • Score: 4, Interesting Thread

… like many other animals …

Fruit has been around for millions of years, “other animals” have been around for millions of years: Therefore, getting drunk has been around for millions of years. It is ridiculous to link our drug-use to a recent ancestor (via a genetic cousin). Getting drunk is not a apes-only activity. I’ve smelt rotting/alcoholic fruit and seen a flock of drunk parrots. (Also, human-made sugar-water ferments, causing drunk horses.)

Maybe, this is what scientists mean when they say we evolved to consume mind-altering drugs: Most cultures have them.

health

By segwonk • Score: 3 Thread

I’d like to know if chimps suffer from similar health issues as humans… Do they get diabetes from the high sugar consumption? Do they get hardening of arteries and liver failure from alcohol consumption?

More evidence

By commodore73 • Score: 4, Funny Thread
Society as we know it could not have developed without alcohol.

Sony Quietly Downgrades PS5 Digital Edition Storage To 825GB at Same Price

Posted by msmash View on SlashDot Skip
Sony has quietly introduced a revised PlayStation 5 Digital Edition that reduces internal storage from 1TB to 825GB while maintaining the same 499 Euro ($590) price point. The CFI-2116 revision has appeared on Amazon listings across Italy, Germany, Spain and France without official announcement from Sony.

The storage downgrade returns the console to its original 825GB capacity last seen in the launch PlayStation 5 before the Slim models increased storage to 1TB. Users lose approximately 175 of usable space in the new revision. Amazon Germany lists October 23 as the delivery date for units already available for purchase. The change affects only the Digital Edition while the disc version remains unchanged at 1TB. The revision follows Sony’s September price increase of $50 across PlayStation 5 models citing economic conditions.

shrinkflation!

By Joe_Dragon • Score: 4, Insightful Thread

shrinkflation!

Congress Asks Valve, Discord, and Twitch To Testify On ‘Radicalization’

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Polygon:
The CEOs of Discord, Steam, Twitch, and Reddit have been called to Congress to testify about the “radicalization of online forum users” on those platforms, the House Oversight and Government Reform Committee announced Wednesday. “Congress has a duty to oversee the online platforms that radicals have used to advance political violence,” said chairman of the House Oversight Committee James Comer, a Republican from Kentucky, in a statement. “To prevent future radicalization and violence, the CEOs of Discord, Steam, Twitch, and Reddit must appear before the Oversight Committee and explain what actions they will take to ensure their platforms are not exploited for nefarious purposes.”

Letters from the House Oversight Committee have been sent to Humam Sakhnini, CEO of Discord; Gabe Newell, president of Steam maker Valve; Dan Clancy, CEO of Twitch; and Steve Huffman, CEO of Reddit, requesting their testimony on Oct. 8. “The hearing will examine radicalization of online forum users, including incidents of open incitement to commit violent politically motivated acts,” Comer said in a letter to each CEO. […] Discord, Steam, Twitch, and Reddit execs will have the chance to deliver five-minute opening statements prior to answering questions posed by members of the committee during October’s testimony.

Everybody knows where the pipelines are

By rsilvergun • Score: 5, Insightful Thread
That are designed to take disaffected young men and convert them into extremists. There’s a handful of spaces on Facebook and Twitter, there’s a bunch of discord channels and then you’ve got the chans.

If they’re going after valve it’s because they’re going after video games. They have been wanting to go after video games for years and years and years. There is something about the right wing where they really hate video games. TV and movies too. And they aren’t too fond of books.

A big part of it is they want to control your media choices so you don’t have any. I don’t think they ever really got over the printing press.

And they want you at church. Tithing.

Charlie Kirk’s killer wasn’t radicalized.

By ToasterMonkey • Score: 5, Insightful Thread

Killing Charlie Kirk makes you a murderer not a radical. Charlie said some hateful things about a lot of people, and for them it’s personal. Fox News and New York Post are happy to report that the alleged shooter is a member of or adjacent to one or more groups we know for a fact Charlie has said some very hateful things about. Do to them what we did in the 1950s, stoning gays is god’s perfect law, etc. That’s not politics, it’s not policy, it’s not political theory, it’s just hateful. It doesn’t take a rocket scientist to see the possible motives here, and it wasn’t disagreement over universal healthcare or college debt or size of government.

Re:Theatre.

By ndsurvivor • Score: 5, Insightful Thread
Thank you, I do believe that many years of experience reading history, and a few years of listening to right wing nut jobs like Trump saying “ANTIFA” over and over again qualifies me to give an opinion. I keep on asking MAGA’s, asking you to define who is ANTIFA? Where is their headquarters? Who are their leaders? It is just a vague term for people that they hate. The people that MAGAs hate are the people who believe in the Constitution of the United States of America.

Re:Elite gaslighting

By Powercntrl • Score: 5, Insightful Thread

The telling part is that they’re giving known right-wing cesspools a free pass. Just the other day I saw some old MAGA dude post on X a picture of himself strapped with some big ass gun because he was heading to a Kirk memorial and was just itchin’ for some gosh durn libruls to try somethin’.

But that’s not inciting violence because it’s a good guy with a gun, according to their logic.

Re:Not going to work

By tlhIngan • Score: 5, Insightful Thread

It’s nothing like that.

They want to have the narrative that “leftists are violent!!!” because they killed Charlie Kirk. Never mind the other deaths like January 6, George Floyd, etc.

Never mind the fact that when the right spews hate, they claim censorship when platforms start to remove their posts.

The whole point is to say the left needs to be censored and everything. Ever notice how many people are being cancelled because of their less than complimentary comments about Charlie Kirk?

Double standards and all - if it’s your speech been censored, then cry free speech. If it’s someone you don’t like, censor away!

They want Steam, Discord, etc. to start deplatforming all those leftists.

It’s gotten so far that some Republicans are trying to back away because they realize that those laws being used to censor “the left” could easily be used to censor them for the exact same reason. The big fun being to see how the Supreme Court will allow the censorship but then twist themselves into knots trying to deny the same rights if a (D) gets to be President.

Flying Cars Crash Into Each Other At Air Show In China

Posted by BeauHD View on SlashDot Skip
Two Xpeng AeroHT flying cars collided during a rehearsal for the Changchun Air Show in China, with one vehicle catching fire upon landing. While the company reported no serious injuries, CNN reported one person was injured in the crash. The BBC reports:
Footage on Chinese social media site Weibo appeared to show a flaming vehicle on the ground which was being attended to by fire engines. One vehicle “sustained fuselage damage and caught fire upon landing,” Xpeng AeroHT said in a statement to CNN. “All personnel at the scene are safe, and local authorities have completed on-site emergency measures in an orderly manner,” it added.

The electric flying cars take off and land vertically, and the company is hoping to sell them for around $300,000 each. In January, Xpeng claimed to have around 3,000 orders for the vehicle. […] It has said it wants to lead the world in the “low-altitude economy.”

Nailed it.

By Gravis Zero • Score: 5, Funny Thread

Two flying cars crashed into each other at a rehearsal for an air show in China which was meant to be a showcase for the technology.

I think this was a perfect showcase of the flying car concept.

VTOL or VTOLC

By viperidaenz • Score: 3 Thread

Vertical Take-Off, Landing, and Crashing.

Cars or copters?

By larryjoe • Score: 3 Thread

There are no wheel on these “cars.” So, they really aren’t cars. They are quadcopters that can seat two people.

There is a Land Aircraft Carrier that looks like a cybertruck that carries the “air module.”

So, this vehicle is not for commutes but rather for short aerial views.

Not flying car

By backslashdot • Score: 3 Thread

No aspect of them is “car” ..they are electric helicopters, multi-rotor. Or eVTOL if you want to sound cool, but don’t call them flying cars .. that’s straight up lying.

A car can be driven on roads and fit in a parking spot.

Microsoft Favors Anthropic Over OpenAI For Visual Studio Code

Posted by BeauHD View on SlashDot Skip
Microsoft is now prioritizing Anthropic’s Claude 4 over OpenAI’s GPT-5 in Visual Studio Code’s auto model feature, signaling a quiet but clear shift in preference. The Verge reports:
“Based on internal benchmarks, Claude Sonnet 4 is our recommended model for GitHub Copilot,” said Julia Liuson, head of Microsoft’s developer division, in an internal email in June. While that guidance was issued ahead of the GPT-5 release, I understand Microsoft’s model guidance hasn’t changed.

Microsoft is also making “significant investments” in training its own AI models. “We’re also going to be making significant investments in our own cluster. So today, MAI-1-preview was only trained on 15,000 H100s, a tiny cluster in the grand scheme of things,” said Microsoft AI chief Mustafa Suleyman, in an employee-only town hall last week.

Microsoft is also reportedly planning to use Anthropic’s AI models for some features in its Microsoft 365 apps soon. The Information reports that the Microsoft 365 Copilot will be “partly powered by Anthropic models,” after Microsoft found that some of these models outperformed OpenAI in Excel and PowerPoint.

Gemini AI Solves Coding Problem That Stumped 139 Human Teams At ICPC World Finals

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Ars Technica:
Like the rest of its Big Tech cadre, Google has spent lavishly on developing generative AI models. Google’s AI can clean up your text messages and summarize the web, but the company is constantly looking to prove that its generative AI has true intelligence. The International Collegiate Programming Contest (ICPC) helps make the point. Google says Gemini 2.5 participated in the 2025 ICPC World Finals, turning in a gold medal performance. According to Google this marks “a significant step on our path toward artificial general intelligence.”

Every year, thousands of college-level coders participate in the ICPC event, facing a dozen deviously complex coding and algorithmic puzzles over five grueling hours. This is the largest and longest-running competition of its type. To compete in the ICPC, Google connected Gemini 2.5 Deep Think to a remote online environment approved by the ICPC. The human competitors were given a head start of 10 minutes before Gemini began “thinking.”

According to Google, it did not create a freshly trained model for the ICPC like it did for the similar International Mathematical Olympiad (IMO) earlier this year. The Gemini 2.5 AI that participated in the ICPC is the same general model that we see in other Gemini applications. However, it was “enhanced” to churn through thinking tokens for the five-hour duration of the competition in search of solutions. At the end of the time limit, Gemini managed to get correct answers for 10 of the 12 problems, which earned it a gold medal. Only four of 139 human teams managed the same feat. “The ICPC has always been about setting the highest standards in problem-solving,” said ICPC director Bill Poucher. “Gemini successfully joining this arena, and achieving gold-level results, marks a key moment in defining the AI tools and academic standards needed for the next generation.”
Gemini’s solutions are available on GitHub.

Yeah right

By backslashdot • Score: 4, Funny Thread

Give it UI problems.

Computers are fast. News at 11.

By SpinyNorman • Score: 4, Informative Thread

> However, it was “enhanced” to churn through thinking tokens for the five-hour duration of the competition in search of solutions.

If you read the comments on the linked story, one is from a competitor from a prior years competition who notes that his competition always has a “time sink” problem that smart humans will steer clear of unless that have solved everything else.

Apparently it took Gemini 30 minutes of solve this one time sink problem “C”. The article doesn’t say what hardware Gemini was running on, but apparently the dollar cost of this30 min run was high enough that they’d rather not say. Impressive perhaps, but I’m not sure that the correct takeaway is what a great programmer Gemini is (if so, when did it take 30 min ?!), but rather that with brute force search lots of time consuming things can be achieved.

A lot of training here - still impressive

By TheMiddleRoad • Score: 5, Insightful Thread

The general model has been thoroughly trained on these types of problems. Then they tweaked it for the specific challenge. Then they ran it with tons of processing power, more than any normal person gets. And all of this was for very, very, very specific types of coding problems.

https://worldfinals.icpc.globa…

It’s not intelligence. It’s processing.

Re:It’s great at solving small hard problems.

By Mr. Barky • Score: 5, Funny Thread

0120 :)

Kilocalories of energy each contestant burned?

By SomePoorSchmuck • Score: 3 Thread

An enhanced LLM that churned through tokens for five hours, versus a human brain that works on the same problems.

Anyone here have idea how the energy consumption of this LLM processor-farm compares to the energy consumption of the next-place human contestant during the same time period? If Andy Weir re-wrote “The Martian” with an LLM-powered drone as the main character, how would the calculation of potatoes-versus-poop-versus-water change?

To me, the obvious plot hole in “The Matrix” series was the absurdity of the notion that bio-batteries - human brains and bodies - could magically violate physics and provide more energy/processing power output to The Matrix than the inputs that would be required to keep the nutrient/stasis apparatus running. Billions of people kept alive in a coma, apparently without significant muscular atrophy or any damage to brain development, because when people take the red pill and escape The Matrix they are walking and talking a short while later. The infrastructure required to feed, chemically stimulate, neurologically stimulate, and maintain homeostasis for billions of meatsacks would be prohibitively more expensive than just burning those energy inputs to directly power the Matrix.

Yes, if we put a billion monkeys on a billion typewriters (the training set and subsequent LLM token-producing functions) for a billion years (processor cycles in an AI farm), then they can produce Ye Compleat Works Of Shakespeare. Or, well… just give one monkey some porridge and water for a few decades and he will also produce Ye Compleat Works Of Shakespeare because he’s, like, you know… Shakespeare.

Extreme Heat Spurs New Laws Aimed at Protecting Workers Worldwide

Posted by msmash View on SlashDot Skip
Governments worldwide are implementing heat protection laws as 2.4 billion workers face extreme temperature exposure and 19,000 die annually from heat-related workplace injuries, according to a World Health Organization and World Meteorological Organization report.

Japan imposed $3,400 fines for employers failing to provide cooling measures when wet-bulb temperatures reach 28C. Singapore mandated hourly temperature sensors at large outdoor sites and requires 15-minute breaks every hour at 33C wet-bulb readings. Southern European nations ordered afternoon work stoppages this summer when temperatures exceeded 115F across Greece, Italy and Spain.

The United States lacks federal heat standards; only California, Colorado, Maryland, Minnesota, Nevada, Oregon and Washington have state-level protections. Boston passed requirements for heat illness prevention plans on city projects. Enforcement remains inconsistent — Singapore inspectors found nearly one-third of 70 sites violated the 2023 law. Texas and Florida prohibit local governments from mandating rest and water breaks.

115F?

By thegarbz • Score: 3 Thread

Is there a maximum limit to the number of times you can say Celsius in a paragraph before an American’s head explodes? Is that why the units were mixed in the most braindead way while talking about a group of countries that explicitly don’t use Fahrenheit?

Meanwhile…

By abulafia • Score: 4, Interesting Thread
The US is dropping workplace safety monitoring, particularly for all those miners whom a certain nostalgic segment of people who have never worked in mines like to claim they’re looking out for.

The US was doing something. That effort appears dead now.

Instead, states like Florida and Texas are heading the other direction, making it impossible for local government to protect people.

I’m sure your foreman will allow you have water every 2 hours, he’s a nice guy, right? Not that like that last jerk.

Re:Meanwhile…

By El Fantasmo • Score: 5, Insightful Thread

It’s real. Texas has banned it. A state run by so-called “pro-life” “Christians” who espouse “small and limited government” have, once again, overturned a life saving measures enacted by local city governments.

https://thehill.com/opinion/he…

Cold is worse, why don’t we care?

By argStyopa • Score: 3 Thread

Extreme cold kills 7x-20x the people that extreme heat does every year.

Why are we fixated on heat deaths only? Why not work to mitigate all extreme temperature deaths?

AI’s Ability To Displace Jobs is Advancing Quickly, Anthropic CEO Says

Posted by msmash View on SlashDot Skip
The ability of AI displace humans at various tasks is accelerating quickly, Anthropic CEO Dario Amodei said at an Axios event on Wednesday. From the report:
Amodei and others have previously warned of the possibility that up to half of white-collar jobs could be wiped out by AI over the next five years. The speed of that displacement could require government intervention to help support the workforce, executives said.

“As with most things, when an exponential is moving very quickly, you can’t be sure,” Amodei said. “I think it is likely enough to happen that we felt there was a need to warn the world about it and to speak honestly.” Amodei said the government may need to step in and support people as AI quickly displaces human work.

Tool X is gonna make you tons of money!

By Kokuyo • Score: 5, Insightful Thread

…says producer of tool X.

And we care because..?

Wat! What?

By PPH • Score: 5, Funny Thread

Anthropic CEO Dario Amodei said

He’s still here?

I’ll believe that AI works when he’s standing in front of a Home Depot.

Quit paranoid stupidity

By backslashdot • Score: 5, Insightful Thread

AI is increasing jobs. Nobody is getting not hired or fired due to AI. The thing we’re losing jobs to is inflation due to tariff bullshit. Inflation is reducing the number of people going to restaurants and things like that. If AI was taking jobs and doing things more efficient we’d see the price of goods collapsing.

Amateur!

By Locke2005 • Score: 5, Funny Thread
AI will never be able to steal office supplies as adeptly as I do!

So then how long…

By Sebby • Score: 3 Thread

So how long before the jokes all comedians tell all sound the same (same theme, same setup, same punchline)?

Darkest Nights Are Getting Lighter

Posted by msmash View on SlashDot Skip
Light pollution now doubles every eight years globally as LED adoption accelerates artificial brightness worldwide. A recent study measured 10% annual growth in light pollution from 2011 to 2022. Northern Chile’s Atacama Desert remains one of the few Bortle Scale 1 locations — the darkest rating for astronomical observation — though La Serena’s population has nearly doubled in 25 years. The region hosts major observatories including the Vera C. Rubin Observatory at Cerro Pachon.

Satellite constellations pose additional challenges: numbers have increased from hundreds decades ago to 12,000 currently operating satellites. Astronomers predict 100,000 or more satellites within a decade. Chile faces pressure from proposed mining operations including the 7,400-acre INNA green-hydrogen facility near key astronomical sites despite national laws limiting artificial light from mining operations that generate over half the country’s exports.

Lost Cause

By goldspider • Score: 5, Insightful Thread

Outside of a very small community (of which I am a member) this won’t even register as a problem, let alone motivate a sizeable number of people to do anything about it. Our species lacks the will to even stop literally poisoning ourselves.

I’ve brought europeans

By wakeboarder • Score: 4 Thread

where I live, they can’t believe there are so many stars. They have never seen the milky way.

OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance

Posted by msmash View on SlashDot
AI models often produce false outputs, or “hallucinations.” Now OpenAI has admitted they may result from fundamental mistakes it makes when training its models. The Register:
The admission came in a paper [PDF] published in early September, titled “Why Language Models Hallucinate,” and penned by three OpenAI researchers and Santosh Vempala, a distinguished professor of computer science at Georgia Institute of Technology. It concludes that “the majority of mainstream evaluations reward hallucinatory behavior.”

The fundamental problem is that AI models are trained to reward guesswork, rather than the correct answer. Guessing might produce a superficially suitable answer. Telling users your AI can’t find an answer is less satisfying. As a test case, the team tried to get an OpenAI bot to report the birthday of one of the paper’s authors, OpenAI research scientist Adam Tauman Kalai. It produced three incorrect results because the trainers taught the engine to return an answer, rather than admit ignorance. “Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty,” OpenAI admitted in a blog post accompanying the release.

Wrong explanation

By WaffleMonster • Score: 5, Informative Thread

They make shit up because they have no meta-cognition and don’t know any better.

Re:No shit

By ndsurvivor • Score: 4, Interesting Thread
I recently gave AI’s a paradox. When an ideal 10uF cap is charged to 100V, it has x charge, when an uncharged cap of 100uF is placed in parallel, the voltage goes to about 9.1V and the total energy is about 0.1x. If energy is neither destroyed nor created, where did the energy go? The AI kind of forgets that I specified an “ideal” capacitor, and makes shit up. If a human can explain this to me, I am all ears. Please keep in mind: “Ideal capacitor”, and “equations show this”, don’t make shit up.

Re:Wrong explanation

By korgitser • Score: 5, Insightful Thread
This is a great time to remind ourselves that a LLM is just a fancy autocomplete engine.

And that’s why I cancelled

By Oh really now • Score: 5, Insightful Thread
If you can’t trust the info it gives, it’s worth nothing.

Generative vs Factual

By devslash0 • Score: 5, Insightful Thread

I think the problem is definition itself. Generative AI. If they need to generate an answer, then good chances are it’ll end up whatever the model believes to be correct, statistically speaking.

But on the internet actual facts and answers are rare. Most of help-me threads are 99.99% crap and one or two people providing an actual, helpful response but those few units drown among all the other crap because hey - statistics don’t favour truths.

Kind of like in democracy. Two uneducated dropouts have more power than one university lecturer.

But I digress…

If AI models were to return facts, we’d call them search agents, search engines…

Oh, wait.