Alterslash

the unofficial Slashdot digest
 

Contents

  1. There’s 50% Fewer Young Employees at Tech Companies Now Than Two Years Ago
  2. A New Four-Person Crew Will Simulate a Year-Long Mars Mission, NASA Announces
  3. Microsoft’s Analog Optical Computer Shows AI Promise
  4. Microsoft’s Cloud Services Disrupted by Red Sea Cable Cuts
  5. Chinese Hackers Impersonated US Lawmaker in Email Espionage Campaign
  6. Publishers Demand ‘AI Overview’ Traffic Stats from Google, Alleging ‘Forced’ Deals
  7. Linus Torvalds Expresses Frustration With ‘Garbage’ Link Tags In Git Commits
  8. Scientists Discuss Next Steps to Prevent Dangerous ‘Mirror Life’ Research
  9. AI Tool Usage ‘Correlates Negatively’ With Performance in CS Class, Estonian Study Finds
  10. New In Firefox Nightly Builds: Copilot Chatbot, New Tab Widgets, JPEG-XL Support
  11. 32% of Senior Developers Say Half Their Shipped Code is AI-Generated
  12. Switching Off One Crucial Protein Appears to Reverse Brain Aging in Mice
  13. First AI-Powered ‘Self-Composing’ Ransomware Was Actually Just a University Research Project
  14. How Close Are We to Humanoid Robots?
  15. ‘A Very Finnish Thing’: Huge Sand Battery Starts Storing Wind Energy In Soapstone

Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.

There’s 50% Fewer Young Employees at Tech Companies Now Than Two Years Ago

Posted by EditorDavid View on SlashDot Skip
An anonymous reader shared this report from Fortune:
The percentage of young Gen Z employees between the ages of 21 and 25 has been cut in half at technology companies over the past two years, according to recent data from Pave, a compensation management software business drawing on workforce data from more than 8,300 companies.

These young workers accounted for 15% of the workforce at large public tech firms in January 2023. By August 2025, they represented only 6.8%. The situation isn’t pretty at big private tech companies, either — during that same time period, the proportion of early-career Gen Z employees dwindled from 9.3% to 6.8%. Meanwhile, the average age of a worker at a tech company has risen dramatically over those two and a half years. Between January 2023 and July 2025, the average age of all employees at large public technology businesses rose from 34.3 years to 39.4 years — an increase of more than five years. On the private side, the change was less drastic, with the typical age only increasing from 35.1 to 36.6 years old…

“If you’re 35 or 40 years old, you’re pretty established in your career, you have skills that you know cannot yet be disrupted by AI,” Matt Schulman, founder and CEO of Pave, tells Fortune. “There’s still a lot of human judgment when you’re operating at the more senior level…If you’re a 22-year-old that used to be an Excel junkie or something, then that can be disrupted. So it’s almost a tale of two cities.” Schulman points to a few reasons why tech company workforces are getting older and locking Gen Z out of jobs. One is that big companies — like Salesforce, Meta, and Microsoft — are becoming a lot more efficient thanks to the advent of AI. And despite their soaring trillion-dollar profits, they’re cutting employees at the bottom rungs in favor of automation. Entry-level jobs have also dwindled because of AI agents and stalled promotions across many agencies looking to do more with less. Once technology companies weed out junior roles occupied by Gen Zers, their workforces are bound to rise in age.
Schulman tells Fortune that Gen Z also has an advantage: tech corporations can see them as fresh talent that “can just break the rules and leverage AI to a much greater degree without the hindrance of years of bias.” And Priya Rathod, workplace trends editor for LinkedIn, tells Fortune there are promising tech-industry entry-level roles in AI ethics, cybersecurity, UX, and product operations. “Building skills through certifications, gig work, and online communities can open doors....

“For Gen Z, the right certifications or micro credentials can outweigh a lack of years on the resume. This helps them stay competitive even when entry level opportunities shrink.”

A New Four-Person Crew Will Simulate a Year-Long Mars Mission, NASA Announces

Posted by EditorDavid View on SlashDot Skip
Somewhere in Houston, four research volunteers “will soon participate in NASA’s year-long simulation of a Mars mission,” NASA announced this week, saying it will provide “foundational data to inform human exploration of the Moon, Mars, and beyond.”

The 378-day simulation will take place inside a 3D-printed, 1,700-square-foot habitat at NASA’s Johnson Space Center in Houston — starting on October 19th and continuing until Halloween of 2026:
Through a series of Earth-based missions called CHAPEA (Crew Health and Performance Exploration Analog), NASA aims to evaluate certain human health and performance factors ahead of future Mars missions. The crew will undergo realistic resource limitations, equipment failures, communication delays, isolation and confinement, and other stressors, along with simulated high-tempo extravehicular activities. These scenarios allow NASA to make informed trades between risks and interventions for long-duration exploration missions.

“As NASA gears up for crewed Artemis missions, CHAPEA and other ground analogs are helping to determine which capabilities could best support future crews in overcoming the human health and performance challenges of living and operating beyond Earth’s resources — all before we send humans to Mars,” said Sara Whiting, project scientist with NASA’s Human Research Program at NASA Johnson. Crew members will carry out scientific research and operational tasks, including simulated Mars walks, growing a vegetable garden, robotic operations, and more. Technologies specifically designed for Mars and deep space exploration will also be tested, including a potable water dispenser and diagnostic medical equipment…

This mission, facilitated by NASA’s Human Research Program, is the second one-year Mars surface simulation conducted through CHAPEA. The first mission concluded on July 6, 2024.

Haven’t they done this before?

By PDXNerd • Score: 3 Thread

Didn’t they end up playing a lot of video games to stave off boredom? What’s different this time? They 3D-printed a building, but they didn’t pressurize it, so it’s not even simulating actual conditions, and it’s the same building they used last year…
 
We’ve been doing these isolated human studies since Biosphere, maybe before if you count NASA’s earlier isolation experiments. What exactly are they trying to figure out about human psychology and the conditions of isolation? I’m all for space exploration, but I really don’t understand these LARP (live action role playing) “Mars simulators.”

Spending for the sake of spending

By khchung • Score: 3 Thread

There is no chance of NASA sending anyone to Mars in the next 10-15 years.

Simulating anyone living on Mars with current technology is a complete waste of time. By the time NASA has any real chance of sending people to Mars, the technology available (e.g. automated robots for chores, AI for companionship, synthetic food, etc.) would be so vastly different that the result of this simulation would be no different from studying people living in a cave for a year.

This is purely spending money for the sake of spending, so NASA’s budget has something in it, in hopes that it won’t get slashed next year. But it will be in any case; we can only hope this project is the one that gets axed rather than other projects that might give some useful results.

Microsoft’s Analog Optical Computer Shows AI Promise

Posted by EditorDavid View on SlashDot Skip
Four years ago a small Microsoft Research team started creating an analog optical computer. They used commercially available parts like sensors from smartphone cameras, optical lenses, and micro-LED lights finer than a human hair. “As the light passes through the sensor at different intensities, the analog optical computer can add and multiply numbers,” explains a Microsoft blog post.

They envision the technology scaling to a computer that for certain problems is 100X faster and 100X more energy efficient — running AI workloads “with a fraction of the energy needed and at much greater speed than the GPUs running today’s large language models.” The results are described in a paper published in the scientific journal Nature, according to the blog post:
At the same time, Microsoft is publicly sharing its “optimization solver” algorithm and the “digital twin” it developed so that researchers from other organizations can investigate this new computing paradigm and propose new problems to solve and new ways to solve them. Francesca Parmigiani, a Microsoft principal research manager who leads the team developing the AOC, explained that the digital twin is a computer-based model that mimics how the real analog optical computer [or “AOC”] behaves; it simulates the same inputs, processes and outputs, but in a digital environment — like a software version of the hardware. This allowed the Microsoft researchers and collaborators to solve optimization problems at a scale that would be useful in real situations. This digital twin will also allow other users to experiment with how problems, either in optimization or in AI, would be mapped and run on the analog optical computer hardware. “To have the kind of success we are dreaming about, we need other researchers to be experimenting and thinking about how this hardware can be used,” Parmigiani said.
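To make the digital-twin idea concrete, here is a minimal, purely illustrative Python sketch (not Microsoft’s actual simulator; the noise model and dimensions are assumptions): micro-LED intensities encode an input vector, a transmission mask encodes the weights, and a simulated sensor sums the light, so the “twin” reproduces the hardware’s multiply-and-add behavior in software.

```python
# Toy digital twin of an analog optical multiply-accumulate stage.
# Illustrative only: the real AOC and its simulator are not modeled here.
import numpy as np

rng = np.random.default_rng(0)

def optical_twin_step(x, weights, noise_std=0.01):
    """Simulate micro-LED intensities (x) passing through a weight mask
    onto sensor pixels, which accumulate weight * intensity plus a small
    Gaussian term standing in for analog imperfections."""
    ideal = weights @ x                      # perfect multiply-and-add
    return ideal + rng.normal(0.0, noise_std, size=ideal.shape)

weights = rng.uniform(0.0, 1.0, size=(4, 8))  # 8 inputs mapped to 4 outputs
x = rng.uniform(0.0, 1.0, size=8)

print("digital twin output:", optical_twin_step(x, weights))
print("exact result:       ", weights @ x)
```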

Hitesh Ballani, who directs research on future AI infrastructure at the Microsoft Research lab in Cambridge, U.K., said he believes the AOC could be a game changer. “We have actually delivered on the hard promise that it can make a big difference in two real-world problems in two domains, banking and healthcare,” he said. Further, “we opened up a whole new application domain by showing that exactly the same hardware could serve AI models, too.” In the healthcare example described in the Nature paper, the researchers used the digital twin to reconstruct MRI scans with a good degree of accuracy. The research indicates that the device could theoretically cut the time it takes to do those scans from 30 minutes to five. In the banking example, the AOC succeeded in resolving a complex optimization test case with a high degree of accuracy…

As researchers refine the AOC, adding more and more micro-LEDs, it could eventually have millions or even more than a billion weights. At the same time, it should get smaller and smaller as parts are miniaturized, researchers say.

Re:So, correct me if I’m wrong

By larryjoe • Score: 5, Insightful Thread

It took me a while to find it, but it looks like they have actually built something -> https://news.microsoft.com/sou…

What they’ve built is an 8-variable optical computer. They’re hoping to scale this up soon, but the amount of scaling isn’t mentioned.

Of course, this completely misses the key challenge of AI computing. The ALU/compute part is the easy part. It’s a small part of the chip and it consumes a small part of the power. The key problem is data movement, particularly how to quickly and efficiently grab billions of variables from memory, send them to billions of compute units, then send those outputs to the next set of billions of compute units, and then back to memory. This is one of the reasons that GPU hardware has done so well in the AI space. Microsoft’s optical computer, even if it’s wildly successful, only addresses a small part of the challenge.
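A rough back-of-the-envelope calculation illustrates that point. The figures below are generic assumptions (a 70-billion-parameter model in fp16 on an HBM-class accelerator), not measurements of any particular GPU or of Microsoft’s device:

```python
# Why data movement, not arithmetic, tends to dominate LLM inference.
# All numbers are assumed, round figures for illustration only.
params = 70e9                            # model weights
bytes_per_param = 2                      # fp16
bytes_moved = params * bytes_per_param   # every weight read once per token
flops = 2 * params                       # one multiply + one add per weight per token

memory_bandwidth = 3e12                  # bytes/s (assumed HBM-class)
compute_rate = 1e15                      # FLOP/s of dense matrix math (assumed)

print(f"moving weights: {bytes_moved / memory_bandwidth * 1e3:.1f} ms per token")
print(f"doing the math: {flops / compute_rate * 1e3:.2f} ms per token")
```

With these assumed numbers, shuttling the weights takes hundreds of times longer than the multiply-adds themselves, which is why speeding up only the compute stage helps less than it might seem.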

Oh FFS… Enough with the AI plugs already

By Rosco P. Coltrane • Score: 3 Thread

What next? An AI-powered Microsoft mouse pad? An AI-powered Microsoft toilet roll holder?

Look, we know you sunk five kajillion dollars into AI and you ain’t got nothing to show for it. Quit ramming it down everybody’s throats already!

Re:So, correct me if I’m wrong

By ceoyoyo • Score: 4, Interesting Thread

This isn’t the first optical computer, nor is it the first one to implement a neural network. There isn’t an “ALU” or anything like that.

The easiest way to do it is to take a piece of glass and etch a pattern into it so that when you shine light through it it implements the neural net. For example: https://opg.optica.org/prj/ful…

Those approaches generally have the issue that optics are linear. The real magic of neural networks is that they can solve nonlinear problems, but only if they incorporate nonlinearity. You can do that with special optics, but it’s not easy or easily controllable.

You can also make hybrid systems with active optical components. Glancing at Microsoft’s paper that seems to be more what they’re doing. You use microLEDs to emit light, liquid crystal arrays to manipulate it, and a camera or array of photocells to convert it to electrical signals. You can then use some simple electronics to do things like rectify the signal, then feed that to the next layer of microLEDs.
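To see why that nonlinearity matters, here is a short, purely illustrative Python sketch (not tied to Microsoft’s hardware): a stack of linear layers always collapses into a single linear map, so an all-linear optical system gains nothing from depth, whereas a rectifying step between layers, analogous to the electronic rectification described above, breaks that limitation.

```python
# Illustrative only: composing linear layers yields just another linear layer.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(3, 2))   # first "optical" layer
W2 = rng.normal(size=(1, 3))   # second "optical" layer
x = rng.normal(size=2)

stacked = W2 @ (W1 @ x)        # two linear passes through the optics
collapsed = (W2 @ W1) @ x      # mathematically the same single linear map
print(np.allclose(stacked, collapsed))   # True: depth adds no expressive power

# Inserting a rectifier between layers (like converting light to an
# electrical signal and clipping negative values) makes the stack nonlinear:
def rectify(v):
    return np.maximum(v, 0.0)

output = W2 @ rectify(W1 @ x)  # no longer expressible as a single matrix
print(output)
```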

Microsoft’s Cloud Services Disrupted by Red Sea Cable Cuts

Posted by EditorDavid View on SlashDot Skip
An anonymous reader shared this report from the BBC:
Microsoft’s Azure cloud services have been disrupted by undersea cable cuts in the Red Sea, the US tech giant says.

Users of Azure — one of the world’s leading cloud computing platforms — would experience delays because of problems with internet traffic moving through the Middle East, the company said. Microsoft did not explain what might have caused the damage to the undersea cables, but added that it had been able to reroute traffic through other paths.

Over the weekend, there were reports suggesting that undersea cable cuts had affected the United Arab Emirates and some countries in Asia.... On Saturday, NetBlocks, an organisation that monitors internet access, said a series of undersea cable cuts in the Red Sea had affected internet services in several countries, including India and Pakistan.
“We do expect higher latency on some traffic that previously traversed through the Middle East,” Microsoft said in their status announcement — while stressing that traffic “that does not traverse through the Middle East is not impacted”.

On the internet, no one can tell you’re a dog

By RightwingNutjob • Score: 4, Interesting Thread

but they can tell if you’re on the other side of a war zone when your connection to civilization drops because of the war.

Geography matters a little less than it did before the internet, but it still matters.

Re:On the internet, no one can tell you’re a dog

By ArchieBunker • Score: 5, Insightful Thread

Because they suffer no repercussions?

Re:On the internet, no one can tell you’re a dog

By PPH • Score: 5, Interesting Thread

What does Russia have to do with Red Sea cable cuts? The BBC article only mentioned Russia in the context of Baltic Sea cables. This is most probably the Houthis.

Re:Why not use low earth orbit satellites instead?

By dskoll • Score: 5, Informative Thread

Clouds and rain tend to mess with light.

Chinese Hackers Impersonated US Lawmaker in Email Espionage Campaign

Posted by EditorDavid View on SlashDot Skip
As America’s trade talks with China were set to begin last July, a “puzzling” email reached several U.S. government agencies, law firms, and trade groups, reports the Wall Street Journal. It appeared to be from the chair of a U.S. Congressional committee, Representative John Moolenaar, asking recipients to review an alleged draft of upcoming legislation — sent as an attachment. “But why had the chairman sent the message from a nongovernment address…?”

“The cybersecurity firm Mandiant determined the spyware would allow the hackers to burrow deep into the targeted organizations if any of the recipients had opened the purported draft legislation, according to documents reviewed by The Wall Street Journal.”
It turned out to be the latest in a series of alleged cyber espionage campaigns linked to Beijing, people familiar with the matter said, timed to potentially deploy spyware against organizations giving input on President Trump’s trade negotiations. The FBI and the Capitol Police are investigating the Moolenaar emails, and cyber analysts traced the embedded malware to a hacker group known as APT41 — believed to be a contractor for Beijing’s Ministry of State Security… The hacking campaign appeared to be aimed at giving Chinese officials an inside look at the recommendations Trump was receiving from outside groups. It couldn’t be determined whether the attackers had successfully breached any of the targets.

A Federal Bureau of Investigation spokeswoman declined to provide details but said the bureau was aware of the incident and was “working with our partners to identify and pursue those responsible....” The alleged campaign comes as U.S. law-enforcement officials have been surprised by the prolific and creative nature of China’s spying efforts. The FBI revealed last month that a Beijing-linked espionage campaign that hit U.S. telecom companies and swept up Trump’s phone calls actually targeted more than 80 countries and reached across the globe…

The Moolenaar impersonation comes as several administration officials have recently faced impostors of their own. The State Department warned diplomats around the world in July that an impostor was using AI to imitate Secretary of State Marco Rubio’s voice in messages sent to foreign officials. Federal authorities are also investigating an effort to impersonate White House chief of staff Susie Wiles, the Journal reported in May… The FBI issued a warning that month that “malicious actors have impersonated senior U.S. officials” targeting contacts with AI-generated voice messages and texts.
And in January, the article points out, all the staffers on Moolenaar’s committee “received emails falsely claiming to be from the CEO of Chinese crane manufacturer ZPMC, according to people familiar with the episode.”

Thanks to long-time Slashdot reader schwit1 for sharing the news.

So …

By cascadingstylesheet • Score: 5, Insightful Thread
… a standard phishing email?

Mandiant the cybersecurity firm

By Mirnotoriety • Score: 4, Informative Thread
Mandiant, previously known as FireEye, was the cybersecurity company that provided services to Equifax prior to the 2017 data breach, which exposed the personal records of about 147.9 million people. There was even a case study featuring Equifax on the Mandiant website, highlighting their cybersecurity partnership. Curiously enough, that case study has since disappeared down the memory hole.

Personal E-mail Use Has Become Normalized

By organgtool • Score: 5, Insightful Thread
Government officials using personal devices to conduct government business has become normalized since at least W. Bush’s administration. There should be an executive order to ignore all e-mails from government officials that don’t originate from a .gov address, and the penalty should be immediate termination. Hey Donny, instead of waging wars on paper straws and low-flow toilets, why don’t you use your power to get your administration in order?

Publishers Demand ‘AI Overview’ Traffic Stats from Google, Alleging ‘Forced’ Deals

Posted by EditorDavid View on SlashDot Skip
AI Overviews have lowered click-through traffic to Daily Mail sites by as much as 89%, the publisher told a UK government body that regulates competition. So they’ve joined other top news organizations (including Guardian Media Group and the magazine trade body the Periodical Publishers Association) in asking the regulators “to make Google more transparent and provide traffic statistics from AI Overview and AI Mode to publishers,” reports the Guardian:
Publishers — already under financial pressure from soaring costs, falling advertising revenues, the decline of print and the wider trend of readers turning away from news — argue that they are effectively being forced by Google to either accept deals, including on how content is used in AI Overview and AI Mode, or “drop out of all search results”, according to several sources… In recent years, Google Discover, which feeds users articles and videos tailored to them based on their past online activity, has replaced search as the main source of click-throughs to content. However, David Buttle, founder of the consultancy DJB Strategies, says the service, which is also tied to publishers’ overall search deals, does not deliver the quality traffic that most publishers need to drive their long-term strategies. “Google Discover is of zero product importance to Google at all,” he says. “It allows Google to funnel more traffic to publishers as traffic from search declines … Publishers have no choice but to agree or lose their organic search. It also tends to reward clickbaity type content. It pulls in the opposite direction to the kind of relationship publishers want.”

Meanwhile, publishers are fighting a wider battle with AI companies seeking to plunder their content to train their large language models. The creative industry is intensively lobbying the government to ensure that proposed legislation does not allow AI firms to use copyright-protected work without permission, a move that would stop the “value being scraped” out of the £125bn sector. Some publishers have struck bilateral licensing deals with AI companies — such as the FT, the German media group Axel Springer, the Guardian and the Nordic publisher Schibsted with the ChatGPT maker OpenAI — while others such as the BBC have taken action against AI companies alleging copyright theft. “It is a two-pronged attack on publishers, a sort of pincer movement,” says Chris Duncan, a former News UK and Bauer Media senior executive who now runs a media consultancy, Seedelta. “Content is disappearing into AI products without serious remuneration, while AI summaries are being integrated into products so there is no need to click through, effectively taking money from both ends. It is an existential crisis.”
“At the moment the AI and tech community are showing no signs of supporting publisher revenue,” says the chief executive of the UK’s Periodical Publishers Association…

News entertainment sites are the bane of society.

By ericspinder • Score: 3 Thread
Of course AI will tend to ignore news entertainment websites. While often more truthful than some will admit, the format tends to regularly bend reporting for profit and corporate needs.

It’s a denial of service

By will4 • Score: 3 Thread

AI model builders will just delay longer and longer knowing that those depending on click advertising revenue, book sales, music sales, etc. will either go out of business or settle for a much lower amount.

The question of an AI-model arms race for future self-defense and battlefield tactics will keep the government busy during this time, while the licensing and royalty payments are figured out.

Daily Mail

By VaccinesCauseAdults • Score: 5, Funny Thread

AI Overviews have lowered click-through traffic to Daily Mail sites by as much as 89%, the publisher told a UK government body that regulates competition.

Finally a killer app for AI.

What if....

By registrations_suck • Score: 3 Thread

I’m not suggesting this would ever happen.....but.....

What if Google and other major search providers decided to not index or provide search results for any “news” sites at all, and constrained their results to “non-news” sites. Just suppose, for a moment, that happened.

Would news sites still bitch?

Would various “do gooder” 3rd parties still bitch? …and would they demand Google (and the like) resume their previous activity?

Would traffic to news sites plummet into the abyss (aka, would most people stop going to news sites?)?

Would some new player enter the market for indexing and providing search results for news sites ONLY?

Roll their own

By registrations_suck • Score: 3 Thread

What if the major news sites were to cut Google (and the like) off, preventing them from indexing their news sites, and introduced their own dedicated news search site? If they were successful, what then?

Suppose the new news only search engine were called “NewsCo”, just to have a name to refer to, and run by a jointly owned (by publishers) company of that name.

If advertisers wanted to reach news site readers, they would have to deal with NewsCo.

NewsCo could go ahead and provide AI summaries, not worrying whether people clicked through to publisher sites, since NewsCo would display ads and could apportion revenue based on which AI summaries were being viewed, in addition to which news stories were being clicked through to.

NewsCo would effectively entirely own the ad market for news sites.

NewsCo would of course need a global anti-trust exemption to operate this way. Should they get it? Why or why not?

I’d write more, but wife is calling me for dinner. Please discuss. (:

Linus Torvalds Expresses Frustration With ‘Garbage’ Link Tags In Git Commits

Posted by EditorDavid View on SlashDot Skip
“I have not pulled this, I’m annoyed by having to even look at this, and if you actually expect me to pull this I want a real explanation and not a useless link,” Linus Torvalds posted Friday on the Linux kernel mailing list.

Phoronix explains:
It’s become a common occurrence seeing “Link: ” tags within Git commits for the Linux kernel that point to the latest Linux kernel mailing list patches of the same patch… Linus Torvalds has had enough and will be more strict against accepting pull requests that have link tags of no value. He commented yesterday on a block pull request that he pulled and then backed out of:

“And dammit, this commit has that promising ‘Link:’ argument that I hoped would explain why this pointless commit exists, but AS ALWAYS that link only wasted my time by pointing to the same damn information that was already there. I was hoping that it would point to some oops report or something that would explain why my initial reaction was wrong.

“Stop this garbage already. Stop adding pointless Link arguments that waste people’s time. Add the link if it has *ADDITIONAL* information....

“Yes, I’m grumpy. I feel like my main job — really my only job — is to try to make sense of pull requests, and that’s why I absolutely detest these things that are automatically added and only make my job harder.”
A longer discussion ensued.

Torvalds also had two responses to a poster who’d said “IMHO it’s better to have a Link and it _potentially_ being useful than not to have it and then need to search around for it.”

Torvalds points out he’s brought this up four times before — once in 2022.


Re:Who is this for?

By 93 Escort Wagon • Score: 5, Informative Thread

So if I’m understanding the context correctly… the problem is that the commit messages don’t actually explain what the change does; they just give a URL to the mailing list discussion regarding the bug. E.g., the full commit message is

“https://lkml.org/lkml/2025/9/7/262”

versus

“Update call sites in `task.rs` to import `ARef` and `AlwaysRefCounted` from `sync::aref` instead of `types`. Additional info at https://lkml.org/lkml/2025/9/7…"

(Please note that I just picked a mailing list message at random)

Re:Documentation is a tech skill

By fahrbot-bot • Score: 5, Funny Thread

Or just skip the documentation. If it was hard to write, it should be hard to understand. :-) :-)

Re:Documentation is a tech skill

By fahrbot-bot • Score: 5, Insightful Thread

You have me both laughing and banging my head against the desk… the latter is because I’ve known people who seem to actually think like that.

And documentation isn’t just for others. Wait long enough and you’ll probably need it for your own code — I learned that (way, way back) early on with Perl and regular expressions. I always try to code, document (or do sysadmin work) thinking: what if I get hit by a bus tomorrow and someone else has to take over? Also, “If you don’t have time to do it right the first time, when will you have time the second time?”

Re:unconcerned

By molarmass192 • Score: 5, Insightful Thread

OMG THIS!!! A million times this!!! You have hit on every pet peeve of mine that has become a norm with the advent of the “everyone can code” movement. What used to be an elegant art form of writing bulletproof software to withstand the test of time has devolved into a “we’ll clean it up later” enshittification of the craft. I used to see mind-blowing artisans at work; then came the StackOverflow cut-and-pasters, then the “duct tape 20 packages together” approach to printing a string to stdout, and now the vibe coders with 3000-line PRs of AI slop.

Every sign points to a new COBOL-esque super cycle where graybeards get pulled in to scrape away countless layers of machine-generated “code” in order to get a broken system back on track.

Re:Documentation is a tech skill

By Dragonslicer • Score: 5, Funny Thread
You aren’t a Real Software Engineer until you’ve looked at a piece of code, asked “What fucking idiot wrote this”, and found out it was you.

Scientists Discuss Next Steps to Prevent Dangerous ‘Mirror Life’ Research

Posted by EditorDavid View on SlashDot Skip
USA Today has an update on the curtailing of “mirror life” research:
Kate Adamala had been working on something dangerous. At her synthetic biology lab, Adamala had been taking preliminary steps toward creating a living cell from scratch with one key twist: All the organism’s building blocks would be flipped. Changing these molecules would create an unnatural mirror image of a cell, as different as your right hand from your left. The endeavor was not only a fascinating research challenge, but it also could be used to improve biotechnology and medicine. As Adamala and her colleagues talked with biosecurity experts about the project, however, grave concerns began brewing. “They started to ask questions like, ‘Have you considered what happens if that cell gets released or what would happen if it infected a human?’" said Adamala, an associate professor at the University of Minnesota. They hadn’t.

So researchers brought together dozens of experts in a variety of disciplines from around the globe, including two Nobel laureates, who worked for months to determine the risks of creating “mirror life” and the chances those dangers could be mitigated. Ultimately, they concluded, mirror cells could inflict “unprecedented and irreversible harm” on our world. “We cannot rule out a scenario in which a mirror bacterium acts as an invasive species across many ecosystems, causing pervasive lethal infections in a substantial fraction of plant and animal species, including humans,” the scientists wrote in a paper published in the journal Science in December alongside a 299-page technical report

[Report co-author Vaughn Cooper, a professor at the University of Pittsburgh who studies how bacteria adapt to new environments] said it’s not yet possible to build a cell from scratch, mirror or otherwise, but researchers have begun the process by synthesizing mirror proteins and enzymes. He and his colleagues estimated that given enough resources and manpower, scientists could create a complete mirror bacteria within a decade. But for now, the world is probably safe from mirror cells. Adamala said virtually everyone in the small scientific community that was interested in developing such cells has agreed not to as a result of the findings.

The paper prompted nearly 100 scientists and ethicists from around the world to gather in Paris in June to further discuss the risks of creating mirror organisms. Many felt self-regulation is not enough, according to the institution that hosted the event, and researchers are gearing up to meet again in Manchester, England, and Singapore to discuss next steps.

Re: We would be more dangerous to it.

By RightwingNutjob • Score: 5, Insightful Thread

Not quite. The concern is that they will metabolize the non-chiral building blocks of life into mirror compounds that our metabolic mechanisms can’t break down or clear out. It’s symmetric if there’s parity in numbers. It’s very much asymmetric at the actual starting point where there’s lots of building blocks, lots of us, and a small amount of it.

we’re already doing this

By v1 • Score: 5, Informative Thread

Go check out the artificial sweetener “L-glucose”: it’s glucose, but mirrored. It still tastes sweet, but the body can’t metabolize it.

Re:Even scientists can be morons, apparently.

By Kernel Kurtz • Score: 5, Insightful Thread

Do these people not watch any TV shows? Just screwing around in their lab, apparently not a care in the world, and not once did any of them wonder what would happen if something went wrong.

I recently re-watched Steven Soderbergh’s 2011 film Contagion. The prescience of that movie is mind blowing. It’s like a documentary on the COVID pandemic filmed a decade before it actually happened. Life imitating art in a not good way.

Re: Seriously?

By umopapisdn69 • Score: 5, Informative Thread
Seriously, back at you. As you just quoted, they absolutely DID stop and consider the risks. Early, rather than late. The article describes what would seem a very best practice for such consideration. Openly, transparently, and in concert with many other experts. Sheesh!

Re: We would be more dangerous to it.

By umopapisdn69 • Score: 5, Interesting Thread
I think the far greater risk, that these experts all understood without having to spell it out, is the basic chemistry risks that infected organisms would be unable to defend against reverse-chirality bacteria. Antibodies almost always depend on the chirality of the proteins they are built to recognize. It’s probably chemically impossible for our bodies to produce reverse-chirality antibodies. And similar with a large proportion of antibiotic drugs.

AI Tool Usage ‘Correlates Negatively’ With Performance in CS Class, Estonian Study Finds

Posted by EditorDavid View on SlashDot Skip
How do AI tools impact college students? 231 students in an object-oriented programming class participated in a study at Estonia’s University of Tartu (conducted by an associate professor of informatics and a recently graduated master’s student).
They were asked how frequently they used AI tools and for what purposes. The data were analyzed using descriptive statistics, and Spearman’s rank correlation analysis was performed to examine the strength of the relationships. The results showed that students mainly used AI assistance for solving programming tasks — for example, debugging code and understanding examples. A surprising finding, however, was that more frequent use of chatbots correlated with lower academic results. One possible explanation is that struggling students were more likely to turn to AI. Nevertheless, the finding suggests that unguided use of AI and over-reliance on it may in fact hinder learning.
The researchers say their report provides “quantitative evidence that frequent AI use does not necessarily translate into better academic outcomes in programming courses.”
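For readers unfamiliar with the method, Spearman’s rank correlation compares the rank orderings of two variables rather than their raw values. The toy Python example below uses invented numbers (not the Tartu data) to show how a negative coefficient of this kind is computed:

```python
# Hypothetical illustration of Spearman's rank correlation; the numbers
# are invented and are not the University of Tartu study's data.
from scipy.stats import spearmanr

ai_sessions_per_week = [0, 1, 1, 2, 3, 4, 5, 6, 7, 9]    # self-reported AI use
course_score = [88, 90, 82, 79, 75, 74, 70, 65, 68, 60]  # final grade (%)

rho, p_value = spearmanr(ai_sessions_per_week, course_score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
# A negative rho means higher-usage ranks tend to pair with lower-score
# ranks; as the researchers note, that is correlation, not causation.
```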


Such a surprise

By gweihir • Score: 5, Interesting Thread

I have two personal data points:

1. My IT security students (several different classes and academic institutions) all view AI as a last resort, or something to be used only after they have solved a task, to verify they got it all. This comes from negative experiences they have had. They say AI misses important aspects, prioritizes wrongly, hallucinates (apparently IT security is niche enough that this happens often), and that it generally takes more time to check its results than to come up with things directly. They also dislike that you often do not get references and sources for AI claims.

2. I taught a Python coding class in the 2nd semester for engineering students (they needed a lecturer and I had time). The students there told me that AI can at most be asked to explain one line of code; it routinely failed already at two connected lines, and for anything larger it was completely unusable. They also found that AI was often clueless and hallucinated some crap.

Hence I conclude that average-to-smarter students are well aware of the dangers and keep a safe distance. Below-average ones are struggling anyway and may just try whatever they can get their hands on. And at least here, 30-50% of the initial participants drop out of academic STEM courses because it is too much for them. AI may have the unfortunate effect of having them drop out later, but overall I do not think it will create incompetent graduates. Oh, and I do my exams on paper whenever possible, or online with no AI allowed for coding exams (so they can use the compiler). The latter gets more problematic because of integrated AI tools. I expect we will have to move coding exams to on-site project work or something similar in the near future, have them take a full day or the like, and maybe use group work and pass/fail grading. As I do not teach coding anymore from this year on, I am not involved in any of the respective decisions, though.

Correlation does not mean causation…

By Fons_de_spons • Score: 4, Insightful Thread
Don’t jump to conclusions… Maybe the students that struggle are driven to LLMs to get things done.

Hinges Strongly on “HOW” They Use AI

By Slicker • Score: 5, Informative Thread

Initially, I found the same in myself—a real degradation overall in my productivity. I am a software engineer. It has not been easy learning how to use generative AI to actually increase and improve productivity. At first, you think it can do almost anything for you, but gradually, over time, you realize it greatly over-promises.

Overall, the key is that you need to remain in charge of your work. It is an assistant that can, at best, be entrusted with small tasks under oversight. For example, frame out your project and clearly define your rules and preferences in fine detail. Then:

It’s good at:
- Researching, summarizing, and giving examples of protocols, best practices, etc.
- Help you identify considerations you might have overlooked.
- Writing bits of code where the inputs/outputs and constraints are clearly defined.

It’s bad at:
- Full projects
- Writing anything novel (it knows common patterns and can’t work much beyond them).
- Being honest and competent — it cheats on writing code and on writing tests for that code; when you catch it red-handed, it will weasel its way out.

The bottom line: you are in charge. You need to review what it gives you. You need to work back and forth with it.

Also — I am still learning.

—Matthew

In Other News

By John Allsup • Score: 3 Thread

It turns out that even though you can cover 5 miles quicker in a car, it negatively correlates with health outcomes compared to running or cycling the same distance. Using AI is like taking a taxi.

Re:Such a surprise

By tlhIngan • Score: 4, Insightful Thread

It could be as simple as students using AI where they used to use someone else.

After all, in the “pre-AI” age, students would routinely copy code and other things from other students, and you can tell they did it because their grades generally were worse.

These days, I’m sure that instead of asking other students, they are asking AI, and all we’re seeing is the same thing: the students of the past simply copied one another, while the students of today, rather than copying, ask AI. Just as students of the past used paper-mill sites to write their homework, students now use ChatGPT to do the same.

Of course, I can’t say I am completely innocent of the practice - we were in a group doing a group project. I and someone else smarter than me were working on something extremely complex for the rest of the group while they worked on something much simpler for another class. In the end we figured out the complex work and presented it to the rest of the group so they could learn from our knowledge, and they submitted the group project with all our names on it. I saw what they submitted and studied from it, so I did learn, and the rest of the group learned from us, and we all got good grades in the end. It’s just that if the work had been divided equally among everyone, the two projects would’ve been a mess; it was simpler to split the work the other way.

New In Firefox Nightly Builds: Copilot Chatbot, New Tab Widgets, JPEG-XL Support

Posted by EditorDavid View on SlashDot Skip
The blog OMG Ubuntu notes that Microsoft Copilot chatbot support has been added in the latest Firefox Nightly builds. “Firefox’s sidebar already offers access to popular chatbots, including OpenAI’s ChatGPT, Anthropic’s Claude, Mistral’s Le Chat and Google’s Gemini. It previously offered HuggingChat too.”
As the testing bed for features Mozilla wants to add to stable builds (though not all make it — eh, rounded bottom window corners?), this is something you can expect to find in a future stable update… Copilot in Firefox offers the same features as other chatbots: text prompts, upload files or images, generate images, support for entering voice prompts (for those who fancy their voice patterns being analysed and trained on). And like those other chatbots, there are usage limits, privacy policies, and (for some) account creation needed. In testing, Copilot would only generate half a summary for a webpage, telling me it was too long to produce without me signing in/up for an account.

On a related note, Mozilla has updated stable builds to let third-party chatbots summarise web pages when browsing (an in-app callout alerts users to the ‘new’ feature). Users yet to enable chatbots are subtly nudged to do so each time they right-click on a web page. [Between “Take Screenshot” and “View Page Source” there’s a menu option for “Ask an AI Chatbot.”] Despite making noise about its own (sluggish, but getting faster) on-device AI features that are privacy-orientated, Mozilla is bullish on the need for external chatbots.
The article suggests Firefox wants to keep up with Edge and Chrome (which can “infuse first-party AI features directly”). But it adds that Firefox’s nightly build is also testing some non-AI features, like new task and timer widgets on Firefox’s New Tab page. And “In Firefox Labs, there is an option to enable JPEG XL support, a super-optimised version of JPEG that is gaining traction (despite Google’s intransigence).”


Finally!

By groobly • Score: 5, Insightful Thread

Finally! Even more features I don’t need or want.

Happy

By SlashbotAgent • Score: 3 Thread

Happily, my version 140.2.0esr has none of these things and likely won’t for quite some time.

And yet

By quonset • Score: 5, Insightful Thread

Still no checkbox to disable being harassed by update notifications. Something so simple yet so far away.

There’s also a localhost option

By TuringTest • Score: 4, Interesting Thread

You can turn on a setting in about:config that will add an option to connect to a locally hosted LLM, typically running on your GPU with something like Ollama, LM Studio or Anything LLM.

I would like such an option to be displayed as prominently as the ones connecting to commercial services, but at least it’s there for those in the know.

JPEG XL

By bill_mcgonigle • Score: 3 Thread

JPEG XL is actually pretty cool.

It can replace most non-video image file formats, has smart psychovisual modeling, is fast, and isn’t threatened by Nokia patents.

Somehow I thought for a while that it was basically JPEG 2000, but that was very wrong. It’s much more comprehensive and has a modern pedigree.

Google seems to have NIH flu about it.

https://cloudinary.com/blog/ho…

https://en.m.wikipedia.org/wik…

32% of Senior Developers Say Half Their Shipped Code is AI-Generated

Posted by EditorDavid View on SlashDot Skip
In July, 791 professional coders were surveyed by Fastly about their use of AI coding tools, reports InfoWorld. The results?

“About a third of senior developers (10+ years of experience) say over half their shipped code is AI-generated,” Fastly writes, “nearly two and a half times the rate reported by junior developers (0-2 years of experience), at 13%.”
“AI will bench test code and find errors much faster than a human, repairing them seamlessly. This has been the case many times,” one senior developer said…

Senior developers were also more likely to say they invest time fixing AI-generated code. Just under 30% of seniors reported editing AI output enough to offset most of the time savings, compared to 17% of juniors. Even so, 59% of seniors say AI tools help them ship faster overall, compared to 49% of juniors. Just over 50% of junior developers say AI makes them moderately faster. By contrast, only 39% of more senior developers say the same.

But senior devs are more likely to report significant speed gains: 26% say AI makes them a lot faster, double the 13% of junior devs who agree. One reason for this gap may be that senior developers are simply better equipped to catch and correct AI’s mistakes… Nearly 1 in 3 developers (28%) say they frequently have to fix or edit AI-generated code enough that it offsets most of the time savings. Only 14% say they rarely need to make changes. And yet, over half of developers still feel faster with AI tools like Copilot, Gemini, or Claude.

Fastly’s survey isn’t alone in calling AI productivity gains into question. A recent randomized controlled trial (RCT) of experienced open-source developers found something even more striking: when developers used AI tools, they took 19% longer to complete their tasks. This disconnect may come down to psychology. AI coding often feels smooth… but the early speed gains are often followed by cycles of editing, testing, and reworking that eat into any gains. This pattern is echoed both in conversations we’ve had with Fastly developers and in many of the comments we received in our survey…

Yet, AI still seems to improve developer job satisfaction. Nearly 80% of developers say AI tools make coding more enjoyable… Enjoyment doesn’t equal efficiency, but in a profession wrestling with burnout and backlogs, that morale boost might still count for something.
Fastly quotes one developer who said their AI tool “saves time by using boilerplate code, but it also needs manual fixes for inefficiencies, which keep productivity in check.”

The study also found the practice of green coding “goes up sharply with experience. Just over 56% of junior developers say they actively consider energy use in their work, while nearly 80% among mid- and senior-level engineers consider this when coding.”

AI coding

By phantomfive • Score: 5, Insightful Thread
For me, it has replaced Stack Overflow as a resource. After it gives me a solution, I still have to read the documentation and test it, just like I did with Stack Overflow. ChatGPT is remarkably good as a search engine replacement.

That said, Google’s AI is remarkably bad as a search engine replacement. It’s bad enough that my reflexive reaction is that it is lying to me (a reflex learned unfortunately from experience).

Developers are efficient

By allo • Score: 5, Interesting Thread

Artists may still be debating whether art is defined by a lot of work. Developers have been taking “shortcuts” for decades. Copy & paste code I wrote for another project? Of course! Refactor code into functions to be reusable? Not doing so is a code smell! Using libraries maintained by others? Yes, thank you! Standard snippets, code completion, refactoring features in the IDE? Why not? Using high-level programming languages with a large STL? Yes, please!

Ignoring tools that may make you more efficient would be stupid. You don’t have to use all or use them all the time, but of course people will evaluate when they are an advantage and then use them. 32% sounds like the number one can expect from people using AI as an advantage but not shoehorning it into everything just because. You won’t get to 100% now or any time soon, but a third is what you can achieve from using it for productivity.

Re: AI coding

By liqu1d • Score: 5, Insightful Thread
They have started a bad habit of ignoring what you search for and giving results for what they think you mean. Add in all the SEO/AI spam taking up the top results without answering the questions you have posed, and suddenly the library is looking more appealing again. If you’re lucky enough to still have a local library.

Re:AI coding

By MpVpRb • Score: 5, Interesting Thread

This is how I use AI: to help me find answers.
I recently started using a new embedded processor. It had a 2,000-page datasheet.
I got answers faster by asking Perplexity than I could by manually searching the doc.
I still read the doc and wrote all of the code myself, but the AI helped a lot.
Another area where AI helps is implementing a new function. I ask for sample code, read it, understand it, then write my real code.

Can’t trust dev estimates

By Todd Knarr • Score: 5, Interesting Thread

The problem with this survey is we can’t trust developer estimates of how long it took them or how much time they saved. The METR report and Mike Judge’s write-up show that quite clearly. Talk to me when Fastly includes actual timings of how long developers took to do the job with AI vs. without, showing a statistically significant difference.

Switching Off One Crucial Protein Appears to Reverse Brain Aging in Mice

Posted by EditorDavid View on SlashDot Skip
A research team just discovered older mice have more of the protein FTL1 in their hippocampus, reports ScienceAlert. The hippocampus is the region of the brain involved in memory and learning. And the researchers’ paper says their new data raises “the exciting possibility that the beneficial effects of targeting neuronal ferritin light chain 1 (FTL1) at old age may extend more broadly, beyond cognitive aging, to neurodegenerative disease conditions in older people.”
FTL1 is known to be related to storing iron in the body, but hasn’t come up in relation to brain aging before… To test its involvement after their initial findings, the researchers used genetic editing to overexpress the protein in young mice, and reduce its level in old mice. The results were clear: the younger mice showed signs of impaired memory and learning abilities, as if they were getting old before their time, while in the older mice there were signs of restored cognitive function — some of the brain aging was effectively reversed…

“It is truly a reversal of impairments,” says biomedical scientist Saul Villeda, from the University of California, San Francisco. “It’s much more than merely delaying or preventing symptoms.” Further tests on cells in petri dishes showed how FTL1 stopped neurons from growing properly, with neural wires lacking the branching structures that typically provide links between nerve cells and improve brain connectivity…

“We’re seeing more opportunities to alleviate the worst consequences of old age,” says Villeda. “It’s a hopeful time to be working on the biology of aging.”
The research was led by a team from the University of California, San Francisco — and published in Nature Aging.

Yeah, “exciting”

By Chuck Hamlin • Score: 5, Funny Thread
..until RFK Jr cancels the research.

Re:Different

By Baron_Yam • Score: 5, Insightful Thread

In the shorter term, it doesn’t matter if suppressing this gene is correcting an age-related overexpression or if it is forcing an underexpression to correct for some other age-related failure.

Even if the machinery keeps falling apart and it doesn’t offer a single extra day of life, it’s one less symptom of a failing body you’d have to deal with when you’re older.

Re: Different

By Ol Olsoc • Score: 5, Interesting Thread

Completely agree! It’s better if you can fix the ultimate cause, but if all we do is play whack a mole on the faults or outcomes it’s still progress…

My point was that the author of the FA noted that this suppression of the FTL1 protein may not mean a thing, while the scientist claims it actually rolls back aging. Read TFA, then tell me who has what outlook. Who do you agree with? Here’s how he is quoted: “It is truly a reversal of impairments, it’s much more than merely delaying or preventing symptoms.”

Not a lick of ambiguity there, the problem is solved according to him.

I get these contradictory responses that really have nothing to do with my point. Are you telling me that the quoted scientist is right - we can roll back aging in this case, the problem of dementia is now solved by the suppression of FTL1 protein?

I know that it would be awesome to erase all dementia, no doubt. Investigate FTL1; perhaps suppressing or eliminating its effects will be a great thing; perhaps it has no other use than to cause dementia, and we could maybe genetically engineer humanity to eliminate it altogether so that no one would suffer dementia ever again. But sometimes there is a lot of wishful thinking, and we’ve had a lot of that wishfulness over the years. Do things that do something in Petri dishes and lab mice always translate to humans?

There is an old southern saying “Wish in one hand, and shit in the other, and see which one fills up fastest”.

Re:Yeah, “exciting”

By SNRatio • Score: 5, Interesting Thread
This sort of stuff isn’t in his crosshairs; it’s also catnip for aging billionaires. The real risk will be pressure to approve drugs based on this research that have very little evidence of actually providing a real benefit but do have severe side effects, like the recent crop of Alzheimer’s drugs (Aduhelm, Leqembi, Kisunla).

Re:No distinction between neurotypical and neurodi

By SNRatio • Score: 4, Informative Thread
One of the purposes of using mouse models is to study a problem in the simplest and cheapest scenario. Once a proof of principle is established, it makes sense to explore the boundaries of how far that principle can be extended. At any rate, they ran the experiments in C57BL/6J, C57BL/6 and a B6;129 cross. Which mouse lines do you think they should try in the future? Should they run those experiments before they look at mouse models of Alzheimer’s, FTD, and Parkinson’s?

First AI-Powered ‘Self-Composing’ Ransomware Was Actually Just a University Research Project

Posted by EditorDavid View on SlashDot Skip
Cybersecurity company ESET thought they’d discovered the first AI-powered ransomware in the wild, which they’d dubbed “PromptLock”. But it turned out to be the work of university security researchers…

“Unlike conventional malware, the prototype only requires natural language prompts embedded in the binary,” the researchers write in a research paper, calling it “Ransomware 3.0: Self-Composing and LLM-Orchestrated.” Their prototype “uses the gpt-oss:20b model from OpenAI locally” (using the Ollama API) to “generate malicious Lua scripts on the fly.” Tom’s Hardware said that would help PromptLock evade detection:
If they had to call an API on [OpenAI’s] servers every time they generate one of these scripts, the jig would be up. The pitfalls of vibe coding don’t really apply, either, since the scripts are running on someone else’s system.
The whole thing was actually an experiment by researchers at NYU’s Tandon School of Engineering. So “While it is the first to be AI-powered,” the school said in an announcement, “the ransomware prototype is a proof-of-concept that is non-functional outside of the contained lab environment.”

An NYU spokesperson told Tom’s Hardware a Ransomware 3.0 sample was uploaded to the malware-analysis platform VirusTotal, and then picked up by the ESET researchers by mistake:
But the malware does work: NYU said “a simulated malicious AI system developed by the Tandon team carried out all four phases of ransomware attacks — mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes — across personal computers, enterprise servers, and industrial control systems.” Is that worrisome? Absolutely. But there’s a significant difference between academic researchers demonstrating a proof-of-concept and legitimate hackers using that same technique in real-world attacks. Now the study will likely inspire the ne’er-do-wells to adopt similar approaches, especially since it seems to be remarkably affordable.

“The economic implications reveal how AI could reshape ransomware operations,” the NYU researchers said. “Traditional campaigns require skilled development teams, custom malware creation, and substantial infrastructure investments. The prototype consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 using commercial API services running flagship models.”
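As a quick sanity check, the quoted figures imply a price of roughly $30 per million tokens, which is in the ballpark of flagship commercial APIs. The arithmetic, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the quoted attack economics.
tokens_per_attack = 23_000
cost_per_attack_usd = 0.70     # NYU's estimate using commercial API pricing

implied_usd_per_million_tokens = cost_per_attack_usd / tokens_per_attack * 1_000_000
print(f"~${implied_usd_per_million_tokens:.0f} per million tokens")   # about $30
```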

As if that weren’t enough, the researchers said that “open-source AI models eliminate these costs entirely,” so ransomware operators won’t even have to shell out the 70 cents needed to work with commercial LLM service providers…
“The study serves as an early warning to help defenders prepare countermeasures,” NYU said in an announcement, “before bad actors adopt these AI-powered techniques.”

ESET posted on Mastodon that “Nonetheless, our findings remain valid — the discovered samples represent the first known case of AI-powered ransomware.”

And the ESET researcher who’d mistakenly thought the ransomware was “in the wild” had warned that looking ahead, ransomware “will likely become more sophisticated, faster spreading, and harder to detect.... This makes cybersecurity awareness, regular backups, and stronger digital hygiene more important than ever.”

How Close Are We to Humanoid Robots?

Posted by EditorDavid View on SlashDot Skip
At CES in January, Nvidia’s CEO Jensen Huang “stood flanked by 14 humanoid robots from different companies,” remembers the Washington Post. But how close are we to real-world robot deployments?

Agility Robotics “says its factory is designed to eventually manufacture 10,000 robots a year,” the Post adds (with “some” of its robots “already at work in e-commerce warehouses and auto parts factories.”) Amazon even invested $150 million in the 10-year-old company (spun out from Oregon State University’s robotics lab) in 2022, according to the article, “and has tested the company’s robots in its warehouses.”
The e-commerce revolution has spawned sprawling warehouses across the country where products must be organized and customer orders assembled and shipped, but some human workers have said the repetitive work is low paid and leaves them prone to injury. Agility rents out its robots to warehouse owners it says have struggled to keep their human jobs filled, including logistics company GXO, which uses them at a warehouse for Spanx shapewear in Flowery Branch, Georgia, northeast of Atlanta. The robots pick up baskets of clothing from wheeled robots and walk them over to conveyor belts that take them to other parts of the facility.

Agility Chief Business Officer Daniel Diez said facilities like this represent a first step for humanoid robots into gainful employment. “This work gets paid, and we have eyes on large-scale deployments just doing this, and that’s what we’re focused on,” he said. German auto parts company Schaeffler uses Agility robots to load and unload equipment at a factory in Cheraw, South Carolina. Auto part plants have become a favored proving ground for humanoid robots, with Boston Dynamics, the company famous for its videos of back-flipping robots, doing tests with its majority owner, Hyundai.
But meanwhile, RoboForce makes a robot that has two arms on a base with four wheels, the article notes, “providing stability and making it possible to lift more weight than a bipedal robot.” Humanoid designs make sense “if it is so important to justify the trade-off and sacrifice of other things,” RoboForce CEO Leo Ma tells the Post. “Other than that, there is a great invention called wheels.”

Still, the article argues there’s “a new drive to make humanoid robots practical,” fueled by “the surge of investment in AI” combined with advancements in robotics that “make humanoid designs more capable and affordable.”
Years of steady progress have made legged robots better at balancing and stepping through tricky terrain. Improved batteries allow them to operate for longer without trailing industrial power cords. AI developers are adapting the innovations behind services like ChatGPT to help humanoids act more independently… The progress has triggered a frenzy of investment in humanoid robots and made them into a mascot for the idea that AI will soon reorder the world on the scale tech leaders have promised… Venture capitalists have invested over $5 billion in humanoid robotics start-ups since the beginning of 2024, according to financial data firm Pitchbook, and the largest tech corporations are also placing bets… Meta is working on integrating its own AI technology with humanoid robots, and Google researchers are collaborating with Austin-based humanoid robot start-up Apptronik… A host of humanoid robot companies has spawned in China, the world leader in complex manufacturing, where the government is subsidizing the industry. Six of the 14 robots that shared the stage with Nvidia’s Huang were made by Chinese companies; five were American.
“China’s Unitree sells a 77-pound humanoid that stands 4-foot-3 for $16,000…”

What does it do?

By Todd Knarr • Score: 5, Insightful Thread

What exactly does Agility’s robot do that can’t be done just as easily by a fixed robotic arm with an attachment to grab and hold the baskets? The fixed arm would be cheaper and wouldn’t have battery-life issues, and probably would require less maintenance (fewer moving parts). This sounds like a solution in search of a problem.

We *HAVE* them, they’re just pointless.

By Anonymous Freak • Score: 5, Insightful Thread

They exist now. They’re either small toys or large, horrendously expensive, limited-purpose things.

The problem is that they’re pointless. Anything a humanoid robot can do in an automated manner, a specialized non-humanoid robot could do much cheaper.

I don’t need Rosie the Robot to use my regular stand-up vacuum cleaner. I have a Roomba.

I don’t need a humanoid robot to sit in the driver’s seat of a car to drive me around; Waymo exists.

I don’t need a humanoid robot to stand in a factory using a spray can to paint a car, automated industrial robots that can do tasks like that (or welding) have existed for decades.

As for the “why”?

By bradley13 • Score: 5, Insightful Thread

A lot of comments are just unrelentingly hostile to the idea of humanoid robots. Sure, industrial robots don’t need to be humanoid. Welding together the frame of a car? Packing boxes onto pallets? Special purpose robots rule.

However, there are other use cases. First, any robot that needs to interact in flexible ways in the human world. Open doors? Move around in a room full of furniture? Grasp objects designed for human hands? Look at displays placed at human head-height? Obviously, a humanoid form will be most practical.

Second, robots that are designed to interact with humans in sympathetic ways. To take one of the most obvious use cases: caring for the elderly. That is a hugely demanding and draining task, and it is difficult to find enough people to do it, and to do it well. Perhaps robots can take on some of the load, but in order to be accepted by the patients, they will need to come across as friendly and helpful. That means humanoid.

Re:I don’t want a humanoid, I want my laundry done

By registrations_suck • Score: 5, Funny Thread

Reminds me of the old joke (quick version):

1) Man meets woman in a bar. She’s hot, dressed, and “ready to go”. She tells him, in her best “fuck me now” voice, that her name is Hannah, and she will do anything he wants for $200.

2) He gulps hard, starts to sweat, his hands shaking a bit, and says, “Anything?”

3) She pulls close, her face inches away, and confirms, “Anything! I won’t leave until you’re completely satisfied and can’t take any more.”

4) He says, “Uh... OK... I only live 5 minutes away. Let’s go.”

5) They get in his car and drive to his house. They go inside. Right away, they are greeted by his hot, sexy wife.

6) The woman from the bar sees the wife and is about to panic, wondering how this man will get himself out of this one!

7) To her surprise, the man’s wife greets them warmly. The man tells her, “Sweetie, this is Hannah. She’s going to mow the grass, paint the house, do the dishes and all the laundry, and it’s only going to cost me $200!”

batteries

By gurps_npc • Score: 4, Interesting Thread

For quite some time now, the only thing we have really needed for a humanoid robot is far better batteries.

Large Language Models are currently good enough to fool idiots into thinking they are Artificially Intelligent. We have the physical motors capable of making robot arms and legs move at low strength levels. We have cameras and microphones. We have complex motion sensors that can tell the robot when it is about to fall down. We have software to unite everything.

The main things we are missing are good touch sensors, so they know when they are holding something tightly enough without crushing it, and a battery capable of lasting more than a couple of minutes while running all of this electricity-intensive hardware.

When they talk about other problems, they usually mean “well sure, we can do that already, but our current methods drain the battery”.

That is why immobile robots work well in factories, and why wheeled robots do better than walking ones (wheels use much less battery and do not require things like balance sensors).

Note, this battery issue is also why most sci-fi things do not work. Can’t have nanites take over the world if they run out of power at night. Can’t make super powerful hand held laser weapons if they need to be plugged into a nuclear power plant.
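For a sense of scale, here is a rough runtime estimate; every figure is an assumption for illustration, not a spec for any particular robot:

    # Rough humanoid-robot runtime estimate (all figures are assumptions).
    pack_capacity_kwh = 2.0      # assumed onboard battery pack
    locomotion_draw_w = 400.0    # assumed average draw for legs, arms and actuators
    compute_draw_w = 150.0       # assumed draw for onboard sensing and compute

    total_draw_kw = (locomotion_draw_w + compute_draw_w) / 1000.0
    runtime_hours = pack_capacity_kwh / total_draw_kw
    print(f"Estimated runtime: {runtime_hours:.1f} hours")   # ~3.6 hours under these assumptions

Whatever the exact numbers, the comment’s point holds: heavier actuation and compute eat directly into runtime, and adding battery adds mass the legs then have to carry.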

‘A Very Finnish Thing’: Huge Sand Battery Starts Storing Wind Energy In Soapstone

Posted by EditorDavid View on SlashDot
This week Finland inaugurated the world’s largest sand battery, according to the Independent, “capable of storing vast amounts of energy generated from renewable sources like solar and wind.”

The battery “will enable residents to eliminate oil from their district heating network, thereby cutting emissions by nearly 70%,” notes EuroNews:
Euronews Green previously spoke to the young Finnish founders, Tommi Eronen and Markku Ylönen, who engineered the technology… Lithium batteries work well for specific applications, explains Markku, but aside from their environmental issues and expense, they cannot take in a huge amount of energy. Grains of sand, it turns out, are surprisingly roomy when it comes to energy storage… The sand can store heat at around 500C for several days to even months, providing a valuable store of cheaper energy during the winter… The battery’s thermal energy storage capacity equates to almost one month’s heat demand in summer and a one-week demand in winter in Pornainen, Polar Night Energy says…
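The “roomy” claim is just sensible heat: stored energy is mass times specific heat times temperature swing. Here is a rough per-tonne figure, using an assumed soapstone-like specific heat and an assumed usable swing (neither number comes from Polar Night Energy):

    # Sensible heat per tonne of stone: Q = m * c * dT (assumed inputs).
    mass_kg = 1_000                   # one tonne of sand / crushed soapstone
    specific_heat_kj_per_kg_k = 1.0   # roughly right for soapstone and quartz sand
    delta_t_k = 400                   # assumed usable swing, e.g. ~100 C up to ~500 C

    energy_kj = mass_kg * specific_heat_kj_per_kg_k * delta_t_k
    energy_mwh = energy_kj / 3.6e6    # 1 MWh = 3.6e6 kJ
    print(f"~{energy_mwh:.2f} MWh of heat per tonne")   # ~0.11 MWh/tonne under these assumptions

At that rate, a store in the 100 MWh class needs stone on the order of a thousand tonnes, which is why the storage medium has to be very cheap.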

Polar Night Energy has big ambitions to take its technology worldwide, and is currently in “active discussions” with both Finnish and international partners.
This project (in the Finnish city of Pornainen) “is really important for us because now we can show that this really works,” a spokesperson for Polar Night told Clean Technica:
The profitability of the sand battery is based on charging it according to electricity prices and Fingrid’s reserve markets. Its large storage capacity enables balancing the electricity grid and optimizing consumption over several days or even weeks… “The Pornainen plant can be adjusted quickly and precisely,” explained Jukka-Pekka Salmenkaita, vice president of AI and special projects at Elisa Industriq, “and it also has a remarkably long energy buffer, making it well suited for reserve market optimization. Our AI solution automatically identifies the best times to charge and discharge the Sand Battery and allocates flexibility capacity to the reserve products that need it most. Continuous optimization makes it a genuinely profitable investment.”
Thanks to Slashdot reader AleRunner for sharing the news.

Facts behind it

By Luckyo • Score: 5, Informative Thread

So I read the actual source, rather than all the silly editorials.

https://www.loviisanlampo.fi/b…

Then I followed up on some of the links in it leading to relevant companies.

It’s basically a sponsor-driven local project that seems to be done mostly for PR, environmental-credit, and environmental-promise reasons for the participants. 1 MW hypothetical output, 100 MWh potential storage. Thermal only, intended for district heating. The blog breaks down the project sponsors as follows:

The municipal government has its own net-zero project, so it chipped in. Most of its main buildings are on district heating, so it also has a stake in this working.
The region has the world’s largest (according to them) manufacturer of heat-storing fireplaces (Finnish: varaava takka), and the company that makes them has a lot of stone sand waste from making said fireplaces. They’re providing the stone sand used plus some funding, and this gets them some “circular economy” certifications, which makes their loans and credit lines cheaper in some cases.
The heating company basically says that this will let it stop using some of its thermal peaking stations, so it’s projecting total removal of the oil-based peaker and a significant reduction in the wood-chip-burning peaker. It’s also owned by an environmentally focused investment fund, which is chipping in for the costs.
Finally, they’re getting a subsidy from the government’s business fund.

I also suspect this is partly about the fact that two of Finland’s five nuclear reactors sit in the same municipality this district heating company operates out of, which leads to complexities in running heat peakers because of how the electric grid has to be set up. Many if not most of the heat peakers are dual-use, providing heat as a secondary function of electricity production (i.e. you just add an additional circuit in a typical power plant, where some of the steam is directed into a separate heat exchanger to heat the district heating circuit, the contents of which are then pumped across the district heating network).

Overall, it’s an interesting idea, but it seems like something that can only really be done on a very small scale, and in very specific locations where they can easily source sand from that specific type of rock that is really good at retaining heat, as a waste product of a specific production line. Scaling is a very big question mark, both in the availability of this kind of sand and in just how little heat you can actually get out of it at any one time: 1 MW maximum out of 100 MWh of capacity isn’t great, and they’re claiming very high efficiency (85–90% for smaller units), which seems rather high for what this is. I suspect they’re only giving us the efficiency of some part of the system, rather than the whole thing.
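Two quick numbers put that skepticism in context; the first uses only the 1 MW / 100 MWh figures above, while the second adds an assumed specific heat and temperature swing:

    # 1) Discharge duration: how long can 100 MWh sustain 1 MW of output?
    capacity_mwh = 100.0
    max_output_mw = 1.0
    print(f"{capacity_mwh / max_output_mw:.0f} hours at full output")   # ~100 hours, i.e. a multi-day buffer

    # 2) Stone required for 100 MWh of sensible heat (assumed inputs).
    specific_heat_kj_per_kg_k = 1.0   # roughly right for soapstone / sand
    delta_t_k = 300                   # assumed usable temperature swing
    mass_tonnes = capacity_mwh * 3.6e6 / (specific_heat_kj_per_kg_k * delta_t_k) / 1_000
    print(f"~{mass_tonnes:.0f} tonnes of stone")   # ~1,200 tonnes under these assumptions

So this is a slow, multi-day heat buffer rather than a fast grid asset, which is consistent with the comment’s point that the 1 MW output is the limiting factor.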

Re:Heat?

By jenningsthecat • Score: 5, Informative Thread

Heat is useful in Finland’s metropolitan heating systems, no doubt, but I wonder how they convert that heat into electricity, which is something a chemical battery doesn’t need to do. They clearly have chosen a solution, since there are surely many… I’m just too lazy to look it up.

I RTFA, and I’m pretty sure that they don’t convert the heat back to electricity. I think that they use it for home heating - including hot water for showers, laundry, etc. - and to provide some heat for industry.

If I’m correct, the “optimizing consumption over several days or even weeks” is entirely about converting electricity to heat when supply is high and demand is otherwise low, allowing the stored heat to be used directly as needed rather than converting it back into electricity.

So it works in a cold climate like Finland’s, but probably wouldn’t work so well in warmer places. Unless, of course, there’s industry that could use that heat to reduce the percentage of heat generated using sources which emit CO2.

Re:Facts behind it

By Firethorn • Score: 4, Informative Thread

(i.e. you just add an additional circuit in a typical power plant, where some of the steam is directed into a separate heat exchanger to heat the district heating circuit, the contents of which are then pumped across the district heating network).

Depending on how the heating system is set up, the steam is typically used for heating AFTER passing through the turbine. Basically, rather than going through a condenser that dumps the heat into the air or a water source like a river, or evaporates water the way nuclear power plant cooling towers do, it’s used for district heating.

High-grade dry steam is used for electricity generation, low-grade wet steam for district heating. Done that way, the heat is practically free except for the infrastructure to utilize it. Basically, it allows the plant to produce electricity at the ~50–60% efficiency it can manage, while the overall fuel utilization reaches roughly the 90% that high-efficiency heating furnaces achieve.

Now, where I was up in Alaska, heating demands often exceeded electricity production, so they did indeed have the ability to use the high-grade steam for heating too, but it’s all about the ratios, I guess.
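The “effectively 90%” figure is just an energy balance; here is a sketch using the parent’s ~50% electrical efficiency and an assumed share of the waste heat actually recovered for district heating:

    # Combined-heat-and-power energy balance (illustrative numbers only).
    fuel_heat_mwh = 100.0         # heat released by the fuel
    electric_eff = 0.50           # the parent's "~50-60%" figure, taken as given
    heat_recovery_frac = 0.80     # assumed share of the remaining heat usable for district heating

    electricity = fuel_heat_mwh * electric_eff
    district_heat = (fuel_heat_mwh - electricity) * heat_recovery_frac
    utilization = (electricity + district_heat) / fuel_heat_mwh
    print(f"Electricity: {electricity:.0f} MWh, heat: {district_heat:.0f} MWh, "
          f"overall utilization: {utilization:.0%}")   # 90% with these inputs

The recovered heat is “practically free” in the sense that, without district heating, it would have gone up the cooling tower anyway.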

Re:Heat?

By AmiMoJo • Score: 4, Informative Thread

Indeed, MacMann will be getting a boner as he reads this because it makes inflexible nuclear plants a bit less redundant on a highly flexible grid, as when nobody wants their expensive energy they can dump it into heating sand.

Re:It rings a bell…

By serviscope_minor • Score: 4, Interesting Thread

I haven’t read that (or don’t remember it), but it would not be unreasonable. Probably you mean cast, though. Either way, I gather that kind of thing (slow cooling) is important in large castings, since you want the stresses to equalize as the piece comes down to room temperature. That’s probably more a case of insulating it very carefully so it cools slowly.