Alterslash

the unofficial Slashdot digest
 

Contents

  1. Simulation of Crashed Boeing 787 Put Focus on a Technical Flaw
  2. Drones Used by California Cities to Patrol for Illegal Fireworks and Issue Fines
  3. Is China Quickly Eroding America’s Lead in the Global AI Race?
  4. The FSF Faces Active ‘Ongoing and Increasing’ DDoS Attacks
  5. Interstellar Navigation Demonstrated for the First Time With NASA’s ‘New Horizons’
  6. Police Department Apologizes for Sharing AI-Doctored Evidence Photo on Social Media
  7. These Tiny Lasers Are Completely Edible
  8. Diffusion + Coding = DiffuCode. How Apple Released a Weirdly Interesting Coding Language Model
  9. ‘Vibe Coder’ Who Doesn’t Know How to Code Keeps Winning Hackathons in San Francisco
  10. Tesla Launches Solar-Powered ‘Oasis’ Supercharger Station: 30-Acre Solar Farm, 39 MWh of Off-Grid Batteries
  11. How Do You Teach Computer Science in the Age of AI?
  12. KDE Plasma 6.4 Has Landed in OpenBSD
  13. UK Scientists Achieve First Commercial Tritium Production
  14. Microsoft Open Sources Copilot Chat for VS Code on GitHub
  15. A Common Assumption About Aging May Be Wrong, Study Suggests

Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.

Simulation of Crashed Boeing 787 Put Focus on a Technical Flaw

Posted by EditorDavid View on SlashDot Skip
Investigators of a deadly Boeing 787 crash “are studying possible dual engine failure as a scenario that prevented the Boeing Co. 787 jet from staying airborne,” reports Bloomberg:
Pilots from the airline reenacted the doomed aircraft’s parameters in a flight simulator, including with the landing gear deployed and the wing flaps retracted, and found those settings alone didn’t cause a crash, according to people familiar with the investigation. [Also, analysis of the wreckage “suggests the wing flaps and slats, which help an aircraft increase lift during takeoff, were extended correctly.”]

The result, alongside the previous discovery that an emergency-power turbine deployed seconds before impact, has reinforced the focus on a technical failure as one possible cause, said the people, who asked not to be identified discussing nonpublic deliberations… [The turbine deploys “in the case of electrical failure,” the article points out, and “was activated before the plane crashed, according to previous findings. That fan helps provide the aircraft with vital power, though it’s far too small to generate any lift.”]

Pilots who reviewed the footage have pointed to the fact that the landing gear was already partially tilted forward, suggesting the cockpit crew had initiated the retraction sequence of the wheels. At the same time, the landing-gear doors had not opened, which pilots say might mean that the aircraft experienced a loss of power or a hydraulic failure — again pointing to possible issues with the engines that provide the aircraft’s electricity.

Drones Used by California Cities to Patrol for Illegal Fireworks and Issue Fines

Posted by EditorDavid View on SlashDot Skip
“California residents who lit illegal fireworks over the July 4 holiday may be in for a nasty surprise in the mail thanks to covert fire department operations,” reports SFGate.

“A number of California cities, including Sacramento, have begun using drones to locate people shooting off illegal fireworks.”
From Wednesday to Saturday night, the Sacramento Fire Department’s special fireworks task force patrolled the streets with unmarked cars and drones, focusing on neighborhoods where they’ve had prior complaints. Task force officers and the drones took photos of the illegal activity, and within 30 days the property owner where the fireworks were used could receive a fine in the mail…

This year, Sacramento upped the fine to $1,000 for the first firework, $2,500 for the second and $5,000 per firework after that. If you lit a firework on city property, such as a park or a school, the fine goes up to $10,000 each. There’s no limit to how many fines you can be issued… This year, a number of cities across the state announced they would be using drones to find scofflaws, among them Indio, Riverside, Hemet, Brea and towns in Tulare County…
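For illustration only, the tiered schedule works out as in this small Python sketch (the dollar amounts are the ones reported above; the function itself is hypothetical):

```python
def sacramento_fireworks_fine(num_fireworks: int, on_city_property: bool = False) -> int:
    """Tiered schedule as reported: $1,000 for the first firework, $2,500 for the
    second, $5,000 for each one after that; $10,000 each on city property."""
    if on_city_property:
        return 10_000 * num_fireworks
    tiers = [1_000, 2_500]
    return sum(tiers[i] if i < len(tiers) else 5_000 for i in range(num_fireworks))

print(sacramento_fireworks_fine(5))        # 18500  (= 1000 + 2500 + 3 * 5000)
print(sacramento_fireworks_fine(2, True))  # 20000  (city property, $10,000 each)
```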

Fox40 reported on Saturday that around 60 citations were being prepared in Sacramento, with more likely on the way as fire officials review surveillance footage.
Last year, one Sacramento-area resident received a $100,000 fine for illegal fireworks.

Is China Quickly Eroding America’s Lead in the Global AI Race?

Posted by EditorDavid View on SlashDot Skip
China “is pouring money into building an AI supply chain with as little reliance on the U.S. as possible,” reports the Wall Street Journal.

And now Chinese AI companies “are loosening the U.S.‘s global stranglehold on AI,” reports the Wall Street Journal, “challenging American superiority and setting the stage for a global arms race in the technology.”
In Europe, the Middle East, Africa and Asia, users ranging from multinational banks to public universities are turning to large language models from Chinese companies such as startup DeepSeek and e-commerce giant Alibaba as alternatives to American offerings such as ChatGPT… Saudi Aramco, the world’s largest oil company, recently installed DeepSeek in its main data center. Even major American cloud service providers such as Amazon Web Services, Microsoft and Google offer DeepSeek to customers, despite the White House banning use of the company’s app on some government devices over data-security concerns.

OpenAI’s ChatGPT remains the world’s predominant AI consumer chatbot, with 910 million global downloads compared with DeepSeek’s 125 million, figures from researcher Sensor Tower show. American AI is widely seen as the industry’s gold standard, thanks to advantages in computing semiconductors, cutting-edge research and access to financial capital. But as in many other industries, Chinese companies have started to snatch customers by offering performance that is nearly as good at vastly lower prices. A study of global competitiveness in critical technologies released in early June by researchers at Harvard University found China has advantages in two key building blocks of AI, data and human capital, that are helping it keep pace…

Leading Chinese AI companies — which include Tencent and Baidu — further benefit from releasing their AI models open-source, meaning users are free to tweak them for their own purposes. That encourages developers and companies globally to adopt them. Analysts say it could also pressure U.S. rivals such as OpenAI and Anthropic to justify keeping their models private and the premiums they charge for their service… On Latenode, a Cyprus-based platform that helps global businesses build custom AI tools for tasks including creating social-media and marketing content, as many as one in five users globally now opt for DeepSeek’s model, according to co-founder Oleg Zankov. “DeepSeek is overall the same quality but 17 times cheaper,” Zankov said, which makes it particularly appealing for clients in places such as Chile and Brazil, where money and computing power aren’t as plentiful…

The less dominant American AI companies are, the less power the U.S. will have to set global standards for how the technology should be used, industry analysts say. That opens the door for Beijing to use Chinese models as a Trojan horse for disseminating information that reflects its preferred view of the world, some warn.... The U.S. also risks losing insight into China’s ambitions and AI innovations, according to Ritwik Gupta, AI policy fellow at the University of California, Berkeley. “If they are dependent on the global ecosystem, then we can govern it,” said Gupta. “If not, China is going to do what it is going to do, and we won’t have visibility.”
The article also warns of other potential issues:

Re:AI ‘race’

By RossCWilliams • Score: 5, Insightful Thread
“China” is doing both. This is a WSJ article. Its sources have some purpose in creating alarm at China’s progress in AI. It’s not really clear who that message serves, so identifying the likely sources is hard to do. But chances are pretty good that it’s about money.

Not if but when

By marcle • Score: 5, Insightful Thread

Whether or not China “wins” the “AI race” (whatever that means) in the short term, our cuts to science and education will ensure that China surpasses us technically in the long term.

Actually, you might want to root for China

By oumuamua • Score: 4, Interesting Thread
What is the worst a ‘communist AI’ could do - provide people with food and shelter?
Meanwhile, wise words from Andrew Yang whose Forward Party may merge with Musk’s American party:

Anyone who’s kept up with me over the last number of years knows that I’ve been driven by the fact that AI is going to transform our economy in ways that push more and more Americans to the side. That is playing out before our eyes right now in real time, with [Anthropic CEO] Dario Amodei coming out saying that entry-level white collar work is going to be automated, and that we need to think bigger about solutions. I think that Dario is right. I’ve been making the same case since 2019, 2018. I’d ask anyone who is reading this right now, “What is the current plan when it comes to the economic changes that are going to be brought by AI?” The answer is, “Not much.” Because our current political class does not have to address that issue, or any of a panoply of other issues in order to keep power. They have done an expert job of gerrymandering the country into red zones and blue zones, such that all of us are looking up, wondering, “What the heck is going on?”

https://www.politico.com/news/…

Re:Well… no

By HiThere • Score: 5, Insightful Thread

Well, advanced lithography equipment isn’t easy to make, so it’s not surprising they’re having problems. If they solve those problems it will be a permanent benefit to them.

Also, there’s no particular reason to believe that “the AI bubble” will pop. Certainly parts of it will, but other parts are already solid successes. The rest is “work in progress”, which, of course, may fail…but the odds are that large portions will succeed. (Much of the stuff that’s “not ready for prime time” is just being pushed out too quickly, before the bugs have been squashed.)

How to say you don’t read the news …

By ihadafivedigituid • Score: 5, Informative Thread
… without saying you don’t read the news.

China deployed more PV solar last year than the USA has in total. BYD is killing competition anywhere they are allowed to compete (e.g. Australia), and not just because of price. Chip embargoes have accelerated China’s homegrown silicon R&D. You cannot imagine the amount of rail they have built. They are kicking ass. There is a lot of waste and corruption, of course, but the numbers speak for themselves.

The FSF Faces Active ‘Ongoing and Increasing’ DDoS Attacks

Posted by EditorDavid View on SlashDot Skip
The Free Software Foundation’s services face “ongoing (and increasing) distributed denial of service (DDoS) attacks,” senior systems administrator Ian Kelling wrote Wednesday. But “Even though we are under active attack, gnu.org, ftp.gnu.org, and savannah.gnu.org are up with normal response times at the moment, and have been for the majority of this week, largely thanks to hard work from the Savannah hackers Bob, Corwin, and Luke who’ve helped us, your sysadmins.”

“We’ve shielded these sites for almost a full year of intense attacks now, and we’ll keep on fighting these attacks for as long as they continue.”
Our infrastructure has been under attack since August 2024. Large Language Model (LLM) web crawlers have been a significant source of the attacks, and as for the rest, we don’t expect to ever know what kind of entity is targeting our sites or why.

- In the fall Bulletin, we wrote about the August attack on gnu.org. That attack continues, but we have mitigated it. Judging from the pattern and scope, the goal was likely to take the site down and it was not an LLM crawler. We do not know who or what is behind the attack, but since then, we have had more attacks with even higher severity.

- To begin with, GNU Savannah, the FSF’s collaborative software development system, was hit by a massive botnet controlling about five million IPs starting in January. As of this writing, the attack is still ongoing, but the botnet’s current iteration is mitigated. The goal is likely to build an LLM training dataset. We do not know who or what is behind this.

- Furthermore, gnu.org and ftp.gnu.org were targets in a new DDoS attack starting on May 27, 2025. Its goal seems to be to take the site down. It is currently mitigated. It has had several iterations, and each has caused some hours of downtime while we figured out how to defend ourselves against it. Here again, the goal was likely to take our sites down and we do not know who or what is behind this.

- In addition, directory.fsf.org, the server behind the Free Software Directory, has been under attack since June 18. This likely is an LLM scraper designed to specifically target Media Wiki sites with a botnet. This attack is very active and now partially mitigated…

The full-time FSF tech staff is just two systems administrators, “and we currently lack the funds to hire more tech staff any time soon,” Kelling points out. Kelling titled his post “our small team vs millions of bots,” suggesting that supporters purchase FSF memberships “to improve our staffing situation… Can you join us in our crucial work to guard user freedom and defy dystopia?”

Kelling also points out they’re also facing “run-of-the-mill standard crawlers, SEO crawlers, crawlers pretending to be normal users, crawlers pretending to be other crawlers, uptime systems, vulnerability scanners, carrier-grade network address translation, VPNs, and normal browsers hitting our sites…”

“Some of the abuse is not unique to us, and it seems that the health of the web has some serious problems right now.”

do not understand

By Deadbolt • Score: 4, Interesting Thread

they could deploy anubis — which is free software! — and put an immediate stop to 99% of the problematic crawlers, but they’ve decided it violates their principles because it does computations the user doesn’t want and is therefore malware.

I guess TLS negotiation is also malware?

even more reason to not donate to them if they’re going to burn it on running two sysadmins ragged when a free software solution already exists.

Re:How about using what other sites use

By AleRunner • Score: 5, Insightful Thread

Crowdflare, CAPTCHAs, … ?

Proprietary software running proprietary services. Much of the good stuff we have today exists only because people involved in the FSF refused to use proprietary software. We see from the recent changes to the AOSP how completely dangerous the decision to rely on proprietary drivers and blobs has been. Let’s be grateful for the method in the madness.

Re:do not understand

By EditorDavid • Score: 5, Informative Thread
> they could deploy anubis

The Free Software Foundation’s position (from the linked-to article)…

“Anubis makes the website send out a free JavaScript program that acts like malware. A website using Anubis will respond to a request for a webpage with a free JavaScript program and not the page that was requested. If you run the JavaScript program sent through Anubis, it will do some useless computations on random numbers and keep one CPU entirely busy. It could take less than a second or over a minute. When it is done, it sends the computation results back to the website. The website will verify that the useless computation was done by looking at the results and only then give access to the originally requested page.

“At the FSF, we do not support this scheme because it conflicts with the principles of software freedom. The Anubis JavaScript program’s calculations are the same kind of calculations done by crypto-currency mining programs. A program which does calculations that a user does not want done is a form of malware. Proprietary software is often malware, and people often run it not because they want to, but because they have been pressured into it. If we made our website use Anubis, we would be pressuring users into running malware. Even though it is free software, it is part of a scheme that is far too similar to proprietary software to be acceptable. We want users to control their own computing and to have autonomy, independence, and freedom. With your support, we can continue to put these principles into practice.”
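For context, what the FSF is describing is a browser-side proof-of-work gate: the server hands out a challenge, the visitor’s CPU grinds until it finds an answer, and the server verifies the answer cheaply before serving the page. Below is a generic hash-based sketch of that pattern in Python; it is not Anubis’s actual code, and the difficulty value and function names are invented for illustration.

```python
import hashlib
import os
from itertools import count

DIFFICULTY = 4  # hypothetical: required number of leading zero hex digits

def issue_challenge() -> str:
    """Server side: hand the visitor a random challenge string."""
    return os.urandom(16).hex()

def solve_challenge(challenge: str) -> int:
    """Client side: burn CPU until a nonce makes the hash meet the difficulty.

    This is the 'useless computation' the FSF objects to: the work has no
    value beyond proving the client spent CPU time.
    """
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce

def verify(challenge: str, nonce: int) -> bool:
    """Server side: one cheap hash confirms the expensive work was done."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

challenge = issue_challenge()
nonce = solve_challenge(challenge)   # slow for the client (or crawler)
assert verify(challenge, nonce)      # fast for the server
```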

Interstellar Navigation Demonstrated for the First Time With NASA’s ‘New Horizons’

Posted by EditorDavid View on SlashDot Skip
Three space probes are leaving our solar system — yet are still functioning. After the two Voyager space probes, New Horizons “was launched in 2006, initially to study Pluto,” remembers New Scientist. But “it has since travelled way beyond this point, ploughing on through the Kuiper belt, a vast, wide band of rocks and dust billions of miles from the sun. It is now speeding at tens of thousands of kilometres per hour…”

And it’s just performed the first ever example of interstellar navigation…
As it hurtles out of our solar system, NASA’s New Horizons spacecraft is so far from Earth that the stars in the Milky Way appear in markedly different positions compared with our own view… due to the parallax effect. This was demonstrated in 2020 when the probe beamed back pictures of two nearby stars, Proxima Centauri and Wolf 359, to Earth.

Now, Tod Lauer at the US National Optical-Infrared Astronomy Research Laboratory in Arizona and his colleagues have used this effect to work out the position of New Horizons… Almost all spacecraft calculate their bearings to within tens of metres using NASA’s Deep Space Network, a collection of radio transmitters on Earth that send regular signals out to space. In comparison, the parallax method was far less accurate, locating New Horizons within a sphere with a radius of 60 million kilometres, about half the distance between Earth and the sun. “We’re not going to put the Deep Space Network out of business — this is only a demo proof of concept,” says Lauer. However, with a better camera and equipment they could improve the accuracy by up to 100 times, he says.

Using this technique for interstellar navigation could offer advantages over the DSN because it could give more accurate location readings as a spacecraft gets further away from Earth, as well as being able to operate autonomously without needing to wait for a radio signal to come from our solar system, says Massimiliano Vasile at the University of Strathclyde, UK. “If you travel to an actual star, we are talking about light years,” says Vasile. “What happens is that your signal from the Deep Space Network has to travel all the way there and then all the way back, and it’s travelling at the speed of light, so it takes years.”
Just like a ship’s captain sailing by the stars, “We have a good enough three-dimensional map of the galaxy around us that you can find out where you are,” Lauer says.

So even when limiting your navigation to what’s on-board the spacecraft, “It’s a remarkable accuracy, with your own camera!”
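Behind the demo is plain triangulation: each imaged star fixes a line of sight from the spacecraft, and with a 3D catalog of star positions the spacecraft sits where those lines nearly intersect. Here is a minimal least-squares sketch in Python/NumPy with made-up coordinates, purely to illustrate the geometry rather than reproduce the New Horizons analysis.

```python
import numpy as np

def locate_from_sightlines(star_positions, directions):
    """Least-squares point closest to all sight lines.

    star_positions: (N, 3) known 3D positions of reference stars (same frame/units).
    directions:     (N, 3) unit vectors from the spacecraft toward each star,
                    as measured from its on-board images.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(star_positions, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to the line
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Toy example: a "true" spacecraft position and two hypothetical catalog stars
truth = np.array([60.0, -10.0, 5.0])
stars = np.array([[270_000.0, 40_000.0, 10_000.0],
                  [-120_000.0, 500_000.0, -80_000.0]])
dirs = stars - truth
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

print(locate_from_sightlines(stars, dirs))   # recovers ~ [60, -10, 5]
```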

Re:Proof of concept is one thing

By nevermindme • Score: 4, Interesting Thread
We ran this as a thought experiment, with everyone from economists to physicists in the room, back in the 1990s. With a “Mr. Fusion” technology, something 100 times to 1M times bigger than the International Space Station on a 50-year cruise at .05C is very technically feasible at $10/kg (2025 dollars) to LEO. The enemies of the mission are boredom, generations becoming too dumb to make repairs, and plain water loss that lets crops fail in a way that makes the seed bank useless. The real problem is the communication from Earth that the next mission is blowing past them at .3C about 2/3 of the way into the trip. The first interstellar travelers launched will probably land on a world with a greeting party of fellow earthlings from the 5th or 6th follow-up mission, one that was 10,000x bigger and had a gravity ring with a dude ranch and cattle. Scifi counters these risks with people in deep sleep, but what super-brilliant 20-year-old is voluntarily going to sign their life away, plus their kids and grandkids, for what could be minivan-sized living quarters for the next 100 years?

Just send 1M robotic space probes to 5k targets at .1C and tell the AI to look for the trash piles and dumpster fires before even bothering to send back a signal burst. My expectation is that all civilizations drift to a common mean of “not my problem for this generation.”

Police Department Apologizes for Sharing AI-Doctored Evidence Photo on Social Media

Posted by EditorDavid View on SlashDot Skip
A Maine police department has now acknowledged “it inadvertently shared an AI-altered photo of drug evidence on social media,” reports Boston.com:
The image from the Westbrook Police Department showed a collection of drug paraphernalia purportedly seized during a recent drug bust on Brackett Street, including a scale and white powder in plastic bags. According to Westbrook police, an officer involved in the arrests snapped the evidence photo and used a photo editing app to insert the department’s patch. “The patch was added, and the photograph with the patch was sent to one of our Facebook administrators, who posted it,” the department explained in a post. “Unbeknownst to anyone, when the app added the patch, it altered the packaging and some of the other attributes on the photograph. None of us caught it or realized it.”

It wasn’t long before the edited image’s gibberish text and hazy edges drew criticism from social media users. According to the Portland Press Herald, Westbrook police initially denied AI had been used to generate the photo before eventually confirming its use of the AI chatbot ChatGPT. The department issued a public apology Tuesday, sharing a side-by-side comparison of the original and edited images.

“It was never our intent to alter the image of the evidence,” the department’s post read. “We never realized that using a photoshop app to add our logo would alter a photograph so substantially.”

The core of the problem

By JamesTRexx • Score: 5, Insightful Thread

What I’m reading here is that a program altered data without explicit permission from the user, and without notification. Even this “feature” was apparently unknown.

That is what I call something to worry about. Not quite unexpected, but still fuel for paranoia.

unnecessary

By Bahbus • Score: 5, Insightful Thread

The police shouldn’t be altering or editing evidence photos in any way, shape, or form. There is no reason to add the department logo.

Re:The core of the problem

By Scutter • Score: 5, Insightful Thread

The core of the problem was that they initially denied that they did anything at all. It wasn’t until after they were pressured to actually investigate that they finally admitted they doctored the photo. In other words, they straight up lied right from the get-go. Their default position was to lie about it.

Auto Beautify

By Chelloveck • Score: 5, Insightful Thread

The article has a comparison of the photo before and after. The department logo was added, and it looks like an “auto beautify” or clean-up pass is what made all the AI artifacts. So far so good.

But they *also* removed some items and tried to disguise the fact that they were removed. It looks like some stickers (maybe?) reading “COOKIES” were edited out. That may have been by an AI “remove this” sort of feature or it may have been by hand. Either way it’s a pretty poor job. I could freehand the replacement background (a yellow sticky pad) better than they did. Also, a rubber band was removed for no obvious reason. I can imagine reasons why they might want to remove the stickers, but removal of something as innocuous as a rubber band is baffling. Especially because the rubber band goes in front of one item and behind another translucent item, which means it at least takes some effort to remove. Why bother?

I’m willing to give the police the benefit of the doubt and say that the AI artifacts were unintended. I don’t know if they’re the result of the sticker removal or if they were put there by a separate auto-beautify feature but I don’t think there was any malice intended.

I’m less willing to forgive the sticker removal. I don’t know why they were removed, but it should have been done with a black “REDACTED” box so the viewer knows that the image has been modified.

IMHO (and IANAL), any changes at all should be obvious. The department logo should be in an inset box or clearly an overlaid watermark; as it is it looks like it might have been a physical plaque on the wall. Guys, this is an evidence photo. Even though it’s (probably) not intended for use in court, you have no business modifying it. Adding a logo or making redactions is fine, as long as it’s obvious they’re not actually part of the photo. Otherwise, keep your grubby little hands off! If for no other reason than it gives the impression that you’re being dishonest.

Re: never attribute to malice…

By Firethorn • Score: 5, Insightful Thread

The original photo is evidence; it was still intact. The edited photo with the police badge watermark was to be a publicity tool, not evidence.

Though I’ll state that you don’t even need layers for this - just open the .jpg or whatever you got from the evidence in an email or whatever in paint, save as a new file, paste in the watermark, save again.

These Tiny Lasers Are Completely Edible

Posted by EditorDavid View on SlashDot Skip
“Scientists have created the first lasers made entirely from edible materials,” reports Science magazine, “which could someday help monitor and track the properties of foods and medications with sensors that can be harmlessly swallowed.”
[The researchers’ report] shows that tiny droplets of everyday cooking oils can act like echo chambers of light, otherwise known as lasers. By providing the right amount of energy to an atom, the atom’s electrons will excite to a higher energy level and then relax, releasing a photon of light in the process. Trap a cloud of atoms in a house of mirrors and blast them with the right amount of energy, and the light emitted by one excited atom will stimulate one of its neighbors, amplifying the atoms’ collective glow…

[The researchers] shot purple light at droplets of olive oil, whose surfaces can keep photons of light bouncing around, trapping them in the process. This reflected light excited the electrons in the oil’s chlorophyll molecules, causing them to emit photons that triggered the glow of other chlorophyll molecules — transforming the droplet into a laser. The energy of the chlorophyll’s radiation depends on the oil droplets’ size, density, and other properties. The study’s authors suggest this sensitivity can be exploited to track different properties of food or pharmaceutical products.

When researchers added oil droplets to foods and then measured changes in the laser light the droplets emitted, they could reliably infer the foods’ sugar concentration, acidity, exposure to high temperatures, and growth of microorganisms. They also used the lasers to encode information, with droplets of different diameters functioning like the lines of a barcode. By mixing in sunflower oil droplets of seven specific sizes — all less than 100 microns wide — the researchers encoded a date directly into peach compote: 26 April, 2017, the first international Stop Food Waste Day.
Thanks to long-time Slashdot reader sciencehabit for sharing the news.

Diffusion + Coding = DiffuCode. How Apple Released a Weirdly Interesting Coding Language Model

Posted by EditorDavid View on SlashDot Skip
“Apple quietly dropped a new AI model on Hugging Face with an interesting twist,” writes 9to5Mac. “Instead of writing code like traditional LLMs generate text (left to right, top to bottom), it can also write out of order, and improve multiple chunks at once.”

“The result is faster code generation, at a performance that rivals top open-source coding models.”
Traditionally, most LLMs have been autoregressive. This means that when you ask them something, they process your entire question, predict the first token of the answer, reprocess the entire question with the first token, predict the second token, and so on. This makes them generate text like most of us read: left to right, top to bottom… An alternative to autoregressive models is diffusion models, which have been more often used by image models like Stable Diffusion. In a nutshell, the model starts with a fuzzy, noisy image, and it iteratively removes the noise while keeping the user request in mind, steering it towards something that looks more and more like what the user requested…

Lately, some large language models have looked to the diffusion architecture to generate text, and the results have been pretty promising… This behavior is especially useful for programming, where global structure matters more than linear token prediction… [Apple] released an open-source model called DiffuCode-7B-cpGRPO, that builds on top of a paper called DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation, released just last month… [W]ith an extra training step called coupled-GRPO, it learned to generate higher-quality code with fewer passes. The result? Code that’s faster to generate, globally coherent, and competitive with some of the best open-source programming models out there.
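To make the contrast concrete, here is a toy Python sketch of the two decoding loops described above: left-to-right autoregressive generation versus diffusion-style iterative unmasking that fills several positions per pass. The “model” is a stand-in function; this illustrates the idea only, not DiffuCode itself or the coupled-GRPO training step.

```python
import random

def toy_predict(tokens, position):
    """Stand-in for a real model: pick a token for `position` given the context."""
    vocab = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "a+b"]
    return vocab[position % len(vocab)]

def autoregressive_decode(length):
    """Left to right: one new token per step, each conditioned on the prefix."""
    tokens = []
    for i in range(length):
        tokens.append(toy_predict(tokens, i))   # one pass per token
    return tokens

def diffusion_decode(length, steps=4):
    """Start fully masked; each step fills in several positions, out of order."""
    tokens = ["<mask>"] * length
    for _ in range(steps):
        masked = [i for i, t in enumerate(tokens) if t == "<mask>"]
        if not masked:
            break
        for i in random.sample(masked, k=max(1, len(masked) // 2)):
            tokens[i] = toy_predict(tokens, i)
    for i, t in enumerate(tokens):               # resolve anything still masked
        if t == "<mask>":
            tokens[i] = toy_predict(tokens, i)
    return tokens

print(" ".join(autoregressive_decode(10)))
print(" ".join(diffusion_decode(10)))
```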

Even more interestingly, Apple’s model is built on top of Qwen2.5-7B, an open-source foundation model from Alibaba. Alibaba first fine-tuned that model for better code generation (as Qwen2.5-Coder-7B), then Apple took it and made its own adjustments. They turned it into a new model with a diffusion-based decoder, as described in the DiffuCoder paper, and then adjusted it again to better follow instructions. Once that was done, they trained yet another version of it using more than 20,000 carefully picked coding examples.
“Although DiffuCoder did better than many diffusion-based coding models (and that was before the 4.4% bump from DiffuCoder-7B-cpGRPO), it still doesn’t quite reach the level of GPT-4 or Gemini Diffusion…” the article points out.

But “the bigger point is this: little by little, Apple has been laying the groundwork for its generative AI efforts with some pretty interesting and novel ideas.”

Yes, but…

By RogueWarrior65 • Score: 3 Thread

It generates results in Objective C.

‘Vibe Coder’ Who Doesn’t Know How to Code Keeps Winning Hackathons in San Francisco

Posted by EditorDavid View on SlashDot Skip
An anonymous reader shared this report from the San Francisco Standard:
About an hour into my meeting with the undisputed hackathon king of San Francisco, Rene Turcios asked if I wanted to smoke a joint with him. I politely declined, but his offer hardly surprised me. Turcios has built a reputation as a cannabis-loving former professional Yu-Gi-Oh! player who resells Labubus out of his Tenderloin apartment when he’s not busy attending nearly every hackathon happening in the city. Since 2023, Turcios, 29, has attended more than 200 events, where he’s won cash, software credits, and clout. “I’m always hustling,” he said.

The craziest part: he doesn’t even know how to code.

“Rene is the original vibe coder,” said RJ Moscardon, a friend and fellow hacker who watched Turcios win second place at his first-ever hackathon at the AGI House mansion in Hillsborough. “All the engineers with prestigious degrees scoffed at him at first. But now they’re all doing exactly the same thing....” Turcios was vibe coding long before the technique had a name — and was looked down upon by longtime hackers for using AI. But as Tiger Woods once said, “Winning takes care of everything....”

Instead of vigorously coding until the deadline, he finished his projects hours early by getting AI to do the technical work for him. “I didn’t write a single line of code,” Turcios said of his first hackathon where he prompted ChatGPT using plain English to generate a program that can convert any song into a lo-fi version. When the organizers announced Turcios had won second place, he screamed in celebration.... “I realized that I could compete with people who have degrees and fancy jobs....”

Turcios is now known for being able to build anything quickly. Businesses reach out to him to contract out projects that would take software engineering teams weeks — and he delivers in hours. He’s even started running workshops to teach non-technical groups and experienced software engineers how to get the most out of AI for coding.
“He grew up in Missouri to parents who worked in an international circus, taming bears and lions…”

Re:The writing is on the wall

By phantomfive • Score: 5, Interesting Thread
It’s not that big of a deal. If you knew the libraries involved, his winning entry would take you 10 minutes to code. The advantage of the AI here is that it suggests which libraries to use, and how to connect them. Here is the code if you want to check it out.

Single-functionality apps are easy

By devslash0 • Score: 5, Informative Thread

So he’s AI-built an app that takes some input, processes audio and spits out the result while having a fairly well-described win condition.

This is entry-level shit.

Now, go and show me how your AI skills fare in a multi-million-line codebase of a complex commercial system on a large team: an environment of competing priorities where you have to satisfy multivariate conditions (without compromising on any) and support legacy systems while keeping the app readable and maintainable.

Re:but is this winning?

By Registered Coward v2 • Score: 5, Insightful Thread

‘But as Tiger Woods once said, “Winning takes care of everything....”'

Does winning by cheating take care of everything?

Since when is using another coding source in a coding competition NOT cheating?

I guess it depends on the rules to decide what is cheating. If the rules simply require delivering a product by a certain time, is using AI cheating any more than using a library to perform tasks instead of coding them yourself, or not coding in machine language? I get the concern that the skills developed over time are suddenly threatened by a tool that means someone without that experience can compete effectively with you, but technology has done that to occupations forever.

Re:Perfect examples of “good enough”

By serviscope_minor • Score: 5, Insightful Thread

Ah, but did you notice? President Dwayne Elizondo Mountain Dew Herbert Camacho values experts, realizes that Not Sure is the smartest guy in the world, and tries to get him to fix the problems. Sure, it goes wrong, but even so.

In this reality, the president thinks he’s the smartest person in the world and experts are shunned.

Re:Perfect examples of “good enough”

By fafalone • Score: 5, Insightful Thread
Even more impossible in this reality, Camacho was presented with evidence he was wrong about the plan not working, and actually listened and changed his mind based on evidence. That’s completely unrealistic for any GOP president, let alone Trump, a man who makes every character in the movie seem brilliant and well spoken.
Idiocracy is now a story of hope; a much better functioning government run by smarter and more reasonable leaders than we could possibly hope to get from our current one.

Tesla Launches Solar-Powered ‘Oasis’ Supercharger Station: 30-Acre Solar Farm, 39 MWh of Off-Grid Batteries

Posted by EditorDavid View on SlashDot Skip
“Tesla has launched its new Oasis Supercharger,” reports Electrek, “the long-promised EV charging station of the future, with a solar farm and off-grid batteries.”
Early in the deployment of the Supercharger network, Tesla promised to add solar arrays and batteries to the Supercharger stations, and CEO Elon Musk even said that most stations would be able to operate off-grid… Last year, Tesla announced a new project called ‘Oasis’, which consists of a new model Supercharger station with a solar farm and battery storage enabling off-grid operations in Lost Hills, California.

Tesla has now unveiled the project and turned on most of the Supercharger stalls. The project consists of 168 chargers, with half of them currently operational, making it one of the largest Supercharger stations in the world. However, that’s not even the most notable aspect of it. The station is equipped with 11 MW of ground-mounted solar panels and canopies, spanning 30 acres of land, and 10 Tesla Megapacks with a total energy storage capacity of 39 MWh. It can be operated off-grid, which is the case right now, according to Tesla.

With off-grid operations, Tesla was able to bring 84 stalls online just in time for the Fourth of July travel weekend. The rest of the stalls and a lounge are going to open later this year.
The article makes the point that “This is what charging stations should be like: fully powered by renewable energy.”

Let’s see…

By Rei • Score: 5, Interesting Thread

The solar, for a farm of this size in this location, maybe $1.20/W installed to be a bit pessimistic? But hmm, there’s no AC conversion or grid connection, so maybe more like $1/W? Again, probably pessimistic, but let’s go with it. 11 MW = $11M

Tesla’s calculator for 38.5 MWh of Megapacks is $9.7M. Reduce the cost to get Tesla’s internal cost, but increase it back to an even $10M for installation.

So in total we’re probably in the ballpark of $20M. Divided across 168 stalls, that puts our capital cost in the ballpark of $120k per stall. V4 Superchargers are $40k per stall, so that’s a total of $160k.

Assuming a 20% capacity factor, there’s a mean solar production of 2.2 MW (call it 2 MW after losses). So there’s a mean power per stall of 11.9 kW (not mean charge speed, as most of the time any given stall is idle) - let’s round to 12 kW. If the cost is say $0.45/kWh, then each stall is earning a mean of $5.40/hr, or ~$47k/year, yielding a mean payback time of 3.4 years.

This is of course a gross oversimplification - it doesn’t include maintenance, construction costs of other things at the site, other site revenue (convenience stores / cafes / etc.), on and on and on. And per the article they also have a 1.5 MW grid link, so it’s not truly off-grid (just *mainly* off-grid). But the ballpark number makes this look very viable.
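For anyone who wants to tweak the assumptions, here is a quick Python sketch reproducing the back-of-envelope estimate above; every input is the parent comment’s guess, not an official Tesla figure.

```python
# Inputs taken from the comment above (the commenter's estimates, not official numbers)
site_capex = 20e6            # ~$11M solar (11 MW at ~$1/W) + ~$10M Megapacks, rounded
charger_per_stall = 40_000   # estimated V4 Supercharger hardware cost per stall
stalls = 168

mean_solar_kw = 2_000        # 11 MW at ~20% capacity factor, minus losses
price_per_kwh = 0.45
hours_per_year = 8760

mean_kw_per_stall = mean_solar_kw / stalls                 # ~11.9 kW
revenue_per_stall = mean_kw_per_stall * price_per_kwh * hours_per_year
capex_per_stall = site_capex / stalls + charger_per_stall

print(f"mean power per stall: {mean_kw_per_stall:.1f} kW")   # ~11.9 kW
print(f"revenue per stall:    ${revenue_per_stall:,.0f}/yr") # ~$47k/yr
print(f"capex per stall:      ${capex_per_stall:,.0f}")      # ~$159k
print(f"simple payback:       {capex_per_stall / revenue_per_stall:.1f} years")  # ~3.4
```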

A trip through the Australian outback

By thegarbz • Score: 5, Interesting Thread

A friend of mine recently took a trip all around Australia including outback roads through the desert in a BYD Atto 3. Some of the things she came across:

Fast chargers powered by solar + batteries.
Fast chargers powered by generators running from left over cooking oil - usually attached to outback restaurants and you need to go to the bar to get them to start it for you.
Fast chargers powered by small windmills + batteries.
Fast chargers with diesel gensets behind them.

Now granted these are all small volume and won’t work if everyone has an EV, but I’m genuinely surprised at the ingenuity of some of these systems, especially given that most commercial fast chargers have battery systems internal to them already to prevent a demand spike on the upstream grid. - Yes chargers aren’t just rated in kW when you buy them, they are rated in kW for a given time and then a different kW rating beyond that.

Re:Yay

By Rei • Score: 4, Insightful Thread

You won’t be “hanging out” - your car will be ready to leave before you are. By the time you go in, use the restroom, buy a drink or a snack, and get back to your car, you’ll have already added the range you need to go to the next site.

Unless you need to get really full because you’re in a charging desert (charging slows near the upper end), it basically is this way already, if you have a fast-charging EV and a powerful charger. And speeds just keep rising.

Re:But … but …

By Tough Love • Score: 5, Informative Thread

Yes.

Re:this is dumb

By gweihir • Score: 5, Informative Thread

And why do you think that cannot be combined? Have you done some minimal research? No, obviously not.

In actual reality, solar and farming now get combined to the benefit of both.

How Do You Teach Computer Science in the Age of AI?

Posted by EditorDavid View on SlashDot Skip
“A computer science degree used to be a golden ticket to the promised land of jobs,” a college senior tells the New York Times. But “That’s no longer the case.”

The article notes that in the last three years there’s been a 65% drop in listings from companies seeking workers with two years of experience or less (according to an analysis by the technology research/education organization CompTIA), with tech companies “relying more on AI for some aspects of coding, eliminating some entry-level work.”

So what do college professors teach when AI “is coming fastest and most forcefully to computer science”?
Computer science programs at universities across the country are now scrambling to understand the implications of the technological transformation, grappling with what to keep teaching in the AI era. Ideas range from less emphasis on mastering programming languages to focusing on hybrid courses designed to inject computing into every profession, as educators ponder what the tech jobs of the future will look like in an AI economy… Some educators now believe the discipline could broaden to become more like a liberal arts degree, with a greater emphasis on critical thinking and communication skills.

The National Science Foundation is funding a program, Level Up AI, to bring together university and community college educators and researchers to move toward a shared vision of the essentials of AI education. The 18-month project, run by the Computing Research Association, a research and education nonprofit, in partnership with New Mexico State University, is organising conferences and roundtables and producing white papers to share resources and best practices. The NSF-backed initiative was created because of “a sense of urgency that we need a lot more computing students — and more people — who know about AI in the workforce,” said Mary Lou Maher, a computer scientist and a director of the Computing Research Association.

The future of computer science education, Maher said, is likely to focus less on coding and more on computational thinking and AI literacy. Computational thinking involves breaking down problems into smaller tasks, developing step-by-step solutions and using data to reach evidence-based conclusions. AI literacy is an understanding — at varying depths for students at different levels — of how AI works, how to use it responsibly and how it is affecting society. Nurturing informed skepticism, she said, should be a goal.
The article raises other possibilities. Experts also suggest the possibility of “a burst of technology democratization as chatbot-style tools are used by people in fields from medicine to marketing to create their own programs, tailored for their industry, fed by industry-specific data sets.” Stanford CS professor Alex Aiken even argues that “The growth in software engineering jobs may decline, but the total number of people involved in programming will increase.”

Last year, Carnegie Mellon actually endorsed using AI for its introductory CS courses. The dean of the school’s undergraduate programs believes that coursework “should include instruction in the traditional basics of computing and AI principles, followed by plenty of hands-on experience designing software using the new tools.”

It will still have value

By MpVpRb • Score: 5, Interesting Thread

“A computer science degree used to be a golden ticket to the promised land of jobs”
It will still have value for the talented who study hard
The days of mediocre programmers making big bucks are over

Re: Conversations with a robot

By bradley13 • Score: 5, Insightful Thread
It still takes experience? That is exactly the challenge. AI is good enough to replace junior programmers. But then, how do we get senior programmers?

Ahead of the Game

By RossCWilliams • Score: 5, Insightful Thread
People are still working at figuring out how AI works. What do you teach people without having learned to use it? It seems to me the idea of colleges teaching it is getting ahead of the game. The world is still working at figuring it out.

Re:I may be “old fashoned”, but…

By evanh • Score: 5, Insightful Thread

Replacing Z80 with ARM would make it valid today. Basic still does the job.

Re: Conversations with a robot

By gweihir • Score: 5, Insightful Thread

Exactly. And, one step further into the argument, how do we get AI on the level of an inexperienced junior programmer for new languages or after larger changes?

Yep, we do not. In fact, LLM models already show signs of ageing because updating training data gets more and more tricky due to too much AI Slop out there and model collapse.

Actual understanding and working on problems yourself cannot be replaced by anything at this time. Maybe if (and that is a big if) we get AGI at some time. Or not.

KDE Plasma 6.4 Has Landed in OpenBSD

Posted by EditorDavid View on SlashDot Skip
OpenBSD Journal writes:
Yes, you read that right: KDE 6.4.0 Plasma is now in OpenBSD packages… The news was announced 2025-07-04 via a fediverse post and of course the commit message itself, where the description reads....

"[I]n 6.4 the KDE Kwin team split kwin into kwin-x11 and kwin (wayland). This seems to be the sign that X11 is no longer of interest and we are focussing on Wayland. As we currently only support X11, kwin-x11 has been added as a runtime dependency to kwin. So nobody should have to install anything later. This ports update also includes Aurorae; a theme engine for KWin window decorations.”

Re:How much of KDE is Linux dependencies?

By ChunderDownunder • Score: 4, Informative Thread

As per the summary, Wayland isn’t on OpenBSD (yet).

From the release notes, the main obvious Linuxisms seemed to be Pipewire and Polkit, which seem to have been ported to underlying native BSD subsystems.

UK Scientists Achieve First Commercial Tritium Production

Posted by EditorDavid View on SlashDot Skip
Interesting Engineering reports:
Astral Systems, a UK-based private commercial fusion company, has claimed to have become the first firm to successfully breed tritium, a vital fusion fuel, using its own operational fusion reactor. This achievement, made with the University of Bristol, addresses a significant hurdle in the development of fusion energy....

Scientists from Astral Systems and the University of Bristol produced and detected tritium in real-time from an experimental lithium breeder blanket within Astral’s multi-state fusion reactors. “There’s a global race to find new ways to develop more tritium than what exists in today’s world — a huge barrier is bringing fusion energy to reality,” said Talmon Firestone, CEO and co-founder of Astral Systems. “This collaboration with the University of Bristol marks a leap forward in the search for viable, greater-than-replacement tritium breeding technologies. Using our multi-state fusion technology, we are the first private fusion company to use our reactors as a neutron source to produce fusion fuel.”

Astral Systems’ approach uses its Multi-State Fusion (MSF) technology. The company states this will commercialize fusion power with better performance, efficiency, and lower costs than traditional reactors. Their reactor design, the result of 25 years of engineering and over 15 years of runtime, incorporates recent understandings of stellar physics. A core innovation is lattice confinement fusion (LCF), a concept first discovered by NASA in 2020. This allows Astral’s reactor to achieve solid-state fuel densities 400 million times higher than those in plasma. The company’s reactors are designed to induce two distinct fusion reactions simultaneously from a single power input, with fusion occurring in both plasma and a solid-state lattice.
The article includes this quote from professor Tom Scott, who led the University of Bristol’s team, supported by the Royal Academy of Engineering and UK Atomic Energy Authority. “This landmark moment clearly demonstrates a potential path to scalable tritium production in the future and the capability of Multi-State Fusion to produce isotopes in general.”

And there’s also this prediction from the company’s web site:
“As we progress the fusion rate of our technology, aiming to exceed 10 trillion DT fusions per second per system, we unlock a wide range of applications and capabilities, such as large-scale medical isotope production, fusion neutron materials damage testing, transmutation of existing nuclear waste stores, space applications, hybrid fusion-fission power systems, and beyond.”
“Scientists everywhere are racing to develop this practically limitless form of energy,” writes a climate news site called The Cooldown. (Since in theory nuclear fusion “has an energy output four times higher than that of fission, according to the International Atomic Energy Agency.”)

Thanks to long-time Slashdot reader fahrbot-bot for sharing the news.

So just to avoid misunderstandings…

By ffkom • Score: 4, Interesting Thread
… the “15 years of runtime” of their “operational fusion reactor” never produced any net energy gain - they were just after the isotope production, right?

Which means there is still only that tiny little detail missing before fusion reactors will replace all the other sources of energy… that detail being “becoming net energy positive, at costs where the surplus energy can be sold cheaper than energy from established sources.”

Not much tritium

By joe_frisch • Score: 3 Thread

1e13 tritium atoms / second * 3 * 1.6e-27 kg (atomic mass) * 3e7 seconds / year = 1.4e-6 kg/year. Tritium is about $30,000 / gram, so this is roughly $40/year in tritium value.

They are aiming to get there. Doesn’t sound economically viable.
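The arithmetic is easy to reproduce; the quick sketch below uses the comment’s own inputs and assumes, as the comment does, roughly one bred tritium atom per DT fusion at the stated target rate.

```python
fusions_per_second = 1e13        # the company's stated target rate per system
tritium_atom_kg = 3 * 1.6e-27    # ~3 atomic mass units, as in the comment above
seconds_per_year = 3e7
price_per_gram = 30_000          # the commenter's figure; published estimates vary

kg_per_year = fusions_per_second * tritium_atom_kg * seconds_per_year
dollars_per_year = kg_per_year * 1_000 * price_per_gram

print(f"{kg_per_year:.1e} kg/year")       # ~1.4e-06 kg/year
print(f"${dollars_per_year:,.0f}/year")   # ~$43/year in tritium value
```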

Re:So just to avoid misunderstandings…

By Mspangler • Score: 5, Insightful Thread

A breakthrough is possible, but I’m sixty-something. Fusion has been “just around the corner” my whole life. So yes, I’m skeptical that this bunch of no-name yahoos has been belting out even pilot-plant-scale 20 MW onto the grid for any significant time.

The first controlled fission reactor was in 1942. The Nautilus went to sea in 1955. That’s what actual progress looks like.

To paraphrase Mitch Hedberg

By algaeman • Score: 3 Thread
Their reactor design incorporates recent understandings of stellar physics- it’s the same understanding we used to have of stellar physics, but we still understand it.

Microsoft Open Sources Copilot Chat for VS Code on GitHub

Posted by EditorDavid View on SlashDot Skip
“Microsoft has released the source code for the GitHub Copilot Chat extension for VS Code under the MIT license,” reports BleepingComputer.
This provides the community access to the full implementation of the chat-based coding assistant, including the implementation of “agent mode,” what contextual data is sent to large language models (LLMs), and the design of system prompts. The GitHub repository hosting the code also details telemetry collection mechanisms, addressing long-standing questions about data transparency in AI-assisted coding tools…

As the VS Code team explained previously, shifts in AI tooling landscape like the rapid growth of the open-source AI ecosystem and a more level playing field for all have reduced the need for secrecy around prompt engineering and UI design. At the same time, increased targeting of development tools by malicious actors has increased the need for crowdsourcing contributions to rapidly pinpoint problems and develop effective fixes. Essentially, openness is now considered superior from a security perspective.
“If you’ve been hesitant to adopt AI tools because you don’t trust the black box behind them, this move offers something rare these days: transparency,” writes Slashdot reader BrianFagioli:
Now that the extension is open source, developers can audit how agent mode actually works. You can also dig into how it manages your data, customize its behavior, or build entirely new tools on top of it. This could be especially useful in enterprise environments where compliance and control are non negotiable.

It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won’t be able to self host the whole experience or train your own Copilot. But everything running locally in VS Code is now fair game. Microsoft says it is planning to eventually merge inline code completions into the same open source package too, which would make Copilot Chat the new hub for both chat and suggestions.

Transparency?

By Cley Faye • Score: 4, Insightful Thread

I wasn’t really worried about how my IDE would be able to read, edit, and write files, nor how it could highlight some differences, or how it would grab something I typed and send it to a backend.
I’m worried about that backend, which receives everything needed to supposedly make decisions about the code, being fully closed, operated by an unreliable third party, with said third party promising to play fair as the only safety net.

More open source is great, but considering this a move to improve transparency and trust in AI “agents” or whatever is a joke. “You can audit everything up to the part you’re suspicious about,” eh?

A Common Assumption About Aging May Be Wrong, Study Suggests

Posted by EditorDavid View on SlashDot
“Some of our basic assumptions about the biological process of aging might be wrong,” reports the New York Times — citing new research on a small Indigenous population in the Bolivian Amazon. [Alternate URL here.]
Scientists have long believed that long-term, low-grade inflammation — also known as “inflammaging” — is a universal hallmark of getting older. But this new data raises the question of whether inflammation is directly linked to aging at all, or if it’s linked to a person’s lifestyle or environment instead. The study, which was published Monday, found that people in two nonindustrialized areas experienced a different kind of inflammation throughout their lives than more urban people — likely tied to infections from bacteria, viruses and parasites rather than the precursors of chronic disease. Their inflammation also didn’t appear to increase with age.

Scientists compared inflammation signals in existing data sets from four distinct populations in Italy, Singapore, Bolivia and Malaysia; because they didn’t collect the blood samples directly, they couldn’t make exact apples-to-apples comparisons. But if validated in larger studies, the findings could suggest that diet, lifestyle and environment influence inflammation more than aging itself, said Alan Cohen, an author of the paper and an associate professor of environmental health sciences at Columbia University. “Inflammaging may not be a direct product of aging, but rather a response to industrialized conditions,” he said, adding that this was a warning to experts like him that they might be overestimating its pervasiveness globally.

“How we understand inflammation and aging health is based almost entirely on research in high-income countries like the U.S.,” said Thomas McDade, a biological anthropologist at Northwestern University. But a broader look shows that there’s much more global variation in aging than scientists previously thought, he added… McDade, who has previously studied inflammation in the Tsimane group, speculated that populations in nonindustrialized regions might be exposed to certain microbes in water, food, soil and domestic animals earlier in their lives, bolstering their immune response later in life.
More from The Independent:
Chronic inflammation is thought to speed up the ageing process and contribute to various health conditions such as Alzheimer’s disease, arthritis, cancer, heart disease, and Type 2 diabetes… However, other experts shared a word of caution before jumping to conclusions from the study. Vishwa Deep Dixit, director of the Yale Center for Research on Aging, told the New York Times it’s not surprising that people less exposed to pollution would see lower rates of chronic disease.
Aurelia Santoro, an associate professor at the University of Bologna, also cautioned about the results, according to the Times. “While they had lower rates of chronic disease, the two Indigenous populations tended to have life spans shorter than those of people in industrialized regions, meaning they may simply not have lived long enough to develop inflammaging, Santoro said.”

And Bimal Desai, a professor of pharmacology who studies inflammation at the University of Virginia School of Medicine, told the Times that the study “sparks valuable discussion” but needs more follow-up “before we rewrite the inflammaging narrative.”

"…may be wrong…” implies that it may be right

By SlithyMagister • Score: 3 Thread
The title is an example of why science is met with scepticism in the modern era.

Why not “Evidence indicates problems with a common assumption about aging”

Other problematic phrasing such as “scientists believe…” tells readers that all their hard work, research and training has resulted in a mere belief — and beliefs are subjective. Thus while scientists believe one thing, someone else’s opposite belief is just as valid — evidence notwithstanding.