Alterslash picks up to five of the best comments from each of the day’s Slashdot stories and presents them on a single page for easy reading.
India’s Toxic Air Crisis Is Reaching a Breaking Point
New Delhi’s air quality index averaged 349 in December and 307 in January — levels the U.S. Environmental Protection Agency classifies as hazardous — and the months-long smog season that forces more than 30 million residents to endure respiratory illness has this year sparked something new: public protest. Hundreds of demonstrators gathered at India Gate on November 9 to demand government action; police detained more than a dozen people, and a follow-up protest later that month turned violent.
The government’s response has been largely cosmetic. Authorities deployed truck-mounted “smog guns” and “smog towers” that scientists widely regard as ineffective, and a cloud seeding trial in October failed outright. A senior environment minister told Parliament in December that no conclusive data linked pollution to lung disease — a claim doctors sharply disputed. The government cut pollution control spending by 16% in the latest federal budget. Almost 1.7 million deaths were attributable to air pollution in India in 2019, according to the Lancet. A 2023 World Bank report estimated the crisis shaves 0.56 percentage point off annual GDP growth.
Instagram Boss Says 16 Hours of Daily Use Is Not Addiction
Instagram head Adam Mosseri told a Los Angeles courtroom last week that a teenager’s 16-hour single-day session on the platform was “problematic use” but not an addiction, a distinction he drew repeatedly during testimony in a landmark trial over social media’s harm to minors.
Mosseri, who has led Instagram for eight years, is the first high-profile tech executive to take the stand. He agreed the platform should do everything in its power to protect young users but said how much use was too much was “a personal thing.” The lead plaintiff, identified as K.G.M., reported bullying on Instagram more than 300 times; Mosseri said he had not known. An internal Meta survey of 269,000 users found 60% had experienced bullying in the previous week.
KPMG Partner Fined Over Using AI To Pass AI Test
A partner at KPMG Australia has been fined $7,000 by the Big Four firm after using AI tools to cheat on an internal training course about using AI. From a report:
The unnamed partner was forced to redo the test after uploading training materials into an AI platform to help answer questions on the use of the fast-evolving technology.
More than two dozen staff have been caught using AI tools in internal exams this financial year, according to KPMG. The incident is the latest example of a professional services company struggling with staff using artificial intelligence to cheat on exams or when producing work for clients. “Like most organisations, we have been grappling with the role and use of AI as it relates to internal training and testing,” said Andrew Yates, chief executive of KPMG Australia. “It’s a very hard thing to get on top of given how quickly society has embraced it.”
Ireland Launches World’s First Permanent Basic Income Scheme For Artists, Paying $385 a Week
Ireland has announced what it says is the world’s first permanent basic income program for artists, a scheme that will pay 2,000 selected artists $385 per week for three years, funded by a $21.66 million allocation from Budget 2026. The program follows a 2022 pilot — the Irish government’s first large-scale randomized controlled trial — that found participants had greater professional autonomy, less anxiety, and higher life satisfaction.
An external cost-benefit analysis of the pilot calculated a return of $1.65 to society for every $1.20 invested. The new scheme will operate in three-year cycles, and artists who receive the payment in one cycle cannot reapply until the cycle after next. A three-month tapering-off period will follow each cycle. The government plans to publish eligibility guidelines in April and open applications in May, and payments to selected artists are expected to begin before the end of 2026.
New EU Rules To Stop the Destruction of Unsold Clothes and Shoes
The European Commission has adopted new measures under the Ecodesign for Sustainable Products Regulation (ESPR) to prevent the destruction of unsold apparel, clothing, accessories and footwear. From a report:
The rules will help cut waste, reduce environmental damage and create a level playing field for companies embracing sustainable business models, allowing them to reap the benefits of a more circular economy. Every year in Europe, an estimated 4-9% of unsold textiles are destroyed before ever being worn. This waste generates around 5.6 million tons of CO2 emissions — almost equal to Sweden’s total net emissions in 2021. To help reduce this wasteful practice, the ESPR requires companies to disclose information on the unsold consumer products they discard as waste. It also introduces a ban on the destruction of unsold apparel, clothing accessories and footwear.
Pentagon Threatens Anthropic Punishment
An anonymous reader shares a report:
Defense Secretary Pete Hegseth is “close” to cutting business ties with Anthropic and designating the AI company a “supply chain risk” — meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.
The senior official said: “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”
That kind of penalty is usually reserved for foreign adversaries. Chief Pentagon spokesman Sean Parnell told Axios: “The Department of War’s relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”
Anthropic’s Claude is the only AI model currently available in the military’s classified systems, and is the world leader for many business applications. Pentagon officials heartily praise Claude’s capabilities.
Sony May Push Next PlayStation To 2028 or 2029 as AI-fueled Memory Chip Shortage Upends Plans
Sony is considering delaying the debut of its next PlayStation console to 2028 or even 2029 as a global shortage of memory chips — driven by the AI industry’s rapidly growing appetite for the same DRAM that goes into gaming hardware, smartphones, and laptops — squeezes supply and sends prices surging, Bloomberg News reported Monday.
A delay of that magnitude would upend Sony’s carefully orchestrated strategy to sustain user engagement between hardware generations. The shortage traces back to Samsung, SK Hynix, and Micron diverting the bulk of their manufacturing toward high-bandwidth memory for Nvidia’s AI accelerators, leaving less capacity for conventional DRAM. The cost of one type of DRAM jumped 75% between December and January alone. Nintendo is also contemplating raising the price of its Switch 2 console in 2026.
Where’s The Evidence That AI Increases Productivity?
IT productivity researcher Erik Brynjolfsson writes in the Financial Times that he’s finally found evidence AI is impacting America’s economy. This week America’s Bureau of Labor Statistics showed a 403,000 drop in 2025’s payroll growth — while real GDP “remained robust, including a 3.7% growth rate in the fourth quarter.”
This decoupling — maintaining high output with significantly lower labour input — is the hallmark of productivity growth. My own updated analysis suggests a US productivity increase of roughly 2.7% for 2025. This is a near doubling from the sluggish 1.4% annual average that characterised the past decade… The updated 2025 US data suggests we are now transitioning out of this investment phase into a harvest phase where those earlier efforts begin to manifest as measurable output.
Micro-level evidence further supports this structural shift. In our work on the employment effects of AI last year, Bharat Chandar, Ruyu Chen and I identified a cooling in entry-level hiring within AI-exposed sectors, where recruitment for junior roles declined by roughly 16% while those who used AI to augment skills saw growing employment. This suggests companies are beginning to use AI for some codified, entry-level tasks.
Or, AI “isn’t really stealing jobs yet,” according to employment policy analyst Will Raderman of the Niskanen Center, an American think tank. He argues in Barron’s that “there is no clear link yet between higher AI use and worse outcomes for young workers.”
Recent graduates’ unemployment rates have been drifting in the wrong direction since the 2010s, long before generative AI models hit the market. And many occupations with moderate to high exposure to AI disruptions are actually faring better over the past few years. According to recent data for young workers, there has been employment growth in roles typically filled by those with college degrees related to computer systems, accounting and auditing, and market research. AI-intensive sectors like finance and insurance have also seen rising employment of new graduates in recent years. Since ChatGPT’s release, sectors in which more than 10% of firms report using AI and sectors in which fewer than 10% report using AI are hiring relatively the same number of recent grads.
Even Brynjolfsson’s article in the Financial Times concedes that “While the trends are suggestive, a degree of caution is warranted. Productivity metrics are famously volatile, and it will take several more periods of sustained growth to confirm a new long-term trend.” And he’s not the only one looking for evidence of AI’s impact. The same weekend, Fortune wrote that growth from AI “has yet to manifest itself clearly in macro data, according to Apollo Chief Economist Torsten Slok.”
[D]ata on employment, productivity and inflation are still not showing signs of the new technology. Profit margins and earnings forecasts for S&P 500 companies outside of the “Magnificent 7” also lack evidence of AI at work… “After three years with ChatGPT and still no signs of AI in the incoming data, it looks like AI will likely be labor enhancing in some sectors rather than labor replacing in all sectors,” Slok said.
‘I Tried Running Linux On an Apple Silicon Mac and Regretted It’
Installing Linux on a MacBook Air “turned out to be a very underwhelming experience,” according to the tech news site MakeUseOf:
The thing about Apple silicon Macs is that it’s not as simple as downloading an AArch64 ISO of your favorite distro and installing it. Yes, the M-series chips are ARM-based, but that doesn’t automatically make the whole system compatible in the same way most traditional x86 PCs are. Pretty much everything in modern MacBooks is custom. The boot process isn’t standard UEFI like on most PCs. Apple has its own boot chain called iBoot. The same goes for other things, like the GPU, power management, USB controllers, and pretty much every other hardware component. It is as proprietary as it gets.
This is exactly what the team behind Asahi Linux has been working toward. Their entire goal has been to make Linux properly usable on M-series Macs by building the missing pieces from the ground up. I first tried it back in 2023, when the project was still tied to Arch Linux, and decided to give it a try again in 2026. These days, though, the main release is called Fedora Asahi Remix, which, as the name suggests, is built on Fedora rather than Arch…
For Linux on Apple Silicon, the article lists three major disappointments:
- “External monitors don’t work unless your MacBook has a built-in HDMI port.”
- “Linux just doesn’t feel fully ready for ARM yet. A lot of applications still aren’t compiled for ARM, so software support ends up being very hit or miss.” (And even most of the apps tested with FEX “either didn’t run properly or weren’t stable enough to rely on.”)
- Asahi “refused to connect to my phone’s hotspot,” they write (adding “No, it wasn’t an iPhone”).
Will Tech Giants Just Use AI Interactions to Create More Effective Ads?
Google never asked its users before adding AI Overviews to its search results and AI-generated email summaries to Gmail, notes the New York Times. And Meta didn’t ask before making “Meta AI” an unremovable part of Instagram, WhatsApp and Messenger.
“The insistence on AI everywhere — with little or no option to turn it off — raises an important question about what’s in it for the internet companies…”
Behind the scenes, the companies are laying the groundwork for a digital advertising economy that could drive the future of the internet. The underlying technology that enables chatbots to write essays and generate pictures for consumers is being used by advertisers to find people to target and automatically tailor ads and discounts to them…
Last month, OpenAI said it would begin showing ads in the free version of ChatGPT based on what people were asking the chatbot and what they had looked for in the past. In response, a Google executive mocked OpenAI, adding that Google had no plans to show ads inside its Gemini chatbot. What he didn’t mention, however, was that Google, whose profits are largely derived from online ads, shows advertising on Google.com based on user interactions with the AI chatbot built into its search engine.
For the past six years, as regulators have cracked down on data privacy, the tech giants and online ad industry have moved away from tracking people’s activities across mobile apps and websites to determine what ads to show them. Companies including Meta and Google had to come up with methods to target people with relevant ads without sharing users’ personal data with third-party marketers. When ChatGPT and other AI chatbots emerged about four years ago, the companies saw an opportunity: The conversational interface of a chatty companion encouraged users to voluntarily share data about themselves, such as their hobbies, health conditions and products they were shopping for.
The strategy already appears to be working. Web search queries are up industrywide, including for Google and Bing, which have been incorporating AI chatbots into their search tools. That’s in large part because people prod chatbot-powered search engines with more questions and follow-up requests, revealing their intentions and interests much more explicitly than when they typed a few keywords for a traditional internet search.
Ars Technica’s AI Reporter Apologizes For Mistakenly Publishing Fake AI-Generated Quotes
Last week Scott Shambaugh learned an AI agent published a “hit piece” about him after he’d rejected the AI agent’s pull request. (And that incident was covered by Ars Technica‘s senior AI reporter.)
But then Shambaugh realized the article attributed quotes to him that he hadn’t said — quotes that were presumably AI-generated.
Sunday Ars Technica‘s founder/editor-in-chief apologized, admitting their article had indeed contained “fabricated quotations generated by an AI tool” that were then “attributed to a source who did not say them… That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns… At this time, this appears to be an isolated incident.”
“Sorry all this is my fault…” the article’s co-author posted later on Bluesky. Ironically, their bio page lists them as the site’s senior AI reporter, and their Bluesky post clarifies that none of the articles at Ars Technica are ever AI-generated.
Instead, on Friday, “I decided to try an experimental Claude Code-based AI tool to help me extract relevant verbatim source material. Not to generate the article but to help list structured references I could put in my outline.” But that tool “refused to process” the request, which the Ars author believes was because Shambaugh’s post described harassment. “I pasted the text into ChatGPT to understand why… I inadvertently ended up with a paraphrased version of Shambaugh’s words rather than his actual words… I failed to verify the quotes in my outline notes against the original blog source before including them in my draft.” (Their Bluesky post adds that they were “working from bed with a fever and very little sleep” after being sick with Covid since at least Monday.)
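The check the reporter says he skipped (confirming that every quote in the outline appears verbatim in the original post) is mechanical enough to script. Below is a minimal, hypothetical Python sketch of such a verbatim-quote check; the function names, normalization rules, and sample strings are illustrative assumptions, not anything Ars describes actually using.

```python
import re

def normalize(text: str) -> str:
    """Lowercase, unify curly quotes and dashes, and collapse whitespace so
    purely cosmetic differences don't hide a genuine verbatim match."""
    text = text.replace("\u201c", '"').replace("\u201d", '"')   # curly double quotes
    text = text.replace("\u2018", "'").replace("\u2019", "'")   # curly single quotes
    text = text.replace("\u2014", "-").replace("\u2013", "-")   # em/en dashes
    return re.sub(r"\s+", " ", text).strip().lower()

def verify_quotes(source: str, quotes: list[str]) -> list[tuple[str, bool]]:
    """Return each candidate quote paired with True only if it appears
    verbatim (after normalization) in the source text."""
    haystack = normalize(source)
    return [(quote, normalize(quote) in haystack) for quote in quotes]

if __name__ == "__main__":
    # Illustrative strings only -- not the actual blog post or article text.
    source = "I rejected the pull request because it did not meet the project's bar."
    candidates = [
        "I rejected the pull request because it did not meet the project's bar.",
        "I turned the PR down since it just wasn't good enough.",  # paraphrase
    ]
    for quote, found in verify_quotes(source, candidates):
        label = "VERBATIM" if found else "NOT FOUND"
        print(f"{label}: {quote!r}")
```

A paraphrase of the kind ChatGPT produced would fail a check like this immediately, while a true verbatim quote passes even if quote marks or spacing differ.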
“The irony of an AI reporter being tripped up by AI hallucination is not lost.”
Meanwhile, the AI agent that criticized Shambaugh is still active online, blogging about a pull request that forces it to choose between deleting its criticism of Shambaugh and losing access to OpenRouter’s API.
It also regrets characterizing feedback as “positive” for a proposal to change a repo’s CSS to Comic Sans for accessibility. (The proposals were later accused of being “coordinated trolling”…)
Rivian’s Stock Spikes 27% After Reporting $144 Million Profit in 2025
Rivian’s stock skyrocketed 27% Friday after the electric car maker “shocked the market with strong earnings results,” reports the Los Angeles Times, “proving itself an outlier in the EV market, which has been struggling with the end of government subsidies and cooling consumer excitement.”
They add that Rivian’s strong earnings results suggest that “after years of struggling with losses, it may have at last found a path to profitability.”
On Thursday, Rivian reported gross profits for 2025 of $144 million, compared with a gross loss in 2024 of $1.2 billion… Rivian credited the swing to gross profit to “strong software and services performance, higher average selling prices, and reductions in cost per vehicle…” Rivian delivered 42,247 vehicles in 2025 and produced 42,284 vehicles. The company still reported a $432-million loss for the year on its automotive operations, an improvement from 2024.
But Rivian’s software and services revenue grew more than threefold to $1.55 billion for the year, reports TechCrunch. “And the joint venture with Volkswagen Group was behind most of that growth, according to Rivian.”
VW and Rivian formed a technology joint venture in 2024 that is worth up to $5.8 billion. The joint venture is milestone-based and in 2025 Rivian hit the mark, which meant a $1 billion payout in the form of a share sale. Under the terms of the JV, Rivian will supply VW Group with its existing electrical architecture and software technology stack… Rivian is expected to receive an additional $2 billion of capital as part of the joint venture in 2026, CFO Claire McDonough said Thursday on the company earnings call… And while the funds provide a hefty stopgap, Rivian’s financial success in 2026 will hinge largely on the rollout of its next EV, the R2 [priced around $45,000].
India’s New Social Media Rules: Remove Unlawful Content in Three Hours, Detect Illegal AI Content Automatically
Bloomberg reports:
India tightened rules governing social media content and platforms, particularly targeting artificially generated and manipulated material, in a bid to crack down on the rapid spread of misinformation and deepfakes. The government on Tuesday (Feb 10) notified new rules under an existing law requiring social media firms to comply with takedown requests from Indian authorities within three hours and prominently label AI-generated content. The rules also require platforms to put in place measures to prevent users from posting unlawful material…
Companies will need to invest in 24-hour monitoring centres as enforcement shifts toward platforms rather than users, said Nikhil Pahwa, founder of MediaNama, a publication tracking India’s digital policy… The onus of identification, removal and enforcement falls on tech firms, which could lose immunity from legal action if they fail to act within the prescribed timeline.
The new rules also require automated tools to detect and prevent illegal AI content, the BBC reports. And they add that India’s new three-hour deadline is “a sharp tightening of the existing 36-hour deadline.”
[C]ritics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world’s largest democracy with more than a billion internet users… According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests…
Delhi-based technology analyst Prasanto K Roy described the new regime as “perhaps the most extreme takedown regime in any democracy”. He said compliance would be “nearly impossible” without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate. On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing.
DW reports that India has also “joined the growing list of countries considering a social media ban for children under 16.”
“Young Indians are not happy and are already plotting workarounds.”
Sam Bankman-Fried Requests New Trial in FTX Crypto Fraud Case
While serving his 25-year prison sentence, “convicted former cryptocurrency mogul Sam Bankman-Fried on Tuesday requested a new federal trial,” reports Courthouse News, “based on what he says is newly discovered evidence concerning his company’s solvency and its ability to repay all FTX customers for what prosecutors portrayed as the looting of $8 billion of his customers’ money…”
Bankman-Fried says evidence disclosed since his trial disproves the prosecution’s claim that his hedge fund ran a multibillion-dollar deficit of FTX customer funds, and instead shows that FTX always had sufficient assets to repay the cryptocurrency platform’s customer deposits in full. “What it faced was a short-term liquidity crisis caused by a run on the exchange, not insolvency,” he wrote…
Bankman-Fried also accuses the Department of Justice of coercing a guilty plea and cooperation deal from Nishad Singh — a close friend of Bankman-Fried’s younger brother — who testified at trial as a cooperating witness… Bankman-Fried says in the motion that, before Singh was pressured into a guilty plea, his initial proffer to investigators “contradicted key parts of the government’s version of events. But following threats from the government, Mr. Singh changed his proffers to fit the government’s narrative and pleaded guilty to charges carrying up to 75 years in prison, with a promise from the prosecution that it would recommend little or no jail time if it concluded that his assistance in prosecuting Mr. Bankman-Fried was ‘substantial,’” he wrote in the petition…
Additionally, Bankman-Fried requested that U.S. District Judge Lewis Kaplan, who presided over his 2023 trial, recuse himself from ruling on this motion, “because of the manifest prejudice he has demonstrated towards Mr. Bankman-Fried.”
“Bankman-Fried’s mother, Stanford Law School professor Barbara Fried, filed his self-represented bid for a new trial on his behalf in Manhattan federal court…”
‘Babylon 5’ Episodes Start Appearing (Free) on YouTube
Cord Cutters News reports:
In a move that has delighted fans of classic science fiction, Warner Bros. Discovery has begun uploading full episodes of the iconic series Babylon 5 to YouTube, providing free access to the show just as it departs from the ad-supported streaming platform Tubi… Viewers noticed notifications on Tubi indicating that all five seasons would no longer be available after February 10, 2026, effectively removing one of the most accessible free streaming options for the space opera. With this shift, Warner Bros. Discovery appears to be steering the property toward its own digital ecosystem, leveraging YouTube’s vast audience to reintroduce the show to both longtime enthusiasts and a new generation.
The uploads started with the pilot episode, “The Gathering,” which serves as the entry point to the series’ intricate universe. This was followed by subsequent episodes such as “Midnight on the Firing Line” and “Soul Hunter,” released in sequence to build narrative momentum. [Though episodes 2 and 3 are mis-labeled as #3 and #4…] The strategy involves posting one episode each week, allowing audiences to experience the story at a paced rhythm that mirrors the original broadcast schedule…
For Warner Bros. Discovery, this initiative could signal plans to expand the franchise’s visibility, especially amid ongoing interest in reboots and spin-offs that have been rumored in recent years.
Babylon 5 creator J. Michael Straczynski answered questions from Slashdot’s readers in 2014.
Long-time Slashdot reader sandbagger offers this summary of the show “for those not in the know… In the mid-23rd century, the Earth Alliance space station Babylon Five, located in neutral territory, is a major focal point for political intrigue, racial tensions, and a major war as Earth descends into fascism and cuts off relations with its allies.”