Alterslash

the unofficial Slashdot digest
 

Contents

  1. Australia Readies Social Media Court Action Citing Teen Ban Breaches
  2. Claude Code’s Source Code Leaks Via npm Source Maps
  3. Euro-Office Wants To Replace Google Docs and Microsoft Office
  4. US Paves Way For Private Assets To Be Included In 401(k) Retirement Plans
  5. Quadratic Gravity Theory Reshapes Quantum View of Big Bang
  6. Scientists Shocked To Find Lab Gloves May Be Skewing Microplastics Data
  7. AI Data Centers Can Warm Surrounding Areas By Up To 9.1C
  8. Microsoft Plans To Build 100% Native Apps For Windows 11
  9. After 16 Years and $8 Billion, the Military’s New GPS Software Still Doesn’t Work
  10. Samsung Is Bringing AirDrop-Style Sharing to Older Galaxy Devices
  11. OkCupid Settles FTC Case On Alleged Misuse of Its Users’ Personal Data
  12. Life With AI Causing Human Brain ‘Fry’
  13. Judge Allows BitTorrent Seeding Claims Against Meta, Despite Lawyers ‘Lame Excuses’
  14. Microsoft Copilot Is Now Injecting Ads Into Pull Requests On GitHub
  15. Sony Shuts Down Nearly Its Entire Memory Card Business Due To SSD Shortage

Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.

Australia Readies Social Media Court Action Citing Teen Ban Breaches

Posted by BeauHD
Australia is preparing possible court action against major social media platforms that are failing to enforce the country’s social media ban on under-16s. “Three months after the ban came into effect, the eSafety Commissioner said it was probing Meta’s Instagram and Facebook, Google’s YouTube, Snapchat and TikTok for possible breaches of the law,” reports Reuters. From the report:
Communications Minister Anika Wells said the government was gathering evidence “so that the eSafety Commissioner can go to the Federal Court and win.” “We have spent the summer building that evidence base of all the stories that no doubt you have all heard … about how kids are getting around that,” Wells told reporters in Canberra. The legal threat is a striking change of tone from a government which had hailed tech giants’ shows of cooperation when the ban went live in December.

Under the Australian law, platforms must show they are taking reasonable steps to keep out underage users or face fines of up to $34 million per breach, something eSafety would need to pursue in a civil court. The regulator previously said it would only take enforcement action in cases of systemic noncompliance. But in its first comprehensive compliance report since the ban took effect, eSafety said measures taken by the platforms were substandard and it would make a decision about next steps by mid-year. “We are now moving into an enforcement stance,” said commissioner Julie Inman Grant in a statement.

The regulator reported major compliance gaps, including platforms prompting children who had previously declared ages under 16 to do fresh age checks, allowing repeated attempts at age-assurance tests until a child got a result over 16 and poor pathways for people to report underage accounts. Some platforms did not use age-inference, which estimates age based on someone’s online activity, and some only used age-assurance measures like photo-based checks after a user tried to change their age, rather than at sign-up. That made it “likely many Australian children aged under 16 have been able to create accounts on age-restricted social media platforms by simply declaring they are 16 or older”, the regulator said. Nearly one-third of parents reported their under-16 child had at least one social media account after the ban took effect, of which two-thirds said the platform had not asked the child’s age, it added.

Claude Code’s Source Code Leaks Via npm Source Maps

Posted by BeauHD
Grady Martin writes:
A security researcher has leaked a complete repository of source code for Anthropic’s flagship command-line tool. The file listing was exposed via source maps published to the Node Package Manager (npm) registry, with every target publicly accessible on a Cloudflare R2 storage bucket.
There have been a number of discoveries as people continue to pore over the code. The DEV Community outlines some of the leak’s most notable architectural elements and key technical choices:

Architecture Highlights
The Tool System (~40 tools): Claude Code uses a plugin-like tool architecture. Each capability (file read, bash execution, web fetch, LSP integration) is a discrete, permission-gated tool. The base tool definition alone is 29,000 lines of TypeScript.
The Query Engine (46K lines): This is the brain of the operation. It handles all LLM API calls, streaming, caching, and orchestration. It’s by far the largest single module in the codebase.
Multi-Agent Orchestration: Claude Code can spawn sub-agents (they call them “swarms”) to handle complex, parallelizable tasks. Each agent runs in its own context with specific tool permissions.
IDE Bridge System: A bidirectional communication layer connects IDE extensions (VS Code, JetBrains) to the CLI via JWT-authenticated channels. This is how the “Claude in your editor” experience works.
Persistent Memory System: A file-based memory directory where Claude stores context about you, your project, and your preferences across sessions.
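The permission-gated tool architecture described above can be sketched roughly as follows. Everything here (the `ToolRegistry` and `Permission` names, the tool shapes) is illustrative guesswork, not Anthropic’s actual code:

```typescript
// Illustrative sketch of a plugin-like, permission-gated tool system.
// All names and shapes are assumptions for explanation purposes only.

type Permission = "read" | "execute" | "network";

interface Tool {
  name: string;
  requires: Permission;
  run(input: string): string;
}

class ToolRegistry {
  private tools = new Map<string, Tool>();
  constructor(private granted: Set<Permission>) {}

  register(tool: Tool): void {
    this.tools.set(tool.name, tool);
  }

  // Execution is permission-gated: a tool whose required permission
  // was not granted for this session is refused before it ever runs.
  invoke(name: string, input: string): string {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    if (!this.granted.has(tool.requires)) {
      return `denied: ${name} needs '${tool.requires}' permission`;
    }
    return tool.run(input);
  }
}

// A session that may read files but has not been granted shell access.
const registry = new ToolRegistry(new Set<Permission>(["read"]));
registry.register({ name: "file_read", requires: "read", run: (p) => `contents of ${p}` });
registry.register({ name: "bash", requires: "execute", run: (cmd) => `ran ${cmd}` });

console.log(registry.invoke("file_read", "notes.txt")); // allowed
console.log(registry.invoke("bash", "rm -rf /tmp/x"));  // refused
```

The appeal of this design is that each capability is a small, uniform unit, and the dispatcher, not the tool, decides whether it may run.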

Key Technical Decisions Worth Noting
Bun over Node: They chose Bun as the JavaScript runtime, leveraging its dead code elimination for feature flags and its faster startup times.
React for CLI: Using Ink (React for terminals) is bold. It means their terminal UI is component-based with state management, just like a web app.
Zod v4 for validation: Schema validation is everywhere. Every tool input, every API response, every config file.
~50 slash commands: From /commit to /review-pr to memory management — there’s a command system as rich as any IDE.
Lazy-loaded modules: Heavy dependencies like OpenTelemetry and gRPC are lazy-loaded to keep startup fast.
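The lazy-loading decision in the last bullet boils down to a memoized thunk: wrap the expensive setup in a function and only run it on first use. A minimal sketch, with a stand-in for the heavy dependency (none of this is Anthropic’s actual code):

```typescript
// Illustrative sketch of lazy loading: heavy setup is deferred until
// first use, so it costs nothing at startup.

let initCount = 0; // tracks how many times the expensive setup ran

function lazy<T>(load: () => T): () => T {
  let cached: T | undefined;
  return () => {
    if (cached === undefined) cached = load();
    return cached;
  };
}

// Stand-in for an expensive module like OpenTelemetry or gRPC.
const getTelemetry = lazy(() => {
  initCount++; // expensive initialization would happen here
  return { record: (event: string) => `recorded ${event}` };
});

// Nothing has been initialized at startup...
console.log(initCount); // 0
// ...until the first call; repeat calls reuse the cached instance.
getTelemetry().record("session_start");
getTelemetry().record("tool_call");
console.log(initCount); // 1
```

In a real codebase the `load` callback would typically be a dynamic `import()`, so the dependency’s code is not even parsed until it is needed.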

Euro-Office Wants To Replace Google Docs and Microsoft Office

Posted by BeauHD
Euro-Office is a new open-source project supported by several European companies that aims to offer a “truly open, transparent and sovereign solution for collaborative document editing,” using OnlyOffice as a starting point. The project is positioned around European digital independence and familiar Office-style editing, though it has already drawn pushback from OnlyOffice over alleged licensing violations. “The company behind OnlyOffice is also based in Russia, and Russia is still heavily sanctioned by most European nations due to the country’s ongoing invasion of Ukraine,” adds How-To Geek. From the report:
Euro-Office is a new open-source project supported by Nextcloud, EuroStack, Wiki, Proton, Soverin, Abilian, and other companies based in Europe. The goal is to build an online office suite that can open and edit standard Microsoft Office documents (DOCX, PPTX, XLSX) and the OpenDocument format (ODS, ODT, ODP) used by LibreOffice and OpenOffice. The current design is remarkably close to Microsoft Office and its tabbed toolbars, so there shouldn’t be much of a learning curve for anyone used to Word, Excel, or PowerPoint.

Importantly, Euro-Office is only the document editing component. It’s designed to be added to cloud storage services, online wikis, project management tools, and other software. For example, you could have some Word documents in your Nextcloud file storage, and clicking them in a browser could open the Euro-Office editor. That way, Nextcloud (or Proton, or anyone else) doesn’t have to build its own document editor from scratch.

Euro-Office is based on OnlyOffice, which is open-source under the AGPL license. The project explained that “Contributing is impossible or greatly discouraged” with OnlyOffice’s developers, with outside code changes rarely accepted, so a hard fork was required. The company behind OnlyOffice is also based in Russia, and Russia is still heavily sanctioned by most European nations due to the country’s ongoing invasion of Ukraine. The project’s home page explains, “A lot of users and customers require software that is not potentially influenced or controlled by the Russian government.”
As for why OnlyOffice was chosen over LibreOffice, the project simply said: “We believe open source is about collaboration, and we look for opportunities to integrate and collaborate with the LibreOffice community and companies like Collabora.”
UPDATE: Slashdot reader Elektroschock shares a statement from OnlyOffice CEO Lev Bannov, expressing his concerns about Euro-Office’s inclusion of its software with trademarks removed: “We liked the AGPL v3 license because its 7th clause allows us to ensure that our code retains its original attributes, so that users are able to clearly identify the developers and the brand behind the program…”

Bannov continued: “The core issue here isn’t just about what the AGPL license states, but about the additional provisions we, as the authors, have included. This is a critical distinction, even if some may argue otherwise. We firmly assert that the Euro-Office project is currently infringing on our copyright in a deliberate and unacceptable manner.”

“As the creators of ONLYOFFICE, we want to make our position unequivocally clear: we do not grant anyone the right to remove our branding or alter our open-source code without proper attribution. This principle is non-negotiable and will never change. We demand that the Euro-Office project either restore our branding and attributions or roll back all forks of our project, refraining from using our code without proper acknowledgment of ONLYOFFICE.”

uhh

By nomadic • Score: 5, Insightful Thread

“As for why OnlyOffice was chosen over LibreOffice, the project simply said: ‘We believe open source is about collaboration, and we look for opportunities to integrate and collaborate with the LibreOffice community and companies like Collabora.’”

Ok, since they just refuse to answer the question, does anyone else know why OnlyOffice was chosen over LibreOffice?

Guessing

By DrMrLordX • Score: 4, Insightful Thread

Just a guess but it seems like the Euro-Office team is keen on violating a license or two, and perhaps they found it easier/simpler to violate the OnlyOffice license.

OnlyOffice CEO doesn’t understand AGPL

By jarkus4 • Score: 4, Insightful Thread

Clearly OnlyOffice tried to run around AGPL terms by forcing any derivative work to use their logo while simultaneously refusing to allow its use (trademark). They tried to do this by using clause 7(b):
> you may (…) supplement the terms of this License with terms: (…)
> b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it

Unfortunately for them, a logo is neither a legal notice nor an author attribution, so they basically created a “further restriction” under the terms of the AGPL, which recipients are allowed to drop:
> If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term.

Re:Guessing

By DeBaas • Score: 5, Informative Thread

IANAL, but it is OnlyOffice’s assumption that there is a violation. Euro-Office, in a commit message on GitHub:

Remove unenforceable and non-obligatory Section 7 additions from core
Under AGPLv3 Section 7, downstream recipients may remove terms that constitute “further restrictions” beyond what Section 7(a)-(f) permits, as affirmed by the FSF.

Logo retention requirement (Section 7(b)): Section 7(b) permits requiring preservation of “legal notices or author attributions”. A product logo is a trademark/brand element, not a legal notice or author attribution. It therefore exceeds the scope of 7(b), qualifies as a “further restriction” under Section 10, and may be removed.

Trademark disclaimer (Section 7(e)): Purely declaratory — the AGPLv3 does not grant trademark rights in any case. The disclaimer creates no affirmative obligation on the licensee and removing it changes no rights or obligations. There is no legal basis requiring its preservation.

Apparently AGPLv3 allows some additions under Section 7; what is allowed is defined in 7(a)-(f). OnlyOffice feels that 7(b)’s attribution provision allows them to demand that the logo and brand elements stay. Euro-Office apparently disagrees.
Euro-Office also claims that 7(e) provides no legal basis for it either.

I can’t assess who is right.

As to why OnlyOffice over Collabora: in my experience, since OnlyOffice uses MS’s OOXML format natively, there are a few fewer issues with MS Office files, layout issues in particular. Another thing I once noticed was embedded media files in a PowerPoint file that worked in OnlyOffice and not in LibreOffice.

Although OnlyOffice is now officially based in the EU, there remain some doubts about them as they originated in Russia.

Re: I get that they don’t like MS office

By Fons_de_spons • Score: 4, Informative Thread
I like Microsoft Office better than LibreOffice, which I used for a few years before I got MS Office at home through work. But since Snowden, it has been pretty clear to me that there probably were intentional back doors.
Best case, this was only used for (inter)national security. Worst case? This was abused to do industrial espionage, extort people, … With Trump? You bet they will abuse it without a second thought. We are switching back to LibreOffice now. Too bad though. I miss PowerPoint and OneDrive. Oh well… we will adapt. It is for the greater good.

US Paves Way For Private Assets To Be Included In 401(k) Retirement Plans

Posted by BeauHD
An anonymous reader quotes a report from Reuters:
The Trump administration on Monday issued a long-awaited proposed rule to open up retirement plans to alternative assets, paving the way for private equity and cryptocurrencies to be added to 401(k) accounts. The measure, announced by the U.S. Department of Labor, is intended to ease longstanding barriers to incorporating these less liquid and less transparent assets into American retirement plans. It follows an executive order from President Donald Trump last summer and could clear the way for alternative asset management firms to tap a large new source of capital.

Industry groups have argued private market investments can enhance long-term returns and diversification for retirement savers, while skeptics warn higher fees, complexity and limited liquidity could limit those gains and pose risks for retail investors. Some private market funds that are already available to wealthier individual investors have shown signs of strain in recent months. Private credit funds known as business development companies have seen a wave of withdrawals. Treasury Secretary Scott Bessent said the proposed rule was “an initial step” and aimed to be “mindful of the importance of protecting retirement assets.”

The guidance lays out how plan trustees, who have a legal fiduciary duty to act in the best interest of members, can incorporate these assets. They would have to “objectively, thoroughly, and analytically consider, and make determinations on factors including performance, fees, liquidity, valuation, performance benchmarks, and complexity,” the DOL said. Trustees who abide by them will be granted safe harbor that protects them from lawsuits, it added. The Supreme Court agreed earlier this year to hear one such case filed in 2019 by a former Intel employee claiming trustees made “imprudent” decisions by investing in hedge funds and private equity funds.

MAGIC BEANS!

By Thud457 • Score: 5, Insightful Thread
Finally some sanity, we can invest our retirement nest-egg 100% in Trumpcoin! WINNING!

Holy…

By nightflameauto • Score: 4, Informative Thread

These sick fucks won’t be satisfied until they can do whatever they want with our 401k money. From guaranteed company pensions to 401k plans that are simply fodder for the Wall Street assholes to claim priority over our retirement savings to 401k money being used to prop up the crypto bros in one lifetime’s span. Amazing. Our society just keeps finding ways to tell us that our main priority is making sure the rich keep getting richer at the expense of the rest of us.

And the grift continues

By sizzzzlerz • Score: 5, Informative Thread

Of course, the trump crime family has huge investments in crypto and stands to make millions from the suckers. Nothing slimy or sleazy in that!

Elon Musk is going to dump 1.5 trillion

By rsilvergun • Score: 5, Informative Thread
Of bad stock into your 401k. The YouTuber Patrick Boyle has a detailed video on the subject.

Basically SpaceX is going to be valued at 1.5 trillion. However it is impossible for it to reach that valuation in the real world.

SpaceX already has all the launch customers it can possibly get, even under the best-case scenario. And an unfavorable administration would almost certainly start looking for alternatives because Elon meddled in a war.

So the only possible growth sector for SpaceX is launching its own satellites, specifically the ones for internet.

But that’s a dead end too because there aren’t enough customers who can afford high-speed internet and also do not have access to some form of landline based internet like cable or DSL

The only other growth sector would be AI bullshit, but Elon has lost most of his engineers to other companies. SpaceX got this huge boost because Elon had a mystique and he was talking about going to Mars, so a shitload of rocket engineers took lower pay than they could get in any other job and worked longer hours to work for SpaceX. That isn’t happening with Elon’s AI companies. So he can’t compete, and the stuff he’s building is barely better than what you could build yourself and run off your own GPU.

Everybody knows this, at least everybody who is investing that kind of money, so in order to get the kind of money he wants he’s doing a weird stock scheme that limits access to the stock in order to drive up the price. Basically a few insiders will get all the profit and it’s going to leave a huge amount of worthless stock that needs to be sent somewhere.

Normally it would be dumped into public pensions, but those have been maxed out with bad stock already. So our 401Ks are going to get hammered.

This is just the largest of many scams that are going to loot your retirement, and there’s basically nothing you can do about it except vote for pro-consumer politicians who want to regulate Wall Street. But those are going to be people like Elizabeth Warren and AOC and Bernie Sanders, and frankly people don’t like them… And in politics, likeability is basically everything now.

What I’m saying is that if you are about to retire or even if you’re already retired, you’re fucked. You have money, somebody wants it, and they’re going to get it.

Re:It’s all legalized gambling anyway....

By Comboman • Score: 4, Insightful Thread

>>I don’t see why I care about government trying to protect people from themselves with this one?

Because when they lose everything and have nothing left to retire on, guess who will end up paying to bail them out? It’s not the scammers who got rich selling them snake oil; it’s the rest of us. And don’t think they won’t get a bailout: retired people are an important voting bloc and will support whoever promises them the most.

Quadratic Gravity Theory Reshapes Quantum View of Big Bang

Posted by BeauHD
Researchers at the University of Waterloo say a new “quadratic quantum gravity” framework could explain the universe’s rapid early expansion without adding extra ingredients to Einstein’s theory by hand. The idea is especially notable because it makes testable predictions, including a minimum level of primordial gravitational waves that future experiments may be able to detect. “Even though this model deals with incredibly high energies, it leads to clear predictions that today’s experiments can actually look for,” said Dr. Niayesh Afshordi, professor of physics and astronomy at the University of Waterloo and Perimeter Institute (PI). “That direct link between quantum gravity and real data is rare and exciting.” Phys.org reports:
The research team found that the Big Bang’s rapid early expansion can emerge naturally from this simple, consistent theory of quantum gravity, without adding any extra ingredients. This early burst of expansion, often called inflation, is a central idea in modern cosmology because it explains why the universe looks the way it does today.

Their model also predicts a minimum amount of primordial gravitational waves, which are tiny ripples in spacetime geometry created in the first moments after the Big Bang. These signals may be detectable in upcoming experiments, offering a rare chance to test ideas about the universe’s quantum origins.

[…] The team plans to refine their predictions for upcoming experiments to explore how their framework connects to particle physics and other puzzles about the early universe. Their long-term goal is to strengthen the bridge between quantum gravity and observational cosmology.
The research has been published in the journal Physical Review Letters.

Doing the editor’s job.

By greytree • Score: 5, Informative Thread
“Quadratic gravity (QG) is an extension of general relativity obtained by adding all local quadratic-in-curvature terms to the Einstein–Hilbert Lagrangian.[1] Doing this makes the theory renormalizable.[1] This is one of numerous alternatives to general relativity.”

https://en.wikipedia.org/wiki/Quadratic_gravity

Get Off My Lawn

By Spinlock_1977 • Score: 5, Funny Thread

Don’t you young whipper-snappers go gettin’ yer new-fangled quadratic quantum gravity up in my 11-dimensional string theory!

Re:Got some questions

By burtosis • Score: 5, Interesting Thread

Wouldn’t you have to be on the very edge of the universe to feel ancient gravitational waves? It’s not like they bounce like sound waves.

There is no edge; every point is at the precise center, including each of your two eyes. Because light, gravitational waves, and causality all travel at a single fixed speed, the farther away something is, the further back in time you see it, until you reach a point where you cannot see beyond because it is too far back in time and approaches the Big Bang. Gravitational waves from the Big Bang will always be rippling through all points, just as you can look in any direction and see the microwave background, which is the Big Bang stretched out to the point that it is far cooler and of longer wavelengths.

And don’t they dissipate the further they get from the source, making them undetectable?

Gravitational waves are never observed directly; what we measure is the distortion they induce. For a nearly exact analogy, think of two floating bits on a still lake. Perception only occurs along the surface of the water; the bits cannot see, measure, or perceive up and down. When a ripple passes, the two bits move toward and away from each other as the surface stretches and shrinks to accommodate the wave, and that distortion is what is measured, not the wave itself. The measured strain boils down to the second derivative of the mass quadrupole moment tensor, and it falls off only linearly with distance, unlike the intensity of other waves we measure, which falls off with the inverse square.
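A sketch of the standard textbook result being paraphrased here: with Q_ij the traceless mass quadrupole moment of the source and r the distance, the leading-order strain is

```latex
h_{ij}(t) \approx \frac{2G}{c^{4}\, r}\, \ddot{Q}_{ij}\!\left(t - \frac{r}{c}\right)
```

Detectors measure the strain h, which falls off as 1/r, even though the energy the wave carries still dilutes as 1/r².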

And how does this explain the ridiculous notion that matter traveled faster than the speed of light shortly after the big bang?

The universe is the same everywhere at the largest scales, including being at almost exactly the same temperature, despite regions not being causally connected if you look at how causality works on our scales, times, and energy levels. The most reasonable explanation is that the universe was once all in close contact, even points now 90 billion light-years apart. The universe is also expanding the same everywhere on the largest scales, so if you rewind time, everything goes back to one point even if there isn’t a “center”. And note that no matter moved through space faster than light; space itself expanded, which relativity does not forbid. So the crazy thing is to look at all the evidence for this (many other kinds of measurement also confirm it is how things look) and say it’s all wrong because it does not meet personal expectations. That’s not how science works.

Scientists Shocked To Find Lab Gloves May Be Skewing Microplastics Data

Posted by BeauHD
Researchers found that common nitrile and latex lab gloves can shed stearate particles that closely resemble microplastics, potentially “increasing the risk of false positives when studying microplastic pollution,” reports ScienceDaily.

“We may be overestimating microplastics, but there should be none,” said Anne McNeil, senior author of the study and U-M professor of chemistry, macromolecular science and engineering. “There’s still a lot out there, and that’s the problem.” From the report:
Researchers found that these gloves can unintentionally transfer particles onto lab tools used to analyze air, water, and other environmental samples. The contamination comes from stearates, which are not plastics but can closely resemble them during testing. Because of this, scientists may be detecting particles that are not true microplastics. To reduce this issue, U-M researchers Madeline Clough and Anne McNeil recommend using cleanroom gloves, which release far fewer particles.

Stearates are salt-based, soap-like substances added to disposable gloves to help them separate easily from molds during manufacturing. However, their chemical similarity to certain plastics makes them difficult to distinguish in lab analyses, increasing the risk of false positives when studying microplastic pollution.
“For microplastics researchers who have these impacted datasets, there’s still hope to recover them and find a true quantity of microplastics,” said researcher and recent doctoral graduate Madeline Clough. “This field is very challenging to work in because there’s plastic everywhere,” McNeil said. “But that’s why we need chemists and people who understand chemical structure to be working in this field.”
The findings have been published in the journal Analytical Methods.

Re:Latex schmubs

By bn-7bc • Score: 5, Informative Thread
Look, they don’t say that the microplastics are all from their gloves, they just say that their estimates might have been skewed by the gloves, i.e. “please be aware your results may need more scrutiny”. No need to throw all their results out at once.

Re: Latex schmubs

By devslash0 • Score: 5, Informative Thread

Actually - there is a very valid reason to throw previous results out of the window. In science, a contaminated sample completely invalidates the result. You can’t rely on any findings built on false premises.

To summarise then

By EldoranDark • Score: 5, Insightful Thread
We still know microplastics are bad because deliberately introducing them to lab animals produces all sorts of bad outcomes. Our samples might be skewed toward showing more of them in more places than there actually are, so perhaps things are not quite as bad yet. On the other hand, the same bias could mean that microplastics become a problem at smaller quantities than previously thought.

Re:And media selection of alarmist data

By Rei • Score: 5, Interesting Thread

So, when we say microplastics, we really mainly mean nanoplastics - the stuff made from, say, drinking hot liquids from low-melting-point plastic containers. And yeah, they very much look like a problem. The strongest evidence is for cardiovascular disease. The 2024 NEJM study, for example, found that patients with above-threshold levels of nanoplastics in carotid artery plaque were 4.5x more likely to suffer a heart attack. Neurologically, they cross the blood-brain barrier (and quite quickly). A 2023 study found that they cause alpha-synuclein to misfold and clump together, a hallmark of Parkinson’s and various kinds of dementia. Broadly, they’re associated with oxidative stress, neuroinflammation, protein aggregation, and neurotransmitter alterations. Oxidative stress is due to cells struggling to break down nanoplastics in them. They’re also associated with immunotoxicity, inflammatory bowel disease, and reproductive dysfunction, including elevating inflammatory markers, impairing sperm quality, and modulating the tumor microenvironment. With respect to reproduction, they’re also associated with epigenetic dysregulation, which can lead to heritable changes.

And here’s one of the things that get me - and let me briefly switch to a different topic before looping back. All over, there’s a rush to ban polycarbonate due to concerns over a degradation product (bisphenol-A), because it’s (very weakly) estrogenic. But typical effective estrogenic activity from typical levels of bisphenol-A are orders of magnitude lower than that of phytoestrogens in food and supplements; bisphenol-A is just too rare to exert much impact. Phytoestrogens have way better PR than bisphenol-A, and people spend money buying products specifically to consume more of them. Some arguments against bisphenol-A focus on what type of estrogenic activity it can promote (more proliferative activity), but that falls apart given that different phytoestrogens span the whole gamut of types of activation. Earlier research arguing for an association with estrogen-linked cancer seems to have fallen apart in more recent studies. It does seem associated with PCOS, but it’s hard to describe it as a causal association, because PCOS is associated with all sorts of things, including diet (which could change the exposure rate vs. non-PCOS populations) and significant hormonal changes (which could change the clearance rate of bisphenol-A vs. non-PCOS populations). In short, bisphenol-A from polycarbonate is not without concern, but the concern level seems like it should be much lower than with nanoplastics.

Why bring this up? Because polycarbonate is a low-nanoplastic-emitting material. It is a quite resilient, heat-tolerant plastic, and thus - being much further from its glass transition temperature - is not particularly prone to shedding nanoplastics. By contrast, its replacements - polyethylene, polypropylene, polyethylene terephthalate, etc. - are highly associated with nanoplastic release, particularly with hot liquids. So by banning polycarbonate, we increase our exposure to nanoplastics, which are much better associated with actual harms. And unlike bisphenol-A, which is rapidly eliminated from the body, nanoplastics persist. You can’t get rid of them. If some big harm is discovered with bisphenol-A that suddenly makes the risk picture seem much bigger than with nanoplastics, we can then just stop using it, and any further harm is gone. But we can’t do that with nanoplastics.

People seriously need to think more about substitution risks when banning products. The EU in p

Re:And media selection of alarmist data

By Rei • Score: 5, Interesting Thread

A bit more about the latter. Beyond organophosphates, the main other alternative is pyrethroids. These are highly toxic to aquatic life, and they’re contact poisons to pollinators just landing on the surface (some anti-insect clothing is soaked in pyrethrin for its effect). Also, neonicotinoids are often applied as seed coatings (which are taken up and spread through the plant), so exposure is primarily confined to the plant itself. The alternatives are commonly foliar sprays. This means drift and non-target impacts as well, such as in your shelterbelts, private gardens, neighbors’ homes, etc. You also have to use far higher total pesticide quantities with foliar sprays instead of systemics, which not only drift but also wash off, etc. Neonicotinoids can impact floral visitors with adverse sublethal impacts, but large pyrethroid sprayings, for example, can cause massive immediate fatal knockdown events of whole populations of pollinators.

Regrettable substitution is a real thing. We need to factor it in better. And that applies to nanoplastics as well.

AI Data Centers Can Warm Surrounding Areas By Up To 9.1C

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from New Scientist:
Andrea Marinoni at the University of Cambridge, UK, and his colleagues saw that the amount of energy needed to run a data centre had been steadily increasing of late and was likely to “explode” in the coming years, so wanted to quantify the impact. The researchers took satellite measurements of land surface temperatures over the past 20 years and cross-referenced them against the geographical coordinates of more than 8400 AI data centers. Recognizing that surface temperature could be affected by other factors, the researchers chose to focus their investigation on data centers located away from densely populated areas.

They discovered that land surface temperatures increased by an average of 2C (3.6F) in the months after an AI data center started operations. In the most extreme cases, the increase in temperature was 9.1C (16.4F). The effect wasn’t limited to the immediate surroundings of the data centers: the team found increased temperatures up to 10 kilometers away. Seven kilometers away, there was only a 30 percent reduction in the intensity. “The results we had were quite surprising,” says Marinoni. “This could become a huge problem.”

Using population data, the researchers estimate that more than 340 million people live within 10 kilometers of data centers, so live in a place that is warmer than it would be if the data centre hadn’t been built there. Marinoni says that areas including the Bajio region in Mexico and the Aragon province in Spain saw a 2C (3.6F) temperature increase in the 20 years between 2004 and 2024 that couldn’t otherwise be explained.
University of Bristol researcher Chris Preist said the findings may be more complicated than they look. “It would be worth doing follow-up research to understand to what extent it’s the heat generated from computation versus the heat generated from the building itself,” he says. For example, the building being heated by sunlight may be part of the effect.
The findings of the study, which has not yet been peer-reviewed, can be found on arXiv.

Re:Farm pasture versus concrete buildings?

By maladroit • Score: 5, Interesting Thread

From page 5:

“These results are dramatically impressive, especially considering that the typical LST increase caused by the quintessential example of compound of anthropogenic activities - the urban heat island effect - has been estimated in the 4 and 6 [degrees] C interval. This apparent step function emphasize the clear effect of AI hyperscalers on their surrounding areas, so much that it can match the impact of “islands” of higher temperatures: therefore, we call this the data heat island effect.”

Re:7 KM away

By Mr. Dollar Ton • Score: 5, Interesting Thread

The comparison is always a cost/benefit analysis. Steel mills actually deliver a useful product, so the costs of their existence are justified. An “AI” datacenter produces hallucinations; its existence is completely unjustifiable.

Re:Farm pasture versus concrete buildings?

By maladroit • Score: 5, Insightful Thread

Since concrete buildings and paved parking lots are part of the urban heat island effect, yes it does.

The question could be phrased more generally: how much of the *data* heat island effect is because it’s a data center and not another type of building.

The answer, apparently: a lot.

Re:Farm pasture versus concrete buildings?

By maladroit • Score: 5, Informative Thread

From the summary:

“Recognizing that surface temperature could be affected by other factors, the researchers chose to focus their investigation on data centers located away from densely populated areas.”

You’ll have to look at the study data to see if that completely addresses your concern, but unsurprisingly the professional researchers have put some thought into what controls a study like this might need.

Re:In a Word

By ArchieBunker • Score: 5, Insightful Thread

If global warming is indeed a hoax, why can’t any real ‘murican researchers disprove it? You’d be hailed as a hero to the MAGA camp and might be allowed to kiss dear leader’s ring. Everyone keeps coming to the same conclusion except for a few fringe nutjobs who start going off about angels and the bible.

Microsoft Plans To Build 100% Native Apps For Windows 11

Posted by BeauHD View on SlashDot Skip
Microsoft is reportedly shifting Windows 11 app development back toward fully native apps. Rudy Huyn, a Partner Architect at Microsoft working on the Store and File Explorer, said in a post on X that he is building a new team to work on Windows apps. “You don’t need prior experience with the platform.. what matters most is strong product thinking and a deep focus on the customer,” he wrote. “If you’ve built great apps on any platform and care about crafting meaningful user experiences, I’d love to hear from you.” Huyn later said in a reply on X that the new Windows 11 apps will be “100% native.” TechSpot reports:
The description stands out at a time when many of Microsoft’s built-in tools, including Clipchamp and Copilot, rely on web technologies and Progressive Web App architectures. The company’s commitment to native performance suggests that some long-standing frustrations around responsiveness, memory use, and interface consistency could finally be addressed.

For Windows developers, Huyn’s comments hint at a change in direction. Microsoft’s recent development priorities have leaned heavily on web-based approaches, with Progressive Web Apps (PWAs) replacing or supplementing many native programs. […] Exactly which applications will be rebuilt, or how strictly “100% native” will be enforced, remains unclear. Some current Microsoft apps classified as native still depend on WebView for specific features. But the renewed emphasis already has developers paying attention.

Finally?

By TheDarkMaster • Score: 5, Insightful Thread
Finally? I’m tired of seeing apps in Windows 11 that are an integral part of the operating system and should therefore be native, but were built with that total, complete, and absolute shit that is “web apps”. “Web apps” only make sense when you really need independence from the OS, to the point of accepting a loss of performance and very bad resource usage. They have absolutely no place in software that would never be used on another operating system.

Re:They don’t want to make other OSes more attract

By TheDarkMaster • Score: 5, Informative Thread
Just a few years ago, an app with almost the same functionality as WhatsApp (though it wouldn’t have video or audio, since that wasn’t feasible back then on dial-up or DSL connections) wouldn’t have used more than 50MB even under heavy use. Nowadays, however, an app with the same goals easily exceeds 1.5GB of RAM.

1.5 GB of RAM for an instant messaging app. It was possible to run the entire Windows XP system plus user applications on 128MB of RAM… 256MB was a luxury.

And for those complete idiots who keep going on and on about how “memory that isn’t used is wasted memory,” I have two things to say to those clowns:

1) There is absolutely no reason to use 1GB of RAM for a task that you can easily handle with just 10MB of RAM. Just because your computer has 32GB of RAM doesn’t mean you have to use all of it just for your application;

2) Your application isn’t the only thing running on the user’s computer. What happens if the dozens of processes running on the user’s computer all have the same idiotic idea of trying to reserve all the computer’s memory for themselves?

Re: the only reason

By liqu1d • Score: 5, Informative Thread
You’re on iOS, right? Disabling “smart quotes” in keyboard settings will fix it. One day slashdot might vibe code some fixes and join the modern web… or they might not.

Re: Here we go again....

By Archfeld • Score: 5, Insightful Thread

I used the hell out of MS Works. It did what I needed quickly and without the HUGE bloat of M$Office and Word. Though I came from a GML background so it was easy…

Re:developer market share

By DrXym • Score: 5, Insightful Thread
I’ve programmed Win32 for decades and while it was fine for the time, much of the user facing APIs are obsolete for modern GUI development and some of the non-user facing stuff too. But Microsoft really hasn’t produced a credible replacement for it and has shat out a succession of technologies one after the other that devs are supposed to use before Microsoft abandons them for the next - Win32 (and layers on top like MFC), WinForms, WPF (and Silverlight), UWP, Windows App SDK / WinUI.

Some of these technologies are overlapping, but each was intended to corral devs into making Metro apps or Windows Store apps and burn their bridges in the process. It went down like a lead balloon. Now they’re dialing back, trying to make WinUI somewhat agnostic to the version of Windows it’s running on, but who knows if it will stick. It’s not the only pain point, because Microsoft even extended the C++ language to deal with these APIs, with new types like “ref” and “partial” and hat notation to deal with garbage-collected objects, auto-generated classes, and other things that also impede portability.

So it’s no wonder that app developers have gone for web apps (and Qt), because it makes it easier to write portable apps and acts as insulation from Microsoft’s mercurial view of the world.

After 16 Years and $8 Billion, the Military’s New GPS Software Still Doesn’t Work

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Ars Technica:
Last year, just before the Fourth of July holiday, the US Space Force officially took ownership of a new operating system for the GPS navigation network, raising hopes that one of the military’s most troubled space programs might finally bear fruit. The GPS Next-Generation Operational Control System, or OCX, is designed for command and control of the military’s constellation of more than 30 GPS satellites. It consists of software to handle new signals and jam-resistant capabilities of the latest generation of GPS satellites, GPS III, which started launching in 2018. The ground segment also includes two master control stations and upgrades to ground monitoring stations around the world, among other hardware elements.

RTX Corporation, formerly known as Raytheon, won a Pentagon contract in 2010 to develop and deliver the control system. The program was supposed to be complete in 2016 at a cost of $3.7 billion. Today, the official cost for the ground system for the GPS III satellites stands at $7.6 billion. RTX is developing an OCX augmentation projected to cost more than $400 million to support a new series of GPS IIIF satellites set to begin launching next year, bringing the total effort to $8 billion.

Although RTX delivered OCX to the Space Force last July, the ground segment remains nonoperational. Nine months later, the Pentagon may soon call it quits on the program. Thomas Ainsworth, assistant secretary of the Air Force for space acquisition and integration, told Congress last week that OCX is still struggling.
The GAO found the OCX program was undermined by “poor acquisition decisions and a slow recognition of development problems.” By 2016, it had blown past cost and schedule targets badly enough to trigger a Pentagon review for possible cancellation.
Officials also pointed to cybersecurity software issues, a “persistently high software development defect rate,” the government’s lack of software expertise, and Raytheon’s “poor systems engineering” practices. Even after the military restructured the program, it kept running into delays and overruns, with Ainsworth telling lawmakers, “It’s a very stressing program” and adding, “We are still considering how to ensure we move forward.”

Where is DOGE now?

By edi_guy • Score: 5, Insightful Thread

Subject is the question. Where is DOGE on the big stuff? The Pentagon wastes more every month in fraud, waste, and abuse than USAID spends annually. But somehow charity gets the axe and Raytheon keeps getting multi-billion-dollar contracts, no strings attached. Can anyone put aside the woke distraction and look at the serious problems?!?

Re:Aerospace FFRDC role?

By david.emery • Score: 5, Insightful Thread

Failing audits is frankly independent from failing programs. The audits usually have problems tracking money flows and then property within the government. The contractor’s expenditures are closely monitored. That doesn’t mean they’re in-line, but they’re auditable. And when the audit discovers problems, there are ways for the government to respond. I’ve seen those applied rather frequently.

One common pattern is a program starts down the wrong path, and blows initial cost/schedule/performance. But that capability is needed badly (often because its predecessor program didn’t deliver). So the Service piles on more requirements and ‘readjusts the baseline’ for additional funding, because “if we don’t get it in this Program of Record, it’ll be at least a decade before we can start a new Program of Record to get what we need.” That just adds requirements to something that is already behind. If I had to guess what happened here, I bet there’s some of that flavor over the execution. In my experience, most programs started with the combination of unachievable or under-specified requirements AND unachievable/unrealistic schedule.

(A ‘Program of Record’, by the way, consists of an approved requirements document, an approved POM budget for the next 5 years showing the RDTE money, the OPA purchasing money, and the OMA maintenance money FOR EACH FISCAL YEAR. If you run out of RDTE money but haven’t finished the design, you’re in trouble. The third element is the approved procurement strategy, that says how you’ll buy it. That includes the kind of contract, firm fixed price or cost plus, the kinds of oversight, when and how prototypes will be delivered and tested, etc.)

Not Surprised

By Bahbus • Score: 5, Insightful Thread

RTX Corporation, formerly known as Raytheon, hasn’t produced a single thing of halfway decent quality in many decades and only hires the shittiest engineers.

Re:It’s called corruption

By gtall • Score: 5, Insightful Thread

Reagan was on a privatization kick. It resulted in wonderful growth for the Beltway Bandits. Another reason privatization does not work for government is because government is not private enterprise that can decide what markets to enter and which to exit. Programs are mandated by law. Sure laws can change but it is a long arduous path. And you wouldn’t want it any other way. Changing things on a whim has brought current U.S. to its knees, and the damage appears to be long lasting.

Want to see privatization at work? Look at the U.S. health system. Those insurance companies use actuaries to determine who gets covered. A good team of actuaries can put a price on your grandmother and her cat and will if you ask them to. As a consequence, we have a health system which can send you to the poor house in under a year because of a medical condition.

Re:Recommended reading

By Registered Coward v2 • Score: 5, Interesting Thread

“The Pentagon Wars” by Col. James G. Burton. Burton was part of the 1980s “Fighter Mafia” who got the F-16 built, against Pentagon tendencies for every new plane to be twice the weight and twice the cost of the last one. (The F-35 continues the tradition.)

John Boyd was part of those battles, recounted in Robert Coram’s excellent biography of him, titled “Boyd: The Fighter Pilot Who Changed the Art of War.”

They were the ones who publicized the $400 hammer and $600 toilet seat.

While I like those stories and many pointed out serious problems in government procurement, some also fail to tell the whole story because the headline is what someone wants since it causes outrage.

We used bolts that cost a lot of money and looked the same as a 10-cent one from a hardware store, but ours were designed to perform to an exact spec, traceable to the ore, and tested to ensure they met specs. You don’t want that bolt to fail to perform at a critical moment when you are above, on, or below the ocean. We had bronze tools that cost a lot but looked like ordinary tools, because sparks around oxygen tanks can cause issues. My point: There is a lot of wasteful spending and overpriced stuff in government contracting, but sometimes there is more to the story than simply “Military spends $x00 for something that is $10 at Walmart…”

One big problem is how the government budgets. You generally have to spend all your money on an annual basis, and if you give back money that you saved, you risk getting less next year because “you didn’t need as much as you said last year…” So, come Q4, you go on a spending spree to spend whatever is left; trying to spend more is better than trying to spend less.

Samsung Is Bringing AirDrop-Style Sharing to Older Galaxy Devices

Posted by BeauHD View on SlashDot Skip
Samsung is reportedly planning to roll out AirDrop-style file sharing for older Galaxy phones via a Quick Share update. Early reports suggest the feature is appearing on devices from the Galaxy S22 through the S25, though it is not actually working yet. Android Central reports:
As spotted by Reddit users (via Tarun Vats on X), a Quick Share app update is rolling out via the Galaxy Store on older Samsung devices that appears to add support for AirDrop file sharing with Apple devices. Users report seeing the same new “Share with Apple devices” section we first saw on Galaxy S26 devices in the Settings app after updating Quick Share.

The update is reportedly showing up on Galaxy models ranging from the Galaxy S22 to last year’s Galaxy S25 series. The catch, however, is that the feature doesn’t seem to be working yet. It’s appearing on devices running One UI 8 as well as the One UI 8.5 beta, but enabling the toggle doesn’t activate the functionality for now.

Users say that turning on the feature doesn’t make their device visible to Apple devices, and no Apple devices show up in Quick Share either. It’s possible Samsung or Google still needs to enable it server-side, but it does confirm that broader rollout to older Galaxy devices is coming. The feature could arrive fully with the One UI 8.5 update.

LocalSend already works with everything

By slaker • Score: 3 Thread

I don’t know why I should care about limited compatibility for a subset of devices with another subset of devices. There’s some of everything in my home. I found a tool called LocalSend years ago that allows me to do mildly obnoxious data transfers between arbitrary devices regardless of platform.

OkCupid Settles FTC Case On Alleged Misuse of Its Users’ Personal Data

Posted by BeauHD View on SlashDot Skip
OkCupid and parent company Match Group settled an FTC case dating back to 2014 over allegations that the dating app shared users’ photos and other personal data with a third party without proper disclosure or opt-out rights. Engadget reports:
According to the FTC, OkCupid’s privacy policy at the time noted that the company wouldn’t share a user’s personal information with others, except in some cases including “service providers, business partners, other entities within its family of businesses.” However, the lawsuit accused OkCupid of sharing three million photos of its users with Clarifai, which the FTC claims is an “unrelated third party” that didn’t fall under the allowed entities. On top of that, the lawsuit alleged that OkCupid didn’t inform its users of this data sharing, nor give them a chance to opt out.

Moving forward, the settlement would “permanently prohibit” Match Group, which owns OkCupid, and Humor Rainbow, which operates OkCupid, from misrepresenting what kind of personal information it collects, the purpose for collecting the data, and any consumer choices to prevent data collection. Even after the 2014 incident, OkCupid was found to have security flaws that could’ve exposed user account info, though those were quickly patched in 2020.

Every time

By Valgrus Thunderaxe • Score: 5, Insightful Thread
a company says they won’t share personal data, they do just that. Always, and without fail.

I’d settle too

By DarkOx • Score: 3 Thread

Moving forward, the settlement would “permanently prohibit” Match Group, which owns OkCupid, and Humor Rainbow, which operates OkCupid, from misrepresenting what kind of personal information it collects, the purpose for collecting the data and any consumer choices to prevent data collection.

So basically the FTC said: guys, say you’re really sorry and promise not to do it again.

“with no monetary penalty”?

By balaam’s ass • Score: 3 Thread

How then is it a settlement? And how are users to be compensated?

Life With AI Causing Human Brain ‘Fry’

Posted by BeauHD View on SlashDot Skip
fjo3 shares a report from France 24:
Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments by hard-core AI adopters. Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon “AI brain fry,” a state of mental exhaustion stemming “from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits.”

The rise of AI agents that tend to computer tasks on demand has put users in the position of managing smart, fast digital workers rather than having to grind through jobs themselves. “It’s a brand-new kind of cognitive load,” said Ben Wigler, co-founder of the start-up LoveMind AI. “You have to really babysit these models.” […] “There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours,” Wigler said.

[Adam Mackintosh, a programmer for a Canadian company] recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application. “At the end, I felt like I couldn’t code anymore,” he recalled. “I could tell my dopamine was shot because I was irritable and didn’t want to answer basic questions about my day.”

BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI. However, “That self-care piece is not really an American workplace value,” Wigler said. “So, I am very skeptical as to whether or not it’s going to be healthy or even high quality in the long term.”
Notably, the report says everyone interviewed for the article “expressed overall positive views of AI despite the downsides.” In fact, a recent BCG study actually found a decline in burnout rates when AI took over repetitive work tasks.

Re: Thinking vs drudge work

By Big Hairy Gorilla • Score: 5, Insightful Thread
I don’t think you’re wrong, but… we aren’t just talking about “thinking”. You can’t analyze code without experience. Junior people can’t understand without having DONE stuff. They won’t have anything to compare to.

Re:25,000 lines of code

By toxonix • Score: 5, Insightful Thread

It takes a lot of time to create GOOD code with LLMs. The first thing it generates might be good, but not good enough to ship. All the happy-path tests and unnecessary string equals checks (like testing that a hard coded message is the exact string we specified… come on now) aren’t going to tell you about all the edge cases you missed. It can only generate what you tell it to. There will be bugs.
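The “happy-path tests and unnecessary string equals checks” complaint above can be made concrete with a minimal, hypothetical sketch (the function and test names here are illustrative, not from any real codebase):

```python
# Illustration of the comment above: an LLM-generated "happy-path" test
# versus the edge cases a first pass typically misses.

def normalize_username(name: str) -> str:
    """Lowercase and strip a username."""
    return name.strip().lower()

def test_happy_path():
    # The low-value kind of check the comment describes: it exercises
    # only the obvious input and can essentially never catch a bug.
    assert normalize_username("Alice") == "alice"

def test_edge_cases():
    # The inputs an LLM's first pass often skips entirely.
    assert normalize_username("  BOB  ") == "bob"
    assert normalize_username("") == ""        # empty input
    assert normalize_username("\t\n") == ""    # whitespace only

test_happy_path()
test_edge_cases()
```

The point is not that the happy-path test is wrong, but that it adds little: you still have to supply the edge cases yourself.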

Reviewing code is more effort than writing code

By BytePusher • Score: 5, Interesting Thread
I’ve been writing software for 30 years. Honest code reviews take more mental effort than writing the code yourself, unless the changes are small and clearly and verifiably well tested. Proper design for unit testing is hard and beyond the capabilities of AI. Hence, you can’t really do better than a human software engineer, yet.

The problems I’ve heard are a few

By rsilvergun • Score: 5, Insightful Thread
First, an AI programmer is like having a limitless supply of junior programmers doing their very first gig, and you are their manager.

Second what ends up happening is if the AI doesn’t work you’re doubling up your work because your boss tells you the AI must be working so you must be more productive. And if the AI does work it’s just doing the grunt work and now instead of having a little bit of grunt work throughout the day to rest your mind in between the hard stuff you’re expected to be full on 24/7 banging out the most difficult aspects of code one after another.

Basically it either doesn’t work and now you have double the workload without any new tools to manage that workload or it does work and now your boss expects you to crank out super code 24/7. Either way your job just got a whole lot harder and a whole lot more miserable.

Re: 25,000 lines of code

By liqu1d • Score: 5, Funny Thread
That’s why I just run a find and replace removing line breaks. Removes so many lines of code.

Judge Allows BitTorrent Seeding Claims Against Meta, Despite Lawyers ‘Lame Excuses’

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from TorrentFreak:
In an effort to gather material for its LLM training, Meta used BitTorrent to download pirated books from Anna’s Archive and other shadow libraries. According to several authors, Meta facilitated the infringement of others by “seeding” these torrents. This week, the court granted the authors permission to add these claims to their complaint, despite openly scolding their counsel for “lame excuses” and “Meta bashing.” […] The judge acknowledged that the contributory infringement claim could and should have been added back in November 2024, when the authors amended their complaint to include the distribution claim. After all, both claims arise from the same factual allegations about Meta’s torrenting activity.

“The lawyers for the named plaintiffs have no excuse for neglecting to add a contributory infringement claim based on these allegations back in November 2024,” Judge Chhabria wrote. The lawyers of the book authors claimed that the delay was the result of newly produced evidence that had “crystallized” their understanding of Meta’s uploading activity. However, that did not impress the judge. He called it a “lame excuse” and “a bunch of doubletalk,” noting that if the missing discovery truly prevented the contributory claim from being added in November 2024, the same logic would have prevented the distribution claim from being added at that time as well. “Rather than blaming Meta for producing discovery late, the plaintiffs’ lawyers should have been candid with the Court, explaining that they missed an issue in a case of first impression..,” the order reads.

Judge Chhabria went further, noting that the authors’ law firm, Boies Schiller, showed “an ongoing pattern” of distracting from its own mistakes by attacking Meta. He pointed specifically to the dispute over when Meta disclosed its fair use defense to the distribution claim, which we covered here recently, characterizing it as a false distraction. “The lawyers for the plaintiffs seem so intent on bashing Meta that they are unable to exercise proper judgment about how to represent the interests of their clients and the proposed class members,” the order reads. Despite the criticism, Chhabria granted the motion. […] For now, the case moves forward with a fourth amended complaint, three new loan-out companies added as named plaintiffs, and a growing list of BitTorrent-related claims for Judge Chhabria to resolve.

Microsoft Copilot Is Now Injecting Ads Into Pull Requests On GitHub

Posted by BeauHD View on SlashDot Skip
Microsoft Copilot is reportedly injecting promotional “tips” into GitHub pull requests, with Neowin claiming more than 1.5 million PRs have been affected by messages advertising integrations like Raycast, Slack, Teams, and various IDEs. From the report:
According to Melbourne-based software developer Zach Manson, a team member used the AI to fix a simple typo in a pull request. Copilot did the job, but it also took the liberty of editing the PR’s description to include this message: “Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast.” A quick search of that phrase on GitHub shows that the same promotional text appears in over 11,000 pull requests across thousands of repositories. Even merge requests on GitLab aren’t safe from the injection.

So what’s happening? Well, Raycast has a Copilot extension that can do things like create pull requests from a natural language command. The ad directly names Raycast, so you might think that Raycast is injecting the promo into the PRs to market its own app. But it is more likely that Microsoft is the one doing the injecting. If you look at the raw markdown of the affected pull requests, there is a hidden HTML comment, “START COPILOT CODING AGENT TIPS”, placed just before the ad tip. This suggests Microsoft is using the comment to insert a “tip” that points back to its own developer ecosystem or partner integrations.
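Because the injected text sits after a predictable HTML-comment marker in the raw markdown, stripping it from a PR description is straightforward. A minimal sketch follows; the marker text matches the report, but wrapping it as an HTML comment and dropping everything after it are assumptions here:

```python
# Sketch: remove an injected Copilot "tip" from a PR description.
# The marker string follows the report; treating it as the start of a
# trailing block (with no end marker) is an assumption.

START_MARKER = "<!-- START COPILOT CODING AGENT TIPS -->"

def strip_copilot_tip(body: str) -> str:
    """Return the PR body with everything from the marker onward removed.

    If the marker is absent, the body is returned unchanged."""
    idx = body.find(START_MARKER)
    return body if idx == -1 else body[:idx].rstrip()

pr_body = (
    "Fixes a typo in the README.\n\n"
    "<!-- START COPILOT CODING AGENT TIPS -->\n"
    "Quickly spin up Copilot coding agent tasks ... with Raycast."
)
print(strip_copilot_tip(pr_body))  # -> Fixes a typo in the README.
```

A real cleanup bot would fetch and rewrite the description via the GitHub API; this only shows the text transformation itself.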
UPDATE: Following backlash from developers, Microsoft has removed Copilot’s ability to insert “tips” into pull requests. Tim Rogers, principal product manager for Copilot at GitHub, said the move was intended “to help developers learn new ways to use the agent in their workflow.”

“On reflection,” Rogers said he has since realized that letting Copilot make changes to PRs written by a human without their knowledge “was the wrong judgement call.”

Could be worse

By Locke2005 • Score: 5, Funny Thread
They could have Clippy pop up and say things like, “It looks like you are trying to insert trojans into this encryption code. Would you like help with that?”

Clever approach

By TwistedGreen • Score: 5, Interesting Thread

I really like this approach. The next step would be to put ads directly into the source code. Soon you’ll see comments like

/**
** THIS FUNCTION WAS BROUGHT TO YOU BY EXPRESSVPN
** USE CODE COPILOT26 FOR 90% OFF THE 3 YEAR PLAN
**/

You could quickly monetize your open source projects this way.

Re:Enshitification of Github Proceeds Apace

By Narcocide • Score: 5, Insightful Thread

Leela: “Didn’t you have ads in the 20th century?”
Fry: “Well sure, but not in our dreams! Only on tv and radio…and in magazines…and movies. And at ball games, on buses, and milk cartons, and t-shirts, and bananas, and written on the sky. But not in dreams! No sirree.”

Title Correction:

By Sebby • Score: 5, Insightful Thread

Microsoft Copilot Is Now Injecting Ads Into Pull Requests On GitHub

Microslop Copilot Is Now Injecting Ads Into Pull Requests On GitHub

There FTFY.

Reminder everyone

By thegarbz • Score: 5, Insightful Thread

Microslop does not like you calling them Microslop. It is offensive to call their slop slop and to associate slop with Microslop.

Sony Shuts Down Nearly Its Entire Memory Card Business Due To SSD Shortage

Posted by BeauHD View on SlashDot
For the “foreseeable future,” Sony says it has stopped accepting new orders for most of its CFexpress and SD memory card lines due to an ongoing memory supply shortage. “Due to the global shortage of semiconductors (memory) and other factors, it is anticipated that supply will not be able to meet demand for CFexpress memory cards and SD memory cards for the foreseeable future,” the company said in a notice. “Therefore, we have decided to temporarily suspend the acceptance of orders from our authorized dealers and from customers at the Sony Store from March 27, 2026 onwards.” PetaPixel reports:
The suspension includes all of Sony’s memory card lines, including CFexpress Type A, CFexpress Type B, and SD cards. The 240GB, 480GB, 960GB, and 1920GB capacity Type A cards have been suspended, as have the 480GB and 240GB Type B cards. The full gamut of Sony’s high-end SD cards has also been suspended, including the 256GB, 128GB, and 64GB TOUGH-branded cards and the lower-end 512GB, 256GB, 128GB, and 256GB plainly-branded Sony cards, which cap out at V60 speeds. Even Sony’s lower-end, V30 128GB and 64GB SD cards have been suspended, showcasing that the SSD shortage affects all types of solid state, not just the high-end ones.

It appears that only the 960GB CFexpress Type B card and the lowest-end SF-UZ series SD cards remain in production. However, those UHS-I SD cards are discontinued in the United States outside of a scant few retailers and resellers. “We sincerely apologize for any inconvenience this may cause our customers,” Sony concludes.

Dear Tech Gods

By Tablizer • Score: 5, Funny Thread

Make the AI bubble pop already. I’ll stop wanking off for a year if you pop it, I swear on my rosie palm.

What good is AI

By ebunga • Score: 5, Insightful Thread

If there are no computers that are able to access it and no new data to feed it?

Sam Altman’s response

By 93 Escort Wagon • Score: 4, Funny Thread

“You don’t really need to take pictures anyway - let our AI create any image you want!”