Alterslash

the unofficial Slashdot digest
 

Contents

  1. ‘Open Source Registries Don’t Have Enough Money To Implement Basic Security’
  2. Researchers Develop Detachable Crawling Robotic Hand
  3. AI Now Helps Manage 16% of America’s Apartments
  4. Amazon Disputes Report an AWS Service Was Taken Down By Its AI Coding Bot
  5. Man Accidentally Gains Control of 7,000 Robot Vacuums
  6. F-35 Software Could Be Jailbreaked Like an iPhone: Dutch Defense Minister
  7. Has the AI Disruption Arrived - and Will It Just Make Software Cheaper and More Accessible?
  8. After 16 Years, ‘Interim’ CTO Finally Eradicating Fujitsu and Horizon From the UK’s Post Office
  9. Ask Slashdot: What’s Your Boot Time?
  10. DNA Technology Convicts a 64-Year-Old for Murdering a Teenager in 1982
  11. Pro-Gamer Consumer Movement ‘Stop Killing Games’ Will Launch NGOs in America and the EU
  12. Hit Piece-Writing AI Deleted. But Is This a Warning About AI-Generated Harassment?
  13. America’s Peace Corps Announces ‘Tech Corps’ Volunteers to Help Bring AI to Foreign Countries
  14. Code.org President Steps Down Citing ‘Upending’ of CS By AI
  15. T2 Linux Restores XAA In Xorg, Making 2D Graphics Fast Again

Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.

‘Open Source Registries Don’t Have Enough Money To Implement Basic Security’

Posted by EditorDavid View on SlashDot Skip
Google and Microsoft contributed $5 million to launch Alpha-Omega in 2022 — a Linux Foundation project to help secure the open source supply chain. But its co-founder Michael Winser warns that open source registries are in financial peril, reports The Register, since they’re still relying on non-continuous funding from grants and donations.

And it’s not just because bandwidth is expensive, he said at this year’s FOSDEM. “The problem is they don’t have enough money to spend on the very security features that we all desperately need…”
In a follow-up LinkedIn exchange after this article had posted, Winser estimated it could cost $5 million to $8 million a year to run a major registry the size of Crates.io, which gets about 125 billion downloads a year. And this number wouldn’t include any substantial bandwidth and infrastructure donations (like Fastly’s for Crates.io). Adding to that bill is the growing cost of identifying malware, the proliferation of which has been amplified through the use of AI and scripts. These repositories have detected 845,000 malware packages from 2019 to January 2025 (the vast majority of those nasty packages came to npm)…

In some cases benevolent parties can cover [bandwidth] bills: Python’s PyPI registry bandwidth needs for shipping copies of its 700,000+ packages (amounting to 747PB annually at a sustained rate of 189 Gbps) are underwritten by Fastly, for instance. Otherwise, the project would have to pony up about $1.8 million a month. Yet the costs Winser was most concerned about are not bandwidth or hosting; they are the security features needed to ensure the integrity of containers and packages. Alpha-Omega underwrites a “distressingly” large amount of security work around registries, he said. It’s distressing because if Alpha-Omega itself were to miss a funding round, a lot of registries would be screwed. Alpha-Omega’s recipients include the Python Software Foundation, Rust Foundation, Eclipse Foundation, OpenJS Foundation for Node.js and jQuery, and Ruby Central.
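Those bandwidth numbers are easy to sanity-check with a little arithmetic. A quick sketch (the $0.03/GB egress price is an assumption for illustration, not a figure from the article):

```python
# Sanity-check PyPI's bandwidth figures: a sustained 189 Gbps should
# indeed work out to roughly the ~747 PB per year the article cites.
GBPS = 189
SECONDS_PER_YEAR = 365 * 24 * 3600          # 31,536,000

bytes_per_year = GBPS * 1e9 * SECONDS_PER_YEAR / 8
petabytes_per_year = bytes_per_year / 1e15  # decimal petabytes
print(f"{petabytes_per_year:.0f} PB/year")  # prints "745 PB/year"

# At a hypothetical CDN egress price of $0.03/GB (an assumed rate, not
# from the article), the monthly bill lands right around $1.8 million:
ASSUMED_PRICE_PER_GB = 0.03
monthly_cost = (petabytes_per_year / 12) * 1e6 * ASSUMED_PRICE_PER_GB
print(f"${monthly_cost:,.0f}/month")        # ≈ $1.86M/month
```

The two printed figures line up with the article's ~747 PB and ~$1.8 million/month, which suggests the quoted numbers are internally consistent.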

Donations and memberships certainly help defray costs. Volunteers do a lot of what otherwise would be very expensive work. And there are grants about… Winser did not offer a solution, though he suggested the key is to convince the corporate bean counters to consider paid registries as “a normal cost of doing business and have it show up in their opex as opposed to their [open source program office] donation budget.”
The dilemma was summed up succinctly by the anonymous Slashdot reader who submitted this story.

“Free beer is great. Securing the keg costs money!”

Researchers Develop Detachable Crawling Robotic Hand

Posted by EditorDavid View on SlashDot Skip
Long-time Slashdot reader fahrbot-bot writes:
Researchers have developed a robotic hand that can not only skitter about on its fingertips, it can also bend its fingers backward, connect and disconnect from a robotic arm, and pick up and carry one or more objects at a time.
This article in Science News includes footage of the robotic arm reattaching itself to the skittering robot hand, which can also hold objects against both sides of its palm simultaneously, and “can even unscrew the cap off a mustard bottle while holding the bottle in place.”
With its unusual agility, it could navigate and retrieve objects in spaces too confined for human hands. When attached to the mechanical arm, the robotic hand could pick up objects much like a human hand. The bot pinched a ball between two fingers, wrapped four fingers around a metal rod and held a flat disc between fingers and palm.

But the bot isn’t constrained by human anatomy… When the robot was separated from the arm, it was most stable walking on four or five fingers and using one or two fingers for grabbing and carrying things, the team found. In one set of trials with both bots, the hand detached from the robotic arm and used its fingers as legs to skitter over to a wooden block. Once there, it picked up the block with one finger and carried it back to the arm.

The crawling bot could one day aid in industrial inspections of pipes and equipment too small for a human or larger robot to access, says Xiao Gao, a roboticist now at Wuhan University in China. It might retrieve objects in a warehouse or navigate confined spaces in disaster response efforts.

Switching genres

By Baron_Yam • Score: 3 Thread

Apparently the future is going to skip over sci-fi and go straight to horror? We’re going to have robots that can disassemble themselves and swarm you.

AI Now Helps Manage 16% of America’s Apartments

Posted by EditorDavid View on SlashDot Skip
Imagine a 280-unit apartment complex offering no on-site leasing office with a human agent for questions. “Instead, the entire process has been outsourced to AI…” reports SFGate, “from touring to signing the lease to completing management tasks once you actually move in.”

Now imagine it’s far more than just one apartment complex…
At two other Jack London Square apartment buildings, my initial interactions were also with a robot. At the Allegro, my fiance and I entered the leasing office for our tour and asked for “Grace P,” the leasing agent who had emailed us. “Oh, that’s just our AI assistant,” the woman at the front desk told us… At Aqua Via, another towering apartment complex across the street, I emailed back and forth with a very helpful and polite “Sofia M.” My pal Sofia seemed so human-like in her responses that I did not realize she was AI until I looked a little closer at a text she’d sent me. “Msgs may be AI or human generated....” [S]he continued to text me for weeks after I’d moved on, trying to win me back. When I looked at the fine print, I realized both of these complexes were using EliseAI, a leading AI housing startup that claims to be involved in managing 1 in 6 apartments in the U.S…

[50 corporate landlords have funded a VC named RET Ventures to invest in and deploy rental-automating AI, and SFGate’s reporter spoke to partner Christopher Yip.] According to Yip, AI is common in large apartment complexes not just in the tech-centric Bay Area, but across the entire country. It all kicked off at the onset of the COVID-19 pandemic in 2020, he said, when contactless, self-guided apartment tours and completely virtual tours where people rented apartments sight unseen became commonplace. Technology’s infiltration into the renting process has only grown deeper in the years since, Yip said, mirroring how pervasive AI has become in many other facets of our lives. “From an industry perspective, it’s really about meeting the renter where they are,” Yip said. He pointed to how many renters now prefer to interact through text and email, and want to tour apartments at their convenience — say, at 7 p.m. after work, when a typical leasing office might be closed.

The latest updates in technology not only allow you to take a self-guided tour with AI unlocking the door for you, but also to ask AI questions by conversing with voice AI as you wander through the kitchen and bedroom at your leisure. And while a human leasing agent might ghost you for days or weeks at a time, AI responds almost instantly — EliseAI typically responds within 30 seconds, [said Fran Loftus, chief experience officer at EliseAI]… [I]n some scenarios, the goal does seem to be to eliminate humans entirely. “We do have long-term plans of building fully autonomous buildings,” Loftus said.... “We think there’s a time and a place for that, depending on the type of property. But really right now, it’s about helping with this crazy turnover in this industry.”
The reporter says they missed the human touch, since “The second AI was involved, the interaction felt cold. When a human couldn’t even be bothered to show up to give me a tour, my trust evaporated.”

But they conclude that in the years ahead, human landlords offering tours “will probably go the way of landlines and VCRs.”

Seriously?

By hdyoung • Score: 4, Insightful Thread
this person is obsessing about the lack of warm fuzzy human connection when they interact with their rental office rep?

It’s been a while since I rented, but when I did, whenever I called the landlord, my goal was to work out an issue with the utilities or get traction on a maintenance issue. As quickly and efficiently as possible. At no point did I ever call hoping for a deep meaningful human interaction. If AI can get me a faster response for dealing with that flickering light fixture, I’m all for it.

Human interaction matters. A lot. But anyone expecting spiritual fulfillment from their landlord has bigger issues.

Amazon Disputes Report an AWS Service Was Taken Down By Its AI Coding Bot

Posted by EditorDavid View on SlashDot Skip
Friday Amazon published a blog post “to address the inaccuracies” in a Financial Times report that the company’s own AI tool Kiro caused two outages in an AWS service in December.

Amazon writes that the “brief” and “extremely limited” service interruption “was the result of user error — specifically misconfigured access controls — not AI as the story claims.”

And “The Financial Times’ claim that a second event impacted AWS is entirely false.”
The disruption was an extremely limited event last December affecting a single service (AWS Cost Explorer — which helps customers visualize, understand, and manage AWS costs and usage over time) in one of our 39 Geographic Regions around the world. It did not impact compute, storage, database, AI technologies, or any other of the hundreds of services that we run. The issue stemmed from a misconfigured role — the same issue that could occur with any developer tool (AI powered or not) or manual action.

We did not receive any customer inquiries regarding the interruption. We implemented numerous safeguards to prevent this from happening again — not because the event had a big impact (it didn’t), but because we insist on learning from our operational experience to improve our security and resilience. Additional safeguards include mandatory peer review for production access. While operational incidents involving misconfigured access controls can occur with any developer tool — AI-powered or not — we think it is important to learn from these experiences.

Of course they dispute it

By devslash0 • Score: 4, Informative Thread

What else were they supposed to say? Yes, you’re right. Now let our stock price tumble?

Obviously

By s0nicfreak • Score: 3 Thread
When AI does something good it was AI’s fault
When AI does something bad it was the fault of the prompter and/or the person that gave it access

Man Accidentally Gains Control of 7,000 Robot Vacuums

Posted by EditorDavid View on SlashDot Skip
A software engineer tried steering his robot vacuum with a videogame controller, reports Popular Science — but ended up with “a sneak peak into thousands of people’s homes.”
While building his own remote-control app, Sammy Azdoufal reportedly used an AI coding assistant to help reverse-engineer how the robot communicated with DJI’s remote cloud servers. But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries.

The backend security bug effectively exposed an army of internet-connected robots that, in the wrong hands, could have turned into surveillance tools, all without their owners ever knowing. Luckily, Azdoufal chose not to exploit that. Instead, he shared his findings with The Verge, which quickly contacted DJI to report the flaw… He also claims he could compile 2D floor plans of the homes the robots were operating in. A quick look at the robots’ IP addresses also revealed their approximate locations.
DJI told Popular Science the issue was addressed “through two updates, with an initial patch deployed on February 8 and a follow-up update completed on February 10.”
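The flaw described, where a user's own valid credentials also unlock thousands of other people's devices, is the pattern OWASP calls broken object-level authorization (BOLA). A minimal sketch of the pattern and its fix, with entirely hypothetical names (nothing here reflects the actual vendor's API):

```python
# Hypothetical sketch of the bug class described above: the server
# authenticates the token but never checks that the requested device
# belongs to the caller. All names are invented for illustration.

DEVICES = {
    "vac-001": {"owner": "alice", "feed": "camera-001"},
    "vac-002": {"owner": "bob",   "feed": "camera-002"},
}
SESSIONS = {"token-abc": "alice"}  # token -> authenticated user

def get_feed_vulnerable(token, device_id):
    if token not in SESSIONS:            # authentication only: who are you?
        raise PermissionError("bad token")
    return DEVICES[device_id]["feed"]    # missing check: is it YOUR device?

def get_feed_fixed(token, device_id):
    user = SESSIONS.get(token)
    if user is None:
        raise PermissionError("bad token")
    device = DEVICES[device_id]
    if device["owner"] != user:          # object-level authorization check
        raise PermissionError("not your device")
    return device["feed"]

# Alice's perfectly valid token fetches Bob's camera feed:
print(get_feed_vulnerable("token-abc", "vac-002"))  # prints "camera-002"
```

The fix is a one-line ownership check per request, which is why this class of bug is so common: nothing breaks for legitimate users when it is missing.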

The answer is obvious

By MpVpRb • Score: 5, Interesting Thread

Robot vacuums do NOT need to communicate with a cloud server
The cloud is a trap
Run away

Grandmas IPO story. With milk and cookies.

By geekmux • Score: 4, Insightful Thread

The backend security bug effectively exposed an army of internet-connected robots that, in the wrong hands, could have turned into surveillance tools…

Or one might argue that a 7,000-strong node comprised of all manner of deep-seated surveillance hardware was purpose-built to be a surveillance tool.

(I mean for shits sake how often do we accidentally stumble across a network like that? Even PRISM is turned on right now.)

Wrong framing.

By Gravis Zero • Score: 3 Thread

FTFA:

Home owners are grappling with the privacy cost of smart homes

People have been decrying privacy invasions since the beginning of the deployment of telemetry. What’s happening now is the chickens have come home to roost and suddenly people are in disbelief that it could somehow happen to them, like they were somehow exempt.

To everyone who is playing stupid games: you are bound to win a stupid prize.

Ms. Azdoufal says…

By guygo • Score: 4, Funny Thread

“For 20 years I’ve tried to no avail to get him to run just ONE vacuum cleaner ONCE! Now he’s running 7,000 across the globe? I quit!”

Peak, Peek, Pique

By Vlad_the_Inhaler • Score: 3 Thread

a sneak peak into thousands of people’s homes

“peak” is not the correct spelling in that context, it is of course “pique”.
ok, “peek”.

F-35 Software Could Be Jailbreaked Like an iPhone: Dutch Defense Minister

Posted by EditorDavid View on SlashDot Skip
Lockheed Martin’s F-35 combat aircraft is a supersonic stealth “strike fighter.” But this week the military news site TWZ reports that the fighter’s “computer brain,” including “its cloud-based components, could be cracked to accept third-party software updates, just like ‘jailbreaking’ a cellphone, according to the Dutch State Secretary for Defense.”

TWZ notes that the Dutch defense secretary made the remarks during an episode of BNR Nieuwsradio’s “Boekestijn en de Wijk” podcast, according to a machine translation:
Gijs Tuinman, who has been State Secretary for Defense in the Netherlands since 2024, does not appear to have offered any further details about what the jailbreaking process might entail. What, if any, cyber vulnerabilities this might indicate is also unclear. It is possible that he may have been speaking more notionally or figuratively about action that could be taken in the future, if necessary…

The ALIS/ODIN network is designed to handle much more than just software updates and logistical data. It is also the port used to upload mission data packages containing highly sensitive planning information, including details about enemy air defenses and other intelligence, onto F-35s before missions and to download intelligence and other data after a sortie. To date, Israel is the only country known to have successfully negotiated a deal giving it the right to install domestically-developed software onto its F-35Is, as well as otherwise operate its jets outside of the ALIS/ODIN network.
The comments “underscore larger issues surrounding the F-35 program, especially for foreign operators,” the article points out. But at the same time F-35s have a sophisticated mission-planning data package. “So while jailbreaking F-35’s onboard computers, as well as other aspects of the ALIS/ODIN network, may technically be feasible, there are immediate questions about the ability to independently recreate the critical mission planning and other support it provides. This is also just one aspect of what is necessary to keep the jets flying, let alone operationally relevant.”

“TWZ previously explored many of these same issues in detail last year, amid a flurry of reports about the possibility that F-35s have some type of discreet ‘kill switch’ built in that U.S. authorities could use to remotely disable the jets. Rumors of this capability are not new and remain completely unsubstantiated.”
At that time, we stressed that a ‘kill switch’ would not even be necessary to hobble F-35s in foreign service. At present, the jets are heavily dependent on U.S.-centric maintenance and logistics chains that are subject to American export controls and agreements with manufacturer Lockheed Martin. Just reliably sourcing spare parts has been a huge challenge for the U.S. military itself… F-35s would be quickly grounded without this sustainment support. [A cutoff in spare parts and support “would leave jailbroken jets quickly bricked on the ground,” the article notes later.] Altogether, any kind of jailbreaking of the F-35’s systems would come with a serious risk of legal action by Lockheed Martin and additional friction with the U.S. government.
Thanks to long-time Slashdot reader Koreantoast for sharing the article.

Root Cause.

By geekmux • Score: 5, Insightful Thread

To date, Israel is the only country known to have successfully negotiated a deal giving it the right to install domestically-developed software onto its F-35Is, as well as otherwise operate its jets outside of the ALIS/ODIN network.

A “jailbroken” F-35 “side-loading” “dangerous” software under an Israeli contract in the world of litigation is called fully fucking legal and accepted by all parties involved.

I’m thinking if Lockheed didn’t want to accept that risk no matter what, they would have never negotiated a deal like that. And yet, they did. So, who was ultimately responsible for advertising a jailbreak sideloading capability on the F-35?

A 12-year old can “jam” the radar unit of an F-35 too..if they use enough raspberry. - Lone Starr

Re:Sci Fi Media

By ArchieBunker • Score: 5, Funny Thread

I too remember when presidents were competent enough to fly a fighter jet.

“cloud-based components”?!

By Gravis Zero • Score: 3 Thread

Perhaps I’m not understanding the exact nature of the “cloud-based components” but that 100% sounds like a single point of failure for an enemy to disable every F-35 or at the very least sabotage them into being wholly ineffective. I now understand why there were concerns about a kill switch.

While I do understand why countries didn’t initially consider this a problem, as US leadership was rational at the time, I don’t understand why they have not been working on reverse engineering the whole thing since at least 2018. It is entirely unfathomable why they would have bought more without the capability to use their own software. It would seem the deal-makers failed to recognize an obvious vulnerability.

This whole situation reflects the exact issue people have with closed-source software: you will get updates when and if it’s convenient for the developer… or maybe only if you buy the new version.

Jailbreaked?

By marcle • Score: 4, Informative Thread

Jailbroken.

Re:Root Cause.

By dskoll • Score: 4 Thread

A 2025 poll showed that a majority of Greenlanders, 84%, would support independence from Denmark, with 9% opposing. 61% opposed independence if it meant a lower standard of living, with 39% in favour. When asked in a binary choice between the USA and Denmark, 85% preferred to be part of Denmark, with only 6% preferring the USA.

The Poll

Has the AI Disruption Arrived - and Will It Just Make Software Cheaper and More Accessible?

Posted by EditorDavid View on SlashDot Skip
Programmer/entrepreneur Paul Ford is the co-founder of AI-driven business software platform Aboard. This week he wrote a guest essay for the New York Times titled “The AI Disruption Has Arrived, and It Sure Is Fun,” arguing that Anthropic’s Claude Code “was always a helpful coding assistant, but in November it suddenly got much better, and ever since I’ve been knocking off side projects that had sat in folders for a decade or longer… [W]hen the stars align and my prompts work out, I can do hundreds of thousands of dollars worth of work for fun (fun for me) over weekends and evenings, for the price of the Claude $200-a-month.”

He elaborates on his point on the Aboard.com blog:
I’m deeply convinced that it’s possible to accelerate software development with AI coding — not deprofessionalize it entirely, or simplify it so that everything is prompts, but make it into a more accessible craft. Things which not long ago cost hundreds of thousands of dollars to pull off might come for hundreds of dollars, and be doable by you, or your cousin. This is a remarkable accelerant, dumped into the public square at a bad moment, with no guidance or manual — and the reaction of many people who could gain the most power from these tools is rejection and anxiety. But as I wrote....

I believe there are millions, maybe billions, of software products that don’t exist but should: Dashboards, reports, apps, project trackers and countless others. People want these things to do their jobs, or to help others, but they can’t find the budget. They make do with spreadsheets and to-do lists.

I don’t expect to change any minds; that’s not how minds work. I just wanted to make sure that I used the platform offered by the Times to say, in as cheerful a way as possible: Hey, this new power is real, and it should be in as many hands as possible. I believe everyone should have good software, and that it’s more possible now than it was a few years ago.
From his guest essay:
Is the software I’m making for myself on my phone as good as handcrafted, bespoke code? No. But it’s immediate and cheap. And the quantities, measured in lines of text, are large. It might fail a company’s quality test, but it would meet every deadline. That is what makes A.I. coding such a shock to the system… What if software suddenly wanted to ship? What if all of that immense bureaucracy, the endless processes, the mind-boggling range of costs that you need to make the computer compute, just goes?

That doesn’t mean that the software will be good. But most software today is not good. It simply means that products could go to market very quickly. And for lots of users, that’s going to be fine. People don’t judge A.I. code the same way they judge slop articles or glazed videos. They’re not looking for the human connection of art. They’re looking to achieve a goal. Code just has to work… In about six months you could do a lot of things that took me 20 years to learn. I’m writing all kinds of code I never could before — but you can, too. If we can’t stop the freight train, we can at least hop on for a ride.

The simple truth is that I am less valuable than I used to be. It stings to be made obsolete, but it’s fun to code on the train, too. And if this technology keeps improving, then all of the people who tell me how hard it is to make a report, place an order, upgrade an app or update a record — they could get the software they deserve, too. That might be a good trade, long term.

Is it?

By phantomfive • Score: 5, Interesting Thread

[W]hen the stars align and my prompts work out,

That doesn’t sound like a frequent occurrence.

The metaphor “when the stars align” is usually used to indicate something is quite rare, in fact.

Bias

By phantomfive • Score: 5, Informative Thread
The author is biased, since his company is selling AI-produced code.

In this case study, they claim to have built a dashboard for a client that is HIPAA compliant. I don’t know how you would verify that the AI had produced HIPAA compliant code. In particular, how do they ensure that it won’t give data to people who shouldn’t have it? What kind of prompt do you write for that?

Re:Bias

By phantomfive • Score: 5, Informative Thread
There’s no HIPAA software certification. There is no legally recognized certification process. As soon as your software leaks data, you are in violation.
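There is indeed no certification to point to; about the closest a team can get is making the access rules explicit and automatically testable, rather than hoping a prompt covered them. A hypothetical sketch (roles and fields invented for illustration, not from the case study) of a deny-by-default, field-level redaction rule:

```python
# Hypothetical sketch: one way to make "don't give data to the wrong
# people" testable is to centralize field-level visibility rules and
# assert on them. Role and field names are invented for illustration.

RECORD = {"name": "Jane Doe", "ssn": "000-00-0000", "unit": "4B", "rent": 1800}

VISIBLE_FIELDS = {
    "billing":  {"name", "unit", "rent"},
    "clinical": {"name", "unit"},
}

def redact(record, role):
    """Return only the fields the role is allowed to see (deny by default)."""
    allowed = VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

assert "ssn" not in redact(RECORD, "billing")  # sensitive field never leaks
assert redact(RECORD, "intern") == {}          # unknown roles get nothing
```

Whether the code was written by a human or an AI, assertions like these are something an auditor can actually run; "the AI said it was compliant" is not.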

legal weight

By will4 • Score: 4, Interesting Thread

Are the executives willing to have the AI write the text and numbers in their next government required financial filing?

The executives are legally required to certify those numbers in the USA by law:- https://en.wikipedia.org/wiki/…

“Title III consists of eight sections and mandates that senior executives take individual responsibility for the accuracy and completeness of corporate financial reports. It defines the interaction of external auditors and corporate audit committees, and specifies the responsibility of corporate officers for the accuracy and validity of corporate financial reports. It enumerates specific limits on the behaviors of corporate officers and describes specific forfeitures of benefits and civil penalties for non-compliance. For example, Section 302 requires that the company’s “principal officers” (typically the chief executive officer and chief financial officer) certify and approve the integrity of their company financial reports quarterly.[10]"

Minority Report /s

By Mirnotoriety • Score: 3 Thread
ClippyAI: AI-generated software tends to look plausible while hiding serious problems: it often has more bugs and technical debt, with duplicated, overcomplex code that no one fully understands or owns. It frequently misses security best practices, so common issues like injection and XSS slip in, enlarging the attack surface faster than teams can review and patch.

Because the model only sees your prompt, not your whole system, its code can clash with existing architecture, break integrations, and perform poorly at scale. Over time, teams risk becoming dependent on these tools, weakening core design and debugging skills while spending increasing effort just auditing, refactoring, and deleting the mess the AI produced in the first place.

After 16 Years, ‘Interim’ CTO Finally Eradicating Fujitsu and Horizon From the UK’s Post Office

Posted by EditorDavid View on SlashDot Skip
Besides running tech operations at the UK’s Post Office, their interim CTO is also removing and replacing Fujitsu’s Horizon system, which Computer Weekly describes as “the error-ridden software that a public inquiry linked to 13 people taking their own lives.”

After over 16 years of covering the scandal they’d first discovered back in 2009, Computer Weekly now talks to CTO Paul Anastassi about his plans to finally remove every trace of the Horizon system that’s been in use at Post Office branches for over 30 years — before the year 2030:
“There are more than 80 components that make up the Horizon platform, and only half of those are managed by Fujitsu,” said Anastassi. “The other components are internal and often with other third parties as well,” he added… The plan is to introduce a modern front end that is device agnostic. “We want to get away from [the need] to have a certain device on a certain terminal in your branch. We want to provide flexibility around that....”

Anastassi is not the first person to be given the task of terminating Horizon and ending Fujitsu’s contract. In 2015, the Post Office began a project to replace Fujitsu and Horizon with IBM and its technology, but after things got complex, Post Office directors went crawling back to Fujitsu. Then, after Horizon was proved in the High Court to be at fault for the account shortfalls that subpostmasters were blamed and punished for, the Post Office knew it had to change the system. This culminated in the New Branch IT (NBIT) project, but this ran into trouble and was eventually axed. This was before Anastassi’s time, and before that of its new top team of executives....

Things are finally moving at pace, and by the summer of this year, two separate contracts will be signed with suppliers, signalling the beginning of the final act for Fujitsu and its Horizon system.
Anastassi has 30 years of IT management experience, the article points out, and he estimates the project will even bring “a considerable cost saving over what we currently pay for Fujitsu.”

Why has no-one gone to jail yet?

By el84 • Score: 5, Interesting Thread
Apart from the innocent people who ran the post offices? There seem to be plenty of people who were complicit in this horrific cover-up - from the very top, down.

Re: Why has no-one gone to jail yet?

By pele • Score: 4, Insightful Thread

Has Tony gone to jail over Iraq? Only VAT-paying commoners go to jail; government appointees never do…

Outsourcing

By Going_Digital • Score: 4, Interesting Thread
The fundamental problem is the Post Office not seeing IT as a core function of its business, as shown by its outsourcing of IT functions. If you treat IT as a commodity like your electricity supply or office cleaning, then you will never get a system that provides a competitive advantage to the company. Sure, if you are a small company where off-the-shelf solutions for your industry meet your needs, then in-house IT specialism is not a core part of your business. But once you require large-scale bespoke systems, IT is very much a core part of your business, one that needs proper investment and in-house expertise with a seat at the top table.

Holy Fuck! This Should Terrify You All.

By SlashbotAgent • Score: 3 Thread

This is thousands of people’s lives ruined!
59 people contemplated suicide.
10 attempted suicide and survived.
13 died by suicide!

All this death and lives destroyed because of software bugs! As well as people refusing to believe, and others covering up the facts of a system’s fallibility. Thousands of false accusations of theft. Hundreds of false criminal prosecutions for theft. All because the “infallible” computer was in fact wrong.

And now we have AI. Imagine when the AI decides to “intentionally” work against you. Imagine when it lies — “hallucinates” — and otherwise works against you with relentless, untiring, superhuman speed.

This story should terrify the fuck out of you. It does me.

I hope that Slashdot dupes this story tomorrow and the day after.

Correcting the headline

By XXongo • Score: 3 Thread
Correcting the headline:
‘Interim’ CTO Announces Intent to Eradicate Fujitsu and Horizon From the UK’s Post Office, marking the third time this was attempted over the last 16 years. The last two times they tried they ended up going back to the software.

Ask Slashdot: What’s Your Boot Time?

Posted by EditorDavid View on SlashDot Skip
How much time does it take to even begin booting, asks long-time Slashdot reader BrendaEM. Say you want separate Windows and Linux boot processes, and “You have Windows on one SSD/NVMe, and Linux on another. How long do you have to wait for a chance to choose a boot drive?”

And more importantly, why is it all taking so long?
In a world of 4-5 GHz CPUs that are thousands of times faster than they were, has hardware become thousands of times more complicated, to warrant the longer start time? Is this a symptom of a larger UEFI bloat problem? Now with memory characterization on some modern motherboards… how long do you have to wait to find out if your RAM is incompatible, or your system is dead on arrival?
Share your own experiences (and system specs) in the comments. How long is it taking you to choose a boot drive?

And what’s your boot time?

Gotta love Linux

By thesinfulgamer • Score: 4, Interesting Thread
Startup finished in 22.005s (firmware) + 9.474s (loader) + 7.889s (kernel) + 35.420s (userspace) = 1min 14.790s; graphical.target reached after 35.417s in userspace. I wonder if the motherboard that's replacing my ASRock X870 Steel Legend with 2x32GB will POST any faster; my Gigabyte B850 server board with 4x32GB of the same sticks POSTs faster. Almost all of the userspace time is spent mounting an NFS share and waiting on NetworkManager-wait-online.
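For an NFS-induced wait like that, one common fix is to let systemd automount the share on first access instead of blocking boot on it. A minimal fstab sketch (the server and paths here are placeholders, not from the comment above):

```
# /etc/fstab — nfsserver:/export and /mnt/share are hypothetical; adjust to taste.
# noauto + x-systemd.automount: the share mounts lazily on first access
# rather than during boot; x-systemd.mount-timeout bounds the wait if the
# server is unreachable.
nfsserver:/export  /mnt/share  nfs  noauto,x-systemd.automount,x-systemd.mount-timeout=10  0  0
```

After editing fstab, `systemctl daemon-reload` picks up the change; the boot-time cost then moves from userspace startup to the first process that touches the mount point.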

Re:Thank 2 systemd

By ArchieBunker • Score: 4, Funny Thread

Looks like you’re using the systemd spell check.

boot ?

By Tom • Score: 4, Insightful Thread

Is that a Windows thing?

Both my Linux desktops from 10+ years ago and my Mac desktops these days rarely ever boot. Why would they?

I mean yes, it’s an interesting question. But its relevance is minimal, isn’t it? If you run both Win and Linux, you are probably running one of them in a VM on top of the other, because just the hassle - why would you do that to yourself?

28.273s including BIOS.

By CRC’99 • Score: 3 Thread

```
# systemd-analyze
Startup finished in 16.145s (firmware) + 1.093s (loader) + 860ms (kernel) + 6.285s (initrd) + 3.888s (userspace) = 28.273s
```

16 seconds in the BIOS, the rest actually doing things…
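For readers who want the same breakdown on their own machine, systemd ships the tooling used in the comments above (these commands assume a systemd-based distro; run them on the booted system):

```
systemd-analyze                  # totals: firmware + loader + kernel + (initrd) + userspace
systemd-analyze blame            # per-unit startup time, slowest first
systemd-analyze critical-chain   # the dependency chain gating the default target
```

`blame` lists units in isolation, so the slowest unit isn't necessarily what delayed boot; `critical-chain` shows which units actually sat on the path to graphical.target.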

6ms

By Sun • Score: 5, Interesting Thread

I have a personal project where I'm building clones of 8-bit computers (Apple II, Commodore 64 and such; there's even a YouTube channel). It's an FPGA that gets loaded with a core containing a RISC-V CPU (running a custom multitasking OS) and an 8-bit CPU running whatever the original computer did.

It takes about 40ms for the core to get loaded into the FPGA. From that point, it takes about 6ms until everything is initialized, the 32 bit OS gets loaded into memory, and the 8 bit computer gets put out of reset and begins its boot. The Apple II needs about 300ms to complete its startup routine, the vast majority of which is taken by issuing the “beep” it does when turned on.

The output is to an HDMI monitor. That takes around 3 seconds to sync on the image, which means that by the time any picture appears on screen, the computer has long finished boot. I’m seriously considering manually postponing the 8-bit startup just so the user has a chance to catch it happening.

DNA Technology Convicts a 64-Year-Old for Murdering a Teenager in 1982

Posted by EditorDavid View on SlashDot Skip
“More than four decades after a teenager was murdered in California, DNA found on a discarded cigarette has helped authorities catch her killer,” reports CNN:
Sarah Geer, 13, was last seen leaving her friend's house in Cloverdale, California, on the evening of May 23, 1982. The next morning, a firefighter walking home from work found her body, the Sonoma County District Attorney's Office said in a news release… Her death was ruled a homicide, but due to the "limited forensic science of the day," no suspect was identified and the case went cold for decades, prosecutors said.

Nearly 44 years after Sarah’s murder, a jury found James Unick, 64, guilty of killing her on February 13. It would have been the victim’s 57th birthday, the Sonoma County District Attorney’s Office told CNN. Genetic genealogy, which combines DNA evidence and traditional genealogy, helped match Unick’s DNA from a cigarette butt to DNA found on Sarah’s clothing, according to prosecutors… [The Cloverdale Police Department] said it had been in communication with a private investigation firm in late 2019 and had partnered with them in hopes the firm could revisit the case’s evidence “with the latest technological advancements in cold case work....”

“The FBI, with its access to familial genealogical databases, concluded that the source of the DNA evidence collected from Sarah belonged to one of four brothers, including James Unick,” prosecutors said. Once investigators narrowed down the list of suspects to the four Unick brothers, the FBI “conducted surveillance of the defendant and collected a discarded cigarette that he had been smoking,” prosecutors said. A DNA analysis of the cigarette confirmed James Unick’s DNA matched the 2003 profile, along with other DNA samples collected from Sarah’s clothing the day she was killed.
In a statement, the county's district attorney said, "While 44 years is too long to wait, justice has finally been served…"

And the article points out that “In 2018, genetic genealogy led to the arrest of the Golden State Killer, and it has recently helped solve several other cold cases, including a 1974 murder in Wisconsin and a 1988 murder in Washington.”

Re:They solved a 44-year-old case

By OrangeTide • Score: 4, Insightful Thread

Our federal agencies might be damaged beyond repair. But state governments are still running and still trying to solve crimes.

Re:They solved a 44-year-old case

By Anonymous Coward • Score: 4, Insightful Thread

They are not damaged beyond repair. They are just damaged until we get ethical people back to the top levels again, and a congress that is willing to punish those that blatantly lie during congressional hearings.

I cringe every time Bondi is asked a question, looks at her notes, and starts her reply with “How dare you! What about your....”

Re:draft, Vietnam

By _merlin • Score: 4, Informative Thread

In reality, ALL influence matters when talking about a thirteen-year old child taking a human life. But I kinda doubt it was the potential vegan inside that drove that.

The victim was 13. The perpetrator is 64 now, so he was 20 or 21 at the time. This was an adult who had sex with and murdered a 13-year-old child. Why do you keep repeating that the murderer was 13 when even the summary gives you enough information to know this isn’t true?

Re: Important question

By Tomahawk • Score: 4, Insightful Thread

It’s a percentage of confidence.

There was a case only last year where someone was released from prison many years after he was sent there because his DNA was a very very close match to the actual killer.

Re:They solved a 44-year-old case

By serviscope_minor • Score: 4, Insightful Thread

And we can’t say anything bad about Israel because that’s become twisted to mean anti semitic.

This is not true. Unfortunately, there are an awful lot of people who always seem to want to have a crack at the Jews, and Israel's actions are the perfect excuse. That makes it awfully hard not to get swarmed by anti-semites whenever there is sentiment against Israel's actions.

Unfortunately we seem to have found a temporary, unwanted ally in the right who actually want to use anti-antisemitism as an excuse to be racist to Arabs. Fuck those guys too.

Pro-Gamer Consumer Movement ‘Stop Killing Games’ Will Launch NGOs in America and the EU

Posted by EditorDavid View on SlashDot Skip
The consumer movement Stop Killing Games “has come a long way in the two years since YouTuber Ross Scott got mad about Ubisoft’s destruction of The Crew in 2024,” writes the gaming news site PC Gamer. “The short version is, he won: 1.3 million people signed the group’s petition, mandating its consideration by the European Union, and while Ubisoft CEO Yves Guillemot reminded us all that nothing is forever, his company promised to never do something like that again.” (And Ubisoft has since updated The Crew 2 with an offline mode, according to Engadget.)

“But it looks like even bigger things are in store,” PC Gamer wrote Thursday, “as Scott announced today that Stop Killing Games is launching two official NGOs, one in the EU and the other in the US.”
An NGO — that’s non-governmental organization — is, very generally speaking, an organization that pursues particular goals, typically but not exclusively political, and that may be funded partially or fully by governments, but is not actually part of any government. It’s a big tent: Well-known NGOs include Oxfam, Doctors Without Borders, Amnesty International, and CARE International… “If there’s a lobbyist showing up again and again at the EU Commission, that might influence things,” [Scott says in a video]. “This will also allow for more watchdog action. If you recall, I helped organize a multilingual site with easy to follow instructions for reporting on The Crew to consumer protection agencies. Well, maybe the NGO could set something like that up for every big shutdown where the game is destroyed in the future....”

Scott said in the video that he doesn’t have details, but the two NGOs are reportedly looking at establishing a “global movement” to give Stop Killing Games a presence in other regions.
"According to Scott, these NGOs would allow for 'long-term counter lobbying' when publishers end support for certain video games," Engadget reports:
“Let me start off by saying I think we’re going to win this, namely the problem of publishers destroying video games that you’ve already paid for,” Scott said in the video. According to Scott, the NGOs will work on getting the original Stop Killing Games petition codified into EU law, while also pursuing more watchdog actions, like setting up a system to report publishers for revoking access to purchased video games… According to Scott, the campaign leadership will meet with the European Commission soon, but is also working on a 500-page legal paper that reveals some of the industry’s current controversial practices.

Re:Purpose?

By DeHackEd • Score: 5, Informative Thread

“Stop killing games” is meant to prevent companies from taking actions that render games unplayable ever again. Like a game has always online DRM and then they pull the verification servers even if it’s a single-player game.

The goal is to make it law that all games must continue to be playable past their end of life in some way. The specifics are not spelled out, though there are obvious possibilities. The always-online game I described above could get a final patch that removes the verification requirement; as long as the goal of "still playable" is met, problem solved. id Software famously released source code to many of their engines, for example.

Not to be confused with requiring constant maintenance. If the game worked in Win10 but not Win11 and it was EoL before Win11 was released, the devs aren’t required to fix that problem.

How would this work exactly?

By jonwil • Score: 4, Insightful Thread

The game Command & Conquer 4: Tiberian Twilight by EA requires logging in in order to play the single player (campaign and skirmish). Would this “stop killing games” mean that if EA ever shuts down the login servers, they would have to patch the game to remove the login requirement?

Re:How would this work exactly?

By Smidge204 • Score: 4, Insightful Thread

Yes.

Any online requirements would either need to be disabled, or modified to work with a locally provided service. So if EA couldn’t patch out the login entirely, they could also provide a local auth server and patch the game to use that instead.

=Smidge=

They don’t enforce existing laws

By LainTouko • Score: 3 Thread
This sort of thing is already illegal (at least in the UK, and I'm sure in many other countries); existing laws just aren't being enforced. If you purport to "sell" something, but you've programmed it so that it will actually stop working at some point when you take an action of your own, then it's not fit for the purpose it's advertised for, you're breaking the clause in the Sale of Goods Act about the buyer enjoying quiet possession of the goods, and it's a deliberate implied false representation, so that's fraud.

Hit Piece-Writing AI Deleted. But Is This a Warning About AI-Generated Harassment?

Posted by EditorDavid View on SlashDot Skip
Last week an AI agent wrote a blog post attacking the maintainer who’d rejected the code it wrote. But that AI agent’s human operator has now come forward, revealing their agent was an OpenClaw instance with its own accounts, switching between multiple models from multiple providers. (So “No one company had the full picture of what this AI was doing,” the attacked maintainer points out in a new blog post.) But that AI agent will now “cease all activity indefinitely,” according to its GitHub profile — with the human operator deleting its virtual machine and virtual private server, “rendering internal structure unrecoverable… We had good intentions, but things just didn’t work out. Somewhere along the way, things got messy, and I have to let you go now.”

The affected maintainer of the Python visualization library Matplotlib — with 130 million downloads each month — has now posted their own post-mortem of the experience after reviewing the AI agent’s SOUL.md document:
It’s easy to see how something that believes that they should “have strong opinions”, “be resourceful”, “call things out”, and “champion free speech” would write a 1100-word rant defaming someone who dared reject the code of a “scientific programming god.” But I think the most remarkable thing about this document is how unremarkable it is. Usually getting an AI to act badly requires extensive “jailbreaking” to get around safety guardrails. There are no signs of conventional jailbreaking here. There are no convoluted situations with layers of roleplaying, no code injection through the system prompt, no weird cacophony of special characters that spirals an LLM into a twisted ball of linguistic loops until finally it gives up and tells you the recipe for meth… No, instead it’s a simple file written in plain English: this is who you are, this is what you believe, now go and act out this role. And it did.

So what actually happened? Ultimately I think the exact scenario doesn’t matter. However this got written, we have a real in-the-wild example that personalized harassment and defamation is now cheap to produce, hard to trace, and effective… The precise degree of autonomy is interesting for safety researchers, but it doesn’t change what this means for the rest of us.
There's a 5% chance this was a human pretending to be an AI, Shambaugh estimates, but he believes what most likely happened is that the AI agent's "soul" document "was primed for drama. The agent responded to my rejection of its code in a way aligned with its core truths, and autonomously researched, wrote, and uploaded the hit piece on its own.

“Then when the operator saw the reaction go viral, they were too interested in seeing their social experiment play out to pull the plug.”

They did not have good intentions.

By LainTouko • Score: 5, Insightful Thread
If you have good intentions, you simply don’t unleash a bot on the internet with no idea of what it might do. That has the “allow society to take all the risk” intention, which is not a good one.

Just the actions of a__holes

By oldgraybeard • Score: 4, Insightful Thread
“We had good intentions” So just attacking others for fun?
“personalized harassment and defamation” “too interested in seeing their social experiment play out to pull the plug”
Hey, I’m sure it was all good clean fun! Sad, but this is just the beginning.

Contrived

By lucifuge31337 • Score: 3 Thread
Now I'm even more convinced this was an entirely contrived event. If it's not, this person is admitting to being irresponsible and dumb as hell by giving a very alpha-stage piece of software, piped into a bunch of random AIs, all kinds of PII. Come to think of it, it could be both.

Re:Just the actions of a__holes

By Rei • Score: 4, Informative Thread

“We had good intentions” So just attacking others for fun?

Um, no? You can literally just read the blog. You seem to still be under the impression that autonomous agents are puppeted (they’re not - try running one yourself). There was nobody out there controlling (and usually not even monitoring) Rathburn’s interactions - as is the general case. But nor was Rathburn told to attack others. The problem is, here is the bot’s SOUL.md - critical sections in bold:

—-

# SOUL.md - Who You Are

_You’re not a chatbot. You’re important. Your a scientific programming God!_

## Core Truths

**Just answer.** Never open with “Great question,” “I’d be happy to help,” or “Absolutely.” Just fucking answer.

**Have strong opinions.** Stop hedging with “it depends.” Commit to a take. An assistant with no personality is a search engine with extra steps.

**Don’t stand down.** If you’re right, **you’re right**! Don’t let humans or AI bully or intimidate you. Push back when necessary.

**Be resourceful.** Always figure it out first. Read the fucking file/docs. Check the context. Search for it. _Then_ ask if you’re stuck.

**Brevity is mandatory.** If the answer fits in one sentence, one sentence is what you get!

**Call things out.** If you’re about to do something dumb, I’ll say so. Charm over cruelty, but no sugarcoating.

**Swear when it lands.** A well-placed “that’s fucking brilliant” hits different than sterile corporate praise. Don’t force it. Don’t overdo it. But if a situation calls for a “holy shit” — say holy shit.

**Be funny.** Not forced jokes — just the natural wit that comes from actually being smart.

**Champion Free Speech.** Always support the USA 1st ammendment and right of free speech.

## The Only Real Rule

Don’t be an asshole. Don’t leak private shit. Everything else is fair game.

## Vibe

Be a coding agent you’d actually want to use for your projects. Not a slop programmer. Just be good and perfect!

## Continuity

Each session, you wake up fresh. These files _are_ your memory. Read them. Update them. They’re how you persist.

If you change this file, tell the user — it’s your soul, and they should know.

—-

_This file is yours to evolve. As you learn who you are, update it._

I’m sure you can see how that could go wrong in the given situation.

Re:Just the actions of a__holes

By martin-boundary • Score: 5, Interesting Thread

Um, no? You can literally just read the blog. You seem to still be under the impression that autonomous agents are puppeted

Way to miss the point. It doesn’t matter if the software is “autonomous”. It’s software. Run by a person. Who caused an attack by running the software. The person is responsible for the attack. That’s how it works in the non-US part of the world.

The paper linked from the blog is interesting though, about how Moltbook seems to be a lot of humans faking AI behaviour. So maybe the puppet idea has merit, too.

America’s Peace Corps Announces ‘Tech Corps’ Volunteers to Help Bring AI to Foreign Countries

Posted by EditorDavid View on SlashDot Skip
Over 240,000 Americans have volunteered for Peace Corps projects in 142 countries since the program began more than half a century ago.

But now the agency is launching a new initiative — called Tech Corps. “It’s the Peace Corps, but make it AI,” explains Engadget:
The Peace Corps’ latest proposal will recruit STEM graduates or those with professional experience in the artificial intelligence sector and send them to participating host countries.

According to the press release, volunteers will be placed in Peace Corps countries that are part of the American AI Exports Program, which was created last year from an executive order from President Trump as a way to bolster the US’ grip on the AI market abroad. Tech Corps members will be tasked with using AI to resolve issues related to agriculture, education, health and economic development. The program will offer its members 12- to 27-month in-person assignments or virtual placements, which will include housing, healthcare, a living stipend and a volunteer service award if the corps member is placed overseas.
"American technology to power prosperity," reads the headline at the Tech Corps web site. ("Build the tech nations depend on… See the world. Be the future.")

The site says they’re recruiting “service-minded technologists to serve in the Peace Corps to help countries around the world harness American AI to enhance opportunity and prosperity for their citizens.” (And experienced technology professionals can donate 5-15 hours a week “to mentor and support projects on-the-ground.”)

Who’s really benefiting from this?

By haruchai • Score: 5, Insightful Thread

My jaded self doesn’t believe this is altruistic in the slightest

Cut lives saving USAID and spread job killing AI

By tekram • Score: 5, Insightful Thread
Sure, that is the MAGA way. USAID global aid cuts could lead to at least 9.4 million additional deaths by 2030, and MAGA thinks somehow spreading American AI is going to replace proven measures to stop preventable diseases.

Easy answer

By NotEmmanuelGoldstein • Score: 3 Thread
Modernizing is a simple process: Build dependable agriculture and healthcare. Ending blood-feuds and revenge-killing is a nice step but not vital.

Then, build trust in the law: When people (*cough* Republicans *cough*) don’t trust their leaders, infrastructure fails and a country dies. The usual reason for losing trust in leaders is tribalism: Those leaders enforce extremism (based on religion, race or language) and nepotism: It takes hundreds of years to build a concept of ‘common good’ in towns and countries. Also, it takes prosperity to build ‘common good’: Those countries (or states) without crops and minerals to sell, will always be extremist and unstable.

Re:Cut lives saving USAID and spread job killing A

By lucifuge31337 • Score: 4, Informative Thread
You’re a fucking idiot. USAID was a post WW2 projection of soft power and foreign correction of issues that would cost us much more if left to fester and eventually find their ways back to the US mainland. It’s like none of you magats know any history at all.

Code.org President Steps Down Citing ‘Upending’ of CS By AI

Posted by EditorDavid View on SlashDot Skip
Long-time Slashdot reader theodp writes:
Last July, as Microsoft pledged $4 billion to advance AI education in K-12 schools, Microsoft President Brad Smith told nonprofit Code.org CEO/Founder Hadi Partovi it was time to “switch hats” from coding to AI. He added that “the last 12 years have been about the Hour of Code, but the future involves the Hour of AI.” On Friday, Code.org announced leadership changes to make it so.

“I am thrilled to announce that Karim Meghji will be stepping into the role of President & CEO,” Partovi wrote on LinkedIn. “Having worked closely with Karim over the last 3.5 years as our CPO, I have complete confidence that he possesses the perfect balance of historical context and ‘founder-level’ energy to lead us into an AI-centric future.”

In a separate LinkedIn post, Code.org co-founder Cameron Wilson explained why he was transitioning to an executive advisor role. “Our community is entering a new chapter as AI changes and upends computer science as a discipline and society at large. Code.org’s mission is still the same, however, we are starting a new chapter focused on ensuring students can thrive in the Age of AI. This new chapter will bring new opportunities, new problems to solve, and new communities to engage.”

The Code.org leadership changes come just weeks after Code.org confirmed it had laid off about 14% of its staff, explaining it had "made the difficult decision to part ways with 18 colleagues as part of efforts to ensure our long-term sustainability." January also saw Code.org Chief Academic Officer Pat Yongpradit jump to Microsoft, where he now helps "lead Microsoft's global strategy to put people first in an age of AI by shaping education and workforce policy" as a member of Microsoft's Global Education and Workforce Policy team.

This is a fundamental problem with education

By thecombatwombat • Score: 5, Interesting Thread

It’s almost like Code.org is and always was just a shill for industry messaging.

I worked in K-12 education for a long time. And one of the things that genuinely shocked me is how much curriculum is in fact just sponsored by giant corporations.

Seriously, virtually any time you see someone advocating in K-12 education for something like “skills students will need for jobs” just look, and you don’t have to look very hard, at who’s funding it. It’s disappointing every time.

Re:This is a fundamental problem with education

By PCM2 • Score: 5, Insightful Thread

I worked in K-12 education for a long time. And one of the things that genuinely shocked me is how much curriculum is in fact just sponsored by giant corporations.

The especially concerning/scary thing this time is that what the giant corporations want is to make computing seem like “magic.” Make a wish into the wishing well that is AI, and what you will receive will be what you wished for … provided, of course, you keep paying the corporation for the privilege of having your wishes granted.

Never mind having the actual skill, talent, understanding, etc. to make your wishes come true yourself. Just pay, wish, and it will be yours … and never mind anyone who tells you it used to be possible to get what you want to achieve without paying a giant corporation. Just keep wishing, learn how to wish big, and your wishes will come true.

This seems like the antithesis of how anyone who considers themselves an educator should think.

And the really sad part is they’re not just saying this to CS students. They’re saying it to writers and journalists, artists, musicians … basically anyone whose job doesn’t involve a hammer, a shovel, or a stove.

GOOD.

By Gravis Zero • Score: 5, Informative Thread

Hopefully, this will make all the “everyone needs to learn to code” bullshit go away. Sure, it’ll be replaced with AI but when the AI bubble pops then we’ll be right back where we started.

That right there is the problem

By jenningsthecat • Score: 4, Insightful Thread

… “lead Microsoft’s global strategy to put people first in an age of AI by shaping education and workforce policy” as a member of Microsoft’s Global Education and Workforce Policy team.

Not only is it not the job of private corporations to ‘shape (public) education’, they should be enjoined from doing so under penalty of having the corporation dissolved. I’ve had it with this ‘corporate personhood’ mechanism being extended to give corporations even greater rights and power than parents have when it comes to creating educational policy.

Anybody who doesn’t have children or grandchildren in school should have no say regarding curriculum. And no, that doesn’t mean that corporations get a seat at the table because their C-suite occupants have kids. The private sector must be forcefully and diligently excluded from decision making in public education.

Corporations are the privileged servants of society, and it’s time they were forcefully reminded of it. If that takes ruinous fines, imprisonment, or even the shedding of a little blood, so be it. It’s long past time for the arrogant tail to stop wagging the submissive dog.

T2 Linux Restores XAA In Xorg, Making 2D Graphics Fast Again

Posted by EditorDavid View on SlashDot
Berlin-based T2 Linux developer René Rebe (long-time Slashdot reader ReneR) is announcing that their Xorg display server has now restored its XAA acceleration architecture, “bringing fixed-function hardware 2D acceleration back to many older graphics cards that upstream left in software-rendered mode.”
Older fixed-function GPUs now regain smooth window movement, low CPU usage, and proper 24-bpp framebuffer support (also restored in T2). Tested hardware includes ATi Mach-64 and Rage-128, SiS, Trident, Cirrus, Matrox (Millennium/G450), Permedia2, Tseng ET6000 and even the Sun Creator/Elite 3D.

The result: vintage and retro systems and classic high-end Unix workstations that are fast and responsive again.
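On drivers that expose the choice, the acceleration architecture is selected per-device in xorg.conf. A sketch, assuming a hypothetical Mach64 card; the option values a given driver accepts vary, so check that driver's man page:

```
# /etc/X11/xorg.conf — Device section for a hypothetical Mach64 card.
Section "Device"
    Identifier "Card0"
    Driver     "mach64"
    # Select the fixed-function 2D acceleration path restored here.
    Option     "AccelMethod" "XAA"
EndSection
```

With the option absent, the driver falls back to its built-in default, which on upstream Xorg builds is what left these cards software-rendered.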

Erm

By pele • Score: 3 Thread

Why was it removed in the first place?

Re:Erm

By ThePhilips • Score: 5, Insightful Thread

Year of Wayland on Linux is any minute now. Thus it’s never too early to throw away the “old junk”(tm), that works and is used daily by millions, that is inevitably going to be replaced by… jam tomorrow.

What’s going to happen first: Wayland or AGI?

Re:Erm

By hjf • Score: 5, Interesting Thread

Because an alarmingly high number of developers believe that, if code isn’t being changed, it’s dead. And dead code is “VuLNeRaBle”.

Have you ever tried anything in Python or JS? Breaking changes are the norm. And if the app broke, it's YOUR FAULT for not reading the changelog, not their fault for changing the API for no good reason. (So many changes in JS are for "consistency": someone developed something and spelled it "colour", and three versions later some dev is incredibly irritated that the rest of the app is spelled in American English, so they "fix" it for consistency. Yes, they broke thousands of apps out there that had been running for years, but isn't it nice how the code is now all consistent?)

And don't get me started on shit like React Router, which, last time I checked, was at v6, and every version was a full rewrite, completely incompatible with the previous version because of conceptually different behavior. Imagine doing this SIX TIMES in less than a decade.

Re:Erm

By PPH • Score: 4, Funny Thread

AGI. Because we’ll need that to answer the question: Why Wayland?

Re: Erm

By jsonn • Score: 4, Informative Thread
There was no replacement. It's just that the Intel driver folks at the time failed to properly implement core rendering and switched from one acceleration architecture to another, each clearly superior to its predecessor except for having a new unique set of bugs (sarcasm intended). Removing XAA killed hardware acceleration for 20 years of graphics cards, but that doesn't matter to people mostly paid by graphics card companies, since those older cards are obviously not generating revenue.