Alterslash

the unofficial Slashdot digest
 

Contents

  1. Physicists Find Possible Errors In 100-Year-Old Model of the Universe
  2. OpenAI Trial Wraps Up With ‘Jackass’ Trophy For Challenging Musk
  3. Man Who Stole Beyonce’s Hard Drives Gets Five-Year Sentence
  4. SOLAI Launches $399 Solode Neo Linux AI Computer
  5. Software Developers Say AI Is Rotting Their Brains
  6. Windows Update Is Getting Automatic Rollbacks For Faulty Drivers
  7. Fragnesia Made Public As Latest Linux Local Privilege Escalation Vulnerability
  8. LinkedIn Planning To Lay Off 5% of Staff In Latest Tech-Sector Cuts
  9. KDE Receives $1.4 Million Investment From Sovereign Tech Fund
  10. Harvard Votes On Limiting ‘A’ Grades
  11. Meta Employees Launch Protest Against Mouse-Tracking Tech At US Offices
  12. CERN Open Sources Its KiCad Component Libraries
  13. Why Are Some People Mosquito Magnets?
  14. Sam Altman Testifies That Elon Musk Wanted Control of OpenAI
  15. South Korea Floats ‘Citizen Dividend’ Using AI Profits

Alterslash picks up to 5 of the best comments from each of the day’s Slashdot stories and presents them on a single page for easy reading.

Physicists Find Possible Errors In 100-Year-Old Model of the Universe

Posted by BeauHD
A trio of preprint papers suggests the universe may not be perfectly uniform on the largest scales, finding tentative 2-to-4-sigma deviations from a core assumption of standard cosmology known as FLRW geometry. Live Science reports:
The work combines observations of distant exploding stars and large-scale galaxy surveys to probe whether the universe truly follows a nearly 100-year-old mathematical framework known as Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmology. The analyses revealed mild-but-intriguing deviations from the predictions of the standard model. “We saw a surprising violation of an FLRW curvature consistency test, hinting at new physics beyond the standard model,” study co-author Asta Heinesen, a physicist at the Niels Bohr Institute in Copenhagen and Queen Mary University in London, told Live Science via email, referring to the assumption that the space’s curvature is the same everywhere. “This could potentially be due to various effects, but more research is needed to address the cause of the FLRW violation that we see empirically.”

[…] The analyses revealed small but potentially important departures from the predictions of standard FLRW cosmology. Depending on the dataset and analysis method, the discrepancy reached a statistical significance of about 2 to 4 sigma. In physics, sigma measures how likely a result is to arise purely by chance; a 5-sigma result is typically required before scientists claim a discovery, so the new findings remain tentative. Still, the results suggest that something unexpected may be affecting the geometry or expansion of the universe. “The main finding is that you can directly measure Dyer-Roeder and backreaction effects from available cosmological data, and clearly distinguish these effects from other alterations of the standard cosmological model, such as evolving dark energy and modified gravity theories,” Heinesen said. “This was previously not possible in such a direct way, and this is what I think is the breakthrough in our work.”

“If these indicated deviations from an FLRW geometry are real, it would signify that most of the cosmological solutions considered for solving the cosmological tensions — evolving or interacting dark energy, new types of matter or energy, modified gravity and related ideas within the FLRW framework — are ruled out,” the researchers wrote. The next step will involve applying the new theoretical framework to larger and more precise datasets. “It is to apply our theoretical results to data to test the standard model and to produce constraints on the Dyer-Roeder and backreaction effects,” Heinesen said.
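The sigma scale mentioned above maps directly onto probabilities. As a quick illustration (an editor's sketch, not from the papers), the two-sided p-value for an n-sigma deviation of a Gaussian-distributed result can be computed with the standard library:

```python
from math import erfc, sqrt

def sigma_to_pvalue(n_sigma: float) -> float:
    """Two-sided p-value for an n-sigma deviation under a Gaussian."""
    return erfc(n_sigma / sqrt(2))

# The 2-to-4-sigma range reported here, versus the 5-sigma discovery bar:
for n in (2, 3, 4, 5):
    print(f"{n} sigma -> p = {sigma_to_pvalue(n):.2e}")
```

This shows why the findings remain tentative: a 2-sigma result has roughly a 1-in-20 chance of arising from noise, while the 5-sigma discovery threshold corresponds to odds of about 1 in 1.7 million.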

Good

By evanh • Score: 4, Interesting Thread

Here’s hoping this is borne out in the long term. The idea of Dark Energy being a new force never felt right.

OpenAI Trial Wraps Up With ‘Jackass’ Trophy For Challenging Musk

Posted by BeauHD
After three weeks of testimony, the Musk v. Altman trial is nearing its end. OpenAI has rested its case, closing arguments are set for Thursday, and jury deliberations are expected to begin afterward. An anonymous reader quotes a report from Business Insider:
Joshua Achiam, OpenAI’s chief futurist, was probably the most memorable witness of the day. He told jurors about a companywide meeting where Musk answered questions about his planned departure from OpenAI in 2018. Musk told the crowd of 50 or 60 people that he was leaving OpenAI to start his own competing AI. He said he wanted to “build it very fast, because he was very worried that someone else, if they got it, would do the wrong thing with it,” Achiam said. Achiam said he challenged Musk on the safety of this approach, which he called “unsafe and reckless.” “How did Musk respond?” OpenAI’s lawyer Randall Jackson asked. “Defensively,” Achiam said. “We had a pretty tense exchange, and he snapped and called me a jackass.”

In an effort to prove Achiam’s story, OpenAI’s lawyers brought a trophy to court that the futurist said he received after his heated exchange with Musk. On the witness stand, Achiam described the trophy as “a small golden jackass, inscribed with: ‘never stop being a jackass for safety.’” He said his then-colleagues, Dario Amodei and David Luan, gave it to him as a thank-you for standing up to the Tesla CEO. Lead OpenAI attorney William Savitt told reporters after the day’s session that Wednesday had been the first time he’d touched the statue. The futurist had to do without the visual aid, however. Judge Yvonne Gonzalez Rogers did not accept the trophy as evidence, so it did not appear before the jury.

Musk and Altman have presented dueling experts on a question at the core of the trial — was the nonprofit that runs OpenAI hurt or helped by its $13 billion partnership with Microsoft? Musk’s expert testified last week that the partnership was indeed hurt, supporting the Tesla CEO’s contention that in partnering with Microsoft, OpenAI betrayed the company’s nonprofit origins and mission. But on Thursday, OpenAI’s expert, John Coates, used Musk’s expert’s own pie chart and testimony against him. The partnership has “generated value for the nonprofit that I believe he himself accepted was in the $200 billion range in his own testimony,” Coates said, referencing Musk expert Daniel Schizer. “If that’s not faring well, I don’t know what faring well is.”

In a scored point for Musk, the jury learned Thursday that Microsoft’s own CTO once raised concerns about how OpenAI’s early nonprofit donors, including LinkedIn cofounder Reid Hoffman, would react to a partnership. “I wonder if the big OpenAI donors are aware of these plans,” Chief Technology Officer Kevin Scott said in a 2018 email he was asked to read aloud to jurors. In it, Scott said he doubted donors would appreciate OpenAI using their seed money to “go build a for-profit thing.” Scott was being questioned by an OpenAI lawyer, who may have wanted jurors to quickly hear Scott’s explanation: that he only had a “vague awareness” of what was happening at OpenAI at the time. Scott also told the jury he wasn’t thinking about Musk when he made the remark. “Primarily, I was thinking about Reid Hoffman. He was the OpenAI donor I knew,” Scott said, adding, “I wasn’t thinking about anyone besides him.”
Recap:
Sam Altman Testifies That Elon Musk Wanted Control of OpenAI (Day Ten)
Microsoft CEO Satya Nadella Testifies In OpenAI Trial (Day Nine)
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman’s Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk’s Take On Startup’s History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company’s Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

As the late Grumpy Cat would’ve said

By Powercntrl • Score: 3 Thread

I hope they both lose.

Man Who Stole Beyonce’s Hard Drives Gets Five-Year Sentence

Posted by BeauHD
A man accused of stealing hard drives containing unreleased Beyonce music, tour plans, and other materials from a rental car in Atlanta has pleaded guilty and accepted a five-year sentence, including two years in custody. Slashdot reader Bruce66423 shares a report from The Guardian:
Kelvin Evans was arrested by the Atlanta police department in September in connection with a July 2025 car robbery where two suitcases containing Beyonce music and tour plans were stolen from a rental car. […] According to a July police report, Beyonce choreographer Christopher Grant and dancer Diandre Blue called 911 to report a theft from their rental vehicle, a 2024 Jeep Wagoneer, before Beyonce’s Cowboy Carter tour dates in Atlanta. An October indictment stated that Evans entered the car on July 8 “with the intent to commit theft.”

The stolen hard drives contained “watermarked music, some unreleased music, footage plans for the show and past and future set list,” according to a police report. Clothing, designer sunglasses, laptops and AirPods headphones were also stolen, Grant and Blue said. Local law enforcement searched for the location of one of the stolen laptops and the AirPods to try and locate the property. One police officer wrote in the report: “I conducted a suspicious stop in the area, due to the information that was relayed to me. There were several cars in the area also that the AirPods were pinging to in that area also. After further investigation, a silver [redacted], which had traveled into zone 5 was moving at the same time as the tracking on the AirPods.”

Evans was arrested several weeks after Grant and Blue filed a report, and was publicly named as the suspect in September. He was released on a $20,000 bond a month later. At the time of his arrest, Atlanta police said that the stolen property had not been recovered. It is unclear whether it has since been found.
Bruce66423 commented: “Just for stealing a couple of suitcases from a car. Funny how the elite punish those who inconvenience them. Can you imagine an ordinary victim seeing their offender get that sort of sentence?”

Re:Bruce66423 is delusional

By Meekrobe • Score: 4, Insightful Thread
A lengthy sentence because of unrealized potentials, when the original crime was basic theft of goods from a car, is some crazy shit.

Re:“Just for stealing a couple of suitcases”

By Valgrus Thunderaxe • Score: 4, Informative Thread
Bullshit.

Who Cares

By Bahbus • Score: 3, Insightful Thread

Beyonce, and “her” music, is overrated. Also not at all relevant or interesting news for this site.

Not his first time

By clovis • Score: 5, Informative Thread

Kelvin Evans has a lengthy criminal record, with nearly two dozen arrests, and was on parole when he stole Beyonce’s stuff.

Re:Got off lightly

By fahrbot-bot • Score: 4, Insightful Thread

in Georgia, felony theft can result in up to 20 years in prison.

Felony theft from the U.S. Capitol (and/or beating a police officer) during an Insurrection - pardon.

SOLAI Launches $399 Solode Neo Linux AI Computer

Posted by BeauHD
BrianFagioli writes:
SOLAI has launched the Solode Neo, a $399 Linux-based mini PC designed for always-on AI agents, browser automation, and persistent developer workflows. The compact system ships with an Intel N150 processor, 12GB LPDDR5 memory, 128GB SSD storage, Gigabit Ethernet, WiFi, Bluetooth, and a Linux-based operating system called Solode AI OS. The company says the device supports frameworks and tools including Claude Code, OpenAI Codex, Gemini CLI, and Hermes, while emphasizing local control, automation, and privacy-focused workflows running directly from a home network.

While SOLAI markets the Solode Neo as an “AI computer,” the hardware itself appears aimed more at lightweight automation and cloud-assisted agent tasks than heavy local inference. The low-power Intel N150 should be sufficient for browser automation, scheduling, monitoring, containers, and smaller AI workloads, but the system is unlikely to compete with higher-end local AI hardware designed for running larger models offline. Even so, the idea of a dedicated low-power Linux appliance for persistent AI and automation tasks may appeal to homelab users and self-hosting enthusiasts looking for a simpler alternative to building their own always-on workflow box from scratch.

How is this a story?

By muffen • Score: 5, Insightful Thread
A company releases an overpriced, low-spec computer, slaps “AI” on it and gets free advertisement on slashdot? How did this ever get approved?

Stupid; but cynical.

By fuzzyfuzzyfungus • Score: 5, Informative Thread
So you ship a bottom of the barrel computer and call it an “AI computer” because it can interact with assorted APIs over the internet; then you try to talk up the ‘local’ and ‘privacy’ aspects despite the fact that this is running basically nothing locally because it’s an N150? Cool story.

Piece of crap book PC

By OrangeTide • Score: 3 Thread

Except it’s too low spec to play games or do any heavy browsing. So it becomes a foot-in-the-door for an AI agent to snoop your home networks and copy your personal information. For the low low price of $399. Plus whatever you will need to pay to Anthropic, OpenAI, etc to actually have access to their APIs when free tiers disappear next year.

Software Developers Say AI Is Rotting Their Brains

Posted by BeauHD
An anonymous reader quotes a report from 404 Media:
On Reddit, Hacker News and other places where people in software development talk to each other, more and more people are becoming disillusioned with the promise of code generated by large language models. Developers talk not just about how the AI output is often flawed, but that using AI to get the job done is often a more time consuming, harder, and more frustrating experience because they have to go through the output and fix its mistakes. More concerning, developers who use AI at work report that they feel like they are de-skilling themselves and losing their ability to do their jobs as well as they used to.

“We’re being told to use [AI] agents for broad changes across our codebase. There’s no way to evaluate whether that much code is well-written or secure — especially when hundreds of other programmers in the company are doing the same,” a UX designer at a midsized tech company told me. 404 Media granted all the developers we talked to for this story anonymity because they signed non-disclosure agreements or because they fear retribution from their employers. “We’re building a rat’s nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now…).”
“I had some issues where I forgot how to implement a Laravel API and it scared the shit out of me. I went to university for this, I’ve been a software engineer for many years now and it feels like I am back before I ever wrote a single line of code,” a software developer at a small web design firm told 404 Media. “It’s making me dumber for sure,” a fintech software developer added.
“It’s like when we got cellphones and stopped remembering phone numbers, but it’s grown to me mentally outsourcing ‘thinking’ in general. I feel my critical thinking and ability to sit and reason about a problem or a design has degraded because the all-knowing-dalai-llama is just a question away from giving me his take. And supposedly I tell myself I’ll just use it for inspiration, but it ends up being my only thought. It gives you the illusion of productivity and expertise, but at the end of the day you are more divorced from the output you submit than before.”

A software engineer at a FAANG company said: “When I was using it for code generation, I found myself having a lot of trouble building and maintaining a mental model of the code I was working with. Another aspect is that I joined late last year and [the company’s] codebase is massive. As a new hire, part of my job is to learn how to navigate the codebase and use the established conventions, but I think the AI push really hampered my ability to do that.”

Re:You’re doing it wrong

By Big Hairy Gorilla • Score: 5, Interesting Thread
I’m going to throw in with you on this.
I started using gemini and found it’s far better than my best employee ever was.
My best employee was very very good, but I’d have to wait a day to see results of the meeting.
One thing he (best employee) did that AI can’t do is make good judgement calls. No question there.

However, when the AI spits out a half day’s work in 10 seconds, it allows me the analyst/designer/project manager to rapidly analyze the output, and do another iteration of design ideas, immediately, or as fast as I can analyze process and respond.
So I can get dozens of turnarounds per day compared to even a good employee.

Working in small logical work units yields very good results. I haven’t rolled up my sleeves and done any 12 hour days of deep concentration on code for years, and I don’t need to. I have much knowledge and can review code but I don’t need to double check syntax or look for typos, the grunt work.

I don’t think that I’m losing anything, I do the architecture and design. I think I’m getting huge value and speed from gemini… the key to me is that I work at mid to high levels of abstraction, work in small logical units, review the output, and let the tool worry about the grunt work. I work as a product designer, it works as a coder. My designs are improving significantly from having the AI critique my designs and suggest various possible improvements or how to use tools that I did not know about. I don’t need to code. Caveats are that I am not building mission critical or real time software. The reality is maintenance is a dead concept. As the coding agents/models improve, you can conceivably drop your whole codebase into the NEXT better model every time a better model comes out, and it will do the optimizations and grunt work.

Don’t hate me. I can see the future and it is grim for people, coders, entry level people. But “you will use AI for coding” is here for non-mission-critical applications. It’s sad but true to say that “quality” is a quaint and outdated concept.. (like privacy).. good enough is today’s “quality”. Don’t shoot the messenger, but barely working is still working. If it don’t work, replace it, don’t maintain it.

There will always be a need for true experts, good designers, but the writing is on the wall, AI IS REPLACING all junior functions at this time. If you are doing a web based database system, pfft, it barely matters if there is a bug.. I regret that statement but I feel it’s today’s reality.

Re:We see this problem + AI is a tool, not a relig

By Himmy32 • Score: 5, Insightful Thread

I have to scrutinize pull requests much moreso than ever before

The disturbing part is they seem to have noticeably regressed

And I think this is core to the discussion, because output from evangelists is going up while the skills needed for the next generation to do the review are being hollowed out.

Forgot how to implement a Laravel API…

By swillden • Score: 5, Insightful Thread

Dude, I’ve been writing code for 40 years. I’ve used so many different tools, stacks, libraries and APIs that at this point I don’t remember any of them, and I haven’t remembered them for years, and it doesn’t matter at all. Sure, I have to look everything up, but that’s fine, that doesn’t matter. What matters is that I know when something looks wrong, or hard to maintain, or inefficient, or insecure, or… pick the axis. And I can dig in and find the problem. Anyone can tell if code works, that’s easy. Understanding when and why it might break or otherwise impose additional costs, that’s the real skill.

Which, as it happens, is exactly the skill you need to use an LLM effectively. Also the skill you need to understand legacy code, review colleagues’ commits, etc., etc., etc. I used to say that the ability to read and understand code is an underrated skill, but an old friend corrected me at lunch a couple of weeks ago, saying that the ability to read and understand code is the most important software engineering skill, and always has been. Upon reflection, I agreed. And LLMs make this clearer than ever before.

Re:You’re doing it wrong

By Nebulo • Score: 5, Insightful Thread

It might be a red flag if you want them to be focussed on babysitting the probabilistic code generator, but if you want an actual developer who can think through a problem on their own, a lack of AI usage in their studies is a huge benefit.

Re: Brain rot even farther back …

By gtall • Score: 5, Funny Thread

Given your DEI reference, I’d say dementia has already gotten a hold on you. Maybe you weren’t taught empathy in whatever remains of your memory of school.

Windows Update Is Getting Automatic Rollbacks For Faulty Drivers

Posted by BeauHD
Microsoft is adding a Windows Update feature called Cloud-Initiated Driver Recovery that can automatically roll back faulty drivers to a previously known-good version without waiting for hardware makers or users to fix the problem manually. PCWorld reports:
The way faulty drivers work today is that the hardware partner is responsible for pushing an updated driver, or the end user is responsible for manually uninstalling the problematic driver. “This creates a gap where devices may remain on a low-quality driver for an extended period,” says the blog post. With Cloud-Initiated Driver Recovery, Microsoft will be able to remotely trigger a rollback of the faulty driver to a previously “known-good” version of the driver via the Windows Update pipeline. Microsoft says that testing and verification of Cloud-Initiated Driver Recovery will continue until August this year, aiming to deliver this feature to Windows PCs starting in September.
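The flow described above boils down to "remember the last known-good driver version, and revert to it when the currently installed one is flagged as faulty." Here is a minimal, hypothetical sketch of that decision logic in Python — the class, field names, and crash threshold are illustrative assumptions for this digest, not Microsoft's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    installed: str        # currently active driver version
    known_good: str       # last version that passed health checks
    crash_reports: int    # telemetry events since the last driver update

def cloud_rollback(state: DriverState, crash_threshold: int = 3) -> str:
    """Return the driver version the update pipeline should serve.

    If telemetry shows the installed driver is faulty, fall back to
    the previously known-good version; otherwise keep the current one.
    """
    if state.crash_reports >= crash_threshold and state.installed != state.known_good:
        return state.known_good   # cloud-initiated rollback
    return state.installed        # driver is healthy, no action

# A device whose new driver is crashing gets rolled back remotely:
state = DriverState(installed="2.1.0", known_good="2.0.4", crash_reports=5)
print(cloud_rollback(state))
```

The point of the design is that neither the hardware partner nor the end user has to act: the update pipeline itself closes the gap.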

Blue Screens

By Himmy32 • Score: 5, Interesting Thread

Going back to the pre-XP days where drivers were less isolated and responsible for a lot of Blue Screens. Drivers are a perennial place where Microsoft doesn’t have a lot of control, but they greatly affect the experience. I am honestly a little surprised that it took this long to try to come up with ways to gain more control than just signing.

As with any new managed experience, the value added versus how much people have to fight the management will be an open question.

Re:Amazing

By MachineShedFred • Score: 5, Insightful Thread

Instead of just having quality control on drivers that get applied by Windows Update, they’ve decided to tack on a bunch of bullshit to remediate shitty drivers being auto-installed by Windows Update.

And then they wonder why everyone hates Windows.

Re:Amazing

By thegarbz • Score: 4, Insightful Thread

This happens so often, apparently, they need to engineer this whole complex subsystem and storage infrastructure to take care of this problem.

Well yes, this happens very often. In fact the only kernel panics I’ve ever had on Linux were dodgy drivers. And the single most common problem on Macs is “GPU Panics” due to drivers.

It turns out when you have a piece of code that runs in a very low level written by god knows who, then you need a way to manage them not screwing up your system.

Fun fact: we wrote our own USB driver for a team project at university, one of the most frustrating things was waiting for the computer to reboot so we could have another go.

Re:next up reboot loops

By leonbev • Score: 5, Funny Thread

Or you’ll end up with this situation:

Game XYZ won’t run because it says that your video card drivers are out of date
You update them
The game crashes anyway with a different graphics driver error because it’s bug-ridden launch day garbage
The drivers roll back
And Game XYZ won’t run again

…and suddenly you wish that you bought a PlayStation 5 instead.

Re:Blue Screens

By anoncoward69 • Score: 5, Interesting Thread
Sound Blaster Live / Audigy cards back in the day are one I remember. WHQL drivers basically just provided basic stereo audio out. If you wanted to use any of the EAX or other advanced features of these cards, you had to download the full driver pack from Creative.

Fragnesia Made Public As Latest Linux Local Privilege Escalation Vulnerability

Posted by BeauHD
A new Linux local privilege escalation flaw called Fragnesia has been disclosed as a Dirty Frag-like vulnerability, allowing arbitrary byte writes into the kernel page cache of read-only files through a separate ESP/XFRM logic bug. Phoronix reports:
Proof of concept code for Fragnesia is already out there. There is a two-line patch for addressing the issue within the Linux kernel’s skbuff.c code. That patch hasn’t yet been mainlined or picked up by any mainline kernel releases but presumably will be in short order for addressing this local privilege escalation issue.
More details can be found here.

Year of the Patch

By awwshit • Score: 3 Thread

Patchfest 2026 is going strong.

Re: Year of the Patch

By SeaFox • Score: 5, Informative Thread

Just the thing to erode public perception of the security of open source operating systems that also don’t fit into a master plan of making everyone register themselves for remote identification in some way to “protect young people from harmful content”.

Disclosure Timing Drama Part 2.0

By Himmy32 • Score: 4 Thread

Looks like this time around the disclosure happened when the patch hit netdev. This was even faster than the drama that happened around the Dirty Frag embargo. It meant that no one else could back-engineer and release the vulnerability before the original reporters, but it also meant a longer gap between disclosure and when the patches hit downstream distros.

I wonder if that last case of back-engineering on prerelease kernels is going to set a new norm on disclosure timing. If people can back-engineer then getting the mitigations out as quick as possible is more important than trying to hide the issue until the kernel patch actually drops for distros.

Re: Year of the Patch

By Big Hairy Gorilla • Score: 4, Interesting Thread
I found that sometime during the pandemic, 3 or 4 years ago, a cold wind blew over open source. When I would suggest to people that such and such open source software would be a viable alternative to whatever Apple or Microsoft software they were using it was met with suspicion and categorically rejected. “I would ONLY use Apple software”, “I only trust Apple” was the response. Open source seems to be now perceived as criminal. Ironic really, because some of those same people might buy bitcoin because they heard “line go up”. So you trust bitcoin, but you wouldn’t use open source software?

This is nothing

By snookiex • Score: 4, Interesting Thread

If you think this is starting to get frightening, imagine the bug list at Microsoft after running an AI audit on the Windows code base. I still think this is for the better, but the next year or two will be interesting, to say the least.

LinkedIn Planning To Lay Off 5% of Staff In Latest Tech-Sector Cuts

Posted by BeauHD
An anonymous reader quotes a report from Reuters:
LinkedIn planned to inform staff of layoffs on Wednesday, two people familiar with the matter told Reuters, in a widening of technology sector cuts this year. The Microsoft-owned social network plans to cut about 5% of its headcount as it reorganizes teams and focuses personnel on areas where its business is growing […].

LinkedIn employs more than 17,500 full-time workers globally, its website says. Reuters was unable to determine the teams affected. The cuts come as revenue at LinkedIn, which sells recruiting tools and subscriptions, rose 12% in the just-ended quarter from a year prior, an acceleration of growth in 2026, according to Microsoft’s securities filings. The rationale for the layoffs was not that artificial intelligence would replace jobs at LinkedIn, one of the people told Reuters. The specter of AI-fueled disruption has nonetheless hung over software incumbents and workers generally.

Where ya gonna go?

By jenningsthecat • Score: 5, Interesting Thread

The sad irony is that most of the staff being laid off will be using the services of the company that just axed them to try to land a new job.

Re:Fire the DEI - DEI cannot do their jobs.

By r1348 • Score: 4, Insightful Thread

No roasting, just an honest suggestion to pay more attention to your mental health. The entirety of your comments are tirades about woke this and DEI that. This is a technology site; I hope the disconnect is evident.

Re: Where ya gonna go?

By robot5x • Score: 4, Funny Thread
I believe the tech bros call it “dogfooding”

Company Cutting the Dead Wood

By CWCheese • Score: 5, Insightful Thread
17,500 employees at LinkedIn… omg, what in the world do they need seventeen and a half thousand people to do? It’s a silly engagement site that posts alleged job listings. Are those tens of thousands of folks spending all day, every day, scrutinizing the job listings for accuracy and credibility?

Re:Company Cutting the Dead Wood

By fropenn • Score: 5, Insightful Thread
1 person maintains the code.

7,499 sell ads.

10,000 send emails asking you to ‘upgrade to premium.’

KDE Receives $1.4 Million Investment From Sovereign Tech Fund

Posted by BeauHD
The German Sovereign Tech Fund has invested 1.2 million euros ($1.4 million USD) in KDE Plasma technologies to help strengthen the structural reliability and security of the desktop environment’s core infrastructure, including Plasma, KDE Linux, and the frameworks underlying its communication services. Longtime Slashdot reader jrepin shares an excerpt from the announcement:
For 30 years, KDE has been providing the free and open-source software essential for digital sovereignty in personal, corporate, and public infrastructures: operating systems, desktop environments, document viewers, image and video editors, software development libraries, and much more.

KDE’s software is competitive, publicly auditable, and freely available. It can be maintained, adapted, and improved in-house or by local software companies. And modifications (along with their source code) can be freely distributed to all users and departments within an organization.

KDE will use Sovereign Tech Fund’s investment to push its essential software products to the next level, providing every individual, business, and public administration with the opportunity to regain their privacy, security, and control over their digital sovereignty.
Slashdot reader Elektroschock also shared a statement from Fiona Krakenburger, Technical Director at the Sovereign Tech Agency.

“We have long invested in desktop technologies for a reason: they are the primary way people access and use digital services in everyday life,” says Krakenburger. “The desktop holds personal data and mediates nearly every service we depend on, from booking the next medical appointment, to education, to the way we work. We are investing in KDE because it is one of the two major desktop environments used across Linux and plays a key role in how millions of people experience open technology. Strengthening KDE’s testing infrastructure, security architecture, and communication frameworks is how we invest in the resilience and reliability of the core digital infrastructure that modern society depends on.”

Two sad points.

By MIPSPro • Score: 5, Insightful Thread
It’s disappointing more FOSS projects don’t get any funding. It’s pathetic, for example, how many companies use OpenSSH but won’t donate to OpenBSD. However, these guys get less than $2M USD and it sounds like a lot, because relative to what others get, it absolutely IS a lot. Good luck and spend it wisely. You’ll probably not get another one.

This is Pleasing

By charlesTheLurker • Score: 5, Insightful Thread

Certainly a far more useful investment than yet another never-to-be-built AI data center.

Re:Two sad points.

By darkain • Score: 5, Informative Thread

the Sovereign Tech Fund has actually been putting $$$ into multiple projects. They did a similar donation to FreeBSD recently as well, and tons of tools/libraries. And yes, OpenSSH is on the list! https://www.sovereign.tech/tec…

Happy with flying under the radar

By newbie_fantod • Score: 4, Interesting Thread

Leave the consumerist operating system in place for all the happy consumers. I don’t want Linux adapted and enshitified to meet their needs - which is what will happen if they start switching in significant numbers. Keep the year of Linux on the desktop perpetually somewhere in the future.

Re:Happy with flying under the radar

By MachineShedFred • Score: 4, Insightful Thread

You know there’s more than one distribution of Linux, yes?

You know that there’s more than one window environment on Linux, yes?

If you don’t like the roadmap that a particular distro has, find another one that you do like.

Harvard Votes On Limiting ‘A’ Grades

Posted by BeauHD View on SlashDot Skip
Harvard faculty are voting on a proposal (PDF) to curb grade inflation by limiting solid A grades to 20% of students in a class, plus four additional A’s per course. Axios reports:
Grade inflation is at a tipping point at Harvard. A move to make A grades harder to come by at one of the world’s leading universities could influence grading debates at peer institutions. Solid A’s account for nearly two-thirds of all undergraduate letter grades. That’s up from roughly a quarter 20 years ago. More than 50 members of last year’s class graduated with perfect GPAs.

[…] Faculty are voting on three separate provisions, each requiring a simple majority to pass: (1) a cap limiting solid-A grades to 20% of enrolled students in a class, plus four additional A’s per course; (2) changes to how internal honors are calculated, moving from traditional grade point average scoring to an average percentile rank; and (3) allowing courses to use new “satisfactory” or “unsatisfactory” marks with a “satisfactory-plus” distinction.

A pre-vote faculty poll showed around 60% of the 205 respondents favored the 20-plus-four formula over an alternative. Supporters of the cap argue it’s intentionally modest as it places no restrictions on A-minuses. The four-grade buffer is designed to protect small seminars where a higher proportion of students may succeed. […] If passed, changes would take effect in fall 2027, followed by a mandatory three-year review.
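The 20-plus-four formula is simple enough to sketch in a few lines. Here is an illustrative calculation — the function name and the rounding behavior are assumptions, since the proposal as summarized doesn’t specify how a fractional 20% share is handled:

```python
import math

def a_cap(enrolled: int) -> int:
    """Illustrative sketch of the proposed Harvard cap on solid-A
    grades: 20% of enrolled students, plus four additional A's per
    course. Flooring the 20% share is an assumption; the proposal
    doesn't say how fractional values round."""
    return math.floor(0.20 * enrolled) + 4
```

Under this reading, a 12-person seminar could award up to 6 solid A’s — half the class — which shows how the four-grade buffer protects small seminars, while a 200-person lecture would be held to 44 (22%).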

Don’t assume Harvard students are elite

By drnb • Score: 5, Informative Thread
If you want to be an “elite” school your grading needs to be “elite” too.

Also, getting into Harvard isn’t as hard as you think. About 30% are legacy admissions, i.e., a parent (and potential donor) graduated there.

My field is computer science and I worked with a Berkeley grad. Not impressed; I’m sure he had great grades, but he wasn’t very useful in software development. Basically, elite college or state university, you generally get out what you put into it. Plenty of ticket punchers doing the minimum at elite schools. Cram, regurgitate book/lecture on demand, forget afterwards.

Re: It’s all about definitions.

By ceoyoyo • Score: 5, Informative Thread

Grades are usually ranked certification systems. Grading gemstones, surface plates, instruments, whatever, aren’t just a ranking system. They’re a certification of belonging to a particular quality class. The ones from accredited educational institutions awarding certifications are certainly not meant to be just a blocky ordering of students in a class.

Re: It’s all about definitions.

By realxmp • Score: 5, Insightful Thread

It’s right in the name: grade. As in “gradation”. The intention of grades is to stack rank. Any system that results in all A’s is not a grading system; it’s a certification system.

Function over form. Defined that way, it becomes just about whether you’re lucky enough to be in a year full of dumb people, and it’s rather unhelpful for any employer trying to find the best candidate because it doesn’t preserve year-on-year boundaries. The only exception might be if you’re only recruiting new grads from the milkround. The smartest person in one year with an A might be the dumbest when compared to their fellow students the next year. I swear the only people advocating for this system are the ones who have a desperate need to feel like the best in their peer group, rather than actually caring about how useful the system is.

Re: It’s all about definitions.

By Local ID10T • Score: 5, Insightful Thread

I guarantee that you cannot find a Harvard graduate who was a “DEI” admission who is not objectively significantly above the average college graduate. Even those you proclaim to be lesser because of their race or sex or “injustice over the years” are far above average.

The only Harvard graduates who are not necessarily excellent are Legacy admissions -the children of prior generations of Harvard graduates whose parentage ensures them a place among the elite leaders of industry and nation regardless of objective qualifications.

Re: It’s all about definitions.

By MachineShedFred • Score: 5, Insightful Thread

Sorry, I guess I’m still of the mind that grades are earned based on the number of things you answer correctly.

Why should my mastery be diminished because other people also answer correctly? Why should my grade be affected by other students in any way, when it’s meant to mark personal achievement?

This is fucking stupid.

Meta Employees Launch Protest Against Mouse-Tracking Tech At US Offices

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Reuters:
Meta employees distributed flyers at multiple U.S. offices on Tuesday to protest the company’s recent installation of mouse-tracking software on their computers, according to photos of the pamphlets seen by Reuters. The flyers, which appeared in meeting rooms, on vending machines and atop toilet paper dispensers at the Facebook owner’s offices, encouraged staffers to sign an online petition against the move. “Don’t want to work at the Employee Data Extraction Factory?” they asked, according to the photos seen by Reuters. […]

The pamphlets and the petition both cite the U.S. National Labor Relations Act, saying “workers are legally protected when they choose to organize for the improvement of working conditions.” In the UK, a group of Meta employees has started organizing a drive for unionization with United Tech and Allied Workers (UTAW), a branch of the Communication Workers Union. The employees set up a website to recruit members using the URL "Leanin.uk,” a reference to former Chief Operating Officer Sheryl Sandberg’s best-selling book encouraging women to seek equal footing in the workplace. “Meta’s workers are paying the price for management’s reckless and expensive bets. While executives chase speculative AI strategies, staff are facing devastating job cuts, draconian surveillance, and the cruel reality of being forced to train the inefficient systems being positioned to replace them,” said Eleanor Payne, an organizer with UTAW.
“If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus,” said a statement Meta issued earlier.

Yeah you are working for creepy weirdos

By butt0nm4n • Score: 5, Informative Thread

Haven’t you noticed he never blinks, and when he laughs it looks like he’s trying to eat your soul?

Indeed.

By apparently • Score: 5, Funny Thread

“If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus,” said a statement Meta issued earlier.

Early reports on the effectiveness of the training have shown mixed results — the agents are REALLY good at mouse movements, clicking buttons, and navigating dropdown menus, but no matter the prompt provided, the agent just opens Firefox and starts browsing job postings on Indeed.

Re:Mixed feelings.

By MachineShedFred • Score: 5, Insightful Thread

When you work in a toxic fen, expect toxicity.

Why is anyone surprised at all that Meta would use draconian surveillance on their own employees, when their entire business model is based on draconian surveillance?

Re: Mixed feelings.

By 0123456 • Score: 5, Insightful Thread

We tried the whole “if you don’t like it, leave!” Boomer thing but then it turned out that many companies are run by sociopaths with no concept of normal human behaviour. So no, they don’t get to say Muh Private Company any more.

Re: Mixed feelings.

By bsolar • Score: 5, Interesting Thread

No. When using company equipment, at any company with a reasonable Computer Usage Agreement in place, all data related to said company/equipment remains the property of the company.

True in the US, but definitely not true in general. In the EU this kind of data collection would be 100% illegal.

CERN Open Sources Its KiCad Component Libraries

Posted by BeauHD View on SlashDot Skip
Ancient Slashdot reader ewhac writes:
CERN, a longtime Open Source pioneer, has made several contributions over the years to KiCad (“KEE-kad”), an Open Source EDA (Electronic Design Automation) package widely used in the hobbyist and professional electronics communities. KiCad has become so widely adopted that users can now submit their design files directly to several electronics fabricators (rather than taking the traditional step of converting the layouts to Gerber files). Over the years, CERN has also developed its own symbol and footprint libraries to support its internal electronic designs. Last week, CERN released those KiCad component libraries, containing over 17,000 symbols, under the CERN Open Hardware License.

Taxpayer-funded should always mean Open Source

By greytree • Score: 5, Insightful Thread
The people of Europe fund CERN through their taxes, so all of CERN’s inventions should be Open Source by default.

converted

By ZipNada • Score: 4, Informative Thread

Apparently CERN doesn’t actually use KiCad itself. “The libraries are the result of automatically converting the original Altium Designer source libraries.” And “3D models and datasheets are not included,” which is unfortunate; it is very handy to be able to see a 3D rendering of your layout.

Weird place to be complaining about

By Himmy32 • Score: 4, Insightful Thread

CERN has pragmatically delivered arguably the best return on investment in Open Science data and contributions back to Open Source projects. Best-effort contribution is about being a good steward of the resources given, especially when the funding is really for the science data.

Of any place that could be criticized about doing its work in the open, CERN is probably the worst target. Their whole organization is driven by open science principles and policies. I personally have been at conferences with CERN presenters on how they are contributing back to Open Source projects. They already go above and beyond with the resources that they are given.

Being the best steward with good policies and principles is different from a short-sighted requirement to distribute and maintain everything. And what they have already delivered proves that they are doing things correctly.

/. , I am disappoint

By Thud457 • Score: 3 Thread
All this nattering about licensing and we haven’t yet had one comment -

Finally, I can finish the large hadron collider I’m building in my backyard!

Why Are Some People Mosquito Magnets?

Posted by BeauHD View on SlashDot Skip
fjo3 shares a report from Phys.org:
Ever felt like mosquitoes bite you while ignoring everyone else? Scientists are now making progress in deciphering the complex chemical cocktail that makes particular people more enticing to these disease-spreading bloodsuckers. “It’s not a misconception — mosquitoes are attracted to some people more than others,” Frederic Simard of France’s Institute of Research for Development told AFP. “But we are not all magnets all the time,” the medical entomologist added.

A range of sensory cues can cause mosquitoes to pick one human over another — mainly the smell and heat our bodies give off, and the carbon dioxide we exhale. Female mosquitoes — which are the only ones that bite — detect these signals with finely tuned receptors, then choose their target accordingly. “We have known for over 100 years that mosquitoes are attracted by the carbon dioxide that we exhale — this is the first signal that triggers their behavior” when they are dozens of meters away, Swedish scientist Rickard Ignell told AFP. Within around 10 meters, “mosquitoes will start detecting our odor, and in combination with carbon dioxide,” this attracts them even more, said the senior author of a recent study on the subject. As they get closer, body temperature and humidity make particular humans even more enticing.

[…] For Ignell’s recent study, the researchers released Aedes aegypti mosquitoes — known for spreading yellow fever and dengue — on 42 women in a lab, to see which ones they preferred. “We have shown that mosquitoes use a blend of odorous compounds (we identified 27 that the mosquitoes will detect, out of the possible 1,000) for their attraction to us,” Ignell said. The women the mosquitoes most liked to bite — who included pregnant women in their second trimester — produced a large amount of a particular compound made by a breakdown of the skin oil sebum. That even a small increase of this compound — called “1-octen-3-ol”, or mushroom alcohol — made a difference came as a surprise, Ignell emphasized.

Re:Sugar consumption / Ketogenic metabolism

By NotEmmanuelGoldstein • Score: 5, Interesting Thread
I also met one person who was never bitten. When the mosquitoes were about 40cm from her, they stopped and turned around. Something was sufficiently strong for them to ignore carbon dioxide, sweat and body heat signals.

Missing an entire category of people

By WebHikerOriginal • Score: 5, Interesting Thread

There’s also people like me, who are equally delicious as the next guy, but who do not react at all to the bites.
I can see them on me feeding, but I don’t get the itchy bite, nothing.
It’s as if they were never there.

I think it’s almost a better superpower than not getting bit at all.

Re:Missing an entire category of people

By gtall • Score: 5, Insightful Thread

You can still get one of those yummy diseases from one, and you will have no warning that it has your name on it.

Re:Sugar consumption / Ketogenic metabolism

By rykin • Score: 4, Informative Thread
100% this. I used to get eaten up by mosquitos every year. Due to stomach issues, I did the carnivore diet for about 6 months. That summer, I rarely got bit. To this day, I consume much less sugar than I used to and have noticed a dramatic reduction of mosquito bites compared to years before.

again and again

By groobly • Score: 4, Interesting Thread

This is the 200th article claiming to answer this question that I have seen over the past 50 years. Mosquitoes are evolving faster than they can come up with new answers to the question.

Sam Altman Testifies That Elon Musk Wanted Control of OpenAI

Posted by BeauHD View on SlashDot Skip
OpenAI CEO Sam Altman took the stand Tuesday in Elon Musk’s trial against the company, testifying that Musk repeatedly sought control of OpenAI before leaving in 2018. Altman said he opposed putting AI “under the control of any one person,” while Musk’s lawyer used a pointed cross-examination to attack Altman’s trustworthiness. An anonymous reader shares updates from the testimony via the New York Times:
Before Elon Musk left OpenAI in a power struggle in 2018, he wanted to merge the nonprofit artificial intelligence lab with Tesla, his electric car company. Mr. Musk and other OpenAI co-founders met several times to discuss the merger. OpenAI’s chief executive, Sam Altman, was even offered a seat on Tesla’s board of directors, according to a court document. But folding OpenAI into Tesla would have eliminated the lab’s nonprofit status, and that, Mr. Altman said on the witness stand on Tuesday, was something he wanted to avoid. […] “I believed that A.I. should not be under the control of any one person,” Mr. Altman said. […] Mr. Altman testified about his feud with Mr. Musk. He said he had become worried that Mr. Musk, who provided the early investment money for OpenAI, wanted to take control of the lab. He described what he called a “particularly harrowing moment” when his OpenAI co-founders asked Mr. Musk what would happen to his control of a potential for-profit when he died. Mr. Altman said Mr. Musk had replied that the control would pass to his children. “I was not comfortable with that,” Mr. Altman said. When Mr. Musk lost a power struggle for control of the lab, he left, forcing Mr. Altman to find another big financial backer in Microsoft.

But Mr. Altman ran into trouble in 2023 when OpenAI’s board fired him because, as several of its members have testified in the trial, it didn’t trust him. Steven Molo, Mr. Musk’s lead lawyer, homed in on Mr. Altman’s trustworthiness during an aggressive cross-examination. “Are you completely trustworthy?” Mr. Molo asked. “I believe so,” Mr. Altman answered. After questioning Mr. Altman’s trustworthiness for nearly 20 minutes, Mr. Molo turned to Mr. Altman’s relationship with Mr. Musk. Mr. Altman said that after he met Mr. Musk in the mid-2010s, Mr. Musk had occasionally expressed concern about the dangers of A.I. But Mr. Musk spent far more time saying he was worried that companies like Google would get ahead in A.I. development, Mr. Altman said. (Mr. Musk testified in the trial that he had wanted to create OpenAI to prevent Google from controlling the technology.)

Mr. Altman, the lawyer intimated, took advantage of Mr. Musk’s concerns and was never sincere about his own A.I. fears. “Are you a person who just tells people things they want to hear whether those things are true or not?” Mr. Molo asked. The lawyer also questioned whether Mr. Altman, who became a billionaire through years of tech investments, was self-dealing through OpenAI. Mr. Molo showed a list of Mr. Altman’s personal investments across a number of companies that stand to benefit from their association with OpenAI. They included Helion Energy, a start-up that has deals with Microsoft and OpenAI, and Cerebras, a chip maker in business with OpenAI. Mr. Molo asked if Mr. Altman, who is on OpenAI’s board as well as its chief executive, would ever fire himself. “I have no plans to do that,” Mr. Altman said.

OpenAI’s odd journey from nonprofit lab to what it is today — a well-funded, for-profit company that is still connected to a nonprofit called the OpenAI Foundation with an endowment that could be worth more than $130 billion — provided grist for Mr. Molo’s questions about Mr. Altman’s motivations. He implied that Mr. Altman could have continued to build OpenAI as a pure nonprofit. But the only way to build such a valuable charity was to raise billions through a for-profit venture, Mr. Altman responded. Still, the giant sums being raised appeared to upset Mr. Musk. In late 2022, according to court documents, Mr. Musk sent a text to Mr. Altman complaining that Microsoft was preparing to invest $10 billion in OpenAI. “This is a bait and switch,” Mr. Musk said at the time. But Mr. Altman, under questioning from his own lawyers, said: “Every step of the way, I have done my best to maximize the value of the nonprofit. I would point out that there are not a lot of historical examples of a nonprofit at this scale.”
Before Altman took the stand, OpenAI board chair Bret Taylor continued his testimony that began on Monday. He said Elon Musk’s 2024 bid to buy the company’s assets appeared to conflict with his lawsuit and was rejected because the board did not believe OpenAI’s mission should be controlled by one person. “We did not feel like it was appropriate for one person to control our mission,” he said.
Recap:
Microsoft CEO Satya Nadella Testifies In OpenAI Trial (Day Nine)
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman’s Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk’s Take On Startup’s History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company’s Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

Surprise, surprise, surprise

By CommunityMember • Score: 5, Funny Thread
Musk wanting control? Who might have guessed?

Re:I’m kinda sick of this tbh

By ClickOnThis • Score: 5, Informative Thread

It may be unpleasant, but we can’t just ignore what these guys are doing. Or the others who are building AI systems.

If it’s allowed to flourish without supervision, AI could be an existential threat to humanity. We need to stay involved with how it will integrate into our lives. And it will, like it or not.

On the next episode of As the Billionaires Turn

By sinkskinkshrieks • Score: 5, Funny Thread
Billionaires 1 whines about billionaire 2 being a big meanie pants to him. Billionaire 2 says “he started it.” Billionaire 1 says “nunnuh.” Billionaire 2 says “I’m telling.”

Re:Can we have the more paranoid one?

By quonset • Score: 5, Insightful Thread

Musk set up OpenAI as an OPEN SOURCE NON PROFIT because he is paranoid about AI.

Sometimes people we don’t like can be on the right side of history, and grown-up people understand that.

Which is why he’s created his own AI and is working to integrate it into whatever he touches.

Re:So ghey

By dfghjk • Score: 5, Informative Thread

“This is being done for the benefit of humanity.”

You are an even worse liar than Musk or Altman, but we’ve all known that for a long time.

South Korea Floats ‘Citizen Dividend’ Using AI Profits

Posted by BeauHD View on SlashDot
South Korea’s presidential policy chief is calling for a “citizen dividend” that would return some AI-driven profits and tax revenue to the public. The Straits Times reports:
Presidential policy chief Kim Yong-beom said in a Facebook post that a portion of the profits and tax revenue derived from the artificial intelligence boom “should be structurally returned to all citizens.” That is because, Mr Kim argued, the economic gains from AI are based at least partly on industrial infrastructure built by the country over five decades. Mr Kim’s comments come after tens of thousands of people gathered outside Samsung’s main chip hub in April to demand employees get a greater share of AI profits. The company’s labour union wants 15 per cent of operating profit handed to chip-division employees.

The union has threatened an 18-day strike starting May 21. Workers have pointed to rising payouts at SK Hynix, which in 2025 agreed to allocate 10 per cent of its annual operating profit to a performance bonus pool, as evidence they deserve more pay. “Excess profits in the AI era are, by nature, concentrated,” Mr Kim wrote. Memory companies, core engineers and asset holders are highly likely to receive substantial benefits, while much of the middle class may experience only indirect effects.

Re: fuck ai sayo!

By AvitarX • Score: 5, Insightful Thread

I think the concept is that if a company announces mass layoffs because of AI, you tax them per employee.

I assume what would actually happen is honesty in layoffs, not tax revenue.

Wealth redistribution?

By marcle • Score: 3 Thread

People talk about it like it’s a Commie plot, but if we don’t even out the inequality at least a little, it’s gonna be bad for the economy and bad for all of us.

Re: fuck ai sayo!

By ShanghaiBill • Score: 5, Interesting Thread

If you punish companies for firing, you get less hiring.

Countries with inflexible labor markets tend to have higher unemployment.

Re: fuck ai sayo!

By fortfive • Score: 4, Insightful Thread

But better quality of life overall for regular folks.

AI or no AI there is a massive automation push

By rsilvergun • Score: 5, Interesting Thread
And it’s going to result in permanent unemployment. It’s debatable how much, but we’re never going to see full employment again. Not with this much automation.

To be thoroughly honest, we are cooking the books using sub-minimum-wage gig work to pretend that we aren’t already well below full employment. I don’t know South Korea’s numbers, but here in America there is only one good job for every five Americans. A good job here being defined as paying enough that you can afford a modest house, reliable transportation, healthcare, and to save for retirement when you’re too old to physically work anymore. No extravagant luxuries per se. But what people used to call a working-class living. Basically 50 years and you get to die in peace.

That kind of living is only available to one in five Americans.

We’re going to have to do something, and I suspect that something is going to be World War 3. It’s not a coincidence that World War II kicked off when unemployment hit 25%.

I have seen multiple people who were forced to come back into the office complaining about coworkers who work from home or get to go home and finish their day out. Instead of demanding work from home for themselves, they demand that the people around them also be forced to come into the office. Even though the extra traffic on the streets makes their commute worse and means that they don’t get the nicest parking spots.

But if there’s one thing I’ve seen over and over and over again, it’s that for the sake of feeling like it’s all fair, people cheerfully stab themselves in the back. The animalistic urge for fairness is easily exploitable. Gets us all into a nice little crabs-in-a-bucket situation.

Meanwhile Elon Musk is getting ready to do a massive stock scam worth almost 2 trillion dollars and it’s going to get dumped in all our retirement plans at some point.