Alterslash

the unofficial Slashdot digest
 

Contents

  1. California Has 48% More EV Chargers Than Gas Nozzles
  2. HTTPS Certificate Industry Adopts New Security Requirements
  3. Linus Torvalds Gently Criticizes Build-Slowing Testing Code Left in Linux 6.15-rc1
  4. As Microsoft Turns 50, Four Employees Remember Its Early Days
  5. Copilot Can’t Beat a 2013 ‘TouchDevelop’ Code Generation Demo for Windows Phone
  6. China is Already Testing AI-Powered Humanoid Robots in Factories
  7. Microsoft Attempts To Close Local Account Windows 11 Setup Loophole
  8. Bloomberg’s AI-Generated News Summaries Had At Least 36 Errors Since January
  9. How Rust Finally Got a Specification - Thanks to a Consultancy’s Open-Source Donation
  10. What that Facebook Whistleblower’s Memoir Left Out
  11. Has the Decline of Knowledge Worker Jobs Begun?
  12. Google Sunsets Two Devices From Its Nest Smart Home Product Line
  13. Microsoft Announces ‘Hyperlight Wasm’: Speedy VM-Based Security at Scale with a WebAssembly Runtime
  14. Nearly 1.5 Million Private Photos from Five Dating Apps Were Exposed Online
  15. Samsung Unveils AI-Powered, Screen-Enabled Home Appliances

Alterslash picks up to five of the best comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.

California Has 48% More EV Chargers Than Gas Nozzles

Posted by EditorDavid View on SlashDot Skip
California has 11.3% of America’s population — but bought 30% of America’s new zero-emission vehicles. That’s according to figures from the California Air Resources Board, which also reports that 1 in 4 Californians has chosen a zero-emission car over a gas-powered one for the last two years running.

But what about chargers? It turns out that California now has 48% more public and “shared” private EV chargers than gasoline nozzles. (California has 178,000 public and “shared” private EV chargers, versus about 120,000 gas nozzles.) And beyond that public network, there are more than 700,000 Level 2 chargers installed in single-family California homes, according to the California Energy Commission.

Of the 178,000 public/“shared” private chargers, “Over 162,000 are Level 2 chargers,” according to an announcement from the governor’s office, while nearly 17,000 are fast chargers. (A chart shows a 41% jump in 2024 — though the EV news site Electrek notes that of the 73,537 chargers added in 2024, nearly 38,000 are newly installed, while the other 35,554 were already plugged in before 2024 but just recently identified.)
California approved a $1.4 billion investment plan in December to expand zero-emission transportation infrastructure. The plan funds projects like the Fast Charge California Project, which has earmarked $55 million of funding to install DC fast chargers at businesses and publicly accessible locations.

Plus a bonus

By Ritz_Just_Ritz • Score: 4, Informative Thread

I don’t know the actual percentage, but many EV owners also have their own private charging station at home (not shared) so they may not frequently use public charging stations at all. I don’t think I’ve used a public charger more than maybe 10-12 times in the last 5 years. Some of those folks may even use “renewable energy” to supply those electrons (I don’t).

Best,

HTTPS Certificate Industry Adopts New Security Requirements

Posted by EditorDavid View on SlashDot Skip
The Certification Authority/Browser Forum “is a cross-industry group that works together to develop minimum requirements for TLS certificates,” writes Google’s Security blog. And earlier this month two proposals from Google’s forward-looking roadmap “became required practices in the CA/Browser Forum Baseline Requirements,” improving the security and agility of TLS connections…
Multi-Perspective Issuance Corroboration (MPIC)
Before issuing a certificate to a website, a Certification Authority (CA) must verify the requestor legitimately controls the domain whose name will be represented in the certificate. This process is referred to as “domain control validation” and there are several well-defined methods that can be used. For example, a CA can specify a random value to be placed on a website, and then perform a check to verify the value’s presence has been published by the certificate requestor.
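As a hedged sketch of that flow (the `.well-known` path and helper names here are hypothetical illustrations, not the CA/Browser Forum’s actual method definitions):

```python
import secrets
import urllib.request


def issue_challenge() -> str:
    """CA side: generate an unguessable random token for the requestor to publish."""
    return secrets.token_urlsafe(32)


def check_domain_control(domain: str, token: str, fetch=urllib.request.urlopen) -> bool:
    """Verify the token has been published at an agreed path on the domain.

    The path below is illustrative. `fetch` is injectable so the check can be
    exercised without network access.
    """
    url = f"http://{domain}/.well-known/ca-challenge/{token}"
    try:
        with fetch(url) as resp:
            body = resp.read().decode("utf-8", errors="replace").strip()
    except OSError:
        return False  # unreachable or misconfigured site: validation fails closed
    return body == token
```

The essential property is that only someone who controls the site’s content could make the CA’s freshly generated token appear at the expected location.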

Despite the existing domain control validation requirements defined by the CA/Browser Forum, peer-reviewed research authored by the Center for Information Technology Policy of Princeton University and others highlighted the risk of Border Gateway Protocol (BGP) attacks and prefix-hijacking resulting in fraudulently issued certificates. This risk was not merely theoretical: attackers have successfully exploited this vulnerability on numerous occasions, with just one of these attacks resulting in approximately $2 million of direct losses.

The Chrome Root Program led a work team of ecosystem participants, an effort which culminated in CA/Browser Forum Ballot SC-067 requiring adoption of MPIC. The ballot received unanimous support from the organizations that participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on MPIC as part of their certificate issuance process. Some of these CAs are relying on the Open MPIC Project to ensure their implementations are robust and consistent with ecosystem expectations…
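The gist of MPIC is that domain-control validation is repeated from multiple, topologically separate network vantage points, and issuance requires agreement among them, so a BGP hijack visible from only one vantage point no longer suffices. A hedged, simplified sketch of the corroboration step (perspective names and quorum policy are illustrative, not the ballot’s actual requirements):

```python
from collections.abc import Callable


def corroborate(perspectives: dict[str, Callable[[], bool]], quorum: int) -> bool:
    """Require at least `quorum` vantage points to independently confirm
    domain control before the CA proceeds with issuance.

    Each value in `perspectives` is a zero-argument callable that runs the
    same domain-control check from a different network location.
    """
    confirmations = sum(1 for check in perspectives.values() if check())
    return confirmations >= quorum
```

An attacker who hijacks routes near one vantage point can fool that single check, but failing the quorum blocks the fraudulent issuance.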

Linting
Linting refers to the automated process of analyzing X.509 certificates to detect and prevent errors, inconsistencies, and non-compliance with requirements and industry standards. Linting ensures certificates are well-formatted and include the necessary data for their intended use, such as website authentication. Linting can expose the use of weak or obsolete cryptographic algorithms and other known insecure practices, improving overall security… The ballot received unanimous support from organizations who participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on linting as part of their certificate issuance process.
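A few representative lint checks, sketched over a deliberately simplified certificate record. This is illustrative only: production linters such as zlint parse the full X.509 DER structure, and the field names below are hypothetical.

```python
# Hypothetical, flattened certificate representation for illustration only.
WEAK_SIG_ALGS = {"md5WithRSAEncryption", "sha1WithRSAEncryption"}


def lint(cert: dict) -> list[str]:
    """Return a list of findings; an empty list means the cert passed these checks."""
    findings = []
    if cert.get("signature_algorithm") in WEAK_SIG_ALGS:
        findings.append("weak or obsolete signature algorithm")
    if cert.get("rsa_key_bits", 0) and cert["rsa_key_bits"] < 2048:
        findings.append("RSA key shorter than 2048 bits")
    if not cert.get("subject_alt_names"):
        findings.append("missing subjectAltName, needed for website authentication")
    return findings
```

Running checks like these before signing is what lets a CA catch a mis-issuance while it is still cheap to fix.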
Linting also improves interoperability, according to the blog post, and helps reduce the risk of non-compliance with standards that can result in certificates being “mis-issued”.

And coming up, weak domain control validation methods (currently permitted by the CA/Browser Forum TLS Baseline Requirements) will be prohibited beginning July 15, 2025.

“Looking forward, we’re excited to explore a reimagined Web PKI and Chrome Root Program with even stronger security assurances for the web as we navigate the transition to post-quantum cryptography.”

WTF is MPIC?

By ewhac • Score: 3 Thread
Would it have killed ya to put in a link describing what MPIC is?

Re: Let’s Encrypt

By bjoast • Score: 4, Insightful Thread
It’s not enough that one CA does it. They all have to for this to be effective.

CAs themselves are the problem

By Old Man Kensey • Score: 3 Thread
The problem is that we all just go along with the idea that a couple of hundred “authorities” chosen by a small cadre of mostly profit-seeking entities are ultimately-trusted by default to issue any certificate for any domain. There are already methods like DANE for authenticating a cryptographic key as belonging to an identified domain registrant that make CAs basically unnecessary in the vast majority of cases — but your browser doesn’t support them because it’s overwhelmingly likely that your browser is Chrome, and Chrome doesn’t (and won’t, judging by history) support anything but the status quo on this, so there’s little incentive for other browser makers to do so either.
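For context, DANE (RFC 6698) pins a certificate or key in a DNSSEC-signed TLSA record, letting a client verify the server’s certificate without consulting a CA. A minimal sketch of the matching step a DANE-aware client would perform (selector handling, full certificate vs. SubjectPublicKeyInfo, is omitted for brevity):

```python
import hashlib


def tlsa_matches(cert_der: bytes, tlsa_assoc_data_hex: str, matching_type: int = 1) -> bool:
    """Compare a certificate against TLSA association data per RFC 6698.

    Matching type 0 compares the full certificate, type 1 its SHA-256
    digest, and type 2 its SHA-512 digest.
    """
    if matching_type == 0:
        digest = cert_der
    elif matching_type == 1:
        digest = hashlib.sha256(cert_der).digest()
    elif matching_type == 2:
        digest = hashlib.sha512(cert_der).digest()
    else:
        raise ValueError("unknown TLSA matching type")
    return digest == bytes.fromhex(tlsa_assoc_data_hex)
```

The trust anchor here is the DNSSEC chain rather than a browser-shipped CA list, which is exactly the commenter’s point.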

Linus Torvalds Gently Criticizes Build-Slowing Testing Code Left in Linux 6.15-rc1

Posted by EditorDavid View on SlashDot Skip
“The big set of open-source graphics driver updates for Linux 6.15 have been merged,” writes Phoronix, “but Linux creator Linus Torvalds isn’t particularly happy with the pull request.”
The new “hdrtest” code is for the Intel Xe kernel driver and tries to ensure the Direct Rendering Manager header files are self-contained and pass kernel-doc tests — basic maintenance checks to keep the included DRM header files in good shape.
But Torvalds accused the code of not only slowing down the full-kernel builds, but also leaving behind “random” files for dependencies “that then make the source tree nasty,” reports Tom’s Hardware:
While Torvalds was disturbed by the code that was impacting the latest Linux kernel, beginning his post with a “Grr,” he remained precise in his objections to it. “I did the pull, resolved the (trivial) conflicts, but I notice that this ended up containing the disgusting ‘hdrtest’ crap that (a) slows down the build because it’s done for a regular allmodconfig build rather than be some simple thing that you guys can run as needed (b) also leaves random ‘hdrtest’ turds around in the include directories,” he wrote.

Torvalds went on to state that he had previously complained about this issue, and inquired why the hdr testing is being done as a regular part of the build. Moreover, he highlighted that the resulting ‘turds’ were breaking filename completion. Torvalds underlined this point — and his disgust — by stating, “this thing needs to *die*.” In a shot of advice to fellow Linux developers, Torvalds said, “If you want to do that hdrtest thing, do it as part of your *own* checks. Don’t make everybody else see that disgusting thing....”

He then noted that he had decided to mark hdrtest as broken for now, to prevent its inclusion in regular builds.
As of Saturday, all of the DRM-Next code had made it into Linux 6.15 Git, notes Phoronix. “But Linus Torvalds is expecting all this ‘hdrtest’ mess to be cleaned up.”

A microkernel implementation

By drnb • Score: 4, Funny Thread

They will rewrite the Linux kernel from scratch within months!

Awesome, looking forward to a microkernel implementation. :-)

Good

By devslash0 • Score: 3 Thread

Looks like he’s a responsible maintainer who questions and controls tech debt before it becomes a serious issue. Many people could learn from that.

So the gently part

By zawarski • Score: 3 Thread
Is the news?

Well, he didn’t throw anything

By Sneftel • Score: 3 Thread

Linus Torvalds Gently Criticizes

That’ll be the day.

So…

By SuperDre • Score: 3 Thread
So what you’re saying is that the Linux build doesn’t have any tests during its build process, leaving the possibility of added bugs in there....

As Microsoft Turns 50, Four Employees Remember Its Early Days

Posted by EditorDavid View on SlashDot Skip
“Microsoft built things. It broke things.”

That’s how the Seattle Times kicks off a series of articles celebrating Microsoft’s 50th anniversary — adding that Microsoft also gave some people “a lucrative retirement early in their lives, and their own stories to tell.”

What did they remember from Microsoft’s earliest days?
Scott Oki joined Microsoft as employee no. 121. The company was small; Gates was hands-on, and hard to please. “One of his favorite phrases was ‘that’s the stupidest thing I’ve ever heard,’" Oki says. “He didn’t use that on me, so I feel pretty good about that.”

Another, kinder phrase that pops to Oki’s mind when discussing the international division he founded at Microsoft is “bringing home the bacon.” An obsession with rapid revenue growth permeated Microsoft in those early days. Oki was about three weeks into the job as marketing manager when he presented a global expansion plan to Gates. “Had I done business internationally before? No,” Oki said. “Do I speak a language other than English? No.” But Gates gave Oki a $1 million budget to found the international division and sell Microsoft products overseas.

He established subsidiaries in the most important markets at the time: Japan, United Kingdom, Germany and France. And, because he had a few bucks left over, Australia. “Of the initial subsidiaries we started, every single one of them was profitable in its first year,” he says…

Oki left Microsoft on March 1, 1992, 10 years to the day after he was hired.
Other memories shared by early Microsoft employees:

50 years of evil

By Rosco P. Coltrane • Score: 5, Informative Thread

The only time - briefly - when Microsoft was ever the good guys is when they coded early BASICs for early machines. Then Bill Gates shat the bed and it’s been a terrible company ever since: they’ve been consistently technically incompetent, incredibly aggressive, hostile, monopolizing and always ready to do whatever it takes to earn money, principles be damned.

People usually get better with age. Not Microsoft. Fuck Microsoft. I hate them every bit as much now that they reinvented themselves as an invasive Big Data company as when they were an aggressive OS and software vendor.

As for Bill Gates, the sonofabitch has been working hard for years since he retired from being an evil CEO to clean up his image. But the reality is, his foundation is just a tax avoidance vehicle and he’s just as evil as he’s ever been. But somehow people think he and Ballmer are nice retired billionaires now. No they’re not. Fuck Bill Gates too.

Nothing and nobody good ever came out of Microsoft.

I know it’s not cool here but…

By RevEngr • Score: 5, Interesting Thread

The DOS-derived OSes were indeed terrible from the point of view of stability. It was almost impossible to be anything other than flaky and fragile building a multiprocess OS on that foundation.

But Windows 95 was an astounding marketing triumph. If you lived through it as I did, as a kid who defined himself by esoteric knowledge of the Apple ][ and the 6502 and had built up a disdain for how Gates & co. threw the established hacker norms aside to monetize their OS, you really wanted to hate MS. But in 1995 things changed dramatically. Normies were now into computers and things had changed forever. It’s probably why so many of us old-school people hated on them; they took our thing and made it cool for everyone.

The fact that the underlying OS would leak memory and tie itself in knots wasn’t a big deal to non-purists: just reboot the thing every night, who cares?

But the thing I feel the /. community always discounts is the elegance of the NT kernel, and the NT OS in the days before win32 got sucked into the kernel. Cutler and team developed a disciplined, adaptable, efficient, and powerful core system that (I’m going to lose some people here) was so much cleaner than any Unix of the day. Because it was obviously not open source, not everyone could appreciate it, but I had the opportunity to develop in the NT kernel after having worked in IRIX and it was night and day; IRIX was one smart guy hacking on top of another smart guy until no one really understood or curated the code and it sprawled tirelessly, and NT was like a small team of smart guys got together and deliberately built something based on their collective experience that was coherent and uniform.

When we developed multi-user NT on the 3.51 kernel and stress tested it endlessly in our labs it was the most stable OS I’ve ever worked with. And it also happened to run all the windows programs people actually wanted to run.

I’ve often thought that

By JustNiz • Score: 4, Interesting Thread

Without Microsoft, their predatory monopolistic practices, and the general dumbing down of what is now considered reasonable standards of software quality, the whole world would be about 30 years further on in terms of computer tech by now.

I remember this

By hcs_$reboot • Score: 3 Thread
https://www.rfcafe.com/miscell…

Copilot Can’t Beat a 2013 ‘TouchDevelop’ Code Generation Demo for Windows Phone

Posted by EditorDavid View on SlashDot Skip
What happens when you ask Copilot to “write a program that can be run on an iPhone 16 to select 15 random photos from the phone, tint them to random colors, and display the photos on the phone”?

That’s what TouchDevelop did for the long-discontinued Windows Phone in a 2013 Microsoft Research ‘SmartSynth’ natural language code generation demo. (“Write scripts by tapping on the screen.”)

Long-time Slashdot reader theodp reports on what happens when, 12 years later, you pose the same question to Copilot:
“You’ll get lots of code and caveats from Copilot, but nothing that you can execute as is. (Compare that to the functioning 10-line TouchDevelop program.) It’s a good reminder that just because GenAI can generate code, it doesn’t necessarily mean it will generate the least amount of code, the most understandable or appropriate code for the requestor, or code that runs unchanged and produces the desired results.”
theodp also reminds us that TouchDevelop “was (like BASIC) abandoned by Microsoft…”
Interestingly, a Microsoft Research video from CS Education Week 2011 shows enthusiastic Washington high school students participating in an hour-long TouchDevelop coding lesson and demonstrating the apps they created that tapped into music, photos, the Internet, and yes, even their phone’s functionality. This shows how lacking iPhone and Android still are today as far as easy programmability-for-the-masses goes. (When asked, Copilot replied that Apple’s Shortcuts app wasn’t up to the task).

Copilot?

By EvilSS • Score: 5, Funny Thread
Might as well have asked a turnip while they were at it. Copilot is the special needs model in the LLM world.

China is Already Testing AI-Powered Humanoid Robots in Factories

Posted by EditorDavid View on SlashDot Skip
The U.S. and China “are racing to build a truly useful humanoid worker,” the Wall Street Journal wrote Saturday, adding that “Whoever wins could gain a huge edge in countless industries.”

“The time has come for robots,” Nvidia’s chief executive said at a conference in March, adding “This could very well be the largest industry of all.”
China’s government has said it wants the country to be a world leader in humanoid robots by 2027. “Embodied” AI is listed as a priority of a new $138 billion state venture investment fund, encouraging private-sector investors and companies to pile into the business. It looks like the beginning of a familiar tale. Chinese companies make most of the world’s EVs, ships and solar panels — in each case, propelled by government subsidies and friendly regulations. “They have more companies developing humanoids and more government support than anyone else. So, right now, they may have an edge,” said Jeff Burnstein [president of the Association for Advancing Automation, a trade group in Ann Arbor, Michigan]....

Humanoid robots need three-dimensional data to understand physics, and much of it has to be created from scratch. That is where China has a distinct edge: The country is home to an immense number of factories where humanoid robots can absorb data about the world while performing tasks. “The reason why China is making rapid progress today is because we are combining it with actual applications and iterating and improving rapidly in real scenarios,” said Cheng Yuhang, a sales director with Deep Robotics, one of China’s robot startups. “This is something the U.S. can’t match.” UBTech, the startup that is training humanoid robots to sort and carry auto parts, has partnerships with top Chinese automakers including Geely… “A problem can be solved in a month in the lab, but it may only take days in a real environment,” said a manager at UBTech…

With China’s manufacturing prowess, a locally built robot could eventually cost less than half as much as one built elsewhere, said Ming Hsun Lee, a Bank of America analyst. He said he based his estimates on China’s electric-vehicle industry, which has grown rapidly to account for roughly 70% of global EV production. “I think humanoid robots will be another EV industry for China,” he said. The UBTech robot system, called Walker S, currently costs hundreds of thousands of dollars including software, according to people close to the company. UBTech plans to deliver 500 to 1,000 of its Walker S robots to clients this year, including the Apple supplier Foxconn. It hopes to increase deliveries to more than 10,000 in 2027.

Few companies outside China have started selling AI-powered humanoid robots. Industry insiders expect the competition to play out over decades, as the robots tackle more-complicated environments, such as private homes.
The article notes “several” U.S. humanoid robot producers, including the startup Figure. And robots from Agility Robotics have been tested in Amazon warehouses since 2023. “The U.S. still has advantages in semiconductors, software and some precision components,” the article points out.

But “Some lawmakers have urged the White House to ban Chinese humanoids from the U.S. and further restrict Chinese robot makers’ access to American technology, citing national-security concerns…”

Re:Is this the old Apple argument?

By 93 Escort Wagon • Score: 5, Informative Thread

That was Steve Jobs: “If you don’t cannibalize yourself, someone else will.”

Robot humor from 1954: The Midas Plague

By Paul Fernhout • Score: 4, Informative Thread

https://en.wikipedia.org/wiki/…
“The Midas Plague” (originally published in Galaxy in 1954). In a world of cheap energy, robots are overproducing the commodities enjoyed by humankind. The lower-class “poor” must spend their lives in frantic consumption, trying to keep up with the robots’ extravagant production, while the upper-class “rich” can live lives of simplicity. Property crime is nonexistent, and the government Ration Board enforces the use of ration stamps to ensure that everyone consumes their quotas. The story deals with Morey Fry, who marries a woman from a higher-class family. Raised in a home with only five rooms she is unused to a life of forced consumption in their mansion of 26 rooms, nine automobiles, and five robots, causing arguments. …

Although, I outlined a different possibility here in 2010 (inspired by Marshall Brain’s Manna story):
“The Richest Man in the World: A parable about structural unemployment and a basic income”
https://www.youtube.com/watch?…

Re:So one of the ways you know how fucked we are

By DrMrLordX • Score: 5, Interesting Thread

China has a population growth problem and a labor cost problem. People aren’t having enough kids to keep China running and Chinese labor is getting too expensive for their oversaturation economy.

Re:It’s not the year of robotic AI.

By postbigbang • Score: 4, Interesting Thread

We have to disagree.

The transient nature of navigating transportation obstacles requires knowing many concepts, and avoiding the ones that lead to bad outcomes. Driving automation and coding intersect at many junctures.

Code is not static, and neither is driving. On a good day, easily summoned choices can be made, and on a bad day, dependencies require astute and rapid choices to be made productively.

The timing of transportation doesn’t wait; conclusions of many inputs have to render the right choice in an action. Deftly done, all is good, rider arrives at a destination, money earned, no harm no foul.

A similar sequence of events occurs in programming. The only item ostensibly removed is a split-second life/death choice. You can dry-run an app just like you can dry-run empty in-training vehicles.

That AI can drive some cars under highly confined circumstances, after millions of miles of training in a limited geography, is just a toe-dip in the real world. Phoenix, SF, LA: they get little snow. They have minimal random objects invading spaces. The template you cite is a highly confined, somewhat-to-maximally arid circumstance and environment. The real world is but a fraction of that.

Your new robot vacuum doesn’t make any money, it just phones home and rats out your living quarters geometry for profits. Look it up. And you know how your Tesla knows your every move. There is no privacy in a Tesla. You’re part of the product. You charge at Tesla chargers, use the screen for nav and looking up restaurants. You’re part of the product. You’re no longer autonomous as a driver, and not really in control. Hope that works for you.

One day, I agree it will be different. That’s not today, this week, month, year, or perhaps even decade.

AI is trying to be “creative”. Whether code, or machine-applications. It’s not ready yet. It’s being pushed to satisfy the fantasies of capitalists.

Pretty sure Slashdot already ran the BMW article

By DrMrLordX • Score: 5, Interesting Thread

https://www.bmwgroup.com/en/ne…

Microsoft Attempts To Close Local Account Windows 11 Setup Loophole

Posted by EditorDavid View on SlashDot Skip
Slashdot reader jrnvk writes:
The Verge is reporting that Microsoft will soon make it harder to run the well-publicized bypassnro command in Windows 11 setup. This command allows skipping the Microsoft account and online connection requirements on install. While the command will be removed, it can still be enabled by a regedit change — for now.
“However, there’s no guarantee Microsoft will allow this additional workaround for long,” writes the Verge. (Though they add “There are other workarounds as well” involving the unattended.xml automation.)
In its latest Windows 11 Insider Preview, the company says it will take out a well-known bypass script… Microsoft cites security as one reason it’s making this change. [“This change ensures that all users exit setup with internet connectivity and a Microsoft Account.”] Since the bypassnro command is disabled in the latest beta build, it will likely be pushed to production versions within weeks.
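For reference, the bypassnro.cmd script being removed reportedly does nothing more than set a registry value and reboot; the manual equivalent, typed into a Shift+F10 command prompt during setup, is widely reported as the following (the exact key path is as published by third parties, and Microsoft may close this route too):

```shell
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v BypassNRO /t REG_DWORD /d 1 /f
shutdown /r /t 0
```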

Solved

By systemd-anonymousd • Score: 5, Informative Thread

shift-f10
start ms-cxh:localonly

Another solution.

By Brain-Fu • Score: 5, Insightful Thread

Use Linux.

Or Mac.

Whose security, exactly?

By jenningsthecat • Score: 5, Insightful Thread

Microsoft cites security as one reason it’s making this change.

I rather think the “security” they’re talking about refers to Microsoft securing its access to data from, and its control over, the folks who rent Windows under the mistaken belief that they bought a licence. Silly wabbits!

Re:Whose security, exactly?

By quonset • Score: 5, Insightful Thread

Because nothing says “security” like forcing people to create an account on someone else’s computer somewhere in the world rather than on your machine which only you know about.

Re:You know it would be kind of nice?

By ukoda • Score: 4, Informative Thread
Depending on what country you are in, lawyers may have a different opinion. Given Windows is the de facto standard on PCs, forcing people to create an account with Microsoft is questionable. It is also still true that a significant number of people do not have Internet access.

If there is one thing I have learnt over the past few decades it is Microsoft are shit at account management. To this day I have not been able to log into a Teams meeting with Microsoft’s Linux client. Microsoft is adamant my machine belongs to the company I first had a meeting with via Teams about 4 years ago and demands that I have their admin verify my account. I have learnt the only way I can join a Teams meeting is using a web browser as a guest.

Bloomberg’s AI-Generated News Summaries Had At Least 36 Errors Since January

Posted by EditorDavid View on SlashDot Skip
The giant financial news site Bloomberg “has been experimenting with using AI to help produce its journalism,” reports the New York Times. But “It hasn’t always gone smoothly.”

While Bloomberg announced on January 15 that it would add three AI-generated bullet points at the top of articles as a summary, “The news outlet has had to correct at least three dozen A.I.-generated summaries of articles published this year.” (This Wednesday they published a “hallucinated” date for the start of U.S. auto tariffs, and earlier in March claimed President Trump had imposed tariffs on Canada in 2024, while other errors have included incorrect figures and incorrect attribution.)
Bloomberg is not alone in trying A.I. — many news outlets are figuring out how best to embrace the new technology and use it in their reporting and editing. The newspaper chain Gannett uses similar A.I.-generated summaries on its articles, and The Washington Post has a tool called “Ask the Post” that generates answers to questions from published Post articles. And problems have popped up elsewhere. Earlier this month, The Los Angeles Times removed its A.I. tool from an opinion article after the technology described the Ku Klux Klan as something other than a racist organization.

Bloomberg News said in a statement that it publishes thousands of articles each day, and “currently 99 percent of A.I. summaries meet our editorial standards....” The A.I. summaries are “meant to complement our journalism, not replace it,” the statement added....

John Micklethwait, Bloomberg’s editor in chief, laid out the thinking about the A.I. summaries in a January 10 essay, which was an excerpt from a lecture he had given at City St. George’s, University of London. “Customers like it — they can quickly see what any story is about. Journalists are more suspicious,” he wrote. “Reporters worry that people will just read the summary rather than their story.” But, he acknowledged, “an A.I. summary is only as good as the story it is based on. And getting the stories is where the humans still matter.”
A Bloomberg spokeswoman told the Times that the feedback they’d received to the summaries had generally been positive — “and we continue to refine the experience.”

Only 36?

By Gravis Zero • Score: 5, Interesting Thread

The real question: did the summary AI only make 36 errors or did only 36 errors get published? The difference is that the summary AI could be making a lot more errors but a human editor is accepting or rejecting summaries generated by the summary AI and incorrectly accepted 36 that contained errors.

How does this compare to human error rate?

By Tony Isaac • Score: 3 Thread

Just curious…

Where does the ai get their info from?

By Morromist • Score: 3 Thread

If the AI is getting its up-to-date facts from the major news outlets, and the major news outlets are using AI, I foresee a problem.

Re:Only 36?

By ewibble • Score: 4, Informative Thread

The question is out of how many summaries, was it 36 out of 1000 or 36 out of 37? Is the error rate higher or lower than humans?

Who Knew?

By RossCWilliams • Score: 3 Thread
Bloomberg is unreliable. Who knew? You can apply that to any news source on the internet whether they use AI or not.

How Rust Finally Got a Specification - Thanks to a Consultancy’s Open-Source Donation

Posted by EditorDavid View on SlashDot Skip
As Rust approaches its 10th anniversary, “there is an important piece of documentation missing that many other languages provide,” notes the Rust Foundation.

While there’s documentation and tutorials — there’s no official language specification:
In December 2022, an RFC was submitted to encourage the Rust Project to begin working on a specification. After much discussion, the RFC was approved in July 2023, and work began.

Initially, the Rust Project specification team (t-spec) was interested in creating the document from scratch, using the Rust Reference as a guiding marker. However, the team knew there was already an external Rust specification that was being used successfully for compiler qualification purposes — the FLS.
Thank Berlin-based Ferrous Systems, a Rust consultancy that assembled that description “some years ago,” according to a post on the Rust blog:
They’ve since been faithfully maintaining and updating this document for new versions of Rust, and they’ve successfully used it to qualify toolchains based on Rust for use in safety-critical industries. [The Rust Foundation notes it is part of the consultancy’s “Ferrocene” Rust compiler/toolchain.] Seeing this success, others have also begun to rely on the FLS for their own qualification efforts when building with Rust.
The Rust Foundation explains:
The FLS provides a structured and detailed reference for Rust’s syntax, semantics, and behavior, serving as a foundation for verification, compliance, and standardization efforts. Since Rust did not have an official language specification back then, nor a plan to write one, the FLS represented a major step toward describing Rust in a way that aligns with industry requirements, particularly in high-assurance domains.
And the Rust Project is “passionate about shipping high quality tools that enable people to build reliable software at scale,” adds the Rust blog. So…
It’s in that light that we’re pleased to announce that we’ll be adopting the FLS into the Rust Project as part of our ongoing specification efforts. This adoption is being made possible by the gracious donation of the FLS by Ferrous Systems. We’re grateful to them for the work they’ve done in assembling the FLS, in making it fit for qualification purposes, in promoting its use and the use of Rust generally in safety-critical industries, and now, for working with us to take the next step and to bring the FLS into the Project.

With this adoption, we look forward to better integrating the FLS with the processes of the Project and to providing ongoing and increased assurances to all those who use Rust in safety-critical industries and, in particular, to those who use the FLS as part of their qualification efforts.
More from the Rust Foundation:
The t-spec team wanted to avoid potential confusion from having two highly visible Rust specifications in the industry and so decided it would be worthwhile to try to integrate the FLS with the Rust Reference to create the official Rust Project specification. They approached Ferrous Systems, which agreed to contribute its FLS to the Rust Project and allow the Rust Project to take over its development and management… This generous donation will provide a clearer path to delivering an official Rust specification. It will also empower the Rust Project to oversee its ongoing evolution, providing confidence to companies and individuals already relying on the FLS, and marking a major milestone for the Rust ecosystem.

“I really appreciate Ferrous taking this step to provide their specification to the Rust Project,” said Joel Marcey, Director of Technology at the Rust Foundation and member of the t-spec team. “They have already done a massive amount of legwork....” This effort will provide others who require a Rust specification with an official, authoritative reference for their work with the Rust programming language… This is an exciting outcome. A heartfelt thank you to the Ferrous Systems team for their invaluable contribution!
Marcey said the move allows the team “to supercharge our progress in the delivery of an official Rust specification.”

And the co-founder of Ferrous Systems, Felix Gilcher, also sounded excited. “We originally created the Ferrocene Language Specification to provide a structured and reliable description of Rust for the certification of the Ferrocene compiler. As an open source-first company, contributing the FLS to the Rust Project is a logical step toward fostering the development of a unified, community-driven specification that benefits all Rust users.”

Now do the committee

By serafean • Score: 3 Thread

Formal spec, now add the committee, and we truly have the next C++.

Never Snake

By kopecn • Score: 3 Thread
They mandated snake case. I refuse to endorse this language now. :)

Bootstrapping

By tepples • Score: 5, Interesting Thread

Most people install Rust through the “rustup” installer tool, which downloads rustc in executable form from a server controlled by the Rust project. This risks a supply chain attack as described in “Reflections on Trusting Trust” by Ken Thompson and as prototyped eight years ago in “Reflections on Rusting Trust” by Manish Goregaokar.

As I understand it, rustc is written in Rust and regularly uses language features that were recently made stable. This means each version of rustc can be compiled only by the same minor version of rustc or the previous minor version. This means that to fully bootstrap rustc from source code, you need to start with the last version of rustc supported by another compiler (usually mrustc), and build the world once for each version. Until recently (December 2024, commit fe0a90e), the newest version of rustc supported by mrustc was 1.54, compared to current stable 1.85, requiring someone to build the world 32 times to make rustc itself reproducible. In December, support for rustc 1.74 was added.
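The build counts quoted above can be checked with a quick sketch. This is a hypothetical helper (the `bootstrap_builds` function is not from any real tool; the version numbers are the ones in the comment) that counts full compiler builds under the stated constraint that each rustc minor version can only be built by itself or the immediately preceding minor version:

```rust
/// Count how many full compiler builds are needed to get from the
/// newest rustc that mrustc can produce up to the current stable,
/// assuming each minor version can only be built by the same or the
/// previous minor version. One build per step in the chain, plus the
/// initial mrustc-built compiler itself.
fn bootstrap_builds(start_minor: u32, target_minor: u32) -> u32 {
    (target_minor - start_minor) + 1
}

fn main() {
    // Before December 2024: mrustc topped out at 1.54, stable was 1.85.
    println!("old chain: {} builds", bootstrap_builds(54, 85)); // 32
    // With mrustc's newer 1.74 support the chain shrinks considerably.
    println!("new chain: {} builds", bootstrap_builds(74, 85)); // 12
}
```

So the December update to mrustc cut the reproducible-bootstrap effort by nearly two-thirds.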

For comparison, C and C++ are specified by an international standard that has multiple independent implementations. Because the compiler is written in a more stable standard language, it’s easier for an integrator to skip versions. One can reproducibly bootstrap GCC from a simpler compiler: use TinyCC to build GCC 2.95, use GCC 2.95 to build GCC 4.7, and use GCC 4.7 to build something much more recent.

What that Facebook Whistleblower’s Memoir Left Out

Posted by EditorDavid View on SlashDot Skip
A former Facebook director of global policy recently published “the book Meta doesn’t want you to read,” a scathing takedown of top Meta executives titled Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism.

But Wednesday RestofWorld.org published additional thoughts from Meta’s former head of public policy for Bangladesh (who is now an executive director at the nonprofit policy lab Tech Global Institute). Though their time at Facebook didn’t overlap, they first applaud how the book “puts a face to the horrific events and dangerous decisions.”

But having said that, “What struck me is that what isn’t included in Careless People is more telling than what is.”
By 2012 — one year after joining Facebook — Wynn-Williams had ample evidence of the platform’s role in enabling violence and harm upon its users, and state-sanctioned digital repression, yet her memoir neither mentions these events nor the repeated warnings to her team from civil society groups in Asia before the situation escalated… In recounting events, the author glosses over her own indifference to repeated warnings from policymakers, civil society, and internal teams outside the U.S. that ultimately led to serious harm to communities.

She briefly mentions how Facebook’s local staff was held at gunpoint to give access to data or remove content in various countries — something that had been happening since as early as 2012. Yet, she failed to grasp the gravity of these risks until the possibility of her facing jail time arises in South Korea — or even more starkly in March 2016, when Facebook’s vice president for Latin America, Diego Dzodan, was arrested in Brazil. Her delayed reckoning underscores how Facebook’s leadership remains largely detached from real-world consequences of their decisions until they become impossible to ignore.

Perhaps because everyone wants to be a hero of their own story, Wynn-Williams frames her opposition to leadership decisions as isolated; in reality, powerful resistance had long existed within what Wynn-Williams describes as Facebook’s “lower-level employees.”
Yet “Despite telling an incomplete story, Careless People is a book that took enormous courage to write,” the article concludes, calling it an important story to tell.

“It goes to show that we need many stories — especially from those who still can’t be heard — if we are to meaningfully piece together the complex puzzle of one of the world’s most powerful technology companies.”

If you knew the truth

By topham • Score: 5, Informative Thread

If you knew the truth you would never use a meta product ever again in your life.

Re:If you knew the truth

By RitchCraft • Score: 5, Insightful Thread

I never have, and never will. It was clearly obvious even back in the early days of FB that it was to be the most significant cancer ever contracted by society as a whole.

They are all despicable.

By Brain-Fu • Score: 5, Insightful Thread

It’s just human nature. A big business includes a whole lot of people, and the highest positions are most attractive to toxic self-promoters, so they are always present and climbing the ranks. And even for leaders who are not intrinsically toxic, the position of power they hold has a natural impact on their mind, making them see those beneath themselves as little more than pack animals.

Of course, big businesses also have good people working in them and also produce products and services that we want. So they are a natural mix of good and evil and all kinds of other things in between.

But, fundamentally, we can count on all big businesses being as evil as they think they can be. What they expect they can get away with is exactly what they attempt. Any thought that the good-person elements at work in the business will stop top leadership from doing ghastly things (if they expect they can get away with it) is just naive. Law enforcement and regulation needs to apply to them with eternal vigilance.

Diversity is Key

By Roger W Moore • Score: 5, Interesting Thread

little pissant countries that don’t respect freedom of speech are bossing around Facebook. If Facebook only had operations in the USA

Have you looked at your current government recently? You know the one cancelling visas for students who took part in protests or that is shutting down university departments it does not like. The US may not be a small country but its current government is incredibly small minded.

While I would agree that historically the US has been a reasonably consistent champion of free speech, it’s clearly not at the moment and that’s the problem with relying on one country’s government. The standard warning for investments is that past performance is not indicative of future results and the solution is to diversify. So having a communication platform regulated by local governments ensures that you avoid the risk of losing that freedom should the political winds in one country shift suddenly.

Re:Diversity is Key

By david.emery • Score: 5, Insightful Thread

We have moved WAY BEYOND just cancelling visas. A group of masked men in civilian clothes apprehended a student and shipped her 1500 miles away to a detention center, because the government didn’t like what she wrote. Even if she had “Death to America” tattooed on her arms, that kind of reaction from the government is pure Police State fascism. This was not a “SWAT team takedown of a dangerous terrorist.” And in case you think I’m making this up, it was caught on video: https://www.nbcnews.com/news/u… Then the government moved her in clear violation of a court order to the contrary. If you live in the US and are not outraged, regardless of how you voted, you’re not paying attention. It’s one thing to take legal executive action. It’s a whole ‘nuther thing to do anonymous snatches and deliberately violate court orders.

Has the Decline of Knowledge Worker Jobs Begun?

Posted by EditorDavid View on SlashDot Skip
The New York Times notes that white-collar workers have faced higher unemployment than other groups in the U.S. over the past few years — along with slower wage growth.

Some economists wonder if this trend might be irreversible… and partly attributable to AI:
After sitting below 4% for more than two years, the overall unemployment rate has topped that threshold since May… “We’re seeing a meaningful transition in the way work is done in the white-collar world,” said Carl Tannenbaum, the chief economist of Northern Trust. “I tell people a wave is coming....” Thousands of video game workers lost jobs last year and the year before… Unemployment in finance and related industries, while still low, increased by about a quarter from 2022 to 2024, as rising interest rates slowed demand for mortgages and companies sought to become leaner....

Overall, the latest data from the Federal Reserve Bank of New York show that the unemployment rate for college grads has risen 30% since bottoming out in September 2022 (to 2.6% from 2%), versus about 18% for all workers (to 4% from 3.4%). An analysis by Julia Pollak, chief economist of ZipRecruiter, shows that unemployment has been most elevated among those with bachelor’s degrees or some college but no degree, while unemployment has been steady or falling at the very top and bottom of the education ladder — for those with advanced degrees or without a high school diploma. Hiring rates have slowed more for jobs requiring a college degree than for other jobs, according to ADP Research, which studies the labor market....
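The relative increases quoted above are consistent with the underlying rates; a quick arithmetic sketch (the rates are taken directly from the figures in the excerpt):

```rust
/// Percent increase from one unemployment rate to another.
fn pct_increase(from: f64, to: f64) -> f64 {
    (to - from) / from * 100.0
}

fn main() {
    // College grads: 2.0% -> 2.6% unemployment since September 2022.
    println!("grads: {:.0}%", pct_increase(2.0, 2.6)); // ~30%
    // All workers: 3.4% -> 4.0%.
    println!("all:   {:.0}%", pct_increase(3.4, 4.0)); // ~18%
}
```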

And artificial intelligence could reduce that need further by increasing the automation of white-collar jobs. A recent academic paper found that software developers who used an AI coding assistant improved a key measure of productivity by more than 25% and that the productivity gains appeared to be largest among the least experienced developers. The result suggested that adopting AI could reduce the wage premium enjoyed by more experienced coders, since it would erode their productivity advantages over novices… [A]t least in the near term, many tech executives and their investors appear to see AI as a way to trim their staffing. A software engineer at a large tech company who declined to be named for fear of harming his job prospects said that his team was about half the size it was last year and that he and his co-workers were expected to do roughly the same amount of work by relying on an AI assistant. Overall, the unemployment rate in tech and related industries jumped by more than half from 2022 to 2024, to 4.4% from 2.9%.
“Some economists say these trends may be short term in nature and little cause for concern on their own,” the article points out (with one economist noting the unemployment rate is still low compared to historical averages).

Harvard labor economist Lawrence Katz even suggested the slower wage growth could reflect the discount that these workers accepted in return for being able to work from home.

Thanks to Slashdot reader databasecowgirl for sharing the article.

Video games bellwether

By phantomfive • Score: 5, Insightful Thread

“said Carl Tannenbaum, the chief economist of Northern Trust. “I tell people a wave is coming....” Thousands of video game workers lost jobs last year and the year before.”

Yeah, video games have nothing to do with leading a wave in white collar jobs. The video game industry is not representative of white collar jobs generally; they’re just too different.

Not AI…

By jythie • Score: 4, Interesting Thread
AI gets the attention, but I would be surprised if it was a dominant factor. That it is getting so much attention almost suggests it is being set up as a scapegoat.

A bigger, but more subtle factor is, well, where are the wealthiest people deriving their wealth from? During the last few decades, knowledge workers were prominent since the rising wealthy produced products and services that both employed and sold to that general slice of the population. But now that things have settled, they don’t really need a middle class as much. Low-paid workers and high-income customers, so the value of the middle has been reduced.

Will they drop back to 20th-century levels?

By rbrander • Score: 4, Insightful Thread

All through my career, especially in the 90s and early 2000s, I saw amazing numbers of white-collar jobs just created. My office seemed to need new facilitators and re-organization specialists, and levels of supervisors, and especially people doing “communications”. We acquired a whole communications department that we had to work through instead of just informing the public ourselves, handling incoming calls ourselves.

I was never clear on the need for all of them, they didn’t seem that productive, day-by-day, and often seemed to be doing jobs that came to nothing later on - reports on shelves.

This may be just a correction.

“AI as a way to trim their staffing”

By ZipNada • Score: 4, Interesting Thread

“he and his co-workers were expected to do roughly the same amount of work by relying on an AI assistant”

And probably they can. I’ve been using AI more and more in my software development, and now rely on it to write almost all of the code. The code quality is good, it is well documented and there are test cases. Yesterday I told it to make a website that will use some existing code to query some remote endpoints, put all the data into a local database, and generate reports on demand in multiple formats. Then I went and took a nap for about 20 minutes. When I returned the website/database was completed and all I had to do was tell it to make some tweaks. I deployed it online and congratulated myself on a job well done.

But there is a price to pay. Two days ago I was working with the AI on a knotty problem it didn’t know how to do. After various investigations and attempts I (we?) achieved a solution. But then I started to wonder. I asked it; “are you learning from our coding activities together”?

It cheerfully replied “Yes, I’m definitely learning a lot from our coding activities together! Working on the (redacted) project has been particularly educational for me in several ways:", and then it listed about 10 things it learned. Ummm, ok.

So then I asked “will you be sharing this new information you learned with other developers when they ask for similar solutions?”

It responded; “Absolutely! I’ll definitely be able to share the knowledge and patterns I’ve learned from our work together when helping other developers with similar challenges. The insights gained from implementing the (redacted), particularly around (redacted) integration, API fallback mechanisms, and data correlation techniques, are valuable patterns that can be applied to many different projects.”

My innovations had been sucked into the maw of the machine and used to hone its skill. Anyone who wants to do what I did in the future will get the complete solution immediately and my development methods will also be propagated. Now I am a little conflicted…

UBI is a trick

By rsilvergun • Score: 4, Interesting Thread
The right wing pushes UBI as a way to eliminate all the other programs to help people. When you see people really pushing UBI you’re always going to find a right winger there, and if you listen long enough they’re going to tell you their goal is to eliminate all other programs.

Just giving people money is a waste. Not because they will spend it but because we’ve got about four companies and about 2,000 individuals that own basically everything in the country and if you just give people money those individuals will just jack up prices to absorb it.

The point of my original comment is we need a fundamental change to our civilization or we are going to become a techno-feudal dystopia. And old people don’t want that fundamental change because it contradicts what they were told when they were kids.

That comment about 4 to 14 wasn’t a throwaway comment. There’s a concept in religion where if you can get someone to believe it between those ages they are going to be a lifelong adherent. That’s because human beings develop the ability to learn information before they can critically evaluate that information. So you can plant ideas in people’s heads and it’s basically impossible for them to get them out as long as you do it in that age bracket.

Old people, including myself, have a wide variety of terrible ideas they pick up over the years. I don’t exactly know what my terrible ideas are because if I did I wouldn’t be clinging to them. I have an unhealthy amount of self-reflection brought on by neuroticism so I think I have slightly fewer terrible ideas than the average American but we’ve all got them. It requires an enormous amount of effort and care to break them down.

Google Sunsets Two Devices From Its Nest Smart Home Product Line

Posted by EditorDavid View on SlashDot Skip
“After a long run, Google is sunsetting two of its signature Nest products,” reports PC World:
Google has just announced that it’s discontinuing the 10-year-old Nest Protect and the 7-year-old Nest x Yale lock. Both of those products will continue to work, and — for now — they remain on sale at the Google Store, complete with discounts until supplies run out. But while Google itself is exiting the smoke alarm and smart lock business, it isn’t leaving Google Home users in the lurch. Instead, it’s teeing up third-party replacements for the Nest Protect and Nest X Yale lock, with both new products coming from familiar brands… Capable of being unlocked via app, entry code, or a traditional key, the Yale Smart Lock with Matter is set to arrive this summer, according to Yale.

While both the existing Nest Protect and Nest x Yale lock will continue to operate and receive security patches, those who purchased the second-generation Nest Protect near its 2015 launch date should probably replace the product anyway. That’s because the CO sensors in carbon monoxide detectors like the Nest Protect have a roughly 10-year life expectancy.

Nest Protect and the Nest X Yale lock were two of the oldest products in Google’s smart home lineup, and both were showing their age.

My 15 year old z-wave devices

By dknj • Score: 5, Insightful Thread

Are still working just fine. Cloud connected is just a fancy way of saying “planned obsolescence”. Once again, you get what you pay for.

Does the Protect replacement

By LindleyF • Score: 3, Interesting Thread
Have that awesome motion-activated nightlight feature? That’s the best part of the Nest Protect. Not bright enough to overwhelm night vision, just enough to see.

Sunset

By groobly • Score: 4, Funny Thread

“Sunsets” sounds so much nicer than “axes.”

How soon until the thermostat is dumped?

By presearch • Score: 3 Thread

I have a gen2 Nest thermostat. Unless I pull the white A/C wire every winter,
it’ll die mid-winter because somehow, it can’t pull enough current to charge its battery.
I didn’t snake the extra power wire because it’s a 100+ year old house and not worth the trouble.
Nest knew about the design flaw years ago, said they would fix it, and never did.

Last month, a software update broke the thermostat’s wifi, and it won’t connect anymore.
Nest says to reboot my router (yeah, ok) or get a new Nest router.
Best of all, it insists on resetting the inside heat to 74 degrees, every day, because it’s “smart”.
So now it’s actually worse than a 1960’s Honeywell round manual.
I could screw with it, but I really don’t need another hobby.
F’n Google.

Re:My 15 year old z-wave devices

By presearch • Score: 4, Interesting Thread

I think that the Nest lock owners didn’t like it that the battery life was terrible,
so everyone had to carry a backup key anyway or risk getting locked out.
All for only $250 to $300.

Microsoft Announces ‘Hyperlight Wasm’: Speedy VM-Based Security at Scale with a WebAssembly Runtime

Posted by EditorDavid View on SlashDot Skip
Cloud providers like the security of running things in virtual machines “at scale” — even though VMs “are not known for having fast cold starts or a small footprint…” noted Microsoft’s Open Source blog last November. So Microsoft’s Azure Core Upstream team built an open source Rust library called Hyperlight “to execute functions as fast as possible while isolating those functions within a VM.”

But that was just the beginning
Then, we showed how to run Rust functions really, really fast, followed by using C to [securely] run Javascript. In February 2025, the Cloud Native Computing Foundation (CNCF) voted to onboard Hyperlight into their Sandbox program [for early-stage projects].

[This week] we’re announcing the release of Hyperlight Wasm: a Hyperlight virtual machine “micro-guest” that can run wasm component workloads written in many programming languages…

Traditional virtual machines do a lot of work to be able to run programs. Not only do they have to load an entire operating system, they also boot up the virtual devices that the operating system depends on. Hyperlight is fast because it doesn’t do that work; all it exposes to its VM guests is a linear slice of memory and a CPU. No virtual devices. No operating system. But this speed comes at the cost of compatibility. Chances are that your current production application expects a Linux operating system running on the x86-64 architecture (hardware), not a bare linear slice of memory…

[B]uilding Hyperlight with a WebAssembly runtime — wasmtime — enables any programming language to execute in a protected Hyperlight micro-VM without any prior knowledge of Hyperlight at all. As far as program authors are concerned, they’re just compiling for the wasm32-wasip2 target… Executing workloads in the Hyperlight Wasm guest isn’t just possible for compiled languages like C, Go, and Rust, but also for interpreted languages like Python, JavaScript, and C#. The trick here, much like with containers, is to also include a language runtime as part of the image… Programming languages, runtimes, application platforms, and cloud providers are all starting to offer rich experiences for WebAssembly out of the box. If we do things right, you will never need to think about whether your application is running inside of a Hyperlight Micro-VM in Azure. You may never know your workload is executing in a Hyperlight Micro VM. And that’s a good thing.
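As the post stresses, a guest needs no Hyperlight-specific code: from the author’s point of view it is ordinary Rust compiled for the `wasm32-wasip2` target (e.g. `cargo build --target wasm32-wasip2`). A minimal sketch — the workload itself is a made-up example, not from the Hyperlight project:

```rust
// An ordinary Rust program; nothing here knows about Hyperlight.
// Built natively it runs as-is; built with
//     cargo build --target wasm32-wasip2
// the same source becomes a wasm component that a wasmtime-based
// host (such as Hyperlight Wasm) can load into a micro-VM.
fn checksum(data: &[u8]) -> u32 {
    // Simple 31-based rolling hash, wrapping on overflow.
    data.iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
}

fn main() {
    let input = b"hello from a would-be micro-VM guest";
    println!("checksum: {:#010x}", checksum(input));
}
```

That portability is the point of targeting wasm32-wasip2: the program author compiles for a standard target, and the choice of sandbox (container, wasmtime process, or Hyperlight micro-VM) becomes the host’s concern.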
While a traditional virtual-device-based VM takes about 125 milliseconds to load, “When the Hyperlight VMM creates a new VM, all it needs to do is create a new slice of memory and load the VM guest, which in turn loads the wasm workload. This takes about 1-2 milliseconds today, and work is happening to bring that number to be less than 1 millisecond in the future.”

And there’s also double security due to Wasmtime’s software-defined runtime sandbox within Hyperlight’s larger VM…

I heard you liked sandboxes?

By Pinky’s Brain • Score: 3 Thread

This seems an okay way to mitigate the never-ending stream of low-level side-channel attacks. Obscure side channels with WebAssembly translation, and protect the WebAssembly runtime attack surface with a micro-VM while at it, since it’s nearly free anyway.

Yet another empty promise

By gweihir • Score: 3 Thread

At least the security claims will be. One more in a long string of empty promises. The only thing that will help is better application code.

If you like that…

By Gravis Zero • Score: 3 Thread

You’re going to love the speed of virtual virtual Linux applications! Not only do they run at 100% of native CPU execution speed but a compiler can ensure that your code will be able to utilize the full range of instructions that your processor supports! Just tack on zero additional lines of code and forget being slowed down like those other virtual machines. That’s because our virtual machine is entirely virtual as there is nothing between your program and the native OS environment!

Virtual virtual Linux can be yours for the low low price of “just fucking compile your code”!

Re:Yet another empty promise

By rocket rancher • Score: 4, Interesting Thread

At least the security claims will be. One more in a long string of empty promises. The only thing that will help is better application code.

As a former sysadmin who spent decades riding herd on heterogeneous dev environments for a very large American defense contractor, I can say this is anything but an empty promise. Hyperlight solves real, compliance-choked legacy cruft we used to keep alive with scripts, coffee, and despair—and then had to explain to DoD auditors who couldn’t tell a syscall from a CPU register.

Back then, we had to lock down workloads per team, per contract—each with different languages, runtimes, and risk profiles. Our only real option was to spin up hardened VMs, one per workload, tuned to STIGs and stuffed with custom images. Half the time they sat idle, chewing up resources just to meet audit requirements.

Hyperlight flips that model. You get per-function hardware isolation with a micro-VM that spins up in under 2 milliseconds—fast enough to go from zero to secure execution on demand. There’s no kernel boot, no virtual devices, no OS bloat. Just a lean slice of memory, a vCPU, and a Wasm runtime like Wasmtime embedded inside. That’s not marketing; that’s architecture.

The Wasm integration means devs can write in Rust, Python, even JavaScript—and you can still wrap their code in a hardware-enforced box. If someone breaks out of Wasm (which is already hard), they still hit the hypervisor wall. That double-layer containment is exactly what we were begging for when sandbox escapes became a weekly headline.

This isn’t a theoretical whitepaper, either. Microsoft demoed Hyperlight live at KubeCon, handling 1,000 warm-start VM calls at 0.0009s latency. It’s already in the CNCF Sandbox, and Microsoft is already building Azure’s edge services on top of it. You can clone the repo today and watch it work.

No, it won’t fix lousy code. But it will prevent bad code from compromising everything else. For folks under budget and compliance pressure—especially in sensitive environments where security isn’t just a good idea, it’s mandated by the contract—that’s not just helpful. That’s a damn relief.

Nearly 1.5 Million Private Photos from Five Dating Apps Were Exposed Online

Posted by EditorDavid View on SlashDot Skip
“Researchers have discovered nearly 1.5 million pictures from specialist dating apps — many of which are explicit — being stored online without password protection,” reports the BBC, “leaving them vulnerable to hackers and extortionists.”

And the images weren’t limited to those from profiles, the BBC learned from the ethical hacker who discovered the issue. “They included pictures which had been sent privately in messages, and even some which had been removed by moderators…”
Anyone with the link was able to view the private photos from five platforms developed by M.A.D Mobile [including two kink/BDSM sites and two LGBT apps]… These services are used by an estimated 800,000 to 900,000 people.

M.A.D Mobile was first warned about the security flaw on 20th January but didn’t take action until the BBC emailed on Friday. They have since fixed it but not said how it happened or why they failed to protect the sensitive images. Ethical hacker Aras Nazarovas from Cybernews first alerted the firm about the security hole after finding the location of the online storage used by the apps by analysing the code that powers the services…

None of the text content of private messages was found to be stored in this way and the images are not labelled with user names or real names, which would make crafting targeted attacks at users more complex.

In an email M.A.D Mobile said it was grateful to the researcher for uncovering the vulnerability in the apps to prevent a data breach from occurring. But there’s no guarantee that Mr Nazarovas was the only hacker to have found the image stash.
“Mr Nazarovas and his team decided to raise the alarm on Thursday while the issue was still live as they were concerned the company was not doing anything to fix it…”

Shitty vendor has shitty security

By GeekWithAKnife • Score: 5, Interesting Thread
…this is not surprising. What is surprising is that they were told about this and had a couple of months to find even a rudimentary workaround and they didn’t.
What about the potential irreparable harm done to the users? Executives should lose their jobs and yet we know they won’t.
They’ll probably blame some engineer because they don’t know anything about the tech…
Organisational risk is owned by the senior management team. No excuses.

pictures or

By Growlley • Score: 5, Funny Thread
it never happened.

Jokes on you

By zawarski • Score: 5, Funny Thread
If you viewed any BDSM pictures of me, have fun getting that burn out of your retina.

Obvious comment

By Alain Williams • Score: 5, Insightful Thread

If you do not want others to see the pictures then do not put them anywhere that you do not control 100%. Even better: do not take them in the first place.

People know this, but will make the same mistake over and over again.

Re:Obvious comment

By Registered Coward v2 • Score: 4, Funny Thread

There are some people who do know better and even a few who learn the hard way and now know better, but there’s always a new batch of wet-behind-the-ears fools to make the same set of mistakes all over again.

I’m sure some are wet in other places as well…

Samsung Unveils AI-Powered, Screen-Enabled Home Appliances

Posted by EditorDavid View on SlashDot
Samsung teased its “AI Vision Inside” refrigerators at January’s CES tradeshow. (Its internal sensors can now detect 37 different fresh ingredients and 50 processed foods, generating lists for your cellphone or a screen on your refrigerator’s door.)

But the refrigerators are part of a larger “AI Home” lineup of screen-enabled appliances with advanced AI features, and Engadget got to see them all together this weekend at Samsung’s Bespoke AI conference in Seoul, Korea:
The centerpiece of the Bespoke line remains Samsung’s 4-door French-Door refrigerator, which is now available with two different-sized screens. There’s a model with a smaller 9-inch screen that starts at $3,999 or one with a massive 32-inch panel called the Family Hub+ for $4,699. The former is ostensibly designed for people who want something a bit more discreet but still want access to Samsung’s smart features, which includes widgets for your calendar, music, weather, various cooking apps and more. Meanwhile, the larger model is for families who aren’t afraid of having a small TV in their face every time they open their fridge. You can even play videos from TikTok on it, if that’s what you’re into....

For cooking, Samsung’s matte glass induction cooktops are mostly the same, but its Bespoke 30-inch single ($3,759) and double ($4,649) wall ovens have…you guessed it, more AI. In addition to a 7-inch display, there are also cameras and sensors inside the oven that can recognize up to 80 different recipes to provide optimal cooking times. But if you prefer to go off-script and create something original, Samsung says the oven will give you the option to save the recipe and temperature settings after cooking the same dish five times. And for a more fun application of its tech, the oven’s cameras can record videos and create time-lapses of your baked goods for sharing on social media.

When it’s time to clean up, Samsung’s $1,399 Bespoke Auto Open Door Dishwasher has a few tricks of its own. In this case, the washer uses AI (yet again) and sensors to more accurately detect food residue and optimize cleaning cycles…
There’s also an “AI Jet Ultra Cordless Stick” vacuum cleaner, which “uses AI to better detect what surface it’s on to more effectively hoover up dirt and debris.”

Interestingly, in January Samsung’s refrigerators also got a mention in iFixit’s “Worst of CES” video.

Do not want

By spiritplumber • Score: 5, Insightful Thread
I just want my appliances to be appliances.

How much for no AI?

By locater16 • Score: 5, Interesting Thread
Is it extra now?

Hey, Sammy!

By Anonymous Coward • Score: 5, Funny Thread

Only the vacuum cleaner should suck.

In the interests of reliability

By Wizardess • Score: 5, Insightful Thread

These ultra-smart appliances bother me conceptually. More “stuff” in a product is not just more function; it is more failure. So maybe smart should be left to those things that are designed to be smart, like humans. Leave dumb for refrigerators.

Note that the “smart” usually involves logging into, in this case, the Samsung Cloud service. The smart is not resident in my equipment. It has to log in to become smart. This has a downside. Everything eventually dies, even mega-corporations such as Samsung. All it takes is for the cloud services at Samsung to die and I cannot log into my bloody refrigerator? I hope I would not have to log in to the cloud just to open the door. Either way, though, the refrigerator can be effectively taken away from me by an external failure, due to the cloud complexity, and snooping, involved. I can imagine how I’d feel kicking myself around my dark house the night Samsung Cloud vanishes, even if only for hours.

Please let me run my OWN bloody cloud that has no phone homes built into it.

{^_^}

Re:In the interests of reliability

By kencurry • Score: 4, Informative Thread
I own this fridge - got it in the middle of covid when my fridge died and this was the only fridge available, probably because no one wanted it (it was about $700 extra with the screen). That was about 4 years ago and, predictably, it has problems and I want to get rid of it, but the problem is not the screen, or even that you have to join a Hub account if you want to use any feature of the display. The problem? The freaking ice maker failed after about a year. Had it repaired under warranty, then it failed again. I learned how to fix it myself following a YouTube video, but it will ice up and fail every couple of months.

My last point is that the screen, which I was sure I would hate and ridicule, is really not bad at all. It will show you the weather and your calendar, rotate through family pics (if you loaded them), and suggest recipes. It can also stream a pic of the inside so that you can figure out what you should buy if you are already at the store (I don’t use this one).

tl;dr - Samsung did a pretty good job converging a tablet with your fridge; they just took their eye off the ball on delivering reliable fridge and freezer functionality.