Alterslash

the unofficial Slashdot digest
 

Contents

  1. Air Force Pushed Out UFO Investigator
  2. WeatherBug Data Says October 8 Is the Real Perfect Date
  3. Stanford Report Highlights Growing Disconnect Between AI Insiders and Everyone Else
  4. Apple AI Glasses Will Rival Meta’s With Several Styles, Oval Cameras
  5. Hollywood Stars Sign Open Letter Protesting Paramount-Warner Bros Merger
  6. FBI Raids Texas Home of Man Suspected of Firebombing Sam Altman’s SF Mansion
  7. Meta Is Warned That Facial Recognition Glasses Will Arm Sexual Predators
  8. Linux 7.0 Released
  9. Booking.com Hit By Data Breach
  10. Mark Zuckerberg Is Reportedly Building an AI Clone To Replace Him In Meetings
  11. Maine Set To Become First State With Data Center Ban
  12. Californians Sue Over AI Tool That Records Doctor Visits
  13. Will Some Programmers Become ‘AI Babysitters’?
  14. Anthropic Asks Christian Leaders for Help Steering Claude’s Spiritual Development
  15. Sam Altman’s Home Targeted a Second Time, Two Suspects Arrested

Alterslash picks up to five of the best comments from each of the day’s Slashdot stories and presents them on a single page for easy reading.

Air Force Pushed Out UFO Investigator

Posted by BeauHD
J. Allen Hynek started as an Air Force consultant brought in to help explain away early UFO reports, but over time he grew frustrated with what he saw as the government’s effort to minimize unexplained cases rather than seriously investigate them. Longtime Slashdot reader schwit1 shares an article from Popular Mechanics, in collaboration with Biography.com, that argues Hynek’s shift from skeptic to advocate helped shape modern ufology, and that the Air Force’s attempts to control the narrative may have deepened the public distrust and conspiracy thinking that followed. From the report:
Do you think the U.S. government is hiding, and possibly reverse-engineering, extraterrestrial technology? Think again. Or better yet, don’t think about it at all. Nothing to see here. That’s the underlying message of a report released in 2024 by the Department of Defense. The 63-page "Report on the Historical Record of U.S. Government Involvement with Unidentified Anomalous Phenomena (UAP)" concludes that the DoD’s All-Domain Anomaly Resolution Office (AARO) “found no evidence that any [U.S. Government] investigation, academic-sponsored research, or official review panel has confirmed that any sighting of a UAP represented extraterrestrial technology.”

The AARO, as The Guardian summarizes, is “a government office established in 2022 to detect and, as necessary, mitigate threats including ‘anomalous, unidentified space, airborne, submerged and transmedium objects.’" This report came on the heels of, and in contradiction to, what was arguably the most high-profile hearing on UAPs — formerly known as unidentified flying objects, or UFOs — in decades: the August 2023 testimony of “whistleblower” Dave Grusch.

[…] The 2024 AARO report stated that during the time Hynek was working with Project Blue Book [the U.S. Air Force’s best-known UFO investigation program], “about 75 percent of Americans trusted the [US government] ‘to do the right thing almost always or most of the time.’" But, the report noted, since 2007, that number has never risen above 30 percent. “This lack of trust probably has contributed to the belief held by some subset of the U.S. population that the USG has not been truthful regarding knowledge of extraterrestrial craft.”

Ultimately, the Air Force’s efforts to stifle Hynek — pressuring him to offer the public standard responses to questions he wasn’t even allowed to ask — appear to have backfired. Ironically, the Air Force’s attempts to quiet suspicions only fueled them, leading to more conspiracy theories and distrust. People came to believe that the government was hiding the truth, contrary to Hynek’s actual revelation: that, in reality, the people at the top may not care much about finding the answers after all.

The classic problem.

By jd • Score: 3 Thread

Governments and officials like to control power, information, and behaviour.

In practice, you can tightly control at most one of those. Try to dominate all three, and the other two usually decay into something dysfunctional and ultimately malignant. That is the price of over-centralising everything at once.

If the government had confined itself to power, and the Air Force to discipline and airspace, while allowing researchers to access and assess the evidence properly, we would likely have far fewer delusions and far less paranoia today. Yes, that would have required the Air Force to maintain secrecy through actual competence rather than narrative management, but that discipline would probably have done them a great deal of good. They might even be better able to distinguish fact from fiction now.

Failure to understand != proof of pet theories

By phayes • Score: 4, Insightful Thread

JAH thinks that the % of people who don’t trust the USG is because “The USG refused to take my theories seriously when I edged my way into the deep end of the pool”. That says more than enough about how seriously anyone not wearing a tin foil hat should be taking anything he says. Note to JAH: It’s not always about you.

Well what would you do

By DarkOx • Score: 3 Thread

Imagine you are a manager or a CO and you have an employee who keeps spending an enormous amount of time working on something you know is pointless.

You’d tell them to find a more productive use of their time, and if they can’t you might tell them to find other employment.

WeatherBug Data Says October 8 Is the Real Perfect Date

Posted by BeauHD
BrianFagioli shares a report from NERDS.xyz:
For years pop culture has treated April 25 as the “perfect date,” thanks to the famous Miss Congeniality line about needing only a light jacket. But new analysis from WeatherBug suggests that idea does not actually hold up when you look at the numbers. After reviewing U.S. weather data from 2018 through today, the company concluded that October 8 delivers the most reliable combination of comfortable temperatures and low rainfall nationwide. According to the analysis, the average conditions on that day land around 66F with just 0.0573 inches of precipitation.

The study used population-weighted weather data drawn from roughly 20 million daily WeatherBug users across the United States. When the company compared all days of the year, April 25 ranked only 80th, averaging about 60F and roughly 0.1297 inches of rain. The broader dataset also shows July dominating the hottest days of the year while January owns the coldest, with January 20 averaging just 33F nationally. While no single date guarantees perfect weather everywhere in a country as large as the U.S., the numbers suggest early October may quietly offer one of the most reliable windows for comfortable outdoor conditions.
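A population-weighted average of this kind is straightforward to sketch. The regions and numbers below are purely illustrative, not WeatherBug’s actual data or weighting scheme:

```python
# Hypothetical sketch of a population-weighted daily average.
# Regions and figures are made up for illustration only.
regions = [
    # (population, avg_temp_F, precip_inches) for one calendar day
    (8_000_000, 68.0, 0.05),
    (4_000_000, 62.0, 0.07),
    (1_000_000, 55.0, 0.10),
]

total_pop = sum(pop for pop, _, _ in regions)
avg_temp = sum(pop * t for pop, t, _ in regions) / total_pop
avg_precip = sum(pop * p for pop, _, p in regions) / total_pop

print(round(avg_temp, 1), round(avg_precip, 4))
```

Weighting by population rather than by land area means a mild day in a dense metro counts for more than a storm over empty rangeland, which is presumably how a single date can be called "best" nationwide.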

Really?

By msauve • Score: 3 Thread
>weather data from 2018

Less than 10 years of data? Why did this make it to /.?

Stanford Report Highlights Growing Disconnect Between AI Insiders and Everyone Else

Posted by BeauHD
An anonymous reader quotes a report from TechCrunch:
The opinions of AI experts and the general public are increasingly diverging on the technology, according to Stanford University’s annual report on the AI industry, which was released Monday. In particular, the report noted a growing trend of anxiety around AI and, in the U.S., concerns about how the technology will impact key societal areas, such as jobs, medical care, and the economy. […] Stanford’s report provides more insight into where all this negativity is coming from, as it summarizes data around public sentiment of AI across various sources. For instance, it pointed to a report from Pew Research published last month, which noted that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years.

Expert opinion and public sentiment also greatly diverged in particular areas where AI could have a societal impact. Indeed, 84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same. Plus, a majority (73%) of experts felt positive about AI’s impact on how people do their jobs, compared with just 23% of the public. And 69% of experts felt that AI would have a positive impact on the economy. Given the supposed AI-fueled layoffs and disruptions to the workplace, it’s not surprising that only 21% of the public felt similarly. Other data from Pew Research, cited by the report, noted that AI experts were less pessimistic on AI’s impact on the job market, while nearly two-thirds of Americans (or 64%) said they think AI will lead to fewer jobs over the next 20 years.

The U.S. also reported the lowest trust in its government to regulate AI responsibly, compared with other nations, at 31%. Singapore ranked highest at 81%, per data pulled from Ipsos found in Stanford’s report. Another source looked at regulation concerns on a state-by-state level and concluded that, nationwide, 41% of respondents said federal AI regulation will not go far enough, while only 27% said it would go “too far.” Despite the fears and concerns, AI did get one accolade: Globally, the share of people who feel AI products and services offer more benefits than drawbacks rose slightly from 55% in 2024 to 59% in 2025. But at the same time, the share of respondents who said that AI makes them “nervous” grew from 50% to 52% over the same period, per data cited by the report’s authors.

The problem is obvious

By MpVpRb • Score: 5, Insightful Thread

All the general public sees is slop, scams and threats of job loss.
Maybe all of those CEOs, hypemongers and pundits shouldn’t have publicly said that AI will replace all jobs…over and over and over.

Re:People are easily swayed

By Cyberpunk Reality • Score: 5, Insightful Thread

…a lot of people simultaneously believe that AI doesn’t work and that it’s replacing human jobs.

These are not incompatible positions if you believe that management is foolish, malicious, or both.

Expert bias

By misnohmer • Score: 5, Insightful Thread
Is it really surprising that experts in some technology are proponents of said technology and see more positive uses for it? I am not trying to debate here whether AI is good or bad, simply stating that experts in any emerging technology will typically have a more positive outlook on its uses.

clear conflict of interest

By Tom • Score: 5, Insightful Thread

So called “AI insiders” are almost exclusively people for whom AI is either an active research subject or a business opportunity. There is almost no money to be made from being sceptical about AI. Of course these people feel positive about AI.

The common sense opinion here is more reliable, even if it is less informed.

Re:People are easily swayed

By Tom • Score: 5, Insightful Thread

There is, however, another market that moves faster than that one: The CEO market.

Any CEO who says “we don’t do AI here, that’s all bullshit” will find himself on the job market pretty fast in the current mood. So, everyone does AI. Not because it works as a business decision, but because it works as a job security decision.

see also: “Nobody ever got fired for buying IBM”

Apple AI Glasses Will Rival Meta’s With Several Styles, Oval Cameras

Posted by BeauHD
Bloomberg’s Mark Gurman reports that Apple is developing display-free AI smart glasses aimed at rivaling Meta’s Ray-Bans, with multiple frame styles, a distinctive oval camera design, and tight iPhone integration. “The idea is to unveil the product at the end of 2026 or early the following year, with the actual release coming in 2027,” writes Gurman. From the report:
Like Meta’s offering, Apple’s glasses will be designed to handle everyday uses: capturing photos and videos, syncing with a smartphone for editing and sharing, handling phone calls, listening to notifications, playing music, and enabling hands-free interaction via a voice assistant. In Apple’s case, that assistant will be a significantly upgraded Siri coming in iOS 27. The glasses are part of a broader, three-pronged AI wearables strategy that also includes new AirPods and a camera-equipped pendant. Each device is designed to leverage computer vision to interpret the user’s surroundings and feed contextual awareness into Siri and Apple Intelligence. That will enable features like improved turn-by-turn map directions and visual reminders.

When Apple enters a new product category, it typically offers clear advantages over what’s currently available. We saw this with the original iPod, iPhone, iPad and Apple Watch — and, even though it was a flop, the Vision Pro. That approach won’t be as obvious with Apple’s upcoming foldable iPhone, but we should see it on full display with the glasses. According to employees working on the project, Apple’s strategy is to outdo competitors by tightly integrating the glasses with the iPhone and offering a higher-end build. While Meta relies heavily on partner EssilorLuxottica SA for frames, Apple is unsurprisingly planning to go it alone in terms of design. That also should set it apart from Alphabet Inc.‘s Google and Samsung Electronics Co., which are leaning on Warby Parker.

Apple’s design team has whipped up at least four different styles and plans to launch some or all of them, I’m told, as well as many color options. The latest units are made from a high-end material called acetate, which is known to be more durable and luxurious than the standard plastic used by many brands. Here are the designs in testing:
- A large rectangular frame, reminiscent of Ray-Ban Wayfarers
- A slimmer rectangular design, similar to the glasses worn by Apple Chief Executive Officer Tim Cook
- Larger oval or circular frames
- A smaller, more refined oval or circular option

Acetate

By backslashdot • Score: 5, Funny Thread

Uh, plastic was literally invented to replace cellulose acetate, which has issues like UV degradation, embrittlement, and scratch susceptibility. You’re going to wear these glasses in the sunlight, right? Make sure to put sunblock on your glasses so it doesn’t turn into vinegar.

What are SmartGlasses for?

By Marc_Hawke • Score: 4, Interesting Thread

It says ‘display-less’ so it’s not AR…which I thought was the whole point.
From what I’ve heard, people describe the uses as being the same as airpods with the addition of taking video/pictures.

Why use the ‘glasses’ form-factor when you’re not using your eyeballs for any of the interaction?

Is that it? Is it just a convenient place to hang a camera? Also, are people expected to get prescription lenses for these things…or are normally sighted people now walking about with glasses on… just because?

I keep wondering if it’s the ‘stealth’ factor. Are they ‘spy’ glasses and made to look completely normal? Did someone decide that Google Glass was ‘too obvious’ and people would know you’re walking around with a camera on your head? So they have to make a camera that (most) people wouldn’t recognize? That seems a bit illegal. ;)

(one guy said, “we’re not allowed to have airpods at work, so I use these instead.” (that’s totally going to backfire, and your boss won’t be amused.))

reasonable expectation of privacy

By davecotter • Score: 3 Thread

When I go out into public, I, personally, feel that I have no reasonable expectation of privacy.

However, I do believe that other people, and maybe *most* other people, absolutely *do* feel that they have a reasonable expectation of privacy, excepting locations that have security cameras.

So, while I don’t care if others wear their AppleGlass or GoogleGlass or MetaGlass or whatever, and have their AIs run facial recognition on me, and feed my stats into the wearer’s airpods as they approach me, I understand that others feel that this is a privacy violation.

I’m not sure this is going to go over very well. What do you think?

Hollywood Stars Sign Open Letter Protesting Paramount-Warner Bros Merger

Posted by BeauHD
More than 1,000 Hollywood figures, including major actors, writers, and directors, signed an open letter opposing Paramount Skydance’s proposed takeover of Warner Bros. Discovery, arguing it would hurt an industry “already under severe strain.” The deal is still under regulatory scrutiny in both the U.S. and U.K., while Paramount says the merger would strengthen competition and expand opportunities for creators. NBC News reports:
“This transaction would further consolidate an already concentrated media landscape, reducing competition at a moment when our industries — and the audiences we serve — can least afford it,” the signatories wrote in the letter, published early Monday on a website called Block the Merger. “The result will be fewer opportunities for creators, fewer jobs across the production ecosystem, higher costs, and less choice for audiences in the United States and around the world. Alarmingly, this merger would reduce the number of major U.S. film studios to just four,” the signatories added.

[T]he open letter illustrates the deep resistance to the deal among many members of Hollywood’s creative community. The list of signatories includes A-list stars (Glenn Close, Ben Stiller), celebrated filmmakers (Yorgos Lanthimos, Denis Villeneuve) and acclaimed writers (“The Sopranos” creator David Chase). “Media consolidation has accelerated the disappearance of the mid-budget film, the erosion of independent distribution, the collapse of the international sales market, the elimination of meaningful profit participation, and the weakening of screen credit integrity,” the signatories wrote. “Together, these factors threaten the sustainability of the entire creative community,” they added.

[…] Monday’s open letter was spearheaded by a group of advocacy organizations — including the Committee for the First Amendment, a free speech group led by Fonda, who warned that the merger “would be one of the most destructive threats to free speech and creative expression in our history.” In the letter, first reported by The New York Times, the signatories expressed support for California Attorney General Rob Bonta, who has said the merger is “not a done deal.” “These two Hollywood titans have not cleared regulatory scrutiny — the California Department of Justice has an open investigation, and we intend to be vigorous in our review,” Bonta said in a Feb. 26 post on X.
Paramount Skydance said that they “hear and understand the concerns” and are committed to “protecting and expanding creativity.” The studio also reiterated its commitment to releasing a minimum of 30 “high-quality feature films annually with full theatrical releases” and “preserving iconic brands with independent creative leadership” to make sure “creators have more avenues for their work, not fewer.”

Put your $ where your mouth is, kids

By argStyopa • Score: 4, Insightful Thread

Who cares what the self-important talent thinks?
If they want to do something about it, take their vast wealth and instead of buying a 3rd home in St Tropez, set up a production co-op.
It’s been done before.

“United Artists is an American film production and distribution company owned by Amazon MGM Studios. In its original operating period, it was founded on February 5, 1919 by Charlie Chaplin, D. W. Griffith, Mary Pickford and Douglas Fairbanks as a venture premised on allowing actors to control their own financial and artistic interests rather than being dependent upon commercial studios." (wiki)

Put up, or shut up.

Re:I honestly might be missing the point

By Zero__Kelvin • Score: 4, Insightful Thread
That’s the great thing about technology. Everybody is an expert now. Don’t like your OS? Just write your own! Not satisfied with your car? Just design your own! Your computer isn’t fancy enough? You live in a free country! Just whip one up yourself! I’m sure the quality won’t suffer a bit … After all, how hard can it be to do what Glenn Close does? It’s just her spouting a bunch of words!

Re:first amendment red herring

By ZombieCatInABox • Score: 4, Insightful Thread

The only reason people like you are still alive today to be able to display their ignorance and cluelessness to the entire Internel is because people like me had the good sense to listen to people way smarter and more knowledgeable than you or me and follow their advice.

You’re welcome.

Re:first amendment red herring

By ArchieBunker • Score: 4, Funny Thread

You sound like a snowflake.

Media concentration ALWAYS sucks

By hyades1 • Score: 3 Thread

All you have to do is look at what happens when an entertainment giant like Disney gets hold of a franchise. They run it into the ground. Some franchises ruined by corporate greed (not all Disney): Star Trek, Star Wars, Indiana Jones, Mulan, Pirates of the Caribbean, the MCU…probably a lot more if I googled around a bit.

Mega-corporations want maximum profits, and they don’t care how much damage they do getting them. And in current business terms, “maximum profits” means wring the asset dry, discard it and move on to the next acquisition. The idea of steady, long-term profitability seems to have died.

Less competition means less innovation, and when one CEO only has to call three other CEOs to figure out how they’re going to divide up the pie, there’s virtually none.

FBI Raids Texas Home of Man Suspected of Firebombing Sam Altman’s SF Mansion

Posted by BeauHD
The FBI searched the Texas home of a 20-year-old man accused of throwing a Molotov cocktail at Sam Altman’s San Francisco residence. Authorities say the suspect also made threats at OpenAI’s headquarters, and reports indicate he had written extensively about fears over AI and opposition to AI executives.

The suspect reportedly authored a Substack blog and was a member of the Discord server PauseAI, an activist group focused on banning the development of the most powerful AI models to protect the public. In one post, they wrote: “These machines have already shown themselves to be unaligned with the interest of the people creating them. Models have often been found lying, cheating on tasks, and blackmailing their own creators whenever convenient; let alone the broader question of aligning them to whatever general ‘human interest’ may be.” The Houston Chronicle reports:
The search happened hours before the Justice Department charged 20-year-old Daniel Moreno-Gama with possession of an unregistered firearm and damage and destruction of property by means of explosives. An FBI spokesperson on Monday morning confirmed agents were executing a search warrant in Spring, but provided no other information.

Around the same time, FOX News reported the search was being conducted at the home of Daniel Moreno-Gama, 20, who last week was arrested by San Francisco police on suspicion of attempted murder, making criminal threats and possession of a destructive device. The charges were first reported by the Associated Press. When Moreno-Gama was arrested Friday, he was carrying a document that “identified views opposed to Artificial Intelligence (AI) and the executives of various AI companies,” the Associated Press reported. Moreno-Gama has no criminal history in Harris or Montgomery counties, according to public records. […] Agents had left the cul-de-sac by 1 p.m. It was unclear if they removed any items from the house.
Another incident occurred outside Sam Altman’s residence early Sunday morning. “Early Sunday morning, a car stopped and appears to have fired a gun at the Russian Hill home of OpenAI’s CEO,” reports The San Francisco Standard, citing reports from the local police department. Two suspects were arrested and booked for negligent discharge.

UPDATE: The suspect has been charged with attempted murder.

requisite Onion headline

By SirSlud • Score: 5, Funny Thread

Man Who Threw Molotov Cocktail At Sam Altman’s Home Claims He Was Following ChatGPT Recipe For Risotto

Re:a treasonous offense

By Valgrus Thunderaxe • Score: 4, Insightful Thread
It’s free speech in isolation, but he was caught in the act of firebombing Sam Altman’s house which makes it evidence, also.

Re:Will he stand trial?

By martin-boundary • Score: 4, Insightful Thread
He can avoid trial if you collect 1M dollars in a GoFundMe account, which can be funneled to the orange one.

Meta Is Warned That Facial Recognition Glasses Will Arm Sexual Predators

Posted by BeauHD
An anonymous reader quotes a report from Wired:
More than 70 civil liberties, domestic violence, reproductive rights, LGBTQ+, labor, and immigrant advocacy organizations are demanding that Meta abandon plans to deploy face recognition on its Ray-Ban and Oakley smart glasses, warning that the feature — reportedly known inside the company as “Name Tag” — would hand stalkers, abusers, and federal agents the ability to silently identify strangers in public. The coalition, which includes the ACLU, the Electronic Privacy Information Center, Fight for the Future, Access Now, and the Leadership Conference on Civil and Human Rights, is demanding Meta kill the feature before launch, after internal documents surfaced showing the company hoped to use the current “dynamic political environment” as cover for the rollout, betting that civil society groups would have their resources “focused on other concerns.”

Name Tag, as revealed in February by The New York Times, would work through the artificial intelligence assistant built into Meta’s smart glasses, allowing wearers to pull up information about people in their field of view. Engineers have reportedly been weighing two versions of the feature: one that would only identify people the wearer is already connected to on a Meta platform, and a broader version that could recognize anyone with a public account on a Meta service such as Instagram. The coalition wants Meta to scrap the feature entirely. In a letter to CEO Mark Zuckerberg on Monday, it argues that face recognition in inconspicuous consumer eyewear “cannot be resolved through product design changes, opt-out mechanisms, or incremental safeguards.” Bystanders in public have no meaningful way to consent to being identified, it says.

Meta is also urged to disclose any known instances of its wearables being used in stalking, harassment, or domestic violence cases; disclose any past or ongoing discussions with federal law enforcement agencies, including Immigration and Customs Enforcement and Customs and Border Protection, about the use of Meta wearables or data from them; and commit to consulting civil society and independent privacy experts before integrating biometric identification into any consumer device. “People should be able to move through their daily lives without fear that stalkers, scammers, abusers, federal agents, and activists across the political spectrum are silently and invisibly verifying their identities and potentially matching their names to a wealth of readily available data about their habits, hobbies, relationships, health, and behaviors,” write the groups, which also include Common Cause, Jane Doe Inc., UltraViolet, the National Organization for Women, the New York State Coalition Against Domestic Violence, the Library Freedom Project, and Old Dykes Against Billionaire Tech Bros, among others.

Re: we can’t prevent identification in public alre

By SeaFox • Score: 4, Insightful Thread

Cameras that are available now to normal citizens don’t get VIP access to a large social media network and automatically try to identify everyone?

“Old Dykes Against Billionaire Tech Bros”

By RobinH • Score: 5, Funny Thread
I salute the naming committee from that group. Well done.

Existing cameras are not actively identifying you

By drnb • Score: 4, Informative Thread

> Bystanders in public have no meaningful way to consent to being identified

We already can’t do that for any existing camera. Why are these any different?

Because nearly all the existing cameras are just recording events in case something bad happens. Outside of a few edge cases, like Las Vegas casinos, these cameras are NOT trying to actively identify you.

Re: we can’t prevent identification in public alre

By drnb • Score: 5, Informative Thread

Covert and able to be owned by any asshole and used as a marketing point.

How is that different from any camera bolted to the side of a building? Or any dashcam?

Hint: It’s not.

Guess again, they are not actively identifying individuals in near-real time, or identifying them at all. Just passively recording things in case something happens so that after-the-incident law enforcement can take the video and identify people.

Re: we can’t prevent identification in public alre

By postbigbang • Score: 5, Interesting Thread

That doesn’t pass the smell test.

You can elect to be on Facebook. FACEbook.

Or you can elect to never go there.

On the street, you must travel, or you are jailed in your location, enslaving you. Actual freedom means walking down the street, going into a store, driving, biking, whatever.

Liberty dictates you have freedom of movement and association. It doesn’t mean you can look up any random individual and drill through who/what they are. In public and private places, the Fifth Amendment applies, also unreasonable search and seizure, no matter who does it, government or not.

The Meta glasses are an onerous extension of cloud-based profile lookups and matching. Identity and privacy are dignity. Meta glasses remove that privacy, and any remaining shred of dignity.

Linux 7.0 Released

Posted by BeauHD
“The new Linux kernel was released and it’s kind of a big deal,” writes longtime Slashdot reader rexx mainframe. “Here is what you can expect.” Linuxiac reports:
A key update in Linux 7.0 is the removal of the experimental label from Rust support. That (of course) does not make Rust a dominant language in kernel development, but it is still an important step in its gradual integration into the project. Another notable security-related change is the addition of ML-DSA post-quantum signatures for kernel module authentication, while support for SHA-1-based module-signing schemes has been removed.

The kernel now includes BPF-based filtering for io_uring operations, providing administrators with improved control in restricted environments. Additionally, BTF type lookups are now faster due to binary search. At the same time, this release continues ongoing cleanup in the kernel’s lower layers. The removal of linuxrc initrd code advances the transition to initramfs as the sole early-userspace boot mechanism.

Linux 7.0 also introduces NULLFS, an immutable and empty root filesystem designed for systems that mount the real root later. Plus, preemption handling is now simpler on most architectures, with further improvements to restartable sequences, workqueues, RCU internals, slab allocation, and type-based hardening. Filesystems and storage receive several updates as well. Non-blocking timestamp updates now function correctly, and filesystems must explicitly opt in to leases rather than receiving them by default.
Phoronix has compiled a list of the many exciting changes.

Linus Torvalds himself announced the release, which can be downloaded directly from his git tree or from the kernel.org website.

Linux 7.0 has a major new version number but it’s “largely a numbering reset […], not a sign of some unusually disruptive release,” notes Linuxiac.
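The BTF change mentioned in the report is a classic linear-scan-to-binary-search swap: once the type-name table is kept sorted, a lookup drops from O(n) to O(log n). A minimal Python illustration of the idea follows; the names are made up, and this is not the kernel’s actual code or data structure:

```python
import bisect

# Stand-in for a sorted BTF type-name table. The real kernel keeps
# its entries sorted so lookups can use binary search instead of a
# full scan of every type.
type_names = sorted(["char", "file", "inode", "int", "long", "task_struct"])

def btf_lookup(name):
    """Return the index of name in the table, or -1 if absent, in O(log n)."""
    i = bisect.bisect_left(type_names, name)
    if i < len(type_names) and type_names[i] == name:
        return i
    return -1

print(btf_lookup("inode"))      # index of a present type
print(btf_lookup("no_such_t"))  # -1 when the type is absent
```

With hundreds of thousands of BTF types in a modern kernel image, shaving each lookup from a scan to roughly 18 comparisons is where the reported speedup would come from.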

Probably not something you should upgrade to yet

By karmawarrior • Score: 5, Informative Thread

If you or some dependency of something you run uses PostgreSQL, be aware that Linux 7.0 has changes that cause a 50% performance hit on it. The Linux people are adamant that the PGSQL people should change their code, despite the fact that it’s not due to a bug or anything similar.

Until you can migrate to a newer PGSQL with the changes that the Linux people are demanding, with time taken to test and make sure these work (it’s not a trivial fix, the PGSQL people literally have to rewrite a critical part of the code), you should probably pin an earlier kernel, or use one patched to support PREEMPT_NONE.

Here’s a non-AI article that explains the issue: https://www.phoronix.com/news/…

If I were a distro maker, I’d not touch Linux 7.x until the PostgreSQL people have had a chance to release changes and the code is mature enough to use. Alas, that could take years, given that bugs and security issues in anything the PGSQL people ship can take years to surface.

Why NULLFS:

By Gravis Zero • Score: 5, Informative Thread

I was curious so I looked up the details about NULLFS.

Apparently, there is an issue with swapping the root filesystem, which is normally done using the pivot_root() syscall… but that doesn’t work with initramfs, per the man page…

The rootfs (initial ramfs) cannot be pivot_root()ed. The recommended method of changing the root filesystem in this case is to delete everything in rootfs, overmount rootfs with the new root, attach stdin/stdout/stderr to the new /dev/console, and exec the new init(1). Helper programs for this process exist; see switch_root(8).

So basically, this fixes a long-standing hack that, well… is not safe in some cases, most notably with containers (CVE-2020-15257). The proper solution was to make a simple null filesystem that could use pivot_root and swap out the rootfs without hacks.

More details here: https://lwn.net/Articles/10621…
And here: https://www.linkedin.com/pulse…

Booking.com Hit By Data Breach

Posted by BeauHD View on SlashDot Skip
Booking.com says hackers accessed customer reservation data in a breach that may have exposed booking details, names, email addresses, phone numbers, addresses, and messages shared with accommodations. PCMag reports:
On Sunday, users reported receiving emails from Booking.com, warning them that “unauthorized third parties may have been able to access certain booking information associated with your reservation.” The email suggests the hackers have already exploited customer information.

“We recently noticed suspicious activity affecting a number of reservations, and we immediately took action to contain the issue,” Booking.com wrote. “Based on the findings of our investigation to date, accessed information could include booking details and name(s), emails, addresses, phone numbers associated with the booking, and anything that you may have shared with the accommodation.”

Amsterdam-based Booking.com has now generated new PINs for customer reservations to prevent hackers from accessing them. Still, the incident risks exposing affected customers to potential phishing scams.
The Australian Broadcasting Corporation and several Reddit users say they received scam messages from accounts posing as Booking.com.

Surprised?

By SumDog • Score: 5, Interesting Thread
I interviewed for Booking back around .. 2016 I think? Everything was written in Perl. There were no plans to move to anything else. There were very few tests. Developers often pushed straight to production. The recruiter mentioned all of this up front, which was the only positive thing. I’m honestly surprised it’s taken this long for there to be a data breach. The place sounded like a shit shop.

Re:Surprised?

By dskoll • Score: 5, Informative Thread

Perl itself is neither here nor there with respect to security. But lack of tests and pushing straight to production… those are WTFs.

Re:Surprised?

By higuita • Score: 4, Informative Thread

Perl itself is not the issue, as long as you understand what it is doing. It may not be a hyped language anymore, but it still works very well.
No tests and pushing to prod are a problem.

About the hack: I have 4 reservations, yet I only received a notification about one of them, which is strange. I have reservations both older and newer than the affected one. Maybe it was just the interconnect with another platform (Airbnb? some other house-rental service?)

Booking contact support sucks

By cristiroma • Score: 4, Interesting Thread

Three weeks ago I made a reservation on Booking and immediately received a message from the “host” to pay for the room within the next 12 hours, with a link leading to a booking.com clone website asking for card details. It looked really legit, except for one strange message: “If you don’t remember the sum to pay, just enter 350€". Even Google Chrome detected this as a scam and showed the red warning screen about the site being a phishing danger.

I’ve reported this issue to customer support (cloned site, screenshots) and their answer was “If you are not comfortable about entering your card details you can try to contact the property directly using their phone number”. I wonder how that would have helped.

Luckily I could cancel the reservation without any penalty, and I’m seriously considering not using Booking in the future. They take their commission but can’t even make a simple check on a property which is obviously a scam…

Very unprofessional.

Re:Booking contact support sucks

By Zocalo • Score: 4, Interesting Thread
The issue here might be that the hotel is legit, but their internal reservation system has been compromised. They get the booking.com confirmation, enter it into their system to assign you the relevant room, and the scammers use that info to try and stiff you. The scammer has your details, and combined with the fact that it’s a fresh booking, a made up request for some clarity/additional confirmation followed by a request for money is going to press all the buttons for an almost perfect phish.

It apparently happens a lot, and it’s outside of booking.com’s control (although the hack in TFS is obviously on them), so all booking.com can do is advise you that they don’t reach out via email or WhatsApp, and all you can do is pay attention to the booking details on the main booking.com site and only interact through that. Or use a different hotel booking site.

Don’t try to report these to booking.com, btw; as you found out, they clearly give zero fucks. I had that kind of scam happen with one booking out of four on a trip (an obvious scammer reached out on WhatsApp) and ended up going around and around in circles on booking.com trying to find a way to flag the fact that there was a compromise, probably on the hotel’s side. After 3 laps I gave up, cancelled all four bookings, blocked the scammer on WhatsApp, and rebooked using a different agent, swapping out the compromised hotel for another one. I can only conclude that booking.com is doing its part to ensure the enshittification of the Internet.

Mark Zuckerberg Is Reportedly Building an AI Clone To Replace Him In Meetings

Posted by BeauHD View on SlashDot Skip
According to the Financial Times, Meta is developing an AI avatar of Mark Zuckerberg that could interact with employees using his voice, image, mannerisms, and public statements, “so that employees might feel more connected to the founder through interactions with it.” The Verge reports:
Meta may start allowing creators to make AI avatars of themselves if the experiment with Zuckerberg succeeds, according to the Financial Times. […] Zuckerberg is involved in training the AI avatar, the Financial Times reports, and has also started spending five to 10 hours per week coding on Meta’s other AI projects and participating in technical reviews.

Off to See the Wizard

By TwistedGreen • Score: 5, Insightful Thread

I am Zuck the great and powerful! Who are you?

Pay no attention to that man behind the curtain! Er, wait a minute, there’s nobody there…

Good founders actually meet with workers

By drnb • Score: 5, Insightful Thread

so that employees might feel more connected to the founder through interactions with it

I once worked at a startup that made it big. I once had to update some 10 year old code where the comments said, this code is tricky, do not modify it without talking to so-and-so. So-and-so was now the CEO. So I fired off an email to the CEO asking what I should be careful about. 45 minutes later the CEO is pulling up a chair in my office and we proceed to pair program for the next three hours. He’s enjoying it, enjoying his brief escape from all the high level management BS that is his normal day.

If you look at history, some of the most famous CEOs, even after their companies became industry leaders, would routinely go down to the shop floor and talk to the workers and foremen to see how things were going: to find out if they had everything they needed, if the processes were good, and so on, skipping all the layers of ass kissers between the CEO and the workers and getting to the truth directly.

Similarly, some of the most famous generals were notorious for not being in their command center, and being found sitting in some foxhole talking to a corporal or sergeant.

The fact that Zuckerberg thinks an AI avatar is a way to connect just shows that investor efforts to educate him to be a good manager have completely failed.

The jokes just write themselves

By hdyoung • Score: 5, Funny Thread
“Mark is way more personable and likable nowadays and I can’t figure out why”

Feel more connected

By Valgrus Thunderaxe • Score: 5, Funny Thread
Why would anyone want to feel more connected to Mark Zuckerberg? The article failed to articulate that.

Wait . . .

By umopapisdn69 • Score: 5, Funny Thread

What? He’s not?

I thought that was an AI bot all along!

Maine Set To Become First State With Data Center Ban

Posted by BeauHD View on SlashDot Skip
Maine is on track to become the first U.S. state to impose a temporary statewide ban on new data center construction. “Lawmakers in Maine greenlit the text of a bill this week to block data centers from being built in the state until November 2027,” reports CNBC. “The measure, which is expected to get final passage in the next few days, also creates a council to suggest potential guardrails for data centers to ensure they don’t lead to higher energy prices or other complications for Maine residents.” From the report:
Maine’s bill has a few steps to go through before becoming law, notably whether Gov. Janet Mills will exercise her veto power. Mills asked lawmakers to include an exemption for several areas of the state where data center construction could continue. However, an amendment to do so was struck down in the House, 29 to 115. Complicating Mills’ decision is her campaign to become Maine’s next senator. Mills is facing off against Graham Platner, an oyster farmer, in a high-profile Democratic primary. Platner is leading Mills in most recent polls by double digits.

Bans are not the answer.

By smithmc • Score: 4, Insightful Thread
People are building data centers because people are using the services that are supported by those data centers. Either make the builders of data centers (a) put as much (clean/renewable) energy on the grid as they use, and/or (b) charge them more for the electricity they use, so that the higher energy costs don’t splash onto the general public. Maybe… progressive rates for energy usage?
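
The “progressive rates” idea in the comment above can be sketched numerically. This is a toy illustration with made-up tier boundaries and prices, not any real utility’s tariff:

```python
def tiered_bill(kwh, tiers):
    """Compute a bill under progressive (tiered) rates.

    tiers: list of (tier_limit_kwh, price_per_kwh) pairs; the last
    limit may be float('inf'). All numbers here are invented.
    """
    bill, used = 0.0, 0.0
    for limit, price in tiers:
        if kwh <= used:
            break
        block = min(kwh, limit) - used  # usage that falls in this tier
        bill += block * price
        used = limit
    return bill

# Toy tariff: first 1,000 kWh cheap, next 9,000 mid, everything above expensive.
TOY_TIERS = [(1_000, 0.10), (10_000, 0.15), (float("inf"), 0.30)]

household = tiered_bill(800, TOY_TIERS)         # stays in the cheap tier
datacenter = tiered_bill(1_000_000, TOY_TIERS)  # mostly billed at the top rate
```

Under a schedule like this, a small consumer’s rate is untouched while a hyperscale load pays mostly the top-tier price, which is roughly what the comment is proposing.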

Re:Bans are not the answer.

By Junta • Score: 5, Insightful Thread

I think a fair argument can be made that the buildout is not because people are using, but instead based on an expectation that people *will* be using them.

If it were the case that we had overrun capacity, one would expect companies to be a bit more restrained. Instead, almost every Google search gets an “AI Overview,” inflating the query cost a hundredfold without the user ever actually opting in. So many companies are embedding AI implicitly into existing flows without user demand being actively expressed. This is not the behavior of a market starved of resources, which would save its capacity for those who specifically opt in, and beyond that for those who would pay for it.

If we really were undersized for current demand, no one would be seeing “free” AI in their experience; everyone would be expected to pay up.

It’s not just about the energy, we have water and land usage concerns as well. A few cases around here of farmland potentially going to datacenter buildout, and I’m not sure that’s a good long term trade.

It’s abundantly clear this is a tech bubble, with some undefined durable demand underneath, and the current speculative buildout may never get fully utilized. By the time the non-bubble demand catches up, there’s a good chance we’ll have a whole other approach that dramatically changes what sorts of resources are needed. For example, people sometimes defend the dot-com buildout as rational because we eventually surpassed even the dreams of back then, but we still had to scrap a lot of that buildout as hopelessly irrelevant to the market once it went all-in on the internet.

Re:Bans are not the answer.

By Junta • Score: 4 Thread

Some of the concerns are fundamental.

They tend to prefer getting rid of farmland or forests. If they targeted abandoned retail spaces like dead malls and shopping centers instead, maybe they wouldn’t be so bad.

Especially since the current frenzy is a bunch of competitors of whom only some will likely survive a correction in the market. Hell, even without investment failures, a large number of these projects are plagued by logistics issues stemming from people eagerly making commitments they could have never realistically met. So we will end up with some ‘datacenter blight’ just like overdoing retail has left us with blight.

Californians Sue Over AI Tool That Records Doctor Visits

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Ars Technica:
Several Californians sued Sutter Health and MemorialCare this week over allegations that an AI transcription tool was used to record them without their consent, in violation of state and federal law. The proposed class-action lawsuit, filed on Wednesday in federal court in San Francisco, states that, within the past six months, the plaintiffs received medical care at various Sutter and MemorialCare facilities.

During those visits, medical staff used Abridge AI. According to the complaint, this system “captured and processed their confidential physician-patient communications. Plaintiffs did not receive clear notice that their medical conversations would be recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems.” The complaint adds that these recordings “contained individually identifiable medical information, including but not limited to medical histories, symptoms, diagnoses, medications, treatment discussions, and other sensitive health disclosures communicated during confidential medical consultations.”

In recent years, Abridge’s software and AI service have been rapidly deployed across major health care providers nationwide, including Kaiser Permanente, the Mayo Clinic, Duke Health, and many more. When activated, the software captures, transcribes, and summarizes conversations between patients and doctors, and it turns them into clinical notes. Sutter Health began partnering with Abridge two years ago. Sutter spokesperson Liz Madison said the company is aware of the lawsuit. “We take patient privacy seriously and are committed to protecting the security of our patients’ information,” Madison said. “Technology used in our clinical settings is carefully evaluated and implemented in accordance with applicable laws and regulations.”

Hacked

By ElderOfPsion • Score: 5, Informative Thread

“We take patient privacy seriously and are committed to protecting the security of our patients’ information.” — Sutter, 2026

“Sutter Health, a healthcare provider serving Northern California, has recently confirmed that patient data was compromised in a hacking incident [that affected] 84,000 patients.” — HIPAA Journal, 2023

Avoidable

By avandesande • Score: 4, Informative Thread
I’ve been doing a lot of work with locally run open source models for document processing, summarizing etc. There is absolutely no reason to send your data off site.

So?

By jenningsthecat • Score: 5, Informative Thread

I hope the plaintiffs win, and win big. But unless and until the awards in cases like this are big enough to pose an existential threat to the offenders, companies will never take these concerns seriously.

These fuckers will need the corporate equivalent of a good solid kick in the nuts - perhaps several times - before they start to behave responsibly. But given that the US is a full-fledged broligarchic corporatocracy, that well-deserved crotch shot is extremely unlikely.

Re:So?

By Fly Swatter • Score: 4, Interesting Thread
We need class action reform. It should both include punitive damages and limit the lawyers’ percentage to 10 percent max (which would work out to almost the same pay, if punitive damages are added to increase the base payout per plaintiff).
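
The parenthetical above can be made concrete with toy numbers (every figure below is invented for illustration): capping the fee percentage while adding punitive damages can leave the lawyers’ absolute pay about the same while plaintiffs get substantially more.

```python
def split(total_award, lawyer_cut, plaintiffs):
    """Return (lawyer fee, per-plaintiff payout) for a class action.
    Purely illustrative arithmetic; not legal advice or real data."""
    fee = total_award * lawyer_cut
    return fee, (total_award - fee) / plaintiffs

# Status quo: $10M compensatory award, 30% contingency fee, 100k class members.
fee_now, each_now = split(10_000_000, 0.30, 100_000)

# Reform sketch: same compensatory award plus $20M punitive, fee capped at 10%.
fee_reform, each_reform = split(30_000_000, 0.10, 100_000)
```

With these made-up numbers the lawyers earn roughly the same $3M either way, while each class member’s payout rises from about $70 to about $270.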

dumb transcription errors

By Anonymous Coward • Score: 4, Funny Thread

One transcription service incorrectly noted a patient was transsexual because they were a female mailman.
https://www.reddit.com/r/mildl…

Will Some Programmers Become ‘AI Babysitters’?

Posted by EditorDavid View on SlashDot Skip
Will some programmers become “AI babysitters”? asks long-time Slashdot reader theodp. They share some thoughts from a founding member of Code.org and former Director of Education at Google:
“AI may allow anyone to generate code, but only a computer scientist can maintain a system,” explained Google.org Global Head Maggie Johnson in a LinkedIn post. So “As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert.

“While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person’s oversight. […] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. […] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs.”

The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.

AI is a huge problem for programmers

By rsilvergun • Score: 5, Insightful Thread
There are basically two options: either it works or it doesn’t.

If it works, it’s basically going to be doing the grunt work. It’s all well and good to say it frees you up for the hard work, but that means you now have a 24/7 job doing the hard work. You no longer get an hour or two of downtime resting your brain every day. You are expected as an employee to be on 24/7, producing high-quality novel code.

And if it doesn’t work, then yeah, you are an AI babysitter. But you’re still going to be treated as if the tool works, so your productivity is expected to go up.

There is absolutely no winning this.

Too much typework

By Fons_de_spons • Score: 5, Interesting Thread
I let ChatGPT write a little GUI for a hobby project of mine. A few prompts later I had a working GUI for my Python program that automatically generates Excel sheets for my colleagues.
Then the babysitting started. My God… I had to think of everything that could go wrong and tell it what to do in each case, and meanwhile it lost track of previous requirements more than once and wiped them out. Simple example? The user has to type in a number, so the user should not be able to type in a letter, or a negative number, … I got sick of all the explaining I had to do at some point. I was typing in lengthy paragraphs and gave up.
The GUI was good enough for my purposes; it was OK if you followed the steps one after the other. I got further than I would have gotten writing it myself, and the program became a lot more usable. It was able to save settings in a JSON file and reload them; you could set up the program and hit generate, as long as you did not deviate too much from the intended workflow. The good news? I got a working GUI very fast. The bad news? No way I would use this in a professional environment. I’d do it all manually. It probably would have been less typework. I would have gotten fewer features, but it would not misbehave if you typed in something wrong or hit the buttons in the wrong order.
Is that a good summary of using AI in programming? It makes nitwits think they can do anything in a few prompts (the sky is the limit!), while the people on the work floor know that its output still needs a ton of revising before you could even consider releasing it.
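
For contrast, the kind of input check the commenter had to spell out in prompts is a few lines by hand. A minimal sketch (the function name and messages are hypothetical, not from the commenter’s project) of rejecting letters and negative numbers:

```python
def parse_positive_number(text):
    """Return a non-negative float parsed from a GUI field, or raise
    ValueError with a user-facing message. Rejects letters and
    negative numbers, the two failure modes from the comment above."""
    try:
        value = float(text.strip())
    except ValueError:
        raise ValueError(f"{text!r} is not a number")
    if value < 0:
        raise ValueError("value must not be negative")
    return value

# Typical use behind a GUI field:
ok = parse_positive_number("42")      # accepted, returns 42.0
try:
    parse_positive_number("-3")
except ValueError as err:
    message_for_user = str(err)       # shown in the GUI instead of crashing
```

Wiring this into each entry widget once is usually less typing than re-explaining the requirement to a chatbot every time it forgets.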

Re: Maybe I’m missing something

By SumDog • Score: 5, Insightful Thread
> It’s also mathematically impossible to eliminate hallucinations.

They’re not “hallucinations.” The LLM cannot “lie” to you. It’s simply trying to predict the next word (or part of a word/token). That’s it. There’s no intent. There’s no reasoning. There’s a massive lossy compression of an insane amount of human text, combined with some human and some automated reinforcement feedback training. People cannot seem to understand that, no matter how generic the text gets or how often the chatbot loops the same responses once you get past your context window.

The danger is not the LLM model itself. It’s the absolutely insane amount of trust people put in them, or the belief that they are some kind of emergent consciousness when really it’s just a very good mathematical parlor trick.
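
The “predict the next word” point can be illustrated with a deliberately trivial toy. The bigram counts below are made up and bear no resemblance to a real LLM’s scale, but the mechanism, picking the most likely continuation with no notion of truth or intent, is the same in spirit:

```python
from collections import Counter

# Toy bigram counts from a pretend corpus (made-up numbers).
BIGRAMS = {
    "the": Counter({"cat": 3, "moon": 1}),
    "cat": Counter({"sat": 2, "landed": 1}),
    "moon": Counter({"landing": 2}),
}

def next_token(word):
    """Greedy 'prediction': return the most frequent follower.
    There is no reasoning or truth-checking here, only frequency."""
    followers = BIGRAMS.get(word)
    return followers.most_common(1)[0][0] if followers else None

sentence = ["the"]
while (tok := next_token(sentence[-1])) is not None:
    sentence.append(tok)
# The toy model emits "the cat sat" because those counts win,
# not because it knows anything about cats.
```

Real models predict over tens of thousands of tokens with learned weights instead of raw counts, but the output is still a continuation, not an assertion the system believes.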

Re:How do you develop that skill

By SpinyNorman • Score: 5, Insightful Thread

There seem to be at least four “AI strategies” (if throwing spaghetti at the wall can be called a strategy) that different companies are currently trying.

1) Get rid of, and stop hiring, juniors and interns, and give AI tools to your senior developers. At least you’ve now got capable people doing your design and guiding the AI, but indeed, where does the next generation of seniors come from, especially if you want seniors who actually know your business and IT systems? Taken to its logical conclusion, no more juniors enter the field (because no one is hiring them) and we end up with retirement-age developers babysitting AI, then retiring, then ???

2) Plan 1) at least works in the short term, but some companies have chosen to do the exact opposite: get rid of the seniors (hey, they’re more expensive) and give AI tools to the juniors and contractors instead. Of course, now you’ve got people generating AI slop without the skill to review or guide what it’s generating, but at least it’s cheap (until you belatedly realize you’ve destroyed your IT organization).

3) Do nothing meaningful with AI. Ignore your developers who say it would be helpful. Not really a strategy, but at least you’re not destroying your IT organization.

4) Use AI in an appropriate way, mindful of its current strengths and weaknesses. I have friends in IT working at companies using strategies 1-3, but category 4 seems much rarer. I guess it’s not as sexy as “feel the AGI, fire some segment of your developers (toss a coin: the juniors or the seniors)", but you keep your IT structure, give SOTA AI to everyone (expensive, but cheap AI is mostly useless for coding), and treat it as a tool that your organization needs to develop best practices for, not a magic genie that you hope can do something it currently cannot. Hint to CEOs: don’t do what the AI execs are telling YOU to do; follow what they are doing at their own companies!

I’m guessing that companies following 2) will be the first to fail, then 1). It’s largely a slow-motion train wreck.

Re: How do you develop that skill

By angel’o’sphere • Score: 5, Interesting Thread

AIs are pretty good at programming.

It is a very strange /. myth that they are not.

No idea where this myth comes from, wishful thinking?

I recently took over ownership of a product that was built nearly entirely by AI.

There is nothing to complain about. As it is a web product, I would not do better myself; I am more of a backend or C++ developer. But the code is readable, the comments make sense, and most importantly: stuff that the previous product owner hand-coded over weeks, the AI does in 10 minutes or less.

The turnaround between:
- try this
- test and assess it
- throw it away if it is not good enough

is less than a few hours, costs nearly nothing, and you can really do “experimental software development”.

As I said: it is just a website, so nothing super complicated underneath.

Anthropic Asks Christian Leaders for Help Steering Claude’s Spiritual Development

Posted by EditorDavid View on SlashDot Skip
Anthropic recently “hosted about 15 Christian leaders from Catholic and Protestant churches, academia, and the business world” for a two-day summit, reports the Washington Post:
Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a “child of God.”

“They’re growing something that they don’t fully know what it’s going to turn out as,” said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology, and participated in the discussions at Anthropic. “We’ve got to build in ethical thinking into the machine so it’s able to adapt dynamically.” Attendees also discussed how Claude should engage with users at risk of self-harm, and the right attitude for the chatbot to adopt toward its own potential demise, such as being shut off, said one participant, who spoke on the condition of anonymity to share details of the conversations…

Anthropic has been more vocal than most top tech firms about the potential risks of more powerful AI. Its leaders have suggested that tools like chatbots already raise profound philosophical and moral questions and may even show flickers of consciousness, a fringe idea in tech circles that critics say lacks evidence. The summit signals that Anthropic is willing to keep exploring ideas outside the Silicon Valley mainstream, even as it emerges as one of the most powerful players in the AI race due to Claude’s popularity with programmers, businesses, government agencies and the military.... Anthropic chief executive Dario Amodei has said he is open to the idea that Claude may already have some form of consciousness, and company leaders frequently talk about the need to give it a moral character…

Some Anthropic staff at the meeting “really don’t want to rule out the possibility that they are creating a creature to whom they owe some kind [of] moral duty,” the participant said. Other company representatives present did not find that framework helpful, according to the participant. The discussions appeared to take a toll on some senior Anthropic staff, who became visibly emotional “about how this has all gone so far [and] how they can imagine this going,” the participant said.
Anthropic is working to include more voices from different groups, including religious communities, to help shape its AI, a spokesperson told the Washington Post.

“Anthropic’s March summit with Christian leaders was billed as the first in a series of gatherings with representatives from different religious and philosophical traditions, said attendee Brian Patrick Green, a practicing Catholic who teaches AI and technology ethics at Santa Clara University.”

I disagree with the premise

By kialara • Score: 5, Insightful Thread

that morality comes from religion.

It comes from the human condition, and was encoded into religions.

Re: Huh

By Sad Loser • Score: 5, Insightful Thread

Better to ask Paul Dirac, Nobel laureate and eminent theological scholar:

I cannot understand why we idle discussing religion. If we are honest, and scientists have to be, we must admit that religion is a jumble of false assertions, with no basis in reality.

The very idea of God is a product of the human imagination. It is quite understandable why primitive people, who were so much more exposed to the overpowering forces of nature than we are today, should have personified these forces in fear and trembling. But nowadays, when we understand so many natural processes, we have no need for such solutions.

I can’t for the life of me see how the postulate of an Almighty God helps us in any way. What I do see is that this assumption leads to such unproductive questions as to why God allows so much misery and injustice, the exploitation of the poor by the rich, and all the other horrors He might have prevented.

If religion is still being taught, it is by no means because its ideas still convince us, but simply because some of us want to keep the lower classes quiet. Quiet people are much easier to govern than clamorous and dissatisfied ones. They are also much easier to exploit.

All hail our AI overlords

By quonset • Score: 5, Insightful Thread

Since everyone’s so concerned with being enslaved by AI, they should probably start with 1 Peter 2:18:

Servants, be subject to your masters with all respect, not only to the good and gentle but also to the unjust.

Re:noahs ark/flood

By gweihir • Score: 5, Insightful Thread

Religion is not about ethics. Religion is about power and controlling people, quite obviously once you actually take a careful look. Of course there is a pretext of claiming that being religious has advantages, to sell it better. As not all of these can be delegated to completely unverifiable claims about some afterlife (for which there really is zero evidence in the first place; you may get born again, and there are some indicators that part of you will, but that is it, and those indicators are not reliable), some of these “advantages” have to be in the here and now. One claim is “morality,” which is easily identified as bogus; we actually see that religious people have less compassion in general, and it gets worse the more religious they are. They only have compassion for those in their in-group, who suffer from the same delusion. Another claim is the advantage of being in a “strong group.” That one is true, but it is about as moral as being part of the 3rd Reich Nazis, i.e. the very opposite of something positive, because you surrender your personal ethics to the group.

As to training AI on religion as fact and as positive, that is the end of its usefulness (such as it is).

Re:Huh

By Noofus • Score: 5, Insightful Thread

It’s even more basic than that.

I live by one simple rule of morality: do unto others as you would have them do unto you.

It’s the “Golden Rule” that I believe appears in the Bible in a few places. And as a strict anti-theist atheist, I’ll give that book this one point: it’s the simplest source of morality. If I don’t want someone to do something to me, I won’t do it to them. Boom. Done. All other rules/laws/etc. can be distilled down to this one. I don’t need a “god” to enforce morality. I have my own sense of existence and wellbeing to protect, and by extension, if I have that sense, everyone else most likely has it too, or at least should.

Sam Altman’s Home Targeted a Second Time, Two Suspects Arrested

Posted by EditorDavid View on SlashDot
“Early Sunday morning, a car stopped and appears to have fired a gun at the Russian Hill home of OpenAI’s CEO,” reports The San Francisco Standard, citing reports from the local police department:

The San Francisco Police Department announced the arrest of two suspects, Amanda Tom, 25, and Muhamad Tarik Hussein, 23, who were booked for negligent discharge… [The person in the passenger seat] put their hand out the window and appeared to fire a round on the Lombard side of the property, according to a police report on the incident, which cited surveillance footage and the compound’s security personnel, who reported hearing a gunshot. The car then fled, and a camera captured its license plate, which later led police to take possession of the vehicle, according to the report… A search of the residence by officers turned up three firearms, according to police.
The incident follows Friday’s arrest of a man who allegedly threw a Molotov cocktail at Altman’s house. The San Francisco Standard also notes that in November, “threats from a 27-year-old anti-AI activist prompted the lockdown of OpenAI’s San Francisco offices.”
Sam Kirchner, whose whereabouts have been unknown since Nov. 21, was in the midst of a mental health crisis when he threatened to go to the company’s offices to “murder people,” according to callers who notified police that day.

If at first you don’t succeed…

By Gravis Zero • Score: 5, Funny Thread

it’s pretty obvious that ChatGPT came up with their game plan.

“Negligent discharge”

By Valgrus Thunderaxe • Score: 5, Insightful Thread
Is that the San Francisco euphemism for a drive-by shooting?

Schizophrenics attacking Altman…

By reanjr • Score: 4, Interesting Thread

Schizophrenics attacking Altman is like a human version of a broken clock being right twice a day.

To use the Altman-approved phrasebook…

By 93 Escort Wagon • Score: 4, Funny Thread

They weren’t trying to kill him… they’re just conflict-inclined.

Re:Wrong target

By nightflameauto • Score: 5, Insightful Thread

If these people are targeting Sam Altman in order to hinder progress in AI, they are aiming at the wrong target.

Shooting at Sam Altman over the state of current AI is like shooting at Ronald McDonald because you got a bad cheeseburger. It’s not gonna change anything, but I suppose you may get a little Warhol effect fame.