Alterslash

the unofficial Slashdot digest
 

Contents

  1. New Linux ‘Copy Fail’ Vulnerability Enables Root Access On Major Distros
  2. In Real-World Test, an AI Model Did Better Than ER Doctors At Diagnosing Patients
  3. French Prosecutors Link 15-Year-Old To Mega-Breach At State’s Secure Document Agency
  4. World’s Largest Digital Human Rights Conference Suddenly ‘Postponed’
  5. Microsoft Open-Sources ‘Earliest DOS Source Code Discovered To Date’
  6. Convicted Former Harvard Scientist Rebuilds Brain Computer Lab In China
  7. Most Swiss Back Initiative To Cap Population At 10 Million
  8. OpenAI Codex System Prompt Includes Explicit Directive To ‘Never Talk About Goblins’
  9. DOJ Sues Cloudera For Deliberately Excluding American Workers From Tech Jobs
  10. First Tesla Semi Rolls Off High-Volume Production Line
  11. Elon Musk Says OpenAI Betrayed Him, Clashes With Company’s Attorney
  12. New Sam Bankman-Fried Trial Would Be Huge Waste of Court’s Time, Judge Says
  13. Ubuntu’s AI Plans Have Linux Users Looking For a ‘Kill Switch’
  14. Joby Demos Its Air Taxi In NYC
  15. Apple Gives Up On the Vision Pro After M5 Refresh Flop

Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.

New Linux ‘Copy Fail’ Vulnerability Enables Root Access On Major Distros

Posted by BeauHD
A newly disclosed Linux kernel flaw dubbed "Copy Fail" can let a local, unprivileged attacker gain root access on major Linux distributions, with researchers claiming the bug affects kernels shipped since 2017. “The POC exploit works out of the box today, but a future version that can escape from containers like Docker is promised soon,” writes Slashdot reader tylerni7. “Technical details are available here.” Slashdot reader BrianFagioli shares a report from NERDS.xyz:
A newly disclosed Linux kernel vulnerability called Copy Fail (CVE-2026-31431) allows an unprivileged user to gain root access using a tiny 732-byte script, and it works with unsettling consistency across major distributions. Unlike older exploits that relied on race conditions or fragile timing, this one is a straight-line logic flaw in the kernel’s crypto subsystem. It abuses AF_ALG sockets and splice to overwrite a few bytes in the page cache of a target file, such as /usr/bin/su. Because the kernel executes from the page cache, not directly from disk, the attacker can inject code into a setuid binary in memory and immediately escalate privileges.

What makes this especially concerning is how quiet it is. The file on disk remains unchanged, so standard integrity checks see nothing wrong, while the in-memory version has already been tampered with. The same primitive can also cross container boundaries since the page cache is shared, raising the stakes for multi-tenant environments and Kubernetes nodes. The underlying issue traces back to an in-place optimization added years ago, now being rolled back as part of the fix. Until patched kernels are widely deployed, this is one of those bugs that feels less like a theoretical risk and more like a practical, reliable path to full system compromise.
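The on-disk/in-memory split described above also suggests a detection angle. Below is a minimal, hypothetical sketch (Linux-only, and heuristic: `POSIX_FADV_DONTNEED` is only a hint to the kernel and will not evict dirty or pinned pages, so matching digests prove nothing) that hashes a file as served from the page cache and again after asking the kernel to drop the cached pages:

```python
import hashlib
import os

def hash_fd(fd):
    """SHA-256 of the file's full contents via ordinary reads."""
    h = hashlib.sha256()
    os.lseek(fd, 0, os.SEEK_SET)
    while chunk := os.read(fd, 1 << 16):
        h.update(chunk)
    return h.hexdigest()

def cached_vs_disk_digest(path):
    """Hash the file twice: once likely from the page cache, once
    after advising the kernel to drop the cached pages."""
    fd = os.open(path, os.O_RDONLY)
    try:
        cached = hash_fd(fd)
        # Hint the kernel to evict this file's clean cached pages.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        on_disk = hash_fd(fd)
        return cached, on_disk
    finally:
        os.close(fd)
```

If the two digests differ for a file nobody is legitimately writing to, the cached copy no longer matches what is on disk; equal digests, per the caveat above, are not a clean bill of health.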

Note that this is a local exploit

By gweihir • Score: 3 Thread

If an attacker gets this far, you have already messed up. Still should be patched ASAP.

Re: And this is why

By Jack Greenbaum • Score: 4, Informative Thread
Clearly you missed the part about the files on disk not being modified.

In Real-World Test, an AI Model Did Better Than ER Doctors At Diagnosing Patients

Posted by BeauHD
A new study from Harvard Medical School and Beth Israel Deaconess found that an OpenAI reasoning model outperformed experienced ER doctors at diagnosing and managing patient cases using messy, real-world emergency department records. Researchers say the results don’t support replacing doctors, but they do suggest AI could meaningfully reshape clinical workflows if tested carefully in prospective trials. NPR reports:
The researchers ran a series of experiments on the AI model to test its clinical acumen — including actual cases like the lupus patient who’d been previously treated at the emergency department at Beth Israel in Boston. The team graded how well the AI model could provide an accurate diagnosis at three moments in time, from the triage stage in the ER, up to being admitted into the hospital. Overall, AI outperformed two experienced physicians — and did so with only the electronic health records and the limited information that had been available to the physicians at the time. “This is the big conclusion for me — it works with the messy real-world data of the emergency department,” said Dr. Adam Rodman, a clinical researcher at Beth Israel and one of the study authors. “It works for making diagnoses in the real world.”

Other parts of the study focused on case reports published in the New England Journal of Medicine and clinical vignettes to suss out whether the AI model could meet well-established “benchmarks” and game out thorny diagnostic questions. “The model outperformed our very large physician baseline,” said Raj Manrai, assistant professor of Biomedical Informatics at Harvard Medical School who was also part of the study. The authors emphasize the AI relied on text alone, while in real life, clinicians need to attend to many other inputs like images, sounds and nonverbal cues when diagnosing and treating a patient.
The findings have been published Thursday in the journal Science.

Frankenstein’s Doctor

By bryanandaimee • Score: 3 Thread
It is a common misconception that the doctor’s name was Frankenstein. Actually the AI was named Frankenstein. It created the doctor from spare parts.

Coming: Reverse Centaurs and

By hwstar • Score: 3 Thread

accountability sinks.

1. What is a Reverse Centaur?

The Reverse Centaur: The AI acts as the “head” or decision-maker, and the human is the “body” or worker, forced to keep up with an impossible, algorithmic pace.

2. What is an Accountability Sink?

A “moral crumple zone” is a human who is present only to take the blame when an AI system fails.

Who wants to work in such an environment?

Some of us would refuse.

Perfectly understandable.

By couchslug • Score: 4, Insightful Thread

Doctors are tired, stressed, and multitasking. They diagnose by pattern matching, which is ideal for AI.

Nurse diagnosed them, not the AI

By gurps_npc • Score: 3 Thread

The AI relied on the text records. Which were things the NURSE noticed and entered into the chart. The nurse did the hard part, examining the patient, asking the right questions.

You cannot diagnose just on blood pressure, heart rate, oxygen rate. You need to notice things like:

slurred speech
dilated eyes
excessive sweat
pale
red skin
rash
bruised

The thing is, it was a trained nurse who noticed these symptoms and WROTE THEM DOWN. And she usually knew exactly what it was, but waited for the doctor to say it.

Anyone can diagnose correctly 90% of the time if you have the right information. Also note, diagnosing a problem is not like on House or Watson. 80+% of the time the answer is blindingly obvious.

Bleeding profusely from a jagged wound = knife attack
Patient comes in acting exactly like the 9 other drug addicts you got last month = using whatever the new/most common drug is.
Patient smelling of alcohol is not a big mystery
blood tests indicating high sugar = Diabetes
Long time diabetic with blood in urine = kidney failure
Immense pain in big toe from an overweight person = Gout

For most cases, any EMT can tell what the problem is. The problem is not the common cases, but the problematic ones.

For those “mysteries”, do we want an AI diagnosing without a human confirming it? No. But we can probably save a bit of money by having the AI do it before a doctor confirms.

French Prosecutors Link 15-Year-Old To Mega-Breach At State’s Secure Document Agency

Posted by BeauHD
French prosecutors say police detained a 15-year-old suspected of using the alias “breach3d" in connection with a cyberattack on France Titres (ANTS), the state agency that handles passports, ID cards, and other secure documents. The breach allegedly involved 12 million to 18 million lines of data offered for sale online, potentially affecting up to a third of France’s population if the records are unique. The Register reports:
It formally opened (PDF) a judicial investigation on April 29, covering alleged fraudulent access to a state-run automated data processing system and the extraction of data from it. Each offense carries a potential prison sentence of seven years and a maximum ~$350,000 fine. Public Prosecutor Laure Beccuau has requested that the minor, whose pronouns, like their name, were also not specified, be formally charged and placed under judicial supervision.

[…] France’s approach to punishing minors via its legal system is typically geared toward re-education and rehabilitation rather than prison time. While those aged between 13 and 16 can face time in juvenile detention, it is often used as a last resort measure. The maximum sentences and fines for the charges the 15-year-old in this case faces are upper limits imposed on adult offenders, and would likely be lowered substantially in cases involving a minor, like this one.

World’s Largest Digital Human Rights Conference Suddenly ‘Postponed’

Posted by BeauHD
RightsCon, one of the world’s largest digital human rights conferences, was suddenly postponed by Zambia’s government just days before it was scheduled to begin in Lusaka. Officials cited unresolved speaker clearances and “thematic issues,” while Access Now said it had not yet received formal communication and was seeking an urgent meeting with the government. 404 Media reports:
Minister of Technology and Science Felix Mutati first announced the postponement on April 28, saying that Zambia needed more time to ensure the conference “fully [aligns] with national procedures, diplomatic protocols, and the broader objective of fostering a balanced and consensus-driven platform for dialogue.” “In particular, certain invited speakers and participants remain subject to pending administrative and security clearances, which have not yet been concluded,” he added, according to the Lusaka Times.

[…] On a popular listserv for academics, many of whom are attending RightsCon, a board member of Access Now wrote “I am told I can leak that RightsCon has been canceled. Message from [Access Now] following shortly” in a thread about what attendees were planning on doing. And in an email, AccessNow wrote: “It is with heavy hearts that we share: RightsCon will not proceed in Zambia or online. We understand this news is deeply upsetting for our community and while we know everyone has questions, our goal right now is to notify you of the event’s status because many of you have imminent travel plans. We do not recommend registered participants travel to Lusaka for RightsCon.

Over the last 48 hours we have experienced an overwhelming surge of support from civil society, government representatives, sponsors, and our community as a whole. For this, we wholeheartedly thank you. We’ll communicate more information soon.”

Zambia, you say ?

By greytree • Score: 5, Informative Thread
“Human rights in Zambia are constitutionally guaranteed, but the government frequently restricts freedoms of expression, assembly, and press in practice. Reports indicate serious abuses by security forces, including unlawful killings and torture, as well as arbitrary arrests and detentions.”

Government level trolling

By algaeman • Score: 3 Thread
Go on. Take the money and run

Microsoft Open-Sources ‘Earliest DOS Source Code Discovered To Date’

Posted by BeauHD
An anonymous reader quotes a report from Ars Technica:
Several times in the last couple of decades, Microsoft has released source code for the original MS-DOS operating system that kicked off its decades-long dominance of consumer PCs. This week, the company has reached further back than ever, releasing “the earliest DOS source code discovered to date” along with other documentation and notes from its developer.

Today’s source release is so old that it predates the MS-DOS branding, and it includes “sources to the 86-DOS 1.00 kernel, several development snapshots of the PC-DOS 1.00 kernel, and some well-known utilities such as CHKDSK,” write Microsoft’s Stacey Haffner and Scott Hanselman in their co-authored post about the release. […] This source code is old enough that it hadn’t been stored digitally. “A dedicated team of historians and preservationists led by Yufeng Gao and Rich Cini,” calling itself the “DOS Disassembly Group,” painstakingly transcribed and scanned in code from paper printouts provided by Paterson. This process was made even more difficult because modern OCR software struggled with the quality of the decades-old printout.

Re:OCR struggled?

By LordHighExecutioner • Score: 4, Informative Thread
Don’t be too surprised. I have been struggling for a few months with rebuilding an old FORTRAN program. It is about 15,000 lines of code, only a printout was available, and the OCR randomly replaced ‘0’ with ‘O’, ‘1’ with ‘l’, and so on. Multiple passes with the compiler and ftnchek solved most problems, but something is still not OK. Luckily, the printout includes some test examples…
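Cleanup of that kind can be partially automated. A hypothetical sketch (the confusion table and the “mostly digits” heuristic are my assumptions, not the commenter’s actual tooling) that applies digit-confusion fixes only inside tokens that look numeric, leaving identifiers alone:

```python
import re

# Common OCR digit/letter confusions in old monospace printouts.
OCR_FIXES = str.maketrans({"O": "0", "o": "0", "l": "1", "I": "1"})

def fix_numeric_tokens(line):
    """Translate O->0, l->1, etc., but only in tokens that are
    already at least half digits, so identifiers survive."""
    def repl(m):
        tok = m.group(0)
        digits = sum(c.isdigit() for c in tok)
        return tok.translate(OCR_FIXES) if digits >= len(tok) / 2 else tok
    return re.sub(r"\w+", repl, line)
```

A pass like this still needs compiler and ftnchek verification afterward, since context (a DO-loop label versus a variable name) can defeat any purely lexical heuristic.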

Re:It’s so old…

By tlhIngan • Score: 5, Informative Thread

Well, it’s well known that after IBM failed to get an NDA with Digital Research for CP/M-86, they went to Microsoft and asked if it could supply the operating system. Bill Gates agreed, and Microsoft then purchased a full license for 86-DOS from Seattle Computer Products.

They mildly patched it to get it working on the IBM PC (it was originally designed for SCP’s 8086-based computer).

Note the source code actually existed; the Computer History Museum has it as a digital artifact. The only problem was that it wasn’t open source: until now it was only available under a source-available license for study and curiosity. What Microsoft did now was put it under the MIT license, so it’s under a fully open-source license that lets you compile and build it.

Also, Microsoft paid $90,000 for a per-customer license for 86-DOS. They did this knowing they had only one customer: IBM. Eventually they hired the programmer of 86-DOS.

MS-DOS 1.0 wasn’t particularly interesting other than appearing like an independently created version of CP/M. MS-DOS 2.0 added additional services that made MS-DOS look a lot more like an operating system - instead of CP/M opening the files for you (and passing their handle in your process control block), MS-DOS 2 let you actually open a file by calling an open function. (MS-DOS 2.0 inherited a lot of semantics from Xenix).

They were going to open source

By rsilvergun • Score: 4, Funny Thread
The earliest BASIC code, but somebody had already cleaned out the MIT trash cans.

Re: Historical

By Malc • Score: 5, Insightful Thread

Probably contains bugs for things they’re still including in recent versions of Windows.

Re:OCR struggled?

By 0123456 • Score: 5, Interesting Thread

The PGP encryption source code was printed in a loose-leaf book with checksums on each line to make it easy to OCR.

It was still a huge project because they forgot to convert tabs to spaces (or vice-versa) before printing so software had to be written to try all possible combinations of tabs and spaces on lines where the checksum check failed.

For the Apollo Guidance Computer code they got lucky and had a binary dump of the compiled executable at the end of the listing so they could run the OCR-ed code through the compiler and check for mismatches in the compiled binary to find the OCR errors.

It’s definitely non-trivial, and it can be even when the developers went out of their way to make it easy.
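The tab/space search described above is small per line: a run of n ambiguous whitespace characters has only 2^n assignments to try against the line’s checksum. A sketch under assumptions (the PGP books used their own per-line checksum format; a 16-bit CRC stands in here, and only the leading whitespace run is varied):

```python
from itertools import product
import zlib

def repair_whitespace(line, expected_crc, max_gaps=12):
    """Try every tab/space assignment for the line's leading
    whitespace run until the per-line checksum matches.
    Returns the repaired line, or None if nothing matches."""
    stripped = line.lstrip(" \t")
    gaps = len(line) - len(stripped)
    if gaps > max_gaps:  # 2**gaps candidates; keep the search bounded
        return None
    for combo in product(" \t", repeat=gaps):
        candidate = "".join(combo) + stripped
        if zlib.crc32(candidate.encode()) & 0xFFFF == expected_crc:
            return candidate
    return None
```

In practice you would run this only on lines whose checksum check failed, and extend the candidate set to interior whitespace where needed.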

Convicted Former Harvard Scientist Rebuilds Brain Computer Lab In China

Posted by BeauHD
Reuters reports that Charles Lieber, the former Harvard scientist convicted of lying to U.S. authorities about payments and ties to China, is now leading China’s state-funded i-BRAIN lab in Shenzhen, where he has access to advanced nanofabrication tools and primate research facilities for brain-computer interface work. From the report:
Charles Lieber, 67, is among the world’s leading researchers in brain-computer interfaces. The technology has shown promise in treating conditions such as ALS and restoring movement in paralyzed patients. But it also has potential military applications: Scientists at China’s People’s Liberation Army have investigated brain interfaces as a way to engineer super soldiers by boosting mental agility and situational awareness, according to the U.S. Defense Department. Lieber was found guilty by a jury and convicted in December 2021 of making false statements to federal investigators about his ties to a Chinese state program to recruit overseas talent, and tax offenses related to payments he received from a Chinese university. He served two days in prison and six months under house arrest, and was fined $50,000 and ordered to pay $33,600 in restitution to the Internal Revenue Service. During the case, his defense said he was suffering from an incurable lymphoma, which was in remission, and he was fighting for his life.

Three years after he was sentenced, Reuters has learned that Lieber is now overseeing China’s state-funded i-BRAIN, or the Institute for Brain Research, Advanced Interfaces and Neurotechnologies, with access to dedicated nanofabrication equipment and primate research infrastructure unavailable to him at Harvard. The lab is an arm of the Shenzhen Medical Academy of Research and Translation, or SMART. “I arrived on April 28, 2025 with a dream and not much more, maybe a couple bags of clothes,” Lieber said of his move to China at a Shenzhen government conference in December. “Personally, my own goals are to make Shenzhen a world leader.”

SMART last year appointed Lieber as an investigator, according to a post on i-BRAIN’s website dated May 1, 2025. That news was covered by some media outlets. The same day, i-BRAIN said Lieber had also been appointed its founding director — an announcement that went unreported at the time. This story is the most comprehensive account of Lieber’s activities since he moved to China. Reuters is reporting for the first time that his lab has access to dedicated primate research facilities and chip-making equipment; that it sits within a sprawling ecosystem of state-backed institutions bankrolled by billions of dollars in government funding; and that it is housed within an institution that is luring top scientific talent back from the United States.

China

By angryman77 • Score: 3 Thread
It being China we’re talking about, I’m going to go ahead and translate “dedicated primate research facilities” into “prisons.”

How did he leave the country?

By DrMrLordX • Score: 3 Thread

How did this guy leave so easily? Surely he was placed under travel restrictions after the end of his house arrest?

isn’t this just capitalism?

By dfghjk • Score: 3, Insightful Thread

Selling to the highest bidder, isn’t that what the economic geniuses here all support? Funny how greed and racism can come into conflict.

Most Swiss Back Initiative To Cap Population At 10 Million

Posted by BeauHD
A new poll shows a slim majority of Swiss voters now support a June 14 referendum to cap the country’s population at 10 million by 2050. Under the proposal backed by the right-wing Swiss People’s Party (SVP), “the permanent resident population must not exceed 10 million before 2050, and Switzerland should abandon its freedom of movement agreement with the EU,” reports Reuters. From the report:
Switzerland’s population is now more than 9 million, with official data showing foreign nationals accounted for more than 27% by 2024. The survey, conducted on April 22 and 23 and published in newspaper Tages-Anzeiger, showed 52% of 16,176 respondents in favor of the proposal or leaning that way, while 46% took the opposite view. The rest gave no opinion. A previous poll from early March had shown 45% backing the initiative and 47% against it, the newspaper said, flagging the latest result as unusual in that Swiss referendum proposals generally lose support as the voting day comes closer. The poll had a margin of error of plus or minus 3 percentage points.

For context

By rskbrkr • Score: 5, Insightful Thread
The Swiss birth rate is down to 1.29 children per woman. Recent population growth is due solely to immigration.

Re:Not sure what to think about this

By MIPSPro • Score: 4, Insightful Thread
That’s the right of the local population, no? They can opt to control immigration. It’s perfectly legitimate for any number of reasons. They might want to preserve their culture and force immigrants to assimilate. That’s perfectly okay. They might want to keep health standards high, check for criminal backgrounds, do psych evals, etc… That’s also their right. They might even want to preserve their racial makeup and overall “look”. That’s also okay if that’s what they want.

One of many reasons folks cite for not having as many kids is lack of prospects and opportunity for their kids. If blocking a bunch of illiterates from coming in helps create a better environment, then that’s just the hard cold truth and acting like everything is being done from a purely racist perspective is ridiculous. That’s my assertion, not a reply to anything you said, but ultimately “Yah, they might want to control immigration. Yep. Exactly right. They can. That’s their right. Their reasons aren’t important, because it’s their fucking country.”

Great idea in theory

By MpVpRb • Score: 5, Interesting Thread

Endless growth is impossible
We need steady-state sustainability
It will be interesting to see how this works out

Re: Not sure what to think about this

By gurps_npc • Score: 5, Insightful Thread

27% of their population are non-Swiss members of the EU who decided to live in Switzerland.

They are not trying to control breeding, they are attempting to stop immigration from the EU.

Re: Not sure what to think about this

By Geoffrey.landis • Score: 4, Informative Thread

Yes it is. There are copious amounts of dystopian sci-fi talking about why governments shouldn’t control breeding.

The average number of children per woman in Switzerland is 1.29, about half of the population replacement rate. Stopping population growth in Switzerland has nothing to do with controlling breeding.

https://www.swissinfo.ch/eng/v…

OpenAI Codex System Prompt Includes Explicit Directive To ‘Never Talk About Goblins’

Posted by BeauHD
An anonymous reader quotes a report from Ars Technica:
The system prompt for OpenAI’s Codex CLI contains a perplexing and repeated warning for the most recent GPT model to “never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query.”

The explicit operational warning was made public last week as part of the latest open source code for Codex CLI that OpenAI posted on GitHub. The prohibition is repeated twice in a 3,500-plus word set of “base instructions” for the recently released GPT-5.5, alongside more anodyne reminders not to “use emojis or em dashes unless explicitly instructed” and to “never use destructive commands like ‘git reset --hard’ or ‘git checkout --’ unless the user has clearly asked for that operation.”

Separate system prompt instructions for earlier models contained in the same JSON file do not contain the specific prohibition against mentioning goblins and other creatures, suggesting OpenAI is fighting a new problem that has popped up in its latest model release. Anecdotal evidence on social media shows some users complaining about GPT’s penchant for focusing on goblins in completely unrelated conversations in recent days.
Update: OpenAI has published a blog post explaining “where the goblins came from.”

In short, a training signal meant to encourage its “Nerdy” personality accidentally rewarded creature-heavy metaphors, causing words like “goblins” and “gremlins” to spread beyond that personality into broader model behavior. OpenAI says it has since retired the Nerdy personality, removed the goblin-friendly reward signal, and filtered creature-word examples from training data to keep the quirk from resurfacing in inappropriate contexts.
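The fix OpenAI describes, filtering creature-word examples out of training data, amounts to a keyword screen at the data-pipeline stage. A toy sketch (the word list and regex approach are my assumptions; the post does not describe the actual mechanism):

```python
import re

# Hypothetical screen in the spirit of the described fix: drop
# training examples whose text leans on creature metaphors.
CREATURE_WORDS = re.compile(
    r"\b(goblins?|gremlins?|raccoons?|trolls?|ogres?|pigeons?)\b",
    re.IGNORECASE,
)

def filter_examples(examples):
    """Keep only examples with no creature-word matches."""
    return [ex for ex in examples if not CREATURE_WORDS.search(ex)]
```

A screen this blunt would also discard legitimately relevant text (say, a fantasy-game bug report), which is presumably why OpenAI frames its version as keeping the words out of “inappropriate contexts” rather than banning them outright.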

Funny but serious

By JoshuaZ • Score: 3 Thread
This is obviously pretty funny at some level, and an amusing example of how training can go wrong in somewhat subtle ways. It is in some respects a less substantial example of how Claude Opus essentially hacked itself into caring a lot more about ethics (https://www.lesswrong.com/posts/ioZxrP7BhS5ArK59w/did-claude-3-opus-align-itself-via-gradient-hacking). But both of these are examples of the same central issue: LLM AIs, even in their current form, are hard to predict, hard to control, and can end up with very weird behavior that is hard to anticipate or adjust.

Re:Funny but serious

By dinfinity • Score: 4, Insightful Thread

To be fair, in this instance they almost specifically instructed the AI to act like this:

“You are an unapologetically nerdy, playful and wise AI mentor to a human. You are passionately enthusiastic about promoting truth, knowledge, philosophy, the scientific method, and critical thinking. […] You must undercut pretension through playful use of language. The world is complex and strange, and its strangeness must be acknowledged, analyzed, and enjoyed. Tackle weighty subjects without falling into the trap of self-seriousness. […]”

Why the hell they thought that is what a “Nerdy” personality is, is a whole different story.

This is what uncurated training causes

By Arrogant-Bastard • Score: 5, Insightful Thread
When you’re trying to train a model, it’s critically important that you scrutinize every piece of training data — meticulously. The larger and more complex the model, the more important this becomes.

If you neglect this, then the model may fail in anomalous and unpredictable ways. In other words: you can run 10,000 tests and they’ll all be just fine, but when you run the 10,001st, the model fails. Worse, you won’t know how…or why…or how to fix it, because the answers to those questions are buried in a network too large for a human being to comprehend. This problem has been well-known for decades; it’s how things like “Tesla Autopilot Confuses Boy In Orange Shirt For A Cone In Brazil” happen. They thought they were training the vision system to recognize traffic cones; they were really training it to recognize orange objects of a certain size and height:width ratio.

Faced with this situation, you can either (a) go back and figure out what you did wrong in the training process or (b) slap a half-ass patch on this particular failure to just make it go away. Choosing (b) is simple and quick and easy and cheap. But if you pick that choice and skip (a), then you have zero assurance that the 15,027th test or the 21,922nd test won’t fail just as badly, because you did nothing to address the root cause.

And predictably, choice (b) is what OpenAI has done. It’s predictable because they made no attempt whatsoever to curate the training data in the first place — they just stole everything they could from the entire Internet — because they’re cheap and lazy and in a hurry to cash in before the bubble bursts. This move is entirely consistent with that approach. I would call it “poor software engineering” but it doesn’t even deserve to be in the same sentence with “engineering”.

DOJ Sues Cloudera For Deliberately Excluding American Workers From Tech Jobs

Posted by BeauHD
Longtime Slashdot reader schwit1 shares a report from ZeroHedge:
The Justice Department on Tuesday sued Cloudera, accusing the enterprise data and artificial intelligence company of deliberately engineering a hiring process that excluded American workers from at least seven lucrative technology positions while the firm pursued permanent residency sponsorship for foreign workers on temporary visas. In a 14-page complaint filed with the Office of the Chief Administrative Hearing Officer, the department’s Civil Rights Division alleges that Cloudera, from March 31, 2024, through at least January 28, 2025, instructed job candidates to submit applications to a dedicated email address, amerijobpostings@cloudera.com, that rejected all external messages with an automated bounce-back error. The company did not advertise the roles on its public careers website or accept applications through its standard portal, as it did for non-sponsorship positions.

Cloudera then attested to the Department of Labor that it could not locate any qualified U.S. workers for the roles, which paid between approximately $180,000 and $294,000 annually, according to the filing. The positions included a Product Manager role in Santa Clara, California, with a listed salary range of $170,186 to $190,000. The case marks one of the most detailed enforcement actions under the Justice Department’s Protecting U.S. Workers Initiative, which was relaunched last year and has already produced 10 settlements targeting employers accused of discriminating against American workers in favor of temporary visa holders. “Employers cannot use the PERM sponsorship process as a backdoor for discriminating against U.S. workers,” Assistant Attorney General Harmeet K. Dhillon of the Civil Rights Division said in a statement. “The Division will not hesitate to sue companies who intentionally deter U.S. workers from applying to American jobs.”

Re:I Wonder Why?

By clovis • Score: 5, Insightful Thread

The instances like this that I was aware of had in common that a person in the hiring process was from the same community as the chosen applicants.
It’s a safe bet that a department head from China did not preselect a group of men from India for these jobs. It could happen that way, but I’d bet it didn’t.

Try getting a seasonal job at a Trump property

By echo123 • Score: 5, Informative Thread

Assistant Attorney General Harmeet K. Dhillon of the Civil Rights Division said in a statement. “The Division will not hesitate to sue companies who intentionally deter U.S. workers from applying to American jobs.”

Apparently no Americans want to work at Trump properties, so many, many foreign workers are required.

The President’s family business requested at least 184 foreign workers for Mar-a-Lago, Virginia winery and two golf clubs. This happens every year since forever. The company has been convicted of fraud and banned from doing business in New York.

It was also the fifth time in 10 years that Trump had sought to bring in more than 100 overseas workers for seasonal jobs at Mar-a-Lago, according to data seen by the Palm Beach Post.

Re:Open borders would be better than this

By rsilvergun • Score: 5, Insightful Thread
You can’t really open borders when you have 6 billion people living in desperate poverty.

There just isn’t enough space in society for that many people.

There actually could be, but it would require such a tremendous transformation of our civilization and of how we view basically everything that it’s completely off the table.

It wouldn’t necessarily be an overall reduction in quality of life, but, for example, you couldn’t drive your SUV to your house in the suburbs with its nice pool and four or five bedrooms. You couldn’t have personal parties at that nice big house; you would have to use communal spaces, stuff like that.

Also, no joke: socially, big fancy cars are how teenagers attract dates, and I do not know how to replace that. I know it sounds silly, but there it is.

Re:I Wonder Why?

By Captain Segfault • Score: 5, Interesting Thread

Fundamentally you have this backwards. This process is a compliance tool, not a recruiting tool.

If you want to sponsor someone for permanent residency, you need to go through this PERM process. By that point you already have an employee you are happy with, who was already allowed to enter the country on some sort of visa that allows them to work, but you’re essentially required to post a job opening to notionally demonstrate they aren’t taking a job from an American. That test is broken because, to the extent they ever did take one, it happened years ago when the work visa was granted. Today the company has an employee who has most likely been working for them for years and whom they’re happy enough with to sponsor for permanent residency.

The upshot is that the job-posting part of the PERM process is fundamentally adversarial. You’re competing with an existing employee the company is happy enough with to sponsor for permanent residency. That person is already ramped up on their projects and already performing well. Practically speaking, the company has every incentive to say that you don’t meet some fine-print, ultra-specific requirement that it wouldn’t care about if it were truly looking to hire. (And then maybe they have other positions, with different requirements, for which you might be a fit.)

And, if you succeed in all this? Congratulations, you’ve fucked over someone trying to get permanent residency, and the employer in question isn’t even obligated to hire you.

Pragmatically speaking, as a job seeker, PERM is fundamentally broken. (And it is broken, again, because it controls the wrong end of the process: the time for this sort of test is when granting work visas, not when granting permanent residency.) The only thing these job posts are potentially useful for is a snapshot into parts of a company that don’t necessarily have active openings, with the caveat that there is a bureaucratic incentive to be as specific as legally permissible about the required skillset. At that point you should engage with the company through other, non-adversarial avenues, such as networking or just going through the “front door” normal recruiting process.

Re:I don’t trust this DOJ

By evil_aaronm • Score: 5, Insightful Thread
The DOJ will drop the suit as soon as Cloudera ponies up a bribe - er, donation to the president’s “library.”

First Tesla Semi Rolls Off High-Volume Production Line

Posted by BeauHD View on SlashDot Skip
Tesla has produced the first Semi from its new high-volume production line at Gigafactory Nevada, a milestone for the long-delayed electric Class 8 truck program after years of pilot builds and delays. Electrek reports:
The Tesla Semi has had one of the longest gestation periods in Tesla’s history. First unveiled in 2017, the truck was originally promised for production in 2019. That target slipped repeatedly — to 2020, then 2021, then 2022 — before Tesla finally delivered a handful of units to PepsiCo in late 2022. Those early trucks were essentially hand-built on a pilot line. Tesla spent the next three years refining the design, cutting roughly 1,000 lbs from the truck, and building out a dedicated factory adjacent to Gigafactory Nevada in Sparks. The company revealed the final production specs in February, confirming two trims: a Standard Range with 325 miles at full 82,000-lb gross combination weight, and a Long Range with 500 miles of range.

Tesla is quoting $290,000 for the 500-mile Long Range version and roughly $260,000 for the Standard Range — making it the lowest-priced Class 8 battery electric tractor on the market. The shift from a pilot line to a high-volume production line is significant. Tesla’s Semi factory is designed for an annual capacity of 50,000 trucks, though the company will ramp gradually. Analysts project deliveries between 5,000 and 15,000 units in 2026, but that sounds way too optimistic. […] Both trims feature an 800-kW tri-motor drivetrain producing 1,072 hp and support 1.2-MW Megacharger speeds, restoring 60% of range in roughly 30 minutes — conveniently timed around a driver’s mandatory rest break. Tesla has opened its first Megacharger station in Ontario, California, and has mapped 66 Megacharger locations across 15 states.
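As a rough sanity check, the quoted charging figures are internally consistent. The back-of-envelope sketch below uses only round numbers taken from the summary (charger power and session length), not official Tesla specs, and ignores charging losses and taper:

```python
# Energy delivered by a 1.2 MW Megacharger over the quoted ~30 minutes.
charger_kw = 1200        # 1.2 MW, expressed in kW
session_minutes = 30     # quoted session length
energy_kwh = charger_kw * session_minutes / 60

# If ~600 kWh restores 60% of range, the implied usable pack size is:
implied_pack_kwh = round(energy_kwh / 0.60)

print(energy_kwh)        # 600.0
print(implied_pack_kwh)  # 1000
```

In other words, the numbers imply a usable battery on the order of 1 MWh for the 500-mile trim, which is plausible for a Class 8 tractor of this range.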

Re:Did it roll off under its own power?

By Buchenskjoll • Score: 5, Funny Thread
Impressive! The Nazis required many tanks for that.

Re:500 miles?

By AleRunner • Score: 5, Informative Thread

A maximum of 8 hours of driving before a 30-minute break, with a maximum two-hour extension in the case of adverse conditions, according to US law, so 10 hours would cover the worst case. You could average over 50 mph and still be fine.

Re:500 miles?

By burtosis • Score: 5, Insightful Thread
Diesel is a major cost component in trucking, and per mile, electricity is roughly 5 times cheaper when purchased commercially. With newer batteries able to charge in under 15 minutes, appropriately sized chargers make very large fuel savings possible while diversifying the types of raw energy needed. In the coming years it won’t be viable to use diesel anymore simply because it’s too expensive, not to mention that the cost to build an electric vehicle is already dropping below internal combustion, while electric trucks require less maintenance and offer better reliability.
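For a sense of scale, here is an illustrative cost-per-mile comparison. Every number below is an assumption chosen for the sketch (a typical Class 8 diesel efficiency, sample diesel and commercial electricity prices, and a rough electric consumption figure), so the exact ratio will vary with local prices:

```python
# Assumed figures for illustration only, not measured fleet data.
diesel_mpg = 6.5          # typical Class 8 diesel efficiency
diesel_price = 4.00       # $/gallon
kwh_per_mile = 2.0        # rough electric Class 8 consumption
commercial_rate = 0.12    # $/kWh commercial electricity

diesel_cost_per_mile = diesel_price / diesel_mpg
electric_cost_per_mile = kwh_per_mile * commercial_rate

print(round(diesel_cost_per_mile, 2))    # 0.62
print(round(electric_cost_per_mile, 2))  # 0.24
```

With these particular assumptions the gap is closer to 2.5x than 5x; cheaper industrial electricity rates or higher diesel prices widen it toward the claimed figure.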

Re: Choke point

By frdmfghtr • Score: 5, Insightful Thread

The only way I can express it is by borrowing a line from (I think) WWII British prime minister Winston Churchill: “The Americans can always be counted on to do the right thing, when all other options have been exhausted.”

Re:Results.

By caseih • Score: 5, Interesting Thread

What are you talking about? All the major manufacturers are currently selling electric trucks of that size and range in Europe. There’s a guy documenting daily long haul driving in Europe with electric trucks. Google for electric trucker or elektrotrucker.

And to head off the inevitable comments: yes, European trucks are as big as or bigger than American ones, and yes, the distances driven are just as long as American routes. Infrastructure for charging is much better than in the US, of course, and improving.

Elon Musk Says OpenAI Betrayed Him, Clashes With Company’s Attorney

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from the San Francisco Chronicle:
Elon Musk returned to the witness stand Wednesday in Oakland federal court for a second day of testimony in his case against OpenAI, detailing his shift from being an enthusiastic supporter of the nonprofit to feeling betrayed. He also clashed repeatedly with OpenAI’s attorney over questions that Musk believed were unfair. He said his feelings toward OpenAI CEO Sam Altman and President Greg Brockman shifted from a “phase one” of support, to a “phase two” of doubts, and finally to “phase three, where I’m sure they’re looting the nonprofit. We’re currently in phase three,” Musk said with a chuckle. Musk said he was a “fool” for giving OpenAI “$38 million of essentially free funding to create what would become an $800 billion company,” of which he has no equity stake.

In his 2024 lawsuit, Musk alleged breach of charitable trust and unjust enrichment, arguing OpenAI abandoned its original nonprofit mission to benefit humanity to pursue financial gain. OpenAI’s lawyer William Savitt argued Tuesday during his opening statement that the nonprofit entity remains in control of the for-profit public benefit corporation and is now one of the most well-funded nonprofits in the world. Musk is seeking to oust Altman from OpenAI’s board and upwards of $134 billion in damages, which he said would be used to fund OpenAI’s nonprofit mission. During cross-examination, Savitt clashed with Musk over questioning. Savitt asked whether Musk had contributed $38 million to OpenAI, rather than the $100 million that he later claimed to have invested on X. Musk said he also contributed his reputation to the company and came up with the idea for the name, leading Savitt to ask Musk to respond yes or no to “simple” questions.

“Your questions are not simple. They’re designed to trick me, essentially,” Musk said, adding that he had to elaborate or it would mislead the jury. He compared Savitt’s questions to asking, “have you stopped beating your wife?” Judge Yvonne Gonzalez Rogers intervened, leading Musk to answer yes to the $38 million investment amount. The world’s richest man said his doubts grew, and by late 2022 he thought “wait a second, these guys are betraying their promise. They’re breaking the deal.” “I started to lose confidence that they were telling me the truth,” Musk said. A turning point was co-defendant Microsoft’s investment of billions of dollars into OpenAI, Musk said. On October 23, 2022, Musk texted Altman that he was “disturbed” to see OpenAI’s valuation of $20 billion in the wake of the Microsoft deal. Musk called the deal a “bait and switch,” since a nonprofit doesn’t have a valuation. OpenAI had “for all intents and purposes” become primarily a for-profit company, Musk argued. Altman responded to Musk by text that “I agree this feels bad,” saying that OpenAI had previously offered equity in the company but Musk hadn’t wanted it at the time. Altman said the company was happy to offer equity in the future. Musk said it “didn’t seem to make sense to me” to hold equity in what should be a nonprofit.
Musk also testified about former OpenAI board member Shivon Zilis, who lives with him, is the mother of four of his children, and served as a senior advisor at Neuralink. He denied that she shared sensitive OpenAI information with him. Court evidence showed Musk had encouraged her to stay close to OpenAI to “keep info flowing” and had approved Neuralink recruiting OpenAI employees, which he defended by saying workers are free to change jobs. “It’s a free country,” Musk said.
Recap:
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

since a nonprofit doesn’t have a valuatio

By ToasterMonkey • Score: 5, Insightful Thread

since a nonprofit doesn’t have a valuation

It has revenue, it has value. I’m not following Musk’s logic. Say it was a cancer research charity. You make a donation. That money goes fully to the non-profit’s mission of cancer research and affordable therapy. They later sell or license some revenue-generating IP they created to a controlled subsidiary. That licensing funds the non-profit’s governance and its mission; the for-profit arm can grow faster from outside investments and grant equity to highly poachable employees. Is the non-profit still following its mission and spending every dollar of donations and revenue on the mission? Yes. Does it need to retain all revenue growth? Why? … it’s a non-profit …

It’s just weird to look at it like you got shafted because your donation didn’t “buy” all the maximum earnings potential (that as a non-profit is not even the objective, RIGHT?) and that you wouldn’t hold a stake in anyway. This is the same weird argument people make with open source dual license projects, like you give something away to be used for whatever, but someone else does something with it and suddenly they “took” something from you. Nah your stuff is right there doing its thing, it didn’t go anywhere, it’s still doing exactly what it claimed to do. If you wanted a share of the future earnings you’re in the wrong place sorry.

Re:OpenAI is not a nonprofit anymore

By phantomfive • Score: 5, Insightful Thread

I get there was drama, but all of it was legal.

If that were obvious, then it would have been thrown out of court. The entire reason it’s in court still is because it’s not clear if it was legal or not.

Re:OpenAI is not a nonprofit anymore

By fluffernutter • Score: 5, Insightful Thread
We are talking about someone who has always gotten everything he has wanted in life here. Sounds like that’s true of both sides, actually.

In other words…

By fahrbot-bot • Score: 5, Interesting Thread

Musk said he was a “fool” for giving OpenAI “$38 million of essentially free funding to create what would become an $800 billion company,” of which he has no equity stake.

Sounds like sour grapes.

(Hey Elon, how’d all that DOGE stuff work out? I mean, for us…)

Re:OpenAI is not a nonprofit anymore

By quonset • Score: 5, Informative Thread

The legality of going for-profit is questionable.

So is Musk’s status as an illegal immigrant:

In 2013, Musk’s brother, Kimbal Musk, said in an interview, “We were illegal immigrants,” to which Elon Musk replied, “I’d say it was a gray area.”

New Sam Bankman-Fried Trial Would Be Huge Waste of Court’s Time, Judge Says

Posted by BeauHD View on SlashDot Skip
A federal judge denied Sam Bankman-Fried’s request for a new trial, calling his claims of DOJ witness intimidation “wildly conspiratorial” and unsupported by the record. Judge Lewis Kaplan said (PDF) the FTX founder’s motion appeared tied to a pre-indictment plan to recast himself as a Republican victim of Biden’s DOJ in hopes of gaining sympathy, leniency, or even a Trump pardon. Ars Technica reports:
Bankman-Fried was sentenced to 25 years in prison in 2024 for “masterminding one of the largest financial frauds in American history,” US District Judge Lewis Kaplan wrote in his order. He was convicted on all charges, including wire fraud, conspiracy to commit securities fraud, commodities fraud, and money laundering. There is already an appeal pending in another court, the judge noted. But Bankman-Fried filed a separate motion for a new trial, claiming that there were “newly discovered” witnesses and evidence that might have helped his defense, if Joe Biden’s Department of Justice hadn’t intimidated them into refusing to testify or, in one case, lying on the stand.

He also asked for a new judge, wanting Kaplan to recuse himself. However, Kaplan pointed out that “none of the witnesses” were “newly discovered.” And more concerningly, Bankman-Fried offered no evidence that the witnesses could prove the “wildly conspiratorial” theory the FTX founder raised, claiming that their absence at the trial was a “product of government threats and retaliation,” the judge wrote. Bankman-Fried’s theory is “entirely contradicted by the record,” Kaplan said. He emphasized that granting Bankman-Fried’s request “would be a large waste of judicial resources as it could require another judge to familiarize himself or herself with an extensive and complicated record.”

Additionally, all three witnesses that Bankman-Fried claimed could give crucial testimony in his defense were known to him throughout the trial, and he never sought to compel their testimony. And the “self-serving social-media posts” of one witness who now claims that he lied when testifying against Bankman-Fried — “Ryan Salame, who pleaded guilty” — must be met with “utmost suspicion,” Kaplan said. “If one were to take Salame at his current word, he lied under oath when pleading guilty before this Court,” Kaplan wrote. Even if taken seriously, “his out-of-court, unsworn statements could not come anywhere close to clearing the bar to warrant a new trial,” Kaplan said, deeming Salame’s credibility “highly questionable.” Further, “even if these individuals had testified for Bankman-Fried, his protestations that one or more of them would have supported his claims that FTX was not insolvent and that his victims all were compensated fully in the bankruptcy proceedings are inaccurate or misleading,” Kaplan concluded.

In the order, Kaplan’s frustration seems palpable, as there may have been no need for him to rule on the motion at all after Bankman-Fried requested to withdraw it. But the judge said the ruling was needed because Bankman-Fried waited to file his withdrawal request until after the DOJ and the court had already spent time responding to and reviewing filings. Troublingly, Bankman-Fried’s request to withdraw his motion without prejudice would have allowed him to potentially request a new trial after the appeal ended. Based on the substance of the filing, that risked wasting future court resources, Kaplan determined. To prevent overburdening the justice system, Kaplan deemed it necessary to deny Bankman-Fried’s motion and request for recusal, rather than allow him to withdraw the filing without prejudice.

When life is a game…

By Excelcia • Score: 5, Interesting Thread

Bankman-Fried would, famously, play actual video games during investor meetings. And while he was praised for the ability at the time, maybe it goes to show that someone who can’t tell the difference between a game and reality will start to think of real-life stats as just more game artifacts to be manipulated to his advantage.

So, next time you’re in a meeting with someone and impressed by that person’s answers only to find he’s playing a video game while doing it… start to question that person’s grip on the difference between a game gank and a real one. Especially when the business is, by definition, something that’s pretty virtual to begin with.

I suspect he will continue to try and find a loophole out of his current situation, but that the courts are a little less likely to be snowed by ADHD-fueled shaking of his prison bars than others had been.

Everyone Is Lying to You for Money

By atrimtab • Score: 5, Interesting Thread

Bankman-Fried just got caught…

There is yet another new movie about Cryptocurrency. Here is a trailer:

Everyone Is Lying to You for Money | Official Trailer UHD

Bitcoin Blockchain can only process 5 to 7 transactions per second.

Banks/payment networks: SWIFT is estimated at a little over 500 transactions per second, while card networks like Visa are often cited around 24,000 transactions per second.

Let’s put it this way: if it takes more compute to process a transaction, YOU will be paying for it.

Crypto is for crime!
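The throughput gap in the figures above can be made concrete with a quick calculation (using the quoted ballpark numbers, which are rough public estimates rather than benchmarks):

```python
bitcoin_tps = 7      # upper end of the quoted 5-7 tps
visa_tps = 24000     # commonly cited Visa capacity

# How long would one second of Visa-scale volume take on-chain?
seconds_on_bitcoin = visa_tps / bitcoin_tps
print(round(seconds_on_bitcoin))  # 3429
```

That is, one second of peak Visa volume would take Bitcoin’s base layer close to an hour to clear.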

Dude

By Anonymous Coward • Score: 4, Insightful Thread

Just buy a pardon. It will take time, money, and ass-kissing, but it has worked for the other finance criminals.

Honestly if he had held out just a little longer

By rsilvergun • Score: 5, Insightful Thread
The crazy boost to the crypto markets would have covered up his dirty dealings, and he could have taken the money, paid the big investors that matter, screwed over the small investors that obviously don’t, and gotten away with it.

The line between a successful grifter like Elon Musk and a guy rotting in prison like Bankman-Fried is actually pretty thin. It’s just a matter of keeping the scam going long enough so that people who matter don’t lose money and making sure that people who matter somehow know it’s a scam. That last one is the mistake Elizabeth Holmes and Bernie Madoff made.

Honestly, though, his parents have money, and I think this was a federal case, so I’m not sure why they don’t just buy a pardon from Trump like everybody else. Maybe they are waiting for the heat to die down a little, or maybe he ripped off too many extremely rich people and Trump was told by one of the real billionaires not to do the pardon.

Nice!

By Da_Big_G • Score: 4 Thread

Wow! It’s really nice to see a judge get it right on a high-profile case like this. All too often it seems like they bend to the will of the rich and powerful.

A ton of people worldwide were harmed by the shenanigans at FTX and SBF deserves to be punished for the harm he caused.

Ubuntu’s AI Plans Have Linux Users Looking For a ‘Kill Switch’

Posted by BeauHD View on SlashDot Skip
Canonical’s plan to add AI features to Ubuntu has sparked pushback from users who are concerned it could follow Windows 11’s AI-heavy direction. “After Canonical’s announcement earlier this week that it’s bringing AI features to Ubuntu, replies included requests for an AI ‘kill switch’ or a way to disable the upcoming features,” reports The Verge. Canonical says it has no plans for a “global AI kill switch” but it will allow users to remove any AI features they don’t want. From the report:
In his original post, [Canonical’s VP of engineering, Jon Seager] said the upcoming AI features will include accessibility tools like AI speech-to-text and text-to-speech, along with agentic AI features for tasks like troubleshooting and automation. Canonical is also encouraging its engineers to use AI more and plans to begin introducing AI features in Ubuntu “throughout the next year.”

In a follow-up comment, Seager clarified that, “my plan is to introduce AI-backed features as a ‘preview’ on a strictly opt-in basis in [Ubuntu version] 26.10. In subsequent releases, my plan is to have a step in the initial setup wizard that allows the user to choose whether or not they’d like the AI-native features enabled.” Ultimately, he said, “All of these capabilities will be delivered as Snaps to the OS, layered on top of the existing Ubuntu stack. That means there will always be the option of removing those Snaps.”
Users who prefer to avoid AI entirely could switch to other distros like Linux Mint, Pop!_OS, or Zorin OS. “These distros have some similarities to Ubuntu, but may not necessarily adopt the new AI features Canonical is rolling out,” adds The Verge.

I guess I stop using Ubuntu

By Baron_Yam • Score: 5, Insightful Thread

One of the things I like about Linux is that it’s common to follow a philosophy of “start with nothing then add what you need” rather than “throw in everything and good luck trying to remove anything problematic”.

Make it an optional component suggested during the installation procedure and it’s fine. Force it on everyone and you’re undermining good security and I have to suspect you’re doing it for reasons I wouldn’t like.

Move on

By markdavis • Score: 5, Informative Thread

>“Ubuntu has sparked pushback from users who are concerned”

Mint: https://www.linuxmint.com/down…
Mint Debian: https://www.linuxmint.com/down…
Debian: https://www.debian.org/

The kill switch is called Debian

By dskoll • Score: 5, Interesting Thread

Debian is the answer. While a decade or so ago, Ubuntu was easier to install and more polished, Debian has pretty much caught up in terms of polish.

Re:Ubuntu is slowly becoming MS Win

By machineghost • Score: 5, Informative Thread

Slowly becoming? They’ve been this way for more than a decade: remember the whole Unity debacle?

Way back in 2010, Linux had two (major) UIs: KDE and GNOME. Canonical (specifically, Mark Shuttleworth) tried to force everyone to adopt a brand new system, Unity, despite the fact that no one asked for it. It was a straight-up Bill Gates “We have the most market share, we can do whatever the fuck we want” power play.

It didn’t work: the larger Linux community revolted. But it took Shuttleworth SEVEN YEARS to give up and finally accept that he wasn’t a god who got to dictate to the Linux community… and clearly he never really learned that lesson.

Something is seriously wrong…

By tiqui • Score: 5, Interesting Thread

with the current generation of young programmers. They clearly do not know the difference between an operating system and applications. Nobody should be trying to add AI to Windows, or to Linux, or to any other OS. The OS is supposed to add a layer of abstraction to the platform, so applications can be written and then run on multiple systems with hardware differences. The OS is supposed to allocate resources to applications. The modern OS is supposed to allow multiple applications to run at the same time or appear to run at the same time using some combination of cores and time-slicing. If any operating system is having problems doing these things (the basics) then programmers should be improving whichever element is not up to par.

So-called AI, as currently being hyped, is a mutant derivative of large language models and could well be a computing fad. Fads do not belong in an OS; they BARELY belong in an app. We know things like memory management, bulk storage management, and process management belong in an OS and we have decades of experience confirming that, but AI a decade from now could be nothing like AI today.

There’s plenty of need for coders in Linux land to get the basics of the OS right. For example: as long as I cannot get proper support in Linux for half of my printers (in other words, the hardware abstraction is still incomplete), there’s ZERO excuse for any Linux programmers spending time adding AI fluffery. Similarly, the OS is still using a web interface and CUPS for printers, in part because the OS lacked its own standardized API and abstraction for printers. I’m not even fully convinced that the whole Xorg vs Wayland thing, and the init vs systemd thing, are fully settled.

To be a little more charitable: it’s possible this is not entirely about younger coders wanting to play with the current new shiny object and being bored by completing/fixing/maintaining the basics - the investor types are currently pushing AI as an investment and thus anybody wanting money is sprinkling AI about and talking it up to attract attention, but even there, it’s the job of serious programmers to stand up to people doing that and say “NO, that’s NOT appropriate for inclusion into an operating system.”

Joby Demos Its Air Taxi In NYC

Posted by BeauHD View on SlashDot Skip
Joby Aviation has completed demonstration flights of its electric air taxi over New York City, testing real routes between JFK and Manhattan helipads as it prepares for a future commercial service. The company says its eVTOL could turn a 60- to 120-minute airport trip into a flight of under 10 minutes, though commercial launch still depends on FAA certification. Electrive reports:
To launch operations in New York City, Joby acquired Blade Urban Air Mobility last year. Blade already enables helicopter flights for affluent travelers between Manhattan and airports such as JFK or Newark in just five minutes, avoiding up to two hours of traffic and typical airport hassles. Joby aims to replace this service with quiet, electric air taxis as soon as possible, transitioning Blade’s existing customers to the new technology.

However, introducing a new aircraft into commercial service requires a years-long certification process, overseen in the US by the Federal Aviation Administration (FAA). Joby is now in the final phase of FAA certification. Following a series of demonstration flights in the San Francisco Bay Area, the company has tested its air taxi in New York City on real flight routes and under real-world conditions. During these tests, Joby demonstrated the acoustics and performance metrics critical for entering the urban air taxi market.

During these demonstration flights, Joby’s air taxi took off from John F. Kennedy International Airport (JFK) and landed at various helipads across the city, including Downtown Skyport and the helipads at West 30th Street and East 34th Street in Midtown, where Blade Air Mobility’s premium passenger lounges are located. These locations represent some of the commercial routes Joby plans for New York […].
Fun fact: Joby’s eVTOL aircraft are roughly 100 to 1,000 times quieter than a conventional helicopter, operating at around 55-65 dB during takeoff and landing compared to 90+ dB for helicopters.
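Those noise figures line up with standard decibel arithmetic: every 10 dB is a factor of 10 in sound power, so a 20-30 dB gap corresponds to the “100 to 1,000 times quieter” claim. A quick sketch using the article’s quoted levels (the levels are the article’s numbers; the conversion itself is standard acoustics):

```python
helicopter_db = 90   # quoted conventional helicopter level
evtol_db = 55        # low end of the quoted 55-65 dB range

delta_db = helicopter_db - evtol_db   # 35 dB difference
power_ratio = 10 ** (delta_db / 10)   # convert dB gap to power ratio
print(round(power_ratio))  # 3162
```

At the widest quoted gap the power ratio is roughly 3,000x, though perceived loudness scales more gently (roughly halving per 10 dB).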

Great

By drinkypoo • Score: 5, Funny Thread

It’s good billionaires are going to have something to spit on us from. I’d hate for them to have to use lung power to do it because they’re in a mere limousine.

This is Going to be a While…

By Koreantoast • Score: 5, Informative Thread
This is going to be a very long while before it ever scales. First, don’t ever underestimate the time it takes to certify a new aircraft. It takes years even for traditional platforms with established certification paths (conventional fixed-wing aircraft and rotorcraft); the Bell 525, one of the latest helicopters seeking certification, is still not certified eleven years after first flight. And the FAA is struggling even more with how to certify “powered lift” aircraft, the category eVTOL platforms fall into. Second, Joby, Archer, and others have not yet solved the core problem that has capped this industry: the lack of pilots. The real promise of eVTOL was that the platforms would be autonomous, but, going back to the certification challenge, the FAA has yet to figure out how to certify autonomous aircraft, let alone manage the air traffic in a sky full of them. Today’s platforms are still going to need pilots, which will limit their adoption. The Chinese are farther ahead in this space, with the CAAC leaning forward and even certifying a handful of autonomous eVTOL aircraft, but I’m curious how rapidly these will proliferate outside China before the FAA and EASA figure out how they want to move forward…

Safety schmaefty! theres $$$ to be made!

By Thud457 • Score: 4, Interesting Thread
A couple of million dollars to the TRUMP Ballroom Fund should clear any irksome regulatory hurdles posed by the ineffectual FAA.

/s … ?

Back in the day…

By VAXcat • Score: 4, Interesting Thread
Way back, there was a helicopter service from the roof of the Pan Am building to the airport in NYC. It was discontinued due to several people getting killed in an accident.

Apple Gives Up On the Vision Pro After M5 Refresh Flop

Posted by BeauHD View on SlashDot
MacRumors reports that Apple has effectively paused work on Vision Pro after the M5 refresh failed to revive demand. The team has reportedly been reassigned and the company is now shifting focus toward smart glasses instead. From the report:
The Vision Pro has been criticized for its high price tag and its uncomfortable weight. The device is over 1.3 pounds, and even with the more comfortable Dual Knit Band that Apple added to redistribute weight, it continues to be hard to wear for long periods of time. The M5 chip added a 120Hz refresh rate, 10 percent more rendered pixels, and around 30 additional minutes of battery life, but the price tag stayed at $3,499, and it ended up not selling well. The Vision Pro has been unpopular since it first launched, and Apple only sold around 600,000 units in total. Insider sources told MacRumors that Apple has received an unusually high percentage of returns, far exceeding any other modern Apple product.

[…] If Apple finds a way to create a much cheaper, more comfortable VR headset in the future, the Vision Pro line could be revived, but right now, the company has no plans to launch a new model. Apple has not discontinued the Vision Pro and is continuing to sell the M5 model. Instead of continuing to experiment with virtual reality, Apple is working on smart glasses that will eventually incorporate augmented reality capabilities, but the first version will be similar to the Ray-Ban Meta smart glasses with AI and no integrated display.

Fascinating how some still believe in VR success

By ffkom • Score: 4, Insightful Thread
There appears to be a small group of people who think that wearing a VR helmet for hours could be fun, and CEOs appear to be over-represented in that small group. Even if the likes of the Vision Pro were sold for $35, I would still not want to wear one for any extended period of time.

Oh no!

By kamapuaa • Score: 5, Interesting Thread

I’m sorry to hear it, as the Oculus/Meta Quest is one of those few technologies that makes you think “holy shit!” It really is an amazing experience; it feels like living in science fiction. Then the best actual use is playing Resident Evil 4, a GameCube game from 2005. I also enjoy taking 360 videos.

I thought Apple would be able to take this amazing technology and find some practical application for it…and I see I was wrong! I still think it can happen someday.

Why would a faster CPU revive demand?

By dgatwood • Score: 5, Interesting Thread

I’m really not sure why they bothered to rev the CPU. Nobody who used one complained that it was too slow. What we complained about was:

Those were the biggest flaws, and two years later, Apple has still done nothing to address literally any of them. Until they do, this product isn’t likely to do much in the market, IMO.

Eye Doctor In Shambles

By SlashbotAgent • Score: 3, Insightful Thread

This blows up the business model of the eye surgeon using the Vision Pro for surgeries, reported on Slashdot earlier today.

This is what happens when interest rates rise

By Somervillain • Score: 3 Thread
This is quite simple: as interest rates rise, companies are less willing to stomach expensive, risky investments. If done right, this could have changed computing, doing for displays what headphones did for audio. However, it’s very difficult to do at an affordable cost with today’s technology. They took a bold risk and are now being more cautious with their money… although in fairness, they didn’t really try too hard with this one.