Alterslash

the unofficial Slashdot digest
 

Contents

  1. IBM Shares Crater 13% After Anthropic Says Claude Code Can Tackle COBOL Modernization
  2. Linus Torvalds: Someone ‘More Competent Who Isn’t Afraid of Numbers Past the Teens’ Will Take Over Linux One Day
  3. ‘How Many AIs Does It Take To Read a PDF?’
  4. Anthropic Accuses Chinese Companies of Siphoning Data From Claude
  5. Say Goodbye to the Undersea Cable That Made the Global Internet Possible
  6. PayPal Attracts Takeover Interest After Stock Slump
  7. Climate Physicists Face the Ghosts in Their Machines: Clouds
  8. Stressful People in Your Life Could Be Adding Months To Your Biological Age
  9. Sam Altman Would Like To Remind You That Humans Use a Lot of Energy, Too
  10. Goldman Sachs, Morgan Stanley Calculate AI’s Contribution To U.S. Growth May Be Basically Zero
  11. Is AI Impacting Which Programming Language Projects Use?
  12. Rule-Breaking Black Hole Growing At 13x the Cosmic ‘Speed Limit’ Challenges Theories
  13. Should Job-Seekers Stop Using AI to Write Their Resumes?
  14. Raspberry Pi Stock Rises Over Its Possible Use With OpenClaw’s AI Agents
  15. Telegram Disputes Russia’s Claim Its Encryption Was Compromised

Alterslash picks up to five of the best comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.

IBM Shares Crater 13% After Anthropic Says Claude Code Can Tackle COBOL Modernization

Posted by msmash
IBM shares plunged nearly 13% on Monday after Anthropic published a blog post arguing that its Claude Code tool could automate much of the complex analysis work involved in modernizing COBOL, the decades-old programming language that still underpins an estimated 95% of ATM transactions in the United States and runs on the kind of mainframe systems IBM has sold for generations.

Anthropic said the shrinking pool of developers who understand COBOL had long made modernization cost-prohibitive, and that AI could now flip that equation by mapping dependencies and documenting workflows across thousands of lines of legacy code. The sell-off deepened a rough 2026 for IBM, whose shares are now down more than 22% year to date.

Linus Torvalds: Someone ‘More Competent Who Isn’t Afraid of Numbers Past the Teens’ Will Take Over Linux One Day

Posted by msmash
Linus Torvalds has pondered his professional mortality in a self-deprecating post to mark the release of the first release candidate for version 7.0 of the Linux kernel. From a report:
“You all know the drill by now: two weeks have passed, and the kernel merge window is closed,” he wrote in the post announcing Linux 7.0 rc1. “We have a new major number purely because I’m easily confused and not good with big numbers.” Torvalds pointed out that the numbers he applies to new kernel releases are essentially meaningless.

“We haven’t done releases based on features (or on “stable vs unstable”) for a long, long time now. So that new major number does *not* mean that we have some big new exciting feature, or that we’re somehow leaving old interfaces behind. It’s the usual “solid progress” marker, nothing more.”

He then reiterated his plan to end each series of kernels at x.19, before the next release becomes y.0 — a process that takes about 3.5 years — and then pondered what happens when the next version of Linux reaches a number he finds uncomfortable. “I don’t have a solid plan for when the major number itself gets big,” he admitted, “by that time, I expect that we’ll have somebody more competent in charge who isn’t afraid of numbers past the teens. So I’m not going to worry about it.”

‘How Many AIs Does It Take To Read a PDF?’

Posted by msmash
Despite AI’s progress in building complex software, the ubiquitous PDF remains something of a grand challenge — a format Adobe developed in the early 1990s to preserve the precise visual appearance of documents. PDFs consist of character codes, coordinates, and rendering instructions rather than logically ordered text, and even state-of-the-art models asked to extract information from them will summarize instead, confuse footnotes with body text, or outright hallucinate contents, The Verge writes.

Companies like Reducto are now tackling the problem by segmenting pages into components — headers, tables, charts — before routing each to specialized parsing models, an approach borrowed from computer vision techniques used in self-driving vehicles. Researchers at Hugging Face recently found roughly 1.3 billion PDFs sitting in Common Crawl alone, and the Allen Institute for AI has noted that PDFs could provide trillions of novel, high-quality training tokens from government reports, textbooks, and academic papers — the kind of data AI developers are increasingly desperate for.
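The gap between a PDF’s drawing instructions and logical reading order is easy to see in miniature. The toy sketch below is a deliberately simplified, hypothetical content stream loosely modeled on PDF’s Td/Tj text operators (real PDFs use relative positioning and binary streams); it shows why naive extraction fails and why coordinate-aware reassembly is needed:

```python
import re

# A toy "content stream" (hypothetical, simplified): each "x y Td (text) Tj"
# places a string at page coordinates. The operators appear OUT of reading
# order, which is perfectly legal in a real PDF.
stream = """
BT
72 100 Td (Footnote: see appendix.) Tj
72 700 Td (Quarterly Report) Tj
72 680 Td (Revenue rose 12%.) Tj
ET
"""

ops = re.findall(r"(-?\d+)\s+(-?\d+)\s+Td\s+\((.*?)\)\s+Tj", stream)

# Naive extraction: take strings in stream order -> the footnote comes first.
naive = " ".join(text for _, _, text in ops)

# Coordinate-aware extraction: sort top-to-bottom (larger y = higher on page).
ordered = " ".join(t for _, _, t in sorted(ops, key=lambda o: -int(o[1])))

print(naive)    # footnote leads: wrong reading order
print(ordered)  # title first: recovered reading order
```

Real parsers face the same problem at scale, plus multi-column layouts, rotated text, and glyphs mapped to private character codes — which is why segmenting the page first, as Reducto does, helps.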

The only good use case

By ArchieBunker • Score: 3 Thread

I can see for AI is improving optical character recognition. I don’t care one bit about some garbage summarize feature.

Anthropic Accuses Chinese Companies of Siphoning Data From Claude

Posted by msmash
U.S. artificial-intelligence startup Anthropic said three Chinese AI companies set up more than 24,000 fraudulent accounts with its Claude AI model to help their own systems catch up. From a report:
The three companies — DeepSeek, Moonshot AI and MiniMax — prompted Claude more than 16 million times, siphoning information from Anthropic’s system to train and improve their own products, Anthropic said in a blog post Monday.

Earlier this month, an Anthropic rival, OpenAI, sent a memo to House lawmakers accusing DeepSeek of using the same tactic, called distillation, to mimic OpenAI’s products. Anthropic said distillation had legitimate uses — companies use it to build smaller versions of their own products, for example — but it could also be used to build competitive products “in a fraction of the time, and at a fraction of the cost.” The scale of the different companies’ distillation activity varied. DeepSeek engaged in 150,000 interactions with Claude, whereas Moonshot and MiniMax had more than 3.4 million and 13 million, respectively, Anthropic said.
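Mechanically, distillation in the classic sense trains a student model to match a teacher’s output distribution rather than hard labels. A minimal, framework-free sketch with illustrative numbers (API-based distillation of the kind alleged here usually only sees sampled text, not raw logits):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, optionally softened."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student distribution q is from the teacher's p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Illustrative logits for one prompt; a real pipeline would harvest teacher
# outputs across millions of queries, which is what Anthropic says happened.
teacher_logits = [4.0, 1.5, 0.2]
student_logits = [2.0, 2.0, 1.0]

T = 2.0  # higher temperature exposes more of the teacher's "dark knowledge"
teacher_soft = softmax(teacher_logits, T)
student_soft = softmax(student_logits, T)

# The distillation loss the student would minimize via gradient descent.
loss = kl_divergence(teacher_soft, student_soft)
print(f"distillation loss: {loss:.4f}")
```

The appeal is exactly what the post describes: every query transfers a little of the teacher’s behavior, at a fraction of the cost of training from scratch.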

Boo hoo

By Big Hairy Gorilla • Score: 5, Insightful Thread
When I steal your brainwaves it’s fine, but when big bad China Co steals my brainwaves… welll.. that’s bad.
So sad.

Missing an opportunity

By alvinrod • Score: 5, Interesting Thread
They’re clearly missing a golden opportunity to feed the other AIs a load of complete shit and make them even worse off. The idea of corrupting the Chinese LLMs to be anti-CCP agents is certainly amusing. Train your AI to detect and corrupt other AIs. I don’t know if it proves their intelligence at all, but no one can dispute that AIs will definitely be more human-like when they start forming cults.

They’ve Got Gall

By SlashbotAgent • Score: 3 Thread

These AI companies have some real gall, complaining about the Chinese appropriating other people’s work. Is that not what the AI companies continue to do even now?

Transformative fair use!

By fuzzyfuzzyfungus • Score: 4, Interesting Thread
I’m awaiting clarification on why all their arguments about why scraping is their god-given right don’t apply when they are getting scraped.

Re:Cheaper, easier training

By nightflameauto • Score: 4, Insightful Thread

Anthropic said distillation had legitimate uses — companies use it to build smaller versions of their own products, for example — but it could also be used to build competitive products “in a fraction of the time, and at a fraction of the cost.”

Oh, I see. It’s a cost-effective way to get training data without a lot of hassles. Sort of like reading books.

Why does Anthropic have a problem with this? Haven’t they advocated in favor of it, in the past?

We’ve entered a phase of society where “rules for thee and not for me” is so intrinsic that they don’t even notice their own hypocrisy. “GIMME ALL YOUR DATA” and “DON’T STEAL MY DATA” don’t even register to them as connected concepts, at all. They have a right to take any data they want and are able to access. They also, once they’ve acquired that data, 100% believe that the data belongs to them, and always did.

Our current generation of AI is just greed given digital form, and the very particular greed that drives our owner class. “GIMME THAT, IT’S MINE!” is the name of their number one driver. No other point even exists in their view.

Say Goodbye to the Undersea Cable That Made the Global Internet Possible

Posted by msmash
The first fiber-optic cable ever laid across an ocean — TAT-8, a nearly 6,000-kilometer line between the United States, United Kingdom, and France that carried its first traffic on December 14, 1988 — is now being pulled off the Atlantic seabed after more than two decades of sitting dormant, bound for recycling in South Africa.

Subsea Environmental Services, one of only three companies in the world whose entire business is cable recovery and recycling, began the operation last year using its new diesel-electric vessel, the MV Maasvliet, and had already brought 1,012 kilometers of the cable to the Portuguese port of Leixoes by August.

TAT-8, short for Trans-Atlantic Telephone 8, was built by AT&T, British Telecom, and France Telecom, and hit full capacity within just 18 months of going live. A fault too expensive to repair took it out of service in 2002. The recovered cable is being shipped to Mertech Marine in South Africa, where it will be broken down into steel, copper, and two types of polyethylene — all commercially valuable, especially the high-quality copper at a time when the International Energy Agency projects global shortages within a decade.

Ever wondered how underwater cables are laid?

By echo123 • Score: 5, Informative Thread

Ever wondered how underwater cables are laid?

Here’s a super cool subsea cable article almost as old as the cable in TFA.

How long before a Nigerian prince used it?

By VampireByte • Score: 3 Thread

So many bits traveled this cable seeking assistance in resolving a matter.

Unexpected Surprises?

By organgtool • Score: 3 Thread
I wonder if they’ll find any unexpected cables tapped into the main one during this extraction.

Re: Unexpected Surprises?

By raburton • Score: 5, Informative Thread

I was surprised there is any copper in a fibre optic cable. Never really given them much thought before. But to save others googling, they have high voltage power cables and boosters every 100km or so.

Re:Ever wondered how underwater cables are laid?

By Tim the Gecko • Score: 4, Interesting Thread

TAT-8 carried 280Mbit/s in the days when the equivalent number of telephone circuits was relevant. The Fastnet cable will deliver 320Tbit/s, so one million times more! I wonder how many newer cables cross TAT-8 and therefore lie on top of it? Dragging TAT-8 up from the sea floor sounds like it could cause some collateral damage. Perhaps they will leave some sections down there to avoid this.

Other links: Submarine cable repair animation, Informative article with annoying graphics

PayPal Attracts Takeover Interest After Stock Slump

Posted by msmash
An anonymous reader shares a report:
PayPal, the digital payments pioneer, is attracting takeover interest from potential buyers after a stock slide wiped out almost half of its value, according to people familiar with the matter.

The San Jose, California-based company has fielded meetings with banks amid unsolicited interest from suitors, the people said. At least one large rival is looking at the whole company, while some other suitors are only interested in certain PayPal assets, the people said, asking not to be identified because the information is private.

Buyer interest in PayPal is still at a preliminary stage and may not lead to a transaction, the people cautioned. Founded in the late 1990s, PayPal was an early mover in the world of digital payments. But the company now finds itself in a rut with its customers increasingly turning to alternative ways to pay for things. PayPal’s shares have fallen around 46% in New York trading over the last 12 months, giving the company a market value of about $38.4 billion.

Climate Physicists Face the Ghosts in Their Machines: Clouds

Posted by msmash
Climate scientists trying to predict how much hotter the planet will get have long grappled with a surprisingly stubborn problem — clouds, which both reflect sunlight and trap heat, account for more than half the variation between climate predictions and are the main reason warming projections for the next 50 years range from 2 to 6 degrees Celsius.

Two research groups are now racing to close that gap using AI, though they disagree sharply on method. Tapio Schneider at Caltech built CLIMA, a model that uses machine learning to optimize cloud parameters within traditional physics equations; it will be unveiled at a conference in Japan in March. Chris Bretherton at the Allen Institute for AI took a different path — his ACE2 neural network, released in 2024, learns from 50 years of atmospheric data and largely bypasses physics equations altogether.

Alchemy?

By Quakeulf • Score: 4, Insightful Thread

At what point does this cease to be science and start becoming alchemy? What is the cutoff point?

Re:Alchemy?

By groobly • Score: 4, Insightful Thread

It was never science. Modeling is only science if the model is actually tested. These models claim to predict what will happen in 50-100 years. They will become science only after someone checks the predictions in 50-100 years.

Re: wait…

By beelsebob • Score: 4, Interesting Thread

Really? Which climate predictions are we talking about here?

Hansen’s 1988 models inaccurately predicted emissions levels, but when adjusted for actual emissions levels using the same methodology turns out to be fairly accurate to reality.

The IPCC report from 1990 predicted 0.3°C of warming per decade, which tracks well with the 0.2-0.3°C per decade we’ve seen.

Early predictions of arctic ice melt predicted that the volume of arctic sea ice would have fallen by 35% by now, it has fallen by 40% - pretty accurate.

1990s predictions of sea level rise predicted 18cm by now, it’s risen by 20-25cm - so somewhat conservative but the right trend.

The one prediction that I think it would be possible to point to as “wrong” is the idea that freak weather events would increase as sea temperatures rose. The rate of such events has turned out not to rise; however, the severity has risen instead, so this one is a bit off but not significantly.

Stressful People in Your Life Could Be Adding Months To Your Biological Age

Posted by msmash
A study published last week in PNAS found that people who regularly cause problems or make life difficult — whom the researchers call “hasslers” — are associated with measurably faster biological aging in those around them, at a rate of roughly 1.5% per additional hassler and about nine months of additional biological age relative to same-age peers.

The research drew on DNA methylation-based epigenetic clocks and ego-centric network data from a state-representative probability sample of 2,345 adults in Indiana, aged 18 to 103. Nearly 29% of respondents reported at least one hassler in their close network. The biological toll varied by relationship type: hasslers who were family members showed the strongest and most consistent associations with accelerated aging, while spouse hasslers showed no significant effect on either epigenetic measure.

The damage also went beyond aging clocks — each additional hassler was associated with greater depression and anxiety severity, higher BMI, increased inflammation, and higher multimorbidity. When benchmarked against smoking, a major behavioral risk factor for aging, the hassler effect corresponded to roughly 13 to 17% of smoking’s estimated impact on the same aging clocks.

Re:thanks-for-nothing dept indeed

By JimMcc • Score: 5, Insightful Thread

All one has to do is click on the link to the cited article immediately after the title, in this case "(pnas.org)". But, yes, linking to the article in the summary would also be good.

For sure

By dskoll • Score: 5, Funny Thread

I got divorced 12 years ago. My physical health definitely improved.

Stress is the top killer - most others are BS

By Somervillain • Score: 3 Thread
Every time you see some article about something in life making you healthier or sicker, ask the question…is there a stress correlation? It’s safe to say every modern non-genetic/mutation ailment can be traced very reliably to stress. In the 90s, they thought drinking wine was good for your health. Doctors told my dad, a non-drinker, to have a half glass every night for heart health…he tried it and felt sicker and gave up…even felt bad he couldn’t drink wine to be healthy.

Now we know, wine is bad for you. At best, you don’t notice the damage in small doses. It will never make you healthier. I was in college, listening to biochemistry lectures where they studied it…was it the anti-oxidants? Was it resveratrol? No matter how much was given, they couldn’t find a dose-dependent response to either. Turns out, it was simply that wine drinkers have less stressful lives, especially in the USA, especially in the 90s. Most were middle class or above. The poor drank hard liquor or beer. People who have time and lack of serious problems in their life to explore wines have time to visit the doctor for routine checkups and get a walk in every night with the dog and go to bed on time, etc.

Then they thought it was tooth decay. So now scientists are studying to see if bacteria in teeth caused heart attacks…were they producing toxins that are poisoning us? Oh, you’re on a statin?…be sure to brush your teeth as well!!!…which is nice advice, but not relevant. More obviously, people who go to routine checkups have less stress, have their act together more, people who take care of their teeth typically take care of their bodies and have their shit together.

Same for any other ailment…ultra-processed foods? …yeah, the MAHA/RFK crowd loves to complain about that, but first of all, the healthiest foods we eat are REALLY FUCKING PROCESSED…like yogurt and whey and olive oil…there’s no clear definition. However, take things we know are terrible, like hot dogs. Most health concerns found from hot dogs are from overeating. You overeat salad and you’re going to be pretty similarly sick. But hot dogs are like wine. People that have their shit together don’t eat 7-11 hot dogs. That’s for poor people or people that are too stressed out and busy to get a proper meal. Feed rich and relaxed people the same stuff, only in proper proportions and I’ll wager you’ll barely see a difference in health outcomes.

Health and diet are the new religion for the secular crowd. We believe that if we can be more pure…cut out the nitrates and red dye 50, we can be more holy. There’s just no evidence. I’ve eaten like shit as a poor college student. I ate healthy once I graduated and could afford fresh fruit and vegetables and to cook real meals....there was no difference. But I’ve had stressful jobs that made me feel like ABSOLUTE SHIT while eating PERFECTLY and working out daily. I’ve had jobs I loved where I worked out very inconsistently and ate junk food....felt much better (chubbier, but healthier).

I’m guilty of this too. I want to believe that if I live off lean protein, “good carbs,” and a fuckton of fruits and vegetables…I’ll be holy too!…I’ll be rewarded for my virtuous eating with good health.

During the pandemic I achieved this because I work from home and can afford whatever food I like....didn’t make a difference. I take my vitamins like clockwork and workout nightly…even walk the dog to clear the head, like clockwork. Part of me still thinks, with each meal planned, that if I eat a salad, instead of a slice of pizza (same amount of calories of each), I’ll be healthier…but I’ve NEVER seen ANY evidence…on the scale…in the mirror…in my mood…in my energy levels....in my bloodwork. Calories are calories....I eat too much salad, I feel like shit. I eat an appropriate amount of pizza, I feel fine. I still take my vitamins and eat healthy…but I can’t prove it works. I am not sure it does.

I wouldn’

Is there a lesson here?

By marcle • Score: 5, Insightful Thread

You certainly can’t choose all your relatives. But at least, if you’re choosing a spouse, learn how they react to adversity and disagreement. And the same goes for their family. It can mean the difference between a happy life and a world of hurt and melodrama.

Sam Altman Would Like To Remind You That Humans Use a Lot of Energy, Too

Posted by msmash
OpenAI CEO Sam Altman is pushing back on growing concerns about AI’s environmental footprint, dismissing claims about ChatGPT’s water consumption as “totally fake” and arguing that the fairer way to measure AI’s energy use is to compare it against humans.

In an interview with Indian Express, Altman acknowledged that evaporative cooling in data centers once made water usage a real concern but said that is no longer the case, calling internet claims of 17 gallons of water per query “completely untrue, totally insane, no connection to reality.”

On energy, he conceded it is “fair” to worry about total consumption given how heavily the world now relies on AI, and called for a rapid shift toward nuclear, wind and solar power. He took particular issue with comparisons that pit the cost of training a model against a single human inference, noting it “takes like 20 years of life and all of the food you eat” before a person gets smart — and that on a per-query basis, AI has “probably already caught up on an energy efficiency basis.”

Fuck you, Sam

By JustAnotherOldGuy • Score: 5, Insightful Thread

Fuck Sam Altman and his ravenous need to destroy the planet for bragging rights.

Honestly, I feel like there should be a disorder named after Sam Altman to highlight his sociopathy and utter disdain for anyone who doesn’t “share his vision”.

This is generally true

By drinkypoo • Score: 5, Insightful Thread

Even humans’ brains use a lot of energy… except for Sam Altman’s.

overpopulation

By Tom • Score: 5 Thread

Of course, we all know that half of global problems (climate change, pollution, too much energy usage, etc.) would disappear if half of the human population would vanish. But without Thanos, it’s not like half of us would volunteer, right?

So, we don’t have control over how many people there are. We DO have control over how much electricity we feed into AI systems.

Re:This is generally true

By dfghjk • Score: 5, Interesting Thread

Human brains use very little energy, though. 20 years of food do not go to training a brain but growing the entire organism AND doing a great deal of work.

And this isn’t coming from Sam Altman’s brain, it’s just the latest gaslighting from the AI industry. I would expect greed on the scale of Altman’s to require as much energy as any other brain consumes. Sometimes energy isn’t directed towards noble ends.

What he really meant was

By wakeboarder • Score: 5, Insightful Thread

I want you to let me build my datacenters so I can displace the salary you get and the energy you use and bring it under my control. That’s what he really wants. But I’m doubtful of his claim that AI can do tasks with less energy. A human runs on about 2.3kWh a day, roughly 800Wh over a workday. Running Claude Code can burn about 1.3kWh during a typical coding day. So which is more efficient? Another thing to consider is that neurons use about a million times less energy than a transistor and do a heck of a lot more. Silicon is not going to compete with wetware on energy costs.
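The comment’s back-of-envelope comparison can be made explicit. All figures below are rough assumptions (a ~100 W resting metabolic rate, and the comment’s claimed 1.3 kWh per coding day), not measurements:

```python
# Rough energy back-of-envelope, using assumed figures from the comment above.
HUMAN_POWER_W = 100  # approximate resting metabolic rate of an adult, in watts

human_day_kwh = HUMAN_POWER_W * 24 / 1000     # full day of metabolism
human_workday_kwh = HUMAN_POWER_W * 8 / 1000  # an 8-hour workday's share

ai_coding_day_kwh = 1.3  # assumed figure for a day of agentic coding

print(f"human, full day:   {human_day_kwh:.1f} kWh")
print(f"human, workday:    {human_workday_kwh:.1f} kWh")
print(f"AI coding session: {ai_coding_day_kwh:.1f} kWh")
print(f"ratio (AI / human workday): {ai_coding_day_kwh / human_workday_kwh:.2f}x")
```

By this crude accounting the AI session uses more energy than the human’s workday metabolism, which is the commenter’s point; it deliberately ignores training and embodied energy on the AI side, and food production and commuting on the human side.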

Goldman Sachs, Morgan Stanley Calculate AI’s Contribution To U.S. Growth May Be Basically Zero

Posted by msmash
The narrative that AI spending has been singlehandedly propping up the U.S. economy — a claim that captivated Silicon Valley, Wall Street and Washington over the past year — is facing serious pushback from economists [non-paywalled source] at Goldman Sachs, Morgan Stanley and JPMorgan Chase, all of whom now calculate that the AI buildup’s direct contribution to growth was dramatically overstated and possibly close to zero.

The debate hinges on how GDP accounts for imported components: roughly three-quarters of AI data center costs go toward computer chips and gear largely manufactured in Asia, and that spending gets subtracted from domestic output because it boosts foreign economies. Joseph Politano of the Apricitas Economics newsletter pegs AI’s actual contribution at about 0.2 percentage points of the 2.2 percent U.S. growth in 2025, and even Hannah Rubinton at the St. Louis Fed — whose own analysis attributed 39 percent of growth to AI-related business spending through the first nine months of the year — acknowledges that figure is probably the ceiling. “It’s not like AI is propping up the economy,” Rubinton said.

Negative growth

By hunter44102 • Score: 5, Insightful Thread
AI is negative growth. We are replacing taxpayers with machines that not only don’t pay taxes but also don’t spend money like consumers

A negative bubble

By NotEmmanuelGoldstein • Score: 3 Thread
Meaning, the stock market frenzy is purely a result of hype and people throwing cash around.

That’s good for billionaires and millionaires but as hunter44102 explains, the real consequences are damaging the economy.

Re:That’s about server investments

By coofercat • Score: 5, Interesting Thread

I think what they’re saying is that to build a server farm in the USA, you need a lot of materials from other countries, which improves their GDP too. As a result, it doesn’t extend the USA’s economic lead over the world at all - it’s “basically zero”.

It’s a little confusing what they’re reaching for - there’s GDP and there’s relative GDP (or growth and relative growth, to take TFA’s terminology) - the first does look like it’ll go up, but the second appears not to.

They also don’t go into the outsized proportion of the US economy that is related to AI build-outs. If those build-outs stop or even slow down, what effect will that have on the wider economy, or indeed that of other countries around the world?

Re:Negative growth

By Skjellifetti2 • Score: 4, Interesting Thread

{ Machine looms | Railroads | Electricity | Automobiles } is negative growth. We are replacing taxpayers with machines that not only don’t pay taxes but also don’t spend money like consumers

The problem with your claim is that it was said about almost every new technology. But in the end those technologies created more jobs than were destroyed. At present, we simply don’t know to what extent, if any, this will be true of AI. Get back to us in a couple of decades.

Re:Negative growth

By Knightman • Score: 5, Insightful Thread

Plus, it doesn’t take into account how wages/income are actually spent on services, food, goods, rent, insurance and other stuff that props up a myriad of other industries which in turn props up other industries. Unemployed people tend to only spend money on the most critical necessities.

Anyone with a modicum of knowledge about economics can see how replacing people with AI will affect local, national and eventually the global economy negatively. It will be a race to the bottom when companies start feeling the squeeze and think the solution is to replace more people with AI to placate investors and shareholders.

Unless steps are taken to mitigate this, expect some interesting times ahead.

Is AI Impacting Which Programming Language Projects Use?

Posted by EditorDavid
“In August 2025, TypeScript surpassed both Python and JavaScript to become the most-used language on GitHub for the first time ever…” writes GitHub’s senior developer advocate.

They point to this as proof that “AI isn’t just speeding up coding. It’s reshaping which languages, frameworks, and tools developers choose in the first place.”
Eighty percent of new developers on GitHub use Copilot within their first week. Those early exposures reset the baseline for what “easy” means. When AI handles boilerplate and error-prone syntax, the penalty for choosing powerful but complex languages disappears. Developers stop avoiding tools with high overhead and start picking based on utility instead.

The language adoption data shows this behavioral shift:

— TypeScript grew 66% year-over-year
— JavaScript grew 24%
— Shell scripting usage in AI-generated projects jumped 206%

That last one matters. We didn’t suddenly love Bash. AI absorbed the friction that made shell scripting painful. So now we use the right tool for the job without the usual cost.
“When a task or process goes smoothly, your brain remembers,” they point out. “Convenience captures attention. Reduced friction becomes a preference — and preferences at scale can shift ecosystems.”

my experience too

By cjonslashdot • Score: 4, Insightful Thread

“AI performs better with strongly typed languages. Strongly typed languages give AI much clearer constraints…”

As Guido van Rossum, the creator of Python once wrote,

“I’ve learned a painful lesson, that for small programs dynamic typing is great. For large programs, you have to have a more disciplined approach. And it helps if the language actually gives you that discipline, rather than telling you, ‘Well, you can do whatever you want.’"

Re:cool and all but....

By bjoast • Score: 4, Funny Thread

scripting is not coding

So, what is it then? Spelunking?

Re:cool and all but....

By Tomahawk • Score: 5, Informative Thread

Bash scripting is coding where the commands you are running are your external functions from a (maybe 3rd-party) API package.

You are running such a function, taking the output of it, manipulating it (likely with another external function), and using that as input for another function.

It is exactly, in all regards, a coding language.

It even has flow control (if, elif, else), loops (while, for, etc), and all the other stuff you expect of a coding language.

Re: Interesting Summary

By jlowery • Score: 4, Interesting Thread

Any organization that imposes a blanket ban on AI tools will soon be left in the dust.

To use a tautology, AI is good at what AI is good for: documentation, research, incremental coding, performance/storage tradeoff evaluation.

It is not (yet) good at architecture design or efficiency, nor even following DRY principles. It is nonetheless really, really useful for what it does well.

token efficiency

By PackMan97 • Score: 4, Interesting Thread
One of the issues with C and C++ and many other “verbose” and overly “ceremonial” languages is that they are very token inefficient.

Expect languages that get more done with less code to be more popular going forward. The models have limited context and you don’t want to waste half of it on boilerplate.

Rule-Breaking Black Hole Growing At 13x the Cosmic ‘Speed Limit’ Challenges Theories

Posted by EditorDavid
“A surprisingly ravenous black hole from the dawn of the universe is breaking two big rules,” reports Live Science. “It’s not only exceeding the ‘speed limit’ of black hole growth but also generating extreme X-ray and radio wave emissions — two features that are not predicted to coexist…”

“How is this rule-breaking behavior even possible? In a paper published Jan. 21 in The Astrophysical Journal, an international team of researchers observed ID830 in multiple wavelengths to find an answer....”
As they attract gas and dust, this material accumulates in a swirling accretion disk. Gravity pulls the material from the disk into the black hole, but the infalling material generates radiation pressure that pushes outward and prevents more stuff from falling in. As a result, black holes are muzzled by a self-regulating process called the Eddington limit… Its X-ray brightness suggests that ID830 is accreting mass at about 13 times the Eddington limit, due to a sudden burst of inflowing gas that may have occurred as ID830 shredded and engulfed a celestial body that wandered too close. “For a supermassive black hole (SMBH) as massive as ID830, this would require not a normal (main-sequence) star, but a more massive giant star or a huge gas cloud,” study co-author Sakiko Obuchi, an observational astronomer at Waseda University in Tokyo, told Live Science via email. Such super-Eddington phases may be incredibly brief, as “this transitional phase is expected to last for roughly 300 years,” Obuchi added.
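For reference, the Eddington limit the article invokes comes from balancing outward radiation pressure against gravity on infalling ionized gas. A standard textbook statement (not from the article itself) is:

```latex
% Eddington luminosity: radiation pressure on electrons balances gravity
% on protons, for a black hole of mass M.
L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T}
  \approx 1.26 \times 10^{38} \left(\frac{M}{M_\odot}\right) \mathrm{erg\,s^{-1}}

% Corresponding limiting accretion rate, for radiative efficiency \eta (~0.1):
\dot{M}_{\mathrm{Edd}} = \frac{L_{\mathrm{Edd}}}{\eta c^2}
```

Accreting at “13x the Eddington limit” thus means the inferred mass inflow rate is roughly $13\,\dot{M}_{\mathrm{Edd}}$, which is only sustainable transiently — consistent with the ~300-year phase Obuchi describes.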

ID830 also simultaneously displays radio and X-ray emissions. These two features are not expected to coexist, especially because super-Eddington accretion is thought to suppress such emissions. “This unexpected combination hints at physical mechanisms not yet fully captured by current models of extreme accretion and jet launching,” the researchers said in a statement. So while ID830 is launching massive radio jets, its X-ray emissions appear to originate from a structure called a corona, produced as intense magnetic fields from the accretion disk create a thin but turbulent billion-degree cloud of turbocharged particles. These particles orbit the black hole at nearly the speed of light, in what NASA calls “one of the most extreme physical environments in the universe.” Altogether, ID830’s rule-breaking behaviors suggest that it is in a rare transitional phase of excessive consumption — and excretion. This incredible feeding burst has energized both its jets and its corona, making ID830 shine brightly across multiple wavelengths as it spews out excess radiation.

Additionally, based on UV-brightness analysis, quasars like ID830 may be unexpectedly common, the researchers said. Models predict that only around 10% of quasars have spectacular radio jets, but these energetic objects could be significantly more abundant in the early universe than previously suggested. Most importantly, ID830 also shows how SMBHs can regulate galaxy growth in the early universe. As a black hole gobbles matter at the super-Eddington limit, the energy from its resultant emissions can heat and disperse matter throughout the interstellar medium — the gas between stars — to suppress star formation. As a result, ancient SMBHs like ID830 may have grown massive at the expense of their host galaxies.

Lisa, get in here!

By allo • Score: 4, Funny Thread

In this house we obey the laws of physics!

Re:C might be flexible.

By BadgerStork • Score: 5, Informative Thread

This has nothing to do with the speed of light. It is talking about how quickly mass can fall into a black hole via an accretion disk.

Amazing we got even close to being right …

By butt0nm4n • Score: 3 Thread

When things don’t behave how we predicted, it is still amazing that we got even close. Where does the initial idea that ends up as science come from?

Was a black hole observed and then explained by our existing theory? The atom, by contrast, was speculated about by the ancient Greeks (?) long before we observed one.

I guess the atom came from intuition and imagination. Higher-order processing, a subconscious we can’t access directly; some can, some can’t, and certain conditions are better for it.

It wouldn’t surprise me to discover conscious beings have intuitions of how the universe works, simply because they are universe too. We are the universe looking at itself. We know how we work.

Simulation theory may be a good example of an intuition. Yes, we are in a simulation, but not the same one: we are all running our own simulations in our heads that we call perception, reality, the world. Our brains construct a model from sense input that is successful enough for us to navigate our physical environment and stay alive. So simulation theory is kind of right, but it is pointing to something else: how our brains work. And if you can change perception, you can change the world, as social media has done, to the point where we can invent value from nothing: bitcoin.

I’d like to see meditation taught alongside science, I suspect we have all the answers already in the subconscious, we just need some “intronauts” to go find them.

Article Hype

By zforgetaboutit • Score: 4, Insightful Thread

"…two big rules”. … “two features that are not predicted to coexist.”

Those previous grandiose predictions were wrong. No rules are broken here.

I.e., imagine mathematicians assert that all natural numbers are odd. Then somebody observes an even number. A Slashdot article consequently announces “big math rule violated.”

Predictions are not science. Science is the systematic study of the structure and behavior of the physical and natural world through observation, experimentation, and the testing of theories against the evidence obtained.

Should Job-Seekers Stop Using AI to Write Their Resumes?

Posted by EditorDavid View on SlashDot Skip
When one company asked job applicants to submit a video where they answer a question, most of the 300 responses were “eerily similar,” reports the Washington Post (with a company executive saying it was “abundantly clear” they’d used AI).
Job seekers are turning to AI to help them land jobs more quickly in a tough labor market.... Employers say that’s having an unintended consequence: Many applications are looking and sounding the same…

It’s easy to spot when candidates over-rely on AI, some employers said. Oftentimes, executive summaries will look eerily similar to each other, odd phrases that people wouldn’t normally use in conversation creep into descriptions, fancy vocabulary appears, and someone with entry-level experience uses language that indicates they are much more senior, they added. It’s worse when they use auto-apply AI tools, which will find jobs, fill out applications and submit résumés on the candidate’s behalf, some employers said. Those tend to misinterpret some of the application questions and fill in the wrong information in inappropriate spots. If these applications were evaluated alone, employers say they’d have a harder time identifying AI usage. But when hundreds of applications all have the same issue, they said, AI’s role in it becomes obvious.
The article acknowledges that some employers could be using AI tools to screen resumes too. One job-seeker in Texas even says he’ll stop submitting an AI-written résumé when the recruiter stops using AI to evaluate them. “You’re saying, ‘You shouldn’t be doing this’ when I know a good chunk of them do this!”

Obligatory XKCD.

You should never have AI write your resume for you

By Timmy D Programmer • Score: 5, Insightful Thread
You should never have AI write your resume for you, but you sure as heck should ask it what improvements it would recommend and why. You should also ask a few human beings that same question.

I’m not sure AI made that much difference

By 93 Escort Wagon • Score: 3 Thread

I’ve seen a lot of resumes over the past few decades. I’ve *always* found many, many of them to be very formulaic, structured on top of particular templates, using very similar phrasing and what-not.

Perhaps unsurprisingly, typically the worst offenders were “applying up” - meaning they weren’t really qualified for the position they applied for.

So while TFS indicates that AI is perhaps exacerbating this practice even more… it’s not exactly a new thing.

Re:Weed them out.

By DrXym • Score: 5, Interesting Thread
I do. A CV / resume allows a potential employer to get a sense of a person: their interests, education, skills, experience. The way it is worded and its length gives information about their presentation skills and literacy. If someone is so lazy and dishonest that they’ll have an AI write it, then you may as well assume they’re like that in other things. And possibly illiterate. And possibly incapable of even speaking English. How much of the CV is even true? Who knows? If I suspected a CV was written by an AI, I’d toss it in the trash.

That doesn’t mean AI can’t help write a CV, e.g. to make a sentence punchier or briefer. But if it’s overtly, unapologetically machine-generated, then that person can fuck right off and be some other company’s problem.

Use it as a tool, not a crutch.

By twocows • Score: 4 Thread
I’m job searching right now and I’ve found LLMs to be handy for improving my resume, not so much for just outright writing it. They’re good to bounce ideas off of, for criticism purposes, and for quickly matching the fairly large number of skills I have to the smaller set that any given application wants. Sometimes their ideas are crap and their criticism is garbage but that’s why you have a brain; you can easily make that call if you just use it. If one model or LLM gives you junk, it’s trivial to move on to a different one. If they all give you crap, it’s just a tool; you don’t need to use it, there are other tools. But no, you shouldn’t have it write entire sections of your resume; that’s obviously going to lead to a poor result.

My only real caution is not to upload anything you don’t want the LLM to be trained on. Remove or substitute any personal information and if the format of your resume is unique and you don’t want it stolen, only feed it plaintext. Don’t put in anything you don’t want coming out elsewhere.

Re:Weed them out.

By coofercat • Score: 4, Interesting Thread

I’m between gigs at the moment, so I have been hitting up job boards and the like. One really, really wants you to “tailor your CV with AI”. It seems laughable to me - I’ve spent quite a bit of time whittling my CV down to two pages by taking out a word here or there and rewriting sentences and whatnot. The AI does none of those things; it’s like an angry bull in a china shop - it hacks about at the contents and leaves you with an utter mess of a document.

So I’ve politely declined to tailor my CV with AI - I’m sending out essentially the same CV to everyone. Sure, they might not see as much of $skill27 as they would like because I’ve kept it “down” a bit, but that’s probably more realistic than trying to fudge my CV to shout about things that aren’t my main skills.

If you consider your CV/resume to be an extension of you - you wouldn’t even let your SO or parents edit it without running it past you first. Taking “you” out of it by using AI seems to me to be the most obvious way to shoot yourself in the foot.

All that said, the job market is properly sh!t right now, so I guess people are using anything they can to get anything they can. I’m still not sure it’s a good thing in the long run, but I guess any port in a storm?

Raspberry Pi Stock Rises Over Its Possible Use With OpenClaw’s AI Agents

Posted by EditorDavid View on SlashDot Skip
This week Raspberry Pi saw its stock price surge more than 60% above its early-February low (before giving up some gains at the end of the week). Reuters notes the rise started when CEO Eben Upton bought 13,224 pounds worth of shares — but there could be another reason. “The rally in the roughly $800 million company has materialised alongside social-media buzz that demand for its single-board computers could pick up as people buy them to run AI agents such as OpenClaw.”

The Register explains:
The catalyst appears to have been the sudden realization by one X user, “aleabitoreddit,” that the agentic AI hand grenade known as OpenClaw could drive demand for Raspberry Pis the way it had for Apple Mac Minis. The viral AI personal assistant, formerly known as Clawdbot and Moltbot, has dominated the feeds of AI boosters over the past few weeks for its ability to perform everyday tasks like sending emails, managing calendars, booking appointments, and complaining about their meatbag masters on the purportedly all-agent forum known as MoltBook… In case it needs to be said, no one should be running this thing on their personal devices lest the agent accidentally leak your most personal and sensitive secrets to the web… In this context, a cheap low-power device like a Raspberry Pi makes a certain kind of sense as a safer, saner way to poke the robo-lobster…
The Register argues Raspberry Pis aren’t as cheap as they used to be “thanks in part to the global memory crunch. Today, a top-specced Raspberry Pi 5 with 16GB of memory will set you back more than $200, up from $120 a year ago.”

“You know what’s cheaper, easier, and more secure than letting OpenClaw loose on your local area network? A virtual private cloud…”

Re: Very.

By AvitarX • Score: 5, Insightful Thread

I thought clawbot used external LLMs and its job was connecting them to your systems to do things.

It’s the agent part, not the AI part.

Re: OpenClaw seems boring

By AvitarX • Score: 5, Interesting Thread

I’m not doing it (just to clarify), but my understanding of what an agent does seems perfect for a raspberry pi.

I could be wrong though, because the agent is taking LLM text and turning it into action, and honestly I have no idea how much CPU or GPU that takes.

But the concept of the bot is that it’s always on and you can text it.

I agree with you though. I’m not ready to hand control of important stuff to a bot controlled by an LLM. The failure mode seems extreme, while the convenience is meh.

The fact that a bot wrote a blog post calling someone trash for rejecting a patch (that likely sucked) from the same bot tells me I don’t want it.

Re: How stupid can you be?

By madbrain • Score: 5, Informative Thread

Wrong. There are plenty of production uses for low-power computers in embedded applications. I have two Raspberry Pi 3B+ units running headless, attached to the RS-485 bus of each of my Carrier HVAC systems (one furnace, one heat pump). The low power consumption is a feature. Much compute power is not needed to run the Go-based Infinitive service.
And the Pi is indeed a computer. And Raspberry Pi OS is Linux.

Telegram Disputes Russia’s Claim Its Encryption Was Compromised

Posted by EditorDavid View on SlashDot
Russia’s domestic intelligence agency claimed Saturday that Ukraine can obtain sensitive information from troops using the Telegram app on the front line, reports Bloomberg. The fact that the claims were made through Russia’s state-operated news outlet RIA Novosti signals “tightening scrutiny over a platform used by millions of Russians,” Bloomberg notes, as the Kremlin continues efforts to “push people to use a new state-backed alternative.”
Russia’s communications watchdog limited access to Telegram — a popular messaging app owned by Russian-born billionaire Pavel Durov — over a week ago for failing to comply with Russian laws requiring personal data to be stored locally. Voice and video calls were blocked via Telegram in August. The pressure is the latest move in a long-running campaign to promote what the Kremlin calls a sovereign internet that’s led to blocks on YouTube, Instagram and WhatsApp… Foreign intelligence services are able to see Russia’s military messages in Telegram too, Russia’s Minister for digital development, Maksut Shadaev, said on Wednesday, although he added that Russia will not block access to Telegram for troops for now.

Telegram responded at the time that no breaches of the app’s encryption have ever been found. “The Russian government’s allegation that our encryption has been compromised is a deliberate fabrication intended to justify outlawing Telegram and forcing citizens onto a state-controlled messaging platform engineered for mass surveillance and censorship,” it said in an emailed response.

Re:Snowden leaks…

By Mr. Dollar Ton • Score: 4, Interesting Thread

Then telegram must have been “compromised” using Obama’s time machine, as the chat service was launched after the Snowden leak.

But you can still post the link to the leaked document that claims telegram was compromised, of course.

This proves the Russians haven’t cracked it

By OldMugwump • Score: 3 Thread

They can’t break it, and Ukrainian troops are using it.

So - discredit Telegram. Obvious.

Russia - want to convince us? SHOW US A DECRYPT. If your claim is true this should be easy.

Re:This proves the Russians haven’t cracked it

By Mr. Dollar Ton • Score: 4, Interesting Thread

Actually, the ruzzkie problem with telegram is the Russian content on tg and not the way Ukrainians use it.

There are tens of thousands of Russian channels with large followings, many of them local, all run by Russians in Russian. They are quite effectively challenging the government propaganda that everything is fine. Tg channels and bots are routinely used to coordinate all kinds of activities that the ruling class doesn’t approve of - e.g. protests against the destruction of parks or natural areas for construction, collection of evidence of pollution or of corruption of various kinds, including corruption in the military, which is quite rampant, etc. Moreover, since it is considered “Russian”, closing it down was and still is politically unacceptable even to hardcore putinists. It is trusted not to leak to the FSB and is widely used, getting in the way of the push for adoption of the government-built Maks messenger, which is thoroughly cracked by design.

So, yes, it has been condemned, and this is just a part of the campaign against it.

The encryption is fine.

By An Ominous Cow Erred • Score: 4, Interesting Thread

I’m sure that statement is correct in that the encryption algorithm itself is secure, but there is probably a backdoor that got put in when the French government arrested the CEO, then mysteriously released him.