Alterslash

the unofficial Slashdot digest

Contents

  1. SaaS Apocalypse Could Be Open Source’s Greatest Opportunity
  2. 2026 Turing Award Goes To Inventors of Quantum Cryptography
  3. Federal Cyber Experts Called Microsoft’s Cloud ‘a Pile of Shit’, Yet Approved It Anyway
  4. Apple Can Delist Apps ‘With Or Without Cause,’ Judge Says In Loss For Musi App
  5. Experiments Show Potatoes Can Survive In Lunar Soil (With Lots of Help)
  6. Nvidia Announces Vera Rubin Space-1 Chip System For Orbital AI Data Centers
  7. AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet
  8. Arizona Charges Kalshi With Illegal Gambling Operation
  9. Rural Ohioans Seek To Ban Data Centers Through Constitutional Amendment
  10. Gamers React With Overwhelming Disgust To DLSS 5’s Generative AI Glow-Ups
  11. Finance Bros To Tech Bros: Don’t Mess With My Bloomberg Terminal
  12. Samsung Ends $2,899 Galaxy Z TriFold Sales After Just Three Months
  13. Nvidia Expects To Sell ‘At Least’ $1 Trillion In AI Chips By 2028
  14. Are Split Spacebars the Next Big Gaming Keyboard Trend?
  15. US SEC Preparing To Scrap Quarterly Reporting Requirement

Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.

SaaS Apocalypse Could Be Open Source’s Greatest Opportunity

Posted by BeauHD View on SlashDot Skip
Longtime Slashdot reader internet-redstar writes:
Nearly a trillion dollars has been wiped from software stocks in 2026, with hedge funds making billions shorting Salesforce, HubSpot, and Atlassian. At FOSDEM 2026, cURL maintainer Daniel Stenberg shut down his bug bounty program after AI-generated slop overwhelmed his team. A new article on HackerNoon argues that most commercial SaaS will inevitably become open source, not out of ideology but out of economics. The author points to Proxmox replacing VMware at enterprise scale and startups like Holosign replicating DocuSign at $19/month flat as evidence. The catch, the article claims, is that maintainers who refuse to embrace AI tools risk being forked, or simply replicated from scratch, by those who do.

2026 Turing Award Goes To Inventors of Quantum Cryptography

Posted by BeauHD View on SlashDot Skip
Dave Knott shares a report from the New York Times:
On Wednesday, the Association for Computing Machinery, the world’s largest society of computing professionals, said Drs. Charles Bennett and Gilles Brassard had won this year’s Turing Award for their work on quantum cryptography and related technologies. The Turing Award, which was introduced in 1966, is often called the Nobel Prize of computing, and it includes a $1 million prize, which the two scientists will share.

[…] The two met in 1979 while swimming in the Atlantic just off the north shore of Puerto Rico. They were taking a break while attending an academic conference in San Juan. Dr. Bennett swam up to Dr. Brassard and suggested they use quantum mechanics to create a bank note that could never be forged. Collaborating between Montreal and New York, they applied Dr. Bennett’s idea to subway tokens rather than bank notes. In a research paper published in 1983, they showed that their quantum subway tokens could never be forged, even if someone managed to steal the subway turnstile housing the elaborate hardware needed to read them.

This led to quantum cryptography. After describing their new form of encryption in a research paper published in 1984, they demonstrated the technology with a physical experiment five years later. Called BB84, their system used photons — particles of light — to create encryption keys used to lock and unlock digital data. Thanks to the laws of quantum mechanics, the behavior of a photon changes if someone looks at it. This means that if anyone tries to steal the keys, he or she will leave a telltale sign of the attempted theft — a bit like breaking the seal on an aspirin bottle.
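The statistics behind BB84 can be sketched in a short classical simulation (no real quantum states, just the bookkeeping): Alice encodes random bits in random bases, Bob measures in random bases, and the two publicly keep only the positions where their bases matched. An eavesdropper who guesses the wrong basis disturbs the photon, which surfaces as errors in that sifted key. Everything below, names and parameters alike, is illustrative:

```python
import random

def bb84_error_rate(n_bits, eavesdrop, seed=0):
    """Simulate BB84 sifting and return the observed error rate."""
    rng = random.Random(seed)
    # Alice encodes random bits in random bases (0 = rectilinear, 1 = diagonal)
    alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    photons = list(alice_bits)

    if eavesdrop:
        # Eve measures each photon in a random basis; a wrong guess
        # destroys the original state, so the re-sent bit is random.
        for i in range(n_bits):
            if rng.randint(0, 1) != alice_bases[i]:
                photons[i] = rng.randint(0, 1)

    # Bob measures in random bases; a mismatched basis yields a random bit
    bob_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bits = [photons[i] if bob_bases[i] == alice_bases[i]
                else rng.randint(0, 1) for i in range(n_bits)]

    # Sift: keep only positions where Alice's and Bob's bases agree,
    # then compare those bits publicly to estimate the error rate
    sifted = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in sifted)
    return errors / len(sifted)

clean = bb84_error_rate(2000, eavesdrop=False)
tapped = bb84_error_rate(2000, eavesdrop=True)
```

With no eavesdropper the sifted key is error-free; with one, roughly a quarter of the sifted bits disagree, which is the “broken seal” Bennett and Brassard exploited.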

Federal Cyber Experts Called Microsoft’s Cloud ‘a Pile of Shit’, Yet Approved It Anyway

Posted by BeauHD View on SlashDot Skip
ProPublica reports that federal cybersecurity reviewers had serious, yearslong concerns about Microsoft’s GCC High cloud offering, yet they approved it anyway because the product was already deeply embedded across government. As one member of the team put it: “The package is a pile of shit.” From the report:
In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings. The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica. For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.

Such judgments would be damning for any company seeking to sell its wares to the U.S. government, but it should have been particularly devastating for Microsoft. The tech giant’s products had been at the heart of two major cybersecurity attacks against the U.S. in three years. In one, Russian hackers exploited a weakness to steal sensitive data from a number of federal agencies, including the National Nuclear Security Administration. In the other, Chinese hackers infiltrated the email accounts of a Cabinet member and other senior government officials. The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.

Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling — which included a kind of “buyer beware” notice to any federal agency considering GCC High — helped Microsoft expand a government business empire worth billions of dollars. “BOOM SHAKA LAKA,” Richard Wakeman, one of the company’s chief security architects, boasted in an online forum, celebrating the milestone with a meme of Leonardo DiCaprio in “The Wolf of Wall Street.”

It was not the type of outcome that federal policymakers envisioned a decade and a half ago when they embraced the cloud revolution and created FedRAMP to help safeguard the government’s cybersecurity. The program’s layers of review, which included an assessment by outside experts, were supposed to ensure that service providers like Microsoft could be entrusted with the government’s secrets. But ProPublica’s investigation — drawn from internal FedRAMP memos, logs, emails, meeting minutes, and interviews with seven former and current government employees and contractors — found breakdowns at every juncture of that process. It also found a remarkable deference to Microsoft, even as the company’s products and practices were central to two of the most damaging cyberattacks ever carried out against the government.

Microsoft and pile of shit

By strike6 • Score: 5, Insightful Thread
Seems redundant.......

More Proof

By organgtool • Score: 5, Insightful Thread
More proof that it’s better to be entrenched than to be good.

Not surprising

By ebunga • Score: 4, Informative Thread

I mean, this is no big surprise for anyone that has had to deal with this shit on a daily basis. I’m sure we’ve all been forced to use Teams at some point, so just extrapolate that out to their entire tech stack.

Trust

By Snert32 • Score: 4, Funny Thread
If you can’t trust Microsoft to protect you, you can at least trust the government oversight to protect you.

The product is crap…

By roc97007 • Score: 3 Thread

…but is too deeply embedded not to continue using.

Sounds like Microsoft’s business model.

Apple Can Delist Apps ‘With Or Without Cause,’ Judge Says In Loss For Musi App

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Ars Technica:
Musi, a free music streaming app that had tens of millions of iPhone downloads and garnered plenty of controversy over its method of acquiring music, has lost an attempt to get back on Apple’s App Store. A federal judge dismissed Musi’s lawsuit against Apple with prejudice and sanctioned Musi’s lawyers for “mak[ing] up facts to fill the perceived gaps in Musi’s case.”

Musi built a streaming service without striking its own deals with copyright holders. It did so by playing music from YouTube, writing in its 2024 lawsuit against Apple that “the Musi app plays or displays content based on the user’s own interactions with YouTube and enhances the user experience via Musi’s proprietary technology.” Musi’s app displayed its own ads but let users remove them for a one-time fee of $5.99. Musi claimed it complied with YouTube’s terms, but Apple removed it from the App Store in September 2024. Musi does not offer an Android app. Musi alleged that Apple delisted its app based on “unsubstantiated” intellectual property claims from YouTube and that Apple violated its own Developer Program License Agreement (DPLA) by delisting the app.

Musi was handed a resounding defeat yesterday in two rulings from US District Judge Eumi Lee in the Northern District of California. Lee found that Apple can remove apps “with or without cause,” as stipulated in the developer agreement. Lee wrote (PDF): “The plain language of the DPLA governs because it is clear and explicit: Apple may ‘cease marketing, offering, and allowing download by end-users of the [Musi app] at any time, with or without cause, by providing notice of termination.’ Based on this language, Apple had the right to cease offering the Musi app without cause if Apple provided notice to Musi. The complaint alleges, and Musi does not dispute, that Apple gave Musi the required notice. Therefore, Apple’s decision to remove the Musi app from the App Store did not breach the DPLA.”

Walled Garden

By Himmy32 • Score: 3 Thread
If you play in the walled garden, better not anger the groundskeeper.

Gatekeeping

By DewDude • Score: 3 Thread

On one hand, I think a company should do what they want.

On the other... a company of any size should not be able to dictate the market by abusing its power.

Re:Gatekeeping

By Baron_Yam • Score: 5, Insightful Thread

This is why capitalism requires regulation - without it, the dominant player simply dominates and the market is no longer free. If you are the player, or someone on their payroll, you probably approve of this. If you’re the vast majority of the population, you should be very angry this happens.

But Musi’s a scam, repackaging someone else’s work and replacing the ads with their own.

Re:uhh duh

By PPH • Score: 4, Interesting Thread

So, a duopoly.

There are ways of dealing with monopolies/duopolies. Break them up. Probably can’t do that effectively with Apple/Android. Then there’s regulation. You place the entity(s) under the authority of something like a utilities commission. They want to make any changes to their pricing or terms of service, they have to seek approval from the commission. Such a situation is so onerous that those subject to it (even utilities) do everything in their power to weasel out from under it. And one obvious way would be to open up the platforms to third party app stores.

“But we can’t! Muh security!” Wrong. There’s nothing stopping the third party stores from implementing their own app vetting processes. And allowing users to pick one tailored to their needs.

Experiments Show Potatoes Can Survive In Lunar Soil (With Lots of Help)

Posted by BeauHD View on SlashDot Skip
sciencehabit shares a report from Science.org:
In The Martian, fictional astronaut Mark Watney survives the wasteland of Mars by growing potatoes in lunar soil — with a bit of help from human poop. The idea may not be so far-fetched. In a preprint posted this month on bioRxiv, researchers show potatoes can indeed grow in the equivalent of Moon dust, though they need a lot of help from compost found on Earth. To make the discovery, scientists first had to re-create lunar regolith — the loose, powdery layer that blankets the Moon’s surface. To replicate that in the lab, David Handy, a space biologist at Oregon State University (OSU), and his colleagues used a mix of crushed minerals and volcanic ash that matched the chemistry of the Moon.

But lunar regolith is entirely devoid of the organic matter that plants need to grow. “Turning an inorganic, inhospitable bucket of glorified sand into something that can support plant growth is complex,” says Anna-Lisa Paul, a plant molecular biologist at the University of Florida not involved with the work. So Handy and his colleagues added vermicompost — organic waste from worms — into the regolith. They found that a mix with 5% compost allowed the potatoes to grow while still emulating the stressful conditions of the lunar environment. After almost 2 months of growth, the team harvested the tubers, freeze-dried them, and ground them up for further testing.

Analysis of the potatoes’ DNA showed stress-related genes had been activated. The potatoes also had higher concentrations of copper and zinc than Earth-grown ones, which may make them dangerous for human consumption. The plants’ nutritional value, though, was similar to traditional potatoes — a surprise to the scientists, who expected lower levels of nutrition “because the plants might have been working overtime to overcome certain stressors,” Handy says.

Lunar soil

By chefren • Score: 5, Insightful Thread

I thought he planted potatoes in Martian soil, but I guess I was mistaken.

Fun Fact

By necro81 • Score: 5, Interesting Thread
Although associated with wholesome soil and gardening, the woodlands of northern North America were devoid of native earthworms after the last ice age. This meant that the woodlands adapted to having thick layers of slowly decomposing detritus (e.g., leaf litter) on the forest floor. Colonizers in the 17th and 18th centuries introduced European earthworms (along with literal boatloads of non-native plants), with a mix of effects on native species and landscapes. Their deliberate incorporation into farming practices and use as bait by anglers allowed them to spread widely.

So by a certain reckoning, earthworms are an invasive species!

Re: Lunar soil

By Rei • Score: 4, Insightful Thread

All of these (endless) studies are so stupid. Someone buys a “lunar soil simulant” or “martian soil simulant”, grows something in it, and writes a paper. But these simulants are *not* the same thing. They’re designed to match (very roughly!) in terms of bulk elemental composition and grain size, but not *chemical* composition, trace elements, or even grain texture. For example, you’re not going to find perchlorates or whatnot in them.

I mean, congrats dude, you’ve shown you can grow potatoes in Hawaiian volcanic ash. Stop the presses.

And what’s even the point? At best you’re showing “I took something inorganic and grew plants in it”. That’s literally the definition of hydroponics. You can grow plants in a pot of ground-up plastic Elvis dolls; what exactly is the point? The only thing you could prove is what e.g. perchlorates, arsenic, hexavalent chromium, sharp grains, etc. do to plants - *but they’re not testing that*.

And he’s not even testing hydroponics anyway - if you’re mixing it with organics, then you’re just using volcanic ash as a soil amendment. Your average ancient Roman farmer could have told you that works.

Lastly, the “potato farming” bit of The Martian was mind-bogglingly stupid tripe, even by that book’s low standards.

But why?

By codeButcher • Score: 4, Interesting Thread

It’s good that they do these experiments, as it shows risks regarding heavy metal toxicity.

Vermicompost obviously contains lots of earth microorganisms that live in symbiosis (“living together”) with plants here on earth - getting nutrients from plants (mostly carbs produced via photosynthesis, light not being available under the soil) and also supplying nutrients (nitrogen, minerals etc. from inert soil, converted to a bio-available form that plants can utilize) and even water. No surprise here; the food web is a well-known concept by now, with many people interested in this.

But I don’t know that it would be the most practical to ship vermicompost from earth in large and continuing quantities. It might be better to initially ship the earthworms themselves (or at least their eggs) as well as (organic) foodstuffs for the humans there. This could then serve to ramp up a growing population of earthworms on the moon. Should be obvious though that this will be in a sheltered environment, not on the exposed raw lunar surface - like with earth-origin humans and their earth-origin-crop plants. This would be a live ecosystem being constructed from the ground up, protected from a hostile environment - not inert or sterilized materials.

Having a colony of earthworms would allow the setup of a vermiponics system (~“aquaponics” using worm casting nutrients instead of dissolved salts) for growing food plants, using some inert substrate for a physical support structure for the plants - no dependence on a possibly toxic growth medium. Potatoes and other root crops are successfully grown here on earth, together with the customary leaf and fruit crops. After the food has been eaten and the waste passed out again, the worms can come into play again, to convert this back into compost, as has been done successfully here on earth with multiple wet or dry vermicomposting toilet systems.

One drawback with vermicomposting is the amount of time it may take - much less of a problem here on earth if you’ve got some space and a friendly environment. (This is one site I found via websearch that was quite interesting regarding construction, ramp-up and maintenance here on earth, gives some feel of what could be possible.)

Ironic that these little miracle workers considered for the moon are named “earth”worms.

I’m interested in how the difference in gravity would influence them.

Re:NOT LUNAR SOIL

By votsalo • Score: 4, Interesting Thread

There is a similar, but more informative, article about growing chickpeas in 75% moon soil.

These articles bring up interesting questions about circular farming. What would it take to build a closed ecosystem on the moon that does not require continuously shipping nutrients from earth?

Nvidia Announces Vera Rubin Space-1 Chip System For Orbital AI Data Centers

Posted by BeauHD View on SlashDot Skip
Nvidia unveiled its Vera Rubin Space-1 system for powering AI workloads in orbital data centers. “Space computing, the final frontier, has arrived,” said CEO Jensen Huang. “As we deploy satellite constellations and explore deeper into space, intelligence must live wherever data is generated.” CNBC reports:
In a press release, the company said that its Vera Rubin Space-1 Module, which includes the IGX Thor and Jetson Orin, will be used on space missions led by multiple companies. The chips are specifically “engineered for size-, weight- and power-constrained environments.” Partners include Axiom Space, Starcloud and Planet.

Huang said Nvidia is working with partners on a new computer for orbital data centers, but there are still engineering hurdles to overcome. “In space, there’s no convection, there’s just radiation,” Huang said during his GTC keynote, “and so we have to figure out how to cool these systems out in space, but we’ve got lots of great engineers working on it.”
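Huang’s cooling remark is just the Stefan-Boltzmann law: with no convection, a spacecraft radiator rejects heat only as thermal radiation, proportional to the fourth power of its temperature. A rough sizing sketch, where the 100 kW load and 0.9 emissivity are hypothetical illustrations rather than Nvidia figures (and absorbed sunlight is ignored):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(power_w, temp_k, emissivity=0.9):
    """Radiator area needed to reject power_w purely by thermal radiation,
    ignoring absorbed sunlight and the ~3 K sky background."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# A hypothetical 100 kW compute module with radiators held at 300 K
area = radiator_area_m2(100e3, 300)  # roughly 240 square meters
```

Running radiators hotter shrinks them quickly (the T⁴ term), but chips want to stay cool, which is exactly the tension Huang’s engineers have to resolve.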

This is concerning

By AvitarX • Score: 3 Thread

It feels like they’re making chips to fuel hype for a thing we all know can’t work because physics.

I guess there’s probably a huge circular investment with SpaceX or something though?

Re: This is concerning

By nightflameauto • Score: 5, Funny Thread

Even excluding launch costs, do we have a feasible way to make a space data center work?

Step 1: Place all the tech-bro AI broligarchs onto a SpaceX Starship with data terminals for each of them. Make sure live video feeds are available so we can watch them work.

Step 2: Launch to a Lagrange point.

Step 3: Allow the tech-bros access to their terminals, and flip on the live video feeds.

Step 4: Enjoy watching them try to continue to hype their own farts from space while their supplies dwindle. See how long it takes until they start to realize they are totally boned.

Step 5: Place bets on who eats who first and enjoy the show.

Step 6: Once free of them, start to reclaim some small semblance of sanity back here on Earth. But, since we’re humanity, this step is mostly optional.

AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from 404 Media, written by Jason Koebler:
Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.

Anthropic’s paper, called "Labor market impacts of AI: A new measure and early evidence,” essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job’s tasks “are theoretically possible with AI,” which resulted in this chart, which has gone somewhat viral and was included in a newsletter by MSNOW’s Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the “theoretical capability” of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The nature of the many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.

But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. “We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily,” the researchers write. This is based in part on the “Anthropic Economic Index,” which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include “Complete humanities and social science academic assignments across multiple disciplines,” “Draft and revise professional workplace correspondence and business communications,” and “Build, debug, and customize web applications and websites.” Not included in any of Anthropic’s research are extremely popular uses of AI such as “create AI porn” and “create AI slop and spam.” These uses are destroying discoverability on the internet and causing cascading societal and economic harms.

“Anthropic’s research continues a time-honored tradition by AI companies who want to highlight the ‘good’ uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for,” argues Koebler. “Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth…”

“This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media,” writes Koebler, in closing. “We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What’s happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice.”

AI is not very intelligent and not improving.

By gurps_npc • Score: 5, Insightful Thread

Parrots sound like they are speaking, but they are merely repeating.

AI has only one single reasoning methodology - prediction based on existing data.

AI is not gaining more methods; it is instead just increasing the data. This gives ‘better’ results, but the gains are evolutionary, not revolutionary. Minor improvements at great speed, not major improvements.

AI is not even as intelligent as the parrot; it is just better educated.

The various stories of evil (AI blackmailing people, AI blogging about how people are prejudiced against it for not letting it post, AI being racist) all demonstrate low level thought - not dogs, not rats, not mice, but instead the kind of thing that an insect could do.

We think it is smart only because it has learned how to predict words that we recognize as sentences. Ignoring that ability, it is the same stupid it was when we first invented LLMs.

You can get better results from AI simply by telling it not to guess and to only show results it can back up. That is not something a person has to be told. That is something we do automatically. A well trained dog does that (i.e. drug detection dogs know not to false alert if they are well trained).

AI is like a guy I knew from college that got in because of his parents’ money: a well-educated moron.

Re:AI is not very intelligent and not improving.

By gweihir • Score: 5, Informative Thread

Yes, a pretty good summary. Of course, LLM guardrails are getting a bit better (but only better adapted, not fundamentally better) and LLM training has reduced the most extreme forms of hallucinations (same), but LLMs remain laughably incapable as soon as something did not have good prevalence in the training data.

Re:AI is not very intelligent and not improving.

By geekmux • Score: 5, Interesting Thread

The various stories of evil (AI blackmailing people, AI blogging about how people are prejudiced against it for not letting it post, AI being racist) all demonstrate low level thought - not dogs, not rats, not mice, but instead the kind of thing that an insect could do.

When idiot judges who can’t even describe what a shitcoin wallet is are presiding over crypto cases, it tends to say a lot about the moron voter or elected leader who put them there.

If AI did NOTHING else but image and video manipulation from this point forward, it would become one hell of a dangerous weapon. Stop pretending our legal system is smart enough. It isn’t. It’s corrupt enough. The various stories of today will look like child’s play compared to the scams of tomorrow. Including ones pulled by law enforcement (like when they arrest sober people for drunk driving, because revenue generation.)

AI in the American legal system alone should scare you. Because only one of those entities isn’t getting any better or smarter.

Not AI fault

By Visarga • Score: 5, Insightful Thread
Slop is caused by competition for scarce attention on platforms that optimize for engagement. Not by AI. Don’t confuse the cause with the effect here; nobody is posting slop in places where there is no chance of getting that precious traffic. It is a system-level problem.

The old Internet already WAS subsumed

By Moridineas • Score: 5, Insightful Thread

Yes, AI slop has accelerated a lot of enshittification, but the enshittification started decades ago.

It started when Facebook and other major social media aggregators started putting content behind walls and made searching old content extremely difficult.

It started when Google pagerank started being actively abused by SEO “experts” churning out meaningless, contentless blog posts and other junk content just to fluff up rank.

It started when every error message you search for leads to an enshittified page that exists solely to capture common searches, leads you along for as long as possible while displaying as many ads as possible, without any real content.

I used to be able to search for recipes and find a lot of individual bloggers and websites. If you search for any given recipe today, there are a handful of sites that are going to pop up at the top of search results for almost everything. Damn you Spruce Eats!

Etc.

I could keep going. The biggest problem is that the EXISTING, in-progress enshittification, is 100% compatible with AI slop.

Arizona Charges Kalshi With Illegal Gambling Operation

Posted by BeauHD View on SlashDot Skip
Arizona has filed criminal charges against Kalshi, accusing it of operating an illegal gambling business. “Kalshi may brand itself as a ‘prediction market,’ but what it’s actually doing is running an illegal gambling operation and taking bets on Arizona elections, both of which violate Arizona law,” Arizona Attorney General Kris Mayes said in a statement. The case could ultimately head to the Supreme Court to decide whether federal oversight by the Commodity Futures Trading Commission overrides state gambling laws. Bloomberg reports:
While state regulators have taken steps to crack down on what they say is unlicensed betting on Kalshi’s site, Arizona appears to be the first state to escalate to criminal charges. The charges cited in the complaint are misdemeanors, which carry less serious penalties than felonies. […] Prediction market exchanges like Kalshi have said they should continue to be regulated by the US Commodity Futures Trading Commission despite opposition from some state officials, who argue the trading should come under state gambling laws.

Arizona’s criminal complaint follows Kalshi’s move last week to block the state’s gaming department from taking enforcement action against the company. “These are the first criminal charges of any kind filed against Kalshi in any court in the United States, but it will likely be the first of several,” said Daniel Wallach, a sports and gaming attorney.

Block accounts from the state

By gurps_npc • Score: 5, Insightful Thread

Arizona can stop Arizonans from using the gambling website, but it cannot stop non-Arizonans from betting on Arizona elections.

Re:About damn time

By swillden • Score: 5, Insightful Thread

What’s wild to me is a lot of Silicon Valley Tech Bro types actually treat these fucking things as genuine useful predictors, as if a bunch of gambling-addicted basement folk somehow voltron up to form a giant soothsaying oracle.

If it were just “gambling-addicted basement folk” that would indeed be crazy. But the idea — and the reality — is that it attracts interest from serious and well-informed people who know stuff and do their research. And prediction markets do have a pretty good track record; they tend to outperform experts a lot of the time. They did much better than pundits or pollsters at predicting Trump’s wins, just to name one example.

Re: About damn time

By Anamon • Score: 4, Informative Thread
Millions of gallons don’t last very long.

The US consumes around 850 million gallons of crude oil per day. The strategic reserve is around 17 billion gallons, lasting about three weeks. More than a third of that was authorised to be released last week.
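Taking the comment’s own figures at face value (they are the commenter’s numbers, not independently verified), the arithmetic is internally consistent:

```python
daily_use_gal = 850e6  # claimed US crude oil consumption per day, gallons
reserve_gal = 17e9     # claimed strategic reserve size, gallons

days = reserve_gal / daily_use_gal  # 20 days
weeks = days / 7                    # just under three weeks
```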

Re: About damn time

By YetanotherUID • Score: 5, Informative Thread
Your numbers are the exact opposite of reality. RacetotheWH, which aggregates several polls, including some ludicrously partisan right-wing ones (cough, Napolitano, cough), has Trump sitting at 40.1%. And most of these polls were taken before the scope of the disaster now quickly unfolding in Iran was apparent.

YouGov/Economist, which is considered reputable with a mild right bias in the U.S., and whose survey period ended on the 16th, has Trump at 36%. Granted, the pollsters aren’t exactly the same now, with some having gone under and others popping up over the last 12 years, but Obama’s aggregate approval at this point in his term was around 44-45%.

Re: About damn time FTFY

By zlives • Score: 4, Insightful Thread

what “boycott buying gas for two weeks” means to US
boycott job for two weeks
boycott school for two weeks
boycott groceries for two weeks

remember we sold our souls and public transportation to gas and car companies back in the ’40s & ’50s

Rural Ohioans Seek To Ban Data Centers Through Constitutional Amendment

Posted by BeauHD View on SlashDot Skip
Residents in rural Ohio are pushing a constitutional amendment to ban large data centers over 25 megawatts, citing concerns about energy use, water consumption, and lack of transparency around proposed projects. “My biggest concern is because I love Adams County,” Nikki Gerber told Cleveland.com. “What it feels like they are doing is just taking advantage of the unzoned rural areas of Ohio, where they can go ahead and put in whatever they want.” From the report:
Gerber and a handful of residents from Adams and Brown counties gathered about 1,800 signatures in eight days to start the ballot process. They submitted those petitions to the Ohio attorney general’s office on Monday. That’s the first step before supporters can begin collecting signatures statewide.

State law requires at least 1,000 valid voter signatures to begin the process. The petitions must also include the full text of the proposed amendment and a summary explaining what it would do. Attorney General Dave Yost’s office now has 10 days to decide whether the summary fairly and truthfully describes the proposal. If it does, the measure will move to the Ohio Ballot Board. Supporters would then need to gather about 413,000 valid signatures by July to place the amendment before voters this November.
The report notes that a 25-megawatt limit “would effectively block most modern data centers from being built in Ohio.”

Re:What Mama Pajama Saw

By sound+vision • Score: 5, Insightful Thread

Are normal residents of Ohio able to call a vote on “regular statute laws” without the legislature? That would be my guess as to why they did it this way, as an end-run around a nonresponsive legislature.

NIMBY?

By jenningsthecat • Score: 5, Insightful Thread

To be clear, I support controls on data centre construction which take much more account of what citizens want and what’s good for their health. I think citizens should be able to say “Hell no!” and have the government honour their wishes.

In addition to the factors mentioned in TFS, there are some really serious health issues that come with having a data centre in your general vicinity. One of the most insidious is infrasonic emissions, which can cause physical and mental health problems over a very large area surrounding the centres. So placing one close to residences and other businesses can be a major health problem for a lot of people.

At the same time, I’m sure these people, like most of us, watch a lot of YouTube, Prime, Netflix, etc. So they want to benefit from data centres - they just don’t want them located in their back yards. I sympathize with them, and would likely do what they’re doing; but the data centres have to go somewhere, and anyone who uses the internet a lot is on shaky ethical ground when insisting that the negative consequences be someone else’s problem.

Sure, a lot of new data centres are being built just to run LLMs. But if AI hadn’t come along, they would still be looking for places to build server farms - it would just be happening at a slower rate. There are no easy answers; but a good start would be to take back control of the government from tech broligarchs and other big corporations. That would force a dialog which might yield solutions. Until then, corporations will be predators and average citizens will be victims.

Smart and the only way to do it

By rsilvergun • Score: 3 Thread
One of the things the right wing in the epstein-class figured out years ago was that you didn’t want a lot of democracy at the national government level because it’s too big to easily buy off even for them.

Once you get down to the state level then it’s a lot more manageable to buy off the state senators.

It’s trivial to buy off a county election or city election, but there are just too many of them, so it’s not practical.

This is what they mean when they say government small enough to drown in a bathtub.

You want government just the right size, big enough that you can force the plebs to do whatever you tell them to do at the barrel of a gun but small enough that you can buy them off effectively. And that’s the state level.

This is why, if you ever get the option to do direct ballot initiatives or pass your own legislation through ballots, you’re going to find that billionaires are constantly rolling in with a ton of money to try to take that away from you. Because it lets you override the state legislatures that they spent so much money buying off.

Re:What Mama Pajama Saw

By sound+vision • Score: 4, Informative Thread

But I do want to be clear: the problem you’ve pointed out is absolutely real. Case in point, the Texas constitution. But there it is the legislature choosing to make everything a constitutional amendment, not the citizenry doing it out of necessity.

It is interesting to contemplate, though, the purpose and function of a constitution. If we are to look at constitutions as expressions of the people’s will, and also a document to bind officials (legislatures) who might neglect their duty, I don’t see how the proposed amendment in Ohio goes against that.

It’s the NDA’s that bother me

By Sethra • Score: 5, Insightful Thread

“Gerber was also frustrated by the proliferation of non-disclosure agreements between big tech companies and local officials”

There should never be a situation where local officials can hide their negotiations behind NDAs. They are PUBLIC officials, and the public has the right to know what decisions they are making on behalf of the community (as opposed to the officials enriching themselves or pocketing huge campaign donations).

Gamers React With Overwhelming Disgust To DLSS 5’s Generative AI Glow-Ups

Posted by BeauHD View on SlashDot Skip
Kyle Orland writes via Ars Technica:
Since deep-learning super-sampling (DLSS) launched on 2018’s RTX 2080 cards, gamers have been generally bullish on the technology as a way to effectively use machine-learning upscaling techniques to increase resolutions or juice frame rates in games. With yesterday’s tease of the upcoming DLSS 5, though, Nvidia has crossed a line from mere upscaling into complete lighting and texture overhauls influenced by “generative AI.” The result is a bland, uncanny gloss that has received an instant and overwhelmingly negative reaction from large swaths of gamers and the industry at large.

While previous DLSS releases rendered upscaled frames or created entirely new ones to smooth out gaps, Nvidia calls DLSS 5 — which it plans to launch in Autumn — “a real-time neural rendering model” that can “deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects.” Nvidia CEO Jensen Huang said explicitly that the technology melds “generative AI” with “handcrafted rendering” for “a dramatic leap in visual realism while preserving the control artists need for creative expression.”

Unlike existing generative video models, which Nvidia notes are “difficult to precisely control and often lack predictability,” DLSS 5 uses a game’s internal color and motion vectors “to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame.” That underlying game data helps the system “understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast,” the company says.
Nvidia’s announcement video and detailed Digital Foundry breakdown can be found at their respective links.

“Reactions have compared the effect to air-brushed pornography, 'yassified, looks-maxed freaks,’ or those uncanny, unavoidable Evony ads,” writes Orland. “Others have noted how DLSS 5 seems to mangle the intended art direction by dampening shadows in favor of a homogenized look.”

Thomas Was Alone developer Mike Bithell said the technology seems designed “for when you absolutely, positively, don’t want any art direction in your gaming experience.”

Gunfire Games Senior Concept Artist Jeff Talbot added that “in every shot the art direction was taken away for the senseless addition of ‘details.’ Each DLSS 5 shot looked worse and had less character than the original. This is just a garbage AI Filter.”

DLSS 5’s “AI dogshit is actually depressing,” said New Blood Interactive founder and CEO Dave Oshry, adding that future generations “won’t even know this looks ‘bad’ or ‘wrong’ because to them it’ll be normal.”

Re:Uhhh

By thegarbz • Score: 5, Insightful Thread

Maybe I’m missing something, but from Nvidia’s announcement video every single example looks significantly better in a number of ways.

The thing you’re missing is that they demonstrated multiple different games with multiple different artistic directions and yet … they all look the same when DLSS is on. That’s the big problem here. You’re no longer getting a game made by developers, you’re getting a game interpreted by NVIDIA’s training set.

Take a closer look before you declare better. Here’s what I see:

1. In Resident Evil, the character looks amazing. But the environment looks amazing as well, which is a problem, since there was a clear design choice to use a brown background fog so the environment wouldn’t stand out. Someone added that on purpose. DLSS 5 removed it.
2. In Hogwarts Legacy, the scene was clearly lit by an overhead light casting a strong shadow over the face of the character. This game supports ray-traced shadows, so this shadow was rendered correctly. DLSS 5 all but removed it, making it virtually impossible to tell where the lighting is coming from. The character now looks almost completely front-lit.
3. In Starfield, the character’s looks are fundamentally changed. The face actually has different proportions. As if it weren’t bad enough that all lighting will look the same, are we now expecting every character to be rendered with a catwalk-model chin? (Sidenote: what isn’t visible in that video, but was visible in an extended cut, is that the lighting is no longer visually stable. As if it weren’t bad enough that reflections react differently and fast-moving objects glitch out with DLSS, now we get to contend with subsurface scattering and ambient occlusion popping in and out depending on what the AI thinks about any given frame. There was a scene of a character in Starfield standing up and going for a walk. I can’t even tell you what the character looked like, because I was distracted by the unstable shadows in the environment. And this AI instability is something DLSS hasn’t fixed in its 8-year existence.)
4. EA … ha, nothing could make that game worse; this I’ll take as an absolute win.

Give me bland graphics over this crap any day.

Re:Quick look

By thegarbz • Score: 5, Insightful Thread

So that’s a thing that will bother people that are all wrapped around the axle about “atmosphere.”

The sad part about you putting the word “atmosphere” in quotes is that you fail to realise that the resident evil games have all fundamentally been about atmosphere. It’s what drew you in back in the days of the original Playstation where a character’s face was modelled with polygons you could count on your fingers.

Yeah-nah, I don’t want all my games looking the same thanks to an AI interpretation. The problem is, we now have another tool that will make developers not give a damn. As if it weren’t bad enough that DLSS has destroyed any motivation for developers to optimise their games, now we get to contend with them shitting out crappy games and relying on AI to make them look real.

But hey as long as the tech demo applied to an infamously bad game looks good right?

Re:“Gamers Hate”

By sound+vision • Score: 5, Informative Thread

DLSS (in its original AA/upscaling definition) is amazing. It looks infinitely better than whatever internal scaling your monitor can do. It gave us back something we lost in the transition away from CRTs, which is the ability to play games at something other than your monitor’s native resolution.

Frame gen is more of a mixed bag, I tend to think of it as a motion smoothing effect rather than “free performance”. It’s only useful in a narrow range of scenarios. The marketing of “Turn 20 fps into 120 fps with 6x frame gen” is BS.

Now this new stuff sounds like AI content generation, i.e. slop, meaning it’s totally useless.

Re:That actually looks pretty darn good…

By karmawarrior • Score: 5, Interesting Thread

I didn’t.

First of all, the stills look OK. But there’s still an uncanny valley effect.

Second, there was a radical variation in the styles used in each game. The Hogwarts woman 25s into the video, for instance, looked cartoonish. The guy immediately afterwards looked intended to be real, albeit with cartoonish clothing, which, set against his cartoonish cart driver, looked very out of place.

As an aside, the Hogwarts woman looked OK in paused scenes but seemed to have something seriously wrong with her face, like the skin was sliding off or something, when animated.

Third, the animation styles are the same head-bobbing, necks-making-weird-angles-when-speaking stuff we’ve seen in video games since the 1990s. DLSS doesn’t fix that. So you get the extreme weirdness of “real” people making video game movements. That adds more uncanny valley sense to the whole thing.

And this is a video of them showcasing it all going to plan. They’ve mostly used cut scenes. They’ve chosen sequences where there’s a lot of animation in the first half (while showing non-DLSS) and relatively little in the second (showing DLSS 5). So you’re seeing a VERY cherry-picked selection.

Despite this, it’s all uncanny valleys, which is why gamers are not happy with it.

This is what you get when you replace the executive staff at a graphics card maker, swapping enthusiastic gamers for AI-boosting charlatans.

Re:Quick look

By Z80a • Score: 5, Insightful Thread

The tech is potentially good, but the way Nvidia used it was pretty horrible, taking characters that are purposefully stylized and subjecting them to this terrible “what if Mario was real” kind of hell.
A game made with this from the ground up, with the NN trained to draw the character faces and all that, would be awesome, but the way they showed it was in really bad taste.

Finance Bros To Tech Bros: Don’t Mess With My Bloomberg Terminal

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from the Wall Street Journal:
A battle of insults and threats has broken out between the tech world and Wall Street. What’s got everyone so worked up? The same thing that starts most fights: business software. A series of social-media posts went viral in recent days with claims that AI has created a worthy — and way cheaper — alternative to the Bloomberg terminal, a computer system that is like oxygen to professional investors. Now “Bloomberg is cooked,” some posters argued as they heralded the arrival of a newly released AI tool from startup Perplexity. […]

The finance bros who worship at the altar of Bloomberg have declared war on the tech evangelists who have put all their faith in AI. To suggest that the terminal is replaceable is “laughable,” said Jason Lemire, who jumped into the conversation on LinkedIn. (Ironically or not, his post also included an AI-generated image of churchgoers praying to the Bloomberg terminal.) “It seems quite obvious to me that those propagating that post are either just looking for easy engagement and/or have never worked in a serious financial institution,” he wrote. […] Morgan Linton, the co-founder and CTO of AI startup Bold Metrics and an avid Perplexity Computer user, said it’s rare for a single AI prompt to generate anything close to what Bloomberg does. That said, he added that tools like this can lay “a really good foundation for a financial application. And that really has not been possible before.”

Others aren’t so sure. Michael Terry, an institutional investment manager who used the terminal for more than 30 years, said he used a prompt circulating online to try to vibe code a Bloomberg replica on Anthropic’s Claude. “It was laughable at best, horrific at worst,” he said. Shevelenko acknowledged there are some aspects of the terminal that can’t be replicated with vibe coding, including some of Bloomberg’s proprietary data inputs. The live chat network, which includes 350,000 financial professionals in 184 countries, would also be hard to re-create, as well as the terminal’s data security, reliability and robust support system. “I love Bloomberg. And I know most people that use Bloomberg are very, very loyal and extremely happy,” said Lemire. His message to the techies? “There’s nothing that you can vibe code in a weekend or even like over the course of a year that’s going to come anywhere close.”

Re: WTF is a Bloomberg Terminal?

By zeiche • Score: 5, Informative Thread

stop insulting folks, asstard.

Re:WTF is a Bloomberg Terminal?

By Koreantoast • Score: 5, Interesting Thread
It would not be an overstatement to say that Bloomberg is THE global information broker for the financial sector; no one else has the same amount of data and analysis that they do. The Bloomberg Terminal is the de facto tool used by financial professionals globally, involved in moving trillions of dollars in assets every day. The amount of information hosted there is incredible: split-second latest numbers for just about every financial and economic metric on the planet along with historicals going back decades, news before even news orgs start reporting, proprietary intelligence and analysis that provide details on the supply chains of individual firms that the firms themselves may not have as clear a view of, etc.

They have a massive network-effect advantage: their internal chat system has networked just about every major financial professional on the planet. There is also a regulatory advantage: the terminal is set up to navigate, from a compliance perspective, the complex web of financial regulations across dozens, if not hundreds, of regulatory bodies.

They are the de facto tool not just because they bring that data together in a way no one else on the market does, but because the terminal has the level of vetting, security, and support you’d expect for a system that the entire financial sector depends upon. AI could do some of the data manipulation, but it would take years to negotiate access to the sheer number of proprietary data sets they have, and to find professionals to train the models.

Re: lol

By PPH • Score: 5, Funny Thread

And event logging system. And login service. And file space mounting manager. And boot manager. It’s also a great dessert topping as well as a floor wax.

Re:lol

By Local ID10T • Score: 5, Insightful Thread

The people who make or lose millions on a single decision do not trust AI to change their decision making process.

This is an example of a Chesterton Fence

By Arrogant-Bastard • Score: 5, Insightful Thread
(There’s a Wikipedia entry on it, but I recommend Chesterton’s Fence: A Lesson in Thinking.)

The Bloomberg Terminal is a critical piece of financial infrastructure. It has its issues, to be sure, but it’s stable, functional, and has been tested under serious duress for a very long time…so it works. This is not some unimportant app or transient service or game; it’s actually important in the real world.

Could it be replaced? Sure. But it’s not going to be replaced by the kind of slop that vibe coding churns out. If it’s replaced, it will be replaced by the work product of superb designers, excruciatingly careful developers, and fanatical testers working together for years with professionals who’ve been in the field for decades.

I’ve been in this field for close to half a century, and I’m getting increasingly annoyed by the ignorance, illiteracy, and arrogance of young and inexperienced tech bros whose world view is so constricted, so limited, so myopic that it never occurs to them that no, they do not know the answers to everything, and yes, some of us cranky geezers who have actually been there and done that might know a thing or two that has thus far eluded them and maybe, just maybe, they ought to shut up, sit down, pay attention, take notes, and learn — if they’re capable of learning.

Samsung Ends $2,899 Galaxy Z TriFold Sales After Just Three Months

Posted by BeauHD View on SlashDot Skip
Samsung is reportedly ending sales of the Galaxy Z TriFold just months after launch, likely due to “high production costs” and limited supply. 9to5Google reports:
The Galaxy Z TriFold launched in South Korea barely four months ago, arriving in Samsung’s home market ahead of a larger debut in the U.S. and other markets in January. The $2,899 smartphone brought an entirely new form factor to the foldable market, but it’s apparently very short-lived.

Korean media reports (via SamMobile) that Samsung is planning to end sales of the Galaxy Z TriFold in Korea, with one more restock coming in the country this week. In the United States, the report mentions that the TriFold will be available until “the current production volume is sold out,” which sounds like we might only get another restock or two here as well.

Who could’ve seen this coming?

By fruviad • Score: 5, Insightful Thread

Overpriced phone based on a wildly unpopular form factor isn’t successful and is cancelled. Who’d a thunk it?

The REAL reason it got cancelled

By 93 Escort Wagon • Score: 5, Funny Thread

Samsung’s chairman, Lee Jae-yong, was overheard saying “F*** everything, we’re doing 5 folds.”

Fool me once …

By zmollusc • Score: 3 Thread

Every time I/we pay extra for more screen real estate, it just gets hijacked for advertising, on-screen controls and other crap obscuring the content, or the stupid website breaks the content over several pages and refuses to reflow or resize text.
Samsung should get all those idiots to chip in and buy me a three-grand phone if they are the ones getting to use it.

Jacket

By nospam007 • Score: 3 Thread

I’d buy 2 iPads professional and one of the special jackets with a pocket for them instead.

Re:Who could’ve seen this coming?

By TWX • Score: 4, Insightful Thread

I could. I don’t look at it as a phone so much as a foldable tablet with a screen sufficiently large as to be actually useful.

If it also happens to work well as a smartphone when folded down then that could likewise be useful.

Trouble is, it needs to be no more expensive than a phone and a tablet separately purchased in order for most potential customers to justify it. If it costs more than both combined then scant few will bother adopting it.

Nvidia Expects To Sell ‘At Least’ $1 Trillion In AI Chips By 2028

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from TechCrunch:
Nvidia CEO Jensen Huang threw out a lot of numbers — mostly of the technical variety — during his keynote Monday to kick off the company’s annual GTC Conference in San Jose, California. But there was one financial figure that investors surely took notice of: his projection that there will be $1 trillion worth of orders for Nvidia’s Blackwell and Vera Rubin chips, a monetary reflection of a booming AI business.

About an hour into his keynote, Huang noted that last year Nvidia saw about $500 billion in demand for its Blackwell and upcoming Rubin chips through 2026. “Now, I don’t know if you guys feel the same way, but $500 billion is an enormous amount of revenue,” he said. “Well, I’m here to tell you that right now where I stand — a few short months after GTC DC, one year after last GTC — right here where I stand, I see through 2027, at least $1 trillion.”

Why GPUs?

By Dan East • Score: 3 Thread

Serious question: why haven’t they architected something better than GPUs for running inference? Surely something specifically designed for the task could do it faster using less power? Something like Groq’s ASIC (that’s just one I’ve heard of). Why aren’t these the future, eclipsing GPUs, which were only ever a stop-gap that won out because they already existed and were the best fit at the time?

Re:Nobody can afford them

By nedlohs • Score: 4, Funny Thread

To the hyperscalers. Who then rent them to the AI companies that are losing hundreds of billions of dollars as fast as they can, and surely won’t be unable to pay their bills.

Re:Nobody can afford them

By ArchieBunker • Score: 5, Insightful Thread

The AI companies will pay with the money they don’t have to put in the datacenters that haven’t been built.

I only care about an RTX6070

By sabbede • Score: 4, Insightful Thread
Will there be one of those, or has Nvidia decided that the customers they built everything upon aren’t worth serving anymore?

Re:Why GPUs?

By Junta • Score: 5, Informative Thread

The datacenter “GPUs” at this point have been specifically designed for the task.

The B300 is mostly dedicated to FP4. The only use case for 4-bit floating point is AI. If you want VDI or non-AI use, you want something other than a B300.

Are Split Spacebars the Next Big Gaming Keyboard Trend?

Posted by BeauHD View on SlashDot Skip
“There are countless upgrades you could make to your gaming setup,” writes PC Gamer’s Jacob Ridley. “A wireless this, a bigger that, a faster thing. But how do you know what’s going to be a genuine upgrade worth investing in? Personally, I think it might be split spacebars.” His argument centers on the fact that spacebars take up a “greedy” amount of keyboard space — space that could instead be divided into multiple keys for different actions, such as voice chat or melee attacks. From the report:
While it’s often very easy to reprogram your spacebar to do a different action via your keyboard’s software, it’s a lot harder to reprogram your brain to hit any other key when you try to jump in game. Spacebar makes you jump. Everyone knows that; it’s practically etched onto your brain if you’re a long-time mouse and keyboard player. So, why does a split spacebar help with that? It comes down to this: once you know which side of a spacebar you tend to thwack with your thumb, you can program the other side to do whatever you want. I hit the right side of my spacebar every time when I’m typing. Therefore, when I started using a Wooting 60HE v2 with a split spacebar, I set the left side to be the delete key, the keyboard lacking a dedicated delete key at its 60% size.

Though for gaming, the split spacebar offers much more varied purpose. People do strange things with the WASD keys that I won’t litigate here, but I’m pretty sure most gamers use their left thumb to strike the spacebar for gaming. Right? Right. If you fall into this category, you have the option of using the right-side spacebar for things like a chunky melee key, or, my personal favorite, an in-game voice chat key.

wot

By ZERO1ZERO • Score: 5, Funny Thread
if i’ve my left hand on wasd, thumb on the left of space, and my right hand is on my mouse, how on earth do i hit the right side of my space bar without, you know, moving either hand? In which case i have about 90 other keys I could use instead. What’s that you say? oh right yeah, ..unzips…

Gamepad

By Himmy32 • Score: 3 Thread
At that point, why not use something purposefully designed, like a gamepad? Maybe if this trend continues, Logitech will bring back the G13.

US SEC Preparing To Scrap Quarterly Reporting Requirement

Posted by BeauHD View on SlashDot
The U.S. SEC is reportedly preparing a proposal to make quarterly earnings reports optional, potentially allowing companies to report results just twice a year. “The proposal could be published as soon as next month,” reports Reuters, citing a paywalled report from the Wall Street Journal, adding that “regulators are in talks with major exchanges to discuss how their rules may need to be adjusted.” Reuters reports:
The SEC will vote on the proposal once it is published, after a public comment period which typically lasts at least 30 days, the report said. The WSJ report added that the rule is expected to make quarterly reporting optional and not eliminate it altogether. The proposed change in the reporting standard would allow listed companies to publish results every six months instead of the current mandate to report figures every 90 days.

Trump, who first floated the idea in his first term as president, has argued the change in requirements would discourage shortsightedness from public companies while cutting costs. Skeptics, however, caution that delaying disclosures could reduce transparency and heighten market volatility.

Why not yearly?

By alvinrod • Score: 5, Interesting Thread
Why not just switch to yearly reporting? Companies can still report more often, but if it allows companies to hire managers that aren’t constantly chasing quarterly results at the expense of long term prospects, it’s better for everyone other than investors that like to profit off of valuation swings from quarterly earnings reports. Those people aren’t creating anything of real value anyway so why should I care if they have to find something more useful to do?

Information Hiding

By Luthair • Score: 5, Insightful Thread
Corporations are already engaged in pretty heavy information hiding that really limits the ability of investors to evaluate businesses. For example, one would think that the number of iPhones sold would be pretty important to know.

Mixed feelings..

By Local ID10T • Score: 5, Interesting Thread

I have mixed feelings on this.

On the one hand, discouraging modern businesses from chasing short-term results over longer-term goals is a good thing.

On the other hand, reporting less often to shareholders feels like “Trust me, bro.”

Overall, I think that for most businesses, detailed annual reporting is the sweet spot. It gives enough time to actually accomplish something -or at least make meaningful progress. It is not so long that we (as investors) forget what they promised in the previous report.

Re:Because

By fahrbot-bot • Score: 5, Insightful Thread

Companies are gaming their finances and will only have to worry about hiding it twice a year.

Re:Why not yearly?

By eepok • Score: 5, Interesting Thread

The quarterly report standard was set to ensure that the public is sufficiently well-informed about the fiscal health of publicly traded companies prior to their making investment decisions. Moving to a 6-month cycle increases the knowledge gap between the general public and those with inside knowledge.