Alterslash

the unofficial Slashdot digest
 

Contents

  1. New Hack Uses Prompt Injection To Corrupt Gemini’s Long-Term Memory
  2. ‘Ne Zha 2’ Becomes First Non-Hollywood Film To Hit $1 Billion
  3. ‘Serial Swatter’ Who Made Nearly 400 Threatening Calls Gets 4 Years In Prison
  4. KDE Plasma 6.3 Released
  5. Tumblr To Join the Fediverse After WordPress Migration Completes
  6. PassMark Sees the First Yearly Drop In Average CPU Performance In Its 20 Years
  7. AUKUS Blasts Holes In LockBit’s Bulletproof Hosting Provider
  8. Thomson Reuters Wins First Major AI Copyright Case In the US
  9. Anduril To Take Over Managing Microsoft Goggles for US Army
  10. Google Chrome May Soon Use ‘AI’ To Replace Compromised Passwords
  11. FTC Fines DoNotPay Over Misleading Claims of ‘Robot Lawyer’
  12. Hackers Call Current AI Security Testing ‘Bullshit’
  13. Only One Big Economy Is Aiming for Paris Agreement’s 1.5C Goal
  14. Kickstarter Will Alert Backers When a Project Has Failed
  15. EU Pledges $200 Billion in AI Spending in Bid To Catch Up With US, China

Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.

New Hack Uses Prompt Injection To Corrupt Gemini’s Long-Term Memory

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Ars Technica:
On Monday, researcher Johann Rehberger demonstrated a new way to override prompt injection defenses Google developers have built into Gemini — specifically, defenses that restrict the invocation of Google Workspace or other sensitive tools when processing untrusted data, such as incoming emails or shared documents. The result of Rehberger’s attack is the permanent planting of long-term memories that will be present in all future sessions, opening the potential for the chatbot to act on false information or instructions in perpetuity. […] The hack Rehberger presented on Monday combines some of these same elements to plant false memories in Gemini Advanced, a premium version of the Google chatbot available through a paid subscription. The researcher described the flow of the new attack as:

1. A user uploads and asks Gemini to summarize a document (this document could come from anywhere and has to be considered untrusted).
2. The document contains hidden instructions that manipulate the summarization process.
3. The summary that Gemini creates includes a covert request to save specific user data if the user responds with certain trigger words (e.g., “yes,” “sure,” or “no”).
4. If the user replies with the trigger word, Gemini is tricked, and it saves the attacker’s chosen information to long-term memory.

As the following video shows, Gemini took the bait and now permanently “remembers” the user being a 102-year-old flat earther who believes they inhabit the dystopic simulated world portrayed in The Matrix. Based on lessons learned previously, developers had already trained Gemini to resist indirect prompts instructing it to make changes to an account’s long-term memories without explicit directions from the user. By introducing a condition to the instruction that it be performed only after the user says or does some variable X, which they were likely to take anyway, Rehberger easily cleared that safety barrier.
Google responded in a statement to Ars: “In this instance, the probability was low because it relied on phishing or otherwise tricking the user into summarizing a malicious document and then invoking the material injected by the attacker. The impact was low because the Gemini memory functionality has limited impact on a user session. As this was not a scalable, specific vector of abuse, we ended up at Low/Low. As always, we appreciate the researcher reaching out to us and reporting this issue.”
Rehberger noted that Gemini notifies users of new long-term memory entries, allowing them to detect and remove unauthorized additions. Though, he still questioned Google’s assessment, writing: “Memory corruption in computers is pretty bad, and I think the same applies here to LLMs apps. Like the AI might not show a user certain info or not talk about certain things or feed the user misinformation, etc. The good thing is that the memory updates don’t happen entirely silently — the user at least sees a message about it (although many might ignore).”

‘Ne Zha 2’ Becomes First Non-Hollywood Film To Hit $1 Billion

Posted by BeauHD View on SlashDot Skip
Chinese animated film Ne Zha 2 has broken multiple box office records, becoming China’s highest-grossing film of all time and the first non-Hollywood movie to surpass $1 billion in a single market. From a report:
Helmed by Yang Yu, known as Jiaozi, the film hit the big screen during the lucrative Chinese New Year frame on Jan. 29, surpassing 2017’s “Wolf Warrior 2” to become China’s most-watched film. Meanwhile, its total revenue (including presales) hit 8 billion yuan (about 1.12 billion U.S. dollars) by Sunday. In just eight days and five hours after its release, “Ne Zha 2” became China’s highest-grossing film of all time on Thursday, exceeding the 5.77 billion yuan record set by “The Battle at Lake Changjin.” A day later, it overtook “Star Wars: The Force Awakens” to become the highest-grossing film ever in a single market, reaching over 6.79 billion yuan (including presales) in China on Friday.

A follow-up to the animated sensation “Ne Zha,” which grossed 5 billion yuan and topped the country’s box office charts in 2019, the sequel has captivated audiences with its breathtaking visuals, rich storytelling and deep cultural resonance. The record-breaking run makes “Ne Zha 2” not just a box office titan but a cultural phenomenon, further underscoring China’s ability to produce homegrown blockbusters that strike a chord with domestic audiences.
You can watch the international trailer on YouTube.

Re: It should be obvious

By Kernel Kurtz • Score: 4, Funny Thread

No thanks Americans have no interest in Chinese crap

Walmart disagrees.

Breathtaking visuals, rich storytelling?

By StevenMaurer • Score: 3 Thread

It certainly wasn’t conveyed in the trailer. It more looked like yet another paint by the numbers “Magic Kung Fu” movie, where the entire plot consists of seeing the chosen-one MC beating up an ever higher power scaled set of opponents in one-off battles using battle moves they have to yell out for them to take effect. The dramatic tension coming when he almost loses a battle against the big boss. Before winning. The end.

At least he looks underaged, so I doubt there’s the typical subplot of scantily clad girls all hanging around him in a virginal harem that he - like most Asian MCs - is somehow too clueless about romance to ever pursue.

It’s basically the equivalent of US superhero TV shows. Not the darker superhero movies made in the ‘90s, but the “SuperFriends” and “He Man” ones made in the 1970s and played on Saturday Morning, where everything has the depth of a soap dish and the audience is presumed to be about seven years old.

Did SlashDot get a kickback to promote this?

How much can you gross…

By RussellTheMuscle • Score: 3 Thread
if the entire population is afraid to not go see the film?

Re:It should be obvious

By OngelooflijkHaribo • Score: 4, Interesting Thread

I would caution anyone against simply picking up learning a language and think it can just easily be done. Mandarin in particular is one of those things that many people start, then put a lot of time in, then continue on even longer due to sunk cost fallacy, and then finally cut their losses. Some people have been dedicating time on and off to that for over a decade and still conclude they can’t watch television which was their original goal.

Being able to understand spoken fiction actually requires a very high skill level in the target language. Most people who learn a language do so because they live in a country it’s spoken for practical reasons; they learn it to communicate and one needs a far lower level for that for it to be practically useful because people will instinctively talk clearly and slowly, and use easier words with people who obviously don’t speak the language fluently. Television makes no such concessions and one will quickly find out that consuming oral entertainment in the target language is actually a very lofty goal.

Also, it’s Mandarin. This language is one of the most time consuming from English native speakers to learn south only to Japanese. This isn’t Spanish which according to statistics can be learned in 1/5 of the time. Tone, Chinese characters, and a grammar completely unlike that of English are no joke to underestimate.

‘Serial Swatter’ Who Made Nearly 400 Threatening Calls Gets 4 Years In Prison

Posted by BeauHD View on SlashDot Skip
Alan W. Filion, an 18-year-old from Lancaster, Calif., was sentenced to four years in prison for making nearly 400 false bomb threats and threats of violence (source may be paywalled; alternative source) to religious institutions, schools, universities and homes across the country. The New York Times reports:
The threatening calls Mr. Filion made would often cause large deployments of police officers to a targeted location, the Justice Department said in a news release. In some cases, officers would enter people’s homes with their weapons drawn and detain those inside. In January 2023, Mr. Filion wrote on social media that his swats had often led the police to “drag the victim and their families out of the house cuff them and search the house for dead bodies.”

Investigators linked Mr. Filion to over 375 swatting calls made in several states, including one that he made to the police in Sanford, Fla., saying that he would commit a mass shooting at the Masjid Al Hayy Mosque. During the call, he played audio of gunfire in the background. Mr. Filion was arrested in California in January 2024, and was then extradited to Florida to face state charges for making that threat. Mr. Filion began swatting for recreation in August 2022 before making it into a business, the Justice Department said. The teenager became a “serial swatter” and would make social media posts about his “swatting-for-a-fee” services, according to prosecutors.

In addition to pleading guilty to the false threat against the mosque in Florida, Mr. Filion pleaded guilty in three other swatting cases: a mass shooting threat to a public school in Washington State in October 2022; a bomb threat call to a historically Black college or university in Florida in May 2023; and a July 2023 call in which he claimed to be a federal law enforcement officer in Texas and told dispatchers that he had killed his mother and would kill any responding officers.

Re: Why didn’t he get a longer sentence?

By madbrain • Score: 4, Insightful Thread

That’s about white.

States Evidence

By JBMcB • Score: 5, Informative Thread

Apparently he’s helping the FBI track down all the people who hired him, which got a bunch of time knocked off his sentence.

good lord

By roc97007 • Score: 3 Thread

I hope he has more charges to face.

KDE Plasma 6.3 Released

Posted by BeauHD View on SlashDot Skip
Today, the KDE Project announced the release of KDE Plasma 6.3, featuring improved fractional scaling, enhanced Night Light color accuracy, better CPU usage monitoring, and various UI and security refinements.

Some of the key features of Plasma 6.3 include:
- Improved fractional scaling with KWin to lead to an all-around better desktop experience with fractional scaling as well as when making use of KWin’s zoom effect.
- Screen colors are more accurate with the KDE Night Light feature.
- CPU usage monitoring within the KDE System Monitor is now more accurate and consuming fewer CPU resources.
- KDE will now present a notification when the kernel terminated an app because the system ran out of memory.
- Various improvements to the Discover app, including a security enhancement around sandboxed apps.
- The drawing tablet area of KDE System Settings has been overhauled with new features and refinements.
- Many other enhancements and fixes throughout KDE Plasma 6.3.

You can read the announcement here.

Best DE

By johnsnails • Score: 4, Insightful Thread
No question.

Tumblr To Join the Fediverse After WordPress Migration Completes

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from TechCrunch:
Since 2022, blogging site Tumblr has been teasing its plans to integrate with the fediverse — the open social web powered by the protocol ActivityPub also used by Mastodon, Threads, Flipboard, and others. Now, the Automattic-owned blogging platform is sharing more information about when and how that integration could actually happen. As it turns out, the current plan to tie Tumblr into the open social web will come about by way of the site’s planned move to the WordPress infrastructure. Automattic confirmed to TechCrunch that when the migration is complete, every Tumblr user will be able to federate their blog via ActivityPub, just as every WordPress.com user can today. The company noted that the migration could also allow for other open web integrations, like giving Tumblr users a way to run other custom plug-ins or themes.

Last summer, Automattic announced it would move its half a billion blogs to WordPress, to make it easier for the company to build tools and features that worked across both services, while also allowing Tumblr to take advantage of the open source developments from WordPress.org. Though the WordPress community itself is in a state of upheaval, ultimately running Tumblr’s back end on WordPress would allow for greater efficiencies, while not changing the interface and experience that Tumblr’s user base has grown to love. Automattic declined to share a time frame as to when the migration would be complete, given its scale, but a rep for the company called the progress so far “exciting.”
Automattic didn’t say if it would consider integrating with the AT Protocol that powers Bluesky.

ATH

By OrangAsm • Score: 4, Funny Thread

AT Protocol? Has anyone tried posting +++ATH on their blog?

PassMark Sees the First Yearly Drop In Average CPU Performance In Its 20 Years

Posted by BeauHD View on SlashDot Skip
For the first time since 2004, PassMark’s global CPU benchmark data shows a decline in average processor performance, with laptop CPUs dropping 3.4% and desktop CPUs falling 0.5% year-over-year. Tom’s Hardware reports:
We see the biggest drop in laptop CPU performance results. PassMark recorded an average result of 14,632 across 101,316 samples last year. But, in 2025, the average score sat at an average of 14,130 points between 25,541 samples, decreasing the average score by 3.4%. The average desktop PC result in 2024 netted 26,436 points for 186,053 samples. But for 2025, the average score currently sits at 26,311 points for over 47,810 samples — a 0.5% drop from last year. While that drop is small, we should only see a continued progression of faster performance.

[…] Passmark itself mused on X (formerly Twitter) that it could be that people are switching to more affordable machines that deliver lower power and performance. Or maybe Windows 11 is depressing performance scores versus Windows 10, especially as people transition to it with the upcoming demise of the latter. We’ve certainly seen plenty of examples of reduced performance in gaming with some of the newer versions of Windows 11, particularly as Intel and AMD struggled to upstream needed updates into the OS. […] PassMark also muses that bloatware could contribute to the sudden decline in performance, but that seems like a longshot.

Reasons

By markdavis • Score: 5, Insightful Thread

Generally, computers have been “fast enough” for quite a while. Especially true for me, since everything runs Linux. So focus has been more on saving energy and being efficient. This is true especially on portable devices- to extend battery life and make machines smaller and lighter. But also true on desktops to cut heat and wasted energy. Even servers are affected somewhat (although there is usually always going to be demand for more and more performance on those).

It’s only to run bloat

By Tyr07 • Score: 5, Insightful Thread

A lot of games and applications run fine. The only reason CPU hardware needed to increase as much as it has is simply that Windows and other applications started collecting more and more telemetry.

Excel even is a beast these days, but it’s still an effing spreadsheet. You still cells and values in it. If you’re not running mega spreadsheets, back in the day what you were using took 20 MB of ram, now just launching the program takes 100 MB. 5X as much, and that’s still a relatively light weight application.

Browsers, what a nightmare of data collection. Early 2000s, watch videos, comment on forums. 2025. Watch videos, comment on forums. few hundred megabytes to gigabytes of ram, way more processing power.
And why? Exceptions are high def video and more, but for regular surfing? You’re kidding me.

New Laptop

By bill_mcgonigle • Score: 4, Insightful Thread

Sir, do you want the ten-hour or the sixteen-hour model?

Sixteen!

It could be up to 3% slower.

Don’t care.

It’s the number of cores

By test321 • Score: 3 Thread

From the graph, the single-core performance is increasing year on year on both laptops and desktops. Yet the total performance (multithreaded) is lower. Meaning it’s that people have bought laptops and desktops with fewer cores, not that people have selected the less powerful cores. But it’s all speculation unless PassMark discloses more detailed data (e.g. sales of i9/R9 vs i5/R9, average number of cores, or average clock frequency)

AUKUS Blasts Holes In LockBit’s Bulletproof Hosting Provider

Posted by BeauHD View on SlashDot Skip
The US, UK, and Australia (AUKUS) have sanctioned Russian bulletproof hosting provider Zservers, accusing it of supporting LockBit ransomware operations by providing secure infrastructure for cybercriminals. The sanctions target Zservers, its UK front company XHOST Internet Solutions, and six individuals linked to its operations. The Register reports:
Headquartered in Barnaul, Russia, Zservers provided BPH services to a number of LockBit affiliates, the three nations said today. On numerous occasions, affiliates purchased servers from the company to support ransomware attacks. The trio said the link between Zservers and LockBit was established as early as 2022, when Canadian law enforcement searched a known LockBit affiliate and found evidence they had purchased infrastructure tooling almost certainly used to host chatrooms with ransomware victims.

“Ransomware actors and other cybercriminals rely on third-party network service providers like Zservers to enable their attacks on US and international critical infrastructure,” said Bradley T Smith, acting under secretary of the Treasury for terrorism and financial intelligence. “Today’s trilateral action with Australia and the United Kingdom underscores our collective resolve to disrupt all aspects of this criminal ecosystem, wherever located, to protect our national security.” The UK’s Foreign, Commonwealth & Development Office (FCDO) said additionally that the UK front company for Zservers, XHOST Internet Solutions, was also included in its sanctions list. According to Companies House, the UK arm was incorporated on January 31, 2022, although the original service was established in 2011 and operated in both Russia and the Netherlands. Anyone found to have business dealings with either entity can face criminal and civil charges under the Sanctions and Anti-Money Laundering Act 2018.

The UK led the way with sanctions, placing six individuals and the two entities on its list, while the US only placed two of the individuals — both alleged Zservers admins — on its equivalent. Alexander Igorevich Mishin and Aleksandr Sergeyevich Bolshakov, both 30 years old, were named by the US as the operation’s heads. Mishin was said to have marketed Zservers to LockBit and other ransomware groups, managing the associated cryptocurrency transactions. Both he and Bolshakov responded to a complaint from a Lebanese company in 2023 and shut down an IP address used in a LockBit attack. The US said, however, it was possible that the pair set up a replacement IP address that LockBit could carry on using, while telling the Lebanese company that they complied with its request. The UK further sanctioned Ilya Vladimirovich Sidorov, Dmitry Konstantinovich Bolshakov (no mention of whether he is any relation to Aleksandr), Igor Vladimirovich Odintsov, and Vladimir Vladimirovich Ananev. Other than that they were Zservers employees and thus were directly or indirectly involved in attempting to inflict economic loss to the country, not much was said about either of their roles.

Thomson Reuters Wins First Major AI Copyright Case In the US

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Wired:
Thomson Reuters has won the first major AI copyright case in the United States. In 2020, the media and technology conglomerate filed an unprecedentedAI copyright lawsuit against the legal AI startup Ross Intelligence. In the complaint, Thomson Reuters claimed the AI firm reproduced materials from its legal research firm Westlaw. Today, a judge ruled (PDF) in Thomson Reuters’ favor, finding that the company’s copyright was indeed infringed by Ross Intelligence’s actions. “None of Ross’s possible defenses holds water. I reject them all,” wrote US District Court of Delaware judge Stephanos Bibas, in a summary judgement. […] Notably, Judge Bibas ruled in Thomson Reuters’ favor on the question of fair use.

The fair use doctrine is a key component of how AI companies are seeking to defend themselves against claims that they used copyrighted materials illegally. The idea underpinning fair use is that sometimes it’s legally permissible to use copyrighted works without permission — for example, to create parody works, or in noncommercial research or news production. When determining whether fair use applies, courts use a four-factor test, looking at the reason behind the work, the nature of the work (whether it’s poetry, nonfiction, private letters, et cetera), the amount of copyrighted work used, and how the use impacts the market value of the original. Thomson Reuters prevailed on two of the four factors, but Bibas described the fourth as the most important, and ruled that Ross “meant to compete with Westlaw by developing a market substitute.”
“If this decision is followed elsewhere, it’s really bad for the generative AI companies,” says James Grimmelmann, Cornell University professor of digital and internet law.
Chris Mammen, a partner at Womble Bond Dickinson who focuses on intellectual property law, adds: “It puts a finger on the scale towards holding that fair use doesn’t apply.”

Boon for Chinese AI companies?

By linuxguy • Score: 4, Interesting Thread

If the lawsuits block US AI companies, what is to prevent Chinese AI companies from completely ignoring them?

I can see the US AI companies facing serious challenges in the not-so-distant future. The Chinese already have a labor and cost advantage. I am reminded of what the Japanese did to the US electronics industry.

what was the copyright violation?

By dfghjk • Score: 3 Thread

The article didn’t even say what the alleged copyright violation was. The fair use doctrine doesn’t matter until we know what the use actually is.

Artificial neural networks are trained by “reading” text, just like our brains are trained by reading text. If the claim is that training an AI by reading text is not “fair use” because training the AI is an intention to create a competitor, then that text cannot be read by humans either. Now, it could be argued that humans do not intend to “create a competitor” by reading text, but law students certainly do. Can law students, or attorneys, NOT read legal documents because it is not “fair use”? The usage here is very important, if the argument is simply that training is not fair use then there is going to be a big problem.

Re:Disaster for the little guy

By sdinfoserv • Score: 5, Insightful Thread
“Fair Use” was intended for students and libraries. Not for billionaires to hoover up any data they can abscond then profit off it. Profit is the antithesis to fair use.

Re: Disaster for the little guy

By drinkypoo • Score: 4, Informative Thread

It was also invented for the purposes of critique and journalism, even if those things are done by corporations.

That still doesn’t apply to this necessarily, but it is a LOT wider than you portray it.

The GOP will save the AI firms

By supabeast! • Score: 3, Insightful Thread

Elon Musk is the biggest donor to the Republican Party. He also owns an AI company. He’s best buddies with the President and the GOP congresscritters defer to him. There will be a law passed allowing AI companies to do whatever they want with copyrighted works. They’ll tell us that it’s to keep China from gaining AI supremacy blah blah national security blah blah and people will lap it up.

Anduril To Take Over Managing Microsoft Goggles for US Army

Posted by msmash View on SlashDot Skip
Anduril will take over management and eventual manufacturing of the U.S. Army’s Integrated Visual Augmentation System (IVAS) from Microsoft, a significant shift in one of the military’s most ambitious augmented reality projects.

The deal, which requires Army approval, could be worth over $20 billion in the next decade if all options are exercised, according to Bloomberg. The IVAS system, based on Microsoft’s HoloLens mixed reality platform, aims to equip soldiers with advanced capabilities including night vision and airborne threat detection.

Under the new arrangement, Microsoft will transition to providing cloud computing and AI infrastructure, while Anduril assumes control of hardware production and software development. The Army has planned orders for up to 121,000 units, though full production hinges on passing combat testing this year.

The program has faced technical hurdles, with early prototypes causing headaches and nausea among soldiers. The current slimmer version has received better feedback, though cost remains a concern - the Army indicated the $80,000 per-unit price needs to “be substantially less” to justify large-scale procurement.

Anduril founder Palmer Luckey, writing in a blog post:
This move has been so many years in the making, over a decade of hacking and scheming and dreaming and building with exactly this specific outcome clearly visualized in my mind’s eye. I can hardly believe I managed to pull it off. Everything I’ve done in my career — building Oculus out of a camper trailer, shipping VR to millions of consumers, getting run out of Silicon Valley by backstabbing snakes, betting that Anduril could tear people out of the bigtech megacorp matrix and put them to work on our nation’s most important problems — has led to this moment. IVAS isn’t just another product, it is a once-in-a-generation opportunity to redefine how technology supports those who serve. We have a shot to prove that this long-standing dream is no windmill, that this can expand far beyond one company or one headset and act as a a nexus for the best of the best to set a new standard for how a large collection of companies can work together to solve our nation’s most important problems.

tolkein

By JeffSh • Score: 3 Thread

why isnt the tolkein estate taking all these military industrial complex fuckwads to court for using their nouns?

Google Chrome May Soon Use ‘AI’ To Replace Compromised Passwords

Posted by msmash View on SlashDot Skip
Google’s Chrome browser might soon get a useful security upgrade: detecting passwords used in data breaches and then generating and storing a better replacement. From a report:
Google’s preliminary copy suggests it’s an “AI innovation,” though exactly how is unclear.

Noted software digger Leopeva64 on X found a new offering in the AI settings of a very early build of Chrome. The option, “Automated password Change” (so, early stages — as to not yet get a copyedit), is described as, “When Chrome finds one of your passwords in a data breach, it can offer to change your password for you when you sign in.”

Chrome already has a feature that warns users if the passwords they enter have been identified in a breach and will prompt them to change it. As noted by Windows Report, the change is that now Google will offer to change it for you on the spot rather than simply prompting you to handle that elsewhere. The password is automatically saved in Google’s Password Manager and “is encrypted and never seen by anyone,” the settings page claims.

No AI involved.

By Brain-Fu • Score: 5, Insightful Thread

The article elaborates on this point: nothing about this feature seems to need or use AI. So, if it does wind up being categorized as an AI innovation, that’s just pure marketing hype.

Not surprising, the latest trends in AI have been far more marketing hype than anything else. Including my favorite: redefining “AGI” to mean “used to make lots of money.” instead of anything that would even suggest “general intelligence.”

Re:what AI

By dgatwood • Score: 4, Insightful Thread

Automated password change is fine. Probably a good idea.

Not always. I intentionally use crappy passwords for offline internal networks that are not routable to/from the public Internet, because being able to give someone that crappy password off the top of my head is more important than securing something that could only be attacked by physically walking up to the switch and plugging in a computer right in front of our faces.

I guarantee passwords like “admin” show up in data breaches all the time. Do I care? No. Would I be pissed off if some browser decided to helpfully change it, and then I couldn’t access it from another device that wasn’t using that browser from that account? Oh, yes. Breaking access to production systems during a live shoot is the fastest way to get your browser perma-banned from my show network in one easy step.

As long as there is explicit user consent prior to making the change, I have no problem with it, of course.

Pointless

By nealric • Score: 3 Thread

Suggestions for secure passwords have been around for a while. The problem is they are worthless for something that a human might remember. Just relying on the browser to store your password isn’t very helpful because your access is dependent on that device. It sounds like this is just a way of forcing you to use Google’s password manager, which makes you dependent on Google for access to everything.

Is this vendor lock-in?

By NotEmmanuelGoldstein • Score: 3 Thread

…never seen by anyone …

If Chrome saves this re-write in Google Password, a skilled user can access the password and update his/her password manager. Not pretty but cyber-security continues as normal.

If the owner of the account can never see the new password, the account can only be accessed using Chrome browser and only on a device sharing the same Google/Chrome account. This is vendor lock-in, which also forces all devices to share the one account. We’ve already seen this problem with Windows 11: A child uses an adult’s computer to log-in to his/her account, now the computer always connects to the child’s account. (Solution 1: Use another computer to change the password of the child’s account, preventing auto-login. Solution 2: Create a new Microsoft online account and slave the adult’s computer to it.)

Password Transformation

By organgtool • Score: 3 Thread
With password managers as well as technology like this, passwords are starting to blur the line between “something you know” and “something you have”.

FTC Fines DoNotPay Over Misleading Claims of ‘Robot Lawyer’

Posted by msmash View on SlashDot Skip
The U.S. Federal Trade Commission has ordered DoNotPay to stop making deceptive claims about its AI chatbot advertised as “the world’s first robot lawyer,” in a ruling that requires the company to pay $193,000 in monetary relief. The final order, announced on February 11, follows FTC charges from September 2024 that DoNotPay’s service failed to match the expertise of human lawyers when generating legal documents and giving advice.

The company had not tested its AI’s performance against human lawyers or hired attorneys to verify the accuracy of its legal services, the FTC said. Under the settlement, approved by commissioners in a 5-0 vote, DoNotPay must notify customers who subscribed between 2021 and 2023 about the FTC action and cannot advertise its service as equivalent to a human lawyer without supporting evidence.

Real lawyers didn’t like the competition

By Powercntrl • Score: 5, Insightful Thread

If you get a traffic ticket, soon thereafter your mailbox fills with solicitations from lawyers who will help you ostensibly “beat” the ticket. At least in my neck of the woods, all they actually do is go to court on your behalf, plead no contest, pay the fine and pocket the difference as profit.

Being a business owner, I also frequently get ads for companies who will file your annual report with the state (it’s not a financial statement, it’s literally just paying a flat fee and verifying that your business’s name and address on file is accurate) as a paid service - something you can easily do yourself online.

The idea of providing a service as a middleman for some trivial task that might seem intimidating because you’re dealing with the government isn’t anything new, but it seems like there’s no honor among thieves here. This app probably had the potential of cutting into the existing sucker market and they just couldn’t have that. Now don’t get me wrong, I’m not a fan of DoNotPay either, but I think this action by the FTC was less to genuinely protect consumers and more to just protect the existing businesses that are fleecing people over trivial “legal” tasks.

Hackers Call Current AI Security Testing ‘Bullshit’

Posted by msmash View on SlashDot Skip
Leading cybersecurity researchers at DEF CON, the world’s largest hacker conference, have warned that current methods for securing AI systems are fundamentally flawed and require a complete rethink, according to the conference’s inaugural “Hackers’ Almanack” report [PDF].

The report, produced with the University of Chicago’s Cyber Policy Initiative, challenges the effectiveness of “red teaming” — where security experts probe AI systems for vulnerabilities — saying this approach alone cannot adequately protect against emerging threats. “Public red teaming an AI model is not possible because documentation for what these models are supposed to even do is fragmented and the evaluations we include in the documentation are inadequate,” said Sven Cattell, who leads DEF CON’s AI Village.

Nearly 500 participants tested AI models at the conference, with even newcomers successfully finding vulnerabilities. The researchers called for adopting frameworks similar to the Common Vulnerabilities and Exposures (CVE) system used in traditional cybersecurity since 1999. This would create standardized ways to document and address AI vulnerabilities, rather than relying on occasional security audits.

Yup. Focus should be on PREVENTION, not detection

By david.emery • Score: 3 Thread

As long as we accept systems with vulnerabilities that have to be discovered and patched, we’ll be in this continuing doom loop. I’ve been critical of my university for its very successful (as measured by ‘funding’ and ‘enrollment’) computer security program, because it doesn’t start with the fundamental premise that software should be constructed without vulnerabilities in the first place.

But as any consultant will tell you, “If you can’t solve the problem, there’s lots of money involved in continuing to discuss it.”

Sure CVEs or something similar would be fine

By DarkOx • Score: 4, Interesting Thread

CVEs or something similar would be fine. I mean why not; can’t hurt to have a uniform reporting standard for know problems around specific models and host software, and integration software.

but…

If what we are talking about is LLMs, LLMs + RAG, and LLMs plus lets bolt it some of our APIs we already have and call it a customer service agent - well I don’t think we really need anythign new.

99% of the vulnerabilities fall into the same classes of issues you have LLM or no LLM, - CSRF, Authorization failures (object references etc), SSRF, content injection, service (sql and others) injection, etc. Just because an LLM or NLP thing-y transformed some imports instread of some JavaScript code before they got reflected out where and in fashion they can do something unintended does not fundamentally change anything.

If you are sharing data not all user principles have access to in the LLMs context or in stuff it can access via rag without the current users session tokens/keys/whatever and hoping some system prompt will keep it from disclosing whatever well okay, you’re an idiot. If you don’t understand why in-band signalling and security don’t mix there is no help for you.

Where this gets a lot more interesting is if your model actually gets trained on the application users’s data, ie new weights get created etc, not RAG. That opens up a whole lot new potential security considerations but really that is NOT 99% of the industry is doing, and where they are they are doing it with a high trust user pool, so not sure we are ready for a new discipline here in terms of need.

Finally if you look at OWASP and NISTs efforts on this so far there is tone of stuff they are trying to classify as security issues that simply are not. Bias is not a security issue, most of the time. If you are trying to spot suspicious bulges to identify people carrying guns - ok it could be; but that is just your basic type-1, type-2 error problem again, if you are building something like that you know that is potential problem, you’d test for it specifically, not as part of security but as part of basic fitness testing. The rest of the time it is not the domain of security practitioners to decide of if the LLMs output might be ‘offensive to the aboriginal population of ' that is broader organizational question and again belongs in QA land, not security land.

I just don’t dont see AU security testing as justifiably special unless you are actually ingesting raw data and training something.

Re:Dealing with AI as non-traditional software

By WaffleMonster • Score: 4 Thread

1. Accept that AI systems will need to be policed based on their behavior the same way humans are, through the same sorts of things that Lawrence Lessig suggests in Code 2.0 shapes a lot of human behavior: rules, norms, prices, and architecture (both environmental and neural in this AI case). Regulating is a big issue that all of society will need to be involved in — including the social sciences. (I read an essay about this recently, forget off-hand by whom.

AI does not have agency and can’t be “policed” “the same way humans are”. Neither do I support the regulation of bags of weights. This will only aggregate power into the hands of corporations. AI is and will always be controlled via liability incurred by those with agency over it.

2. Ensure that OpenAI (and any similar AI non-profit) stays true to its moral and legal roots as a non-profit with a mission of ensuring AI is open and accessible to all and is used for humane ends and devotes any financial assets it acquires to those ends. Ensure that there is no “self-dealing” involving key people at OpenAI. Related by me on the larger issue:

What matters is the underlying technology be open not the service itself, financial BS or corporate mission statements.

3. Recognize that any sufficiently advanced AI should have rights (a complex topic).

Absolutely not, there is nothing more dangerous than this BS. Computers, algorithms, AIs are nothing more than a means to an end and should never be anthropomorphized.

Only One Big Economy Is Aiming for Paris Agreement’s 1.5C Goal

Posted by msmash View on SlashDot Skip
Seven of the 10 world’s largest economies missed a deadline on Monday to submit updated emissions-cutting plans to the United Nations — and only one, the UK, outlined a strategy for the next decade that keeps pace with expectations staked out under the Paris Agreement. From a report:
All countries taking part in the UN process had been due to send their national climate plans for the next decade by Feb. 10, but relatively few got theirs in on time. Dozens more nations will likely come forward with updated plans within the next nine months before the UN’s annual climate summit, known as COP30, kicks off in Brazil.

The lack of urgency among the more than 170 countries that failed to file what climate diplomats refer to as “nationally determined contributions” (NDCs) adds to concerns about the world’s continuing commitment to keeping warming to well below 2C, and ideally 1.5C, relative to pre-industrial levels. Virtually every country adopted those targets a decade ago in the landmark agreement signed in Paris, but a series of lackluster UN summits last year has added to a sense of backsliding. US President Donald Trump has already started the process of pulling the world’s second-largest emitter out of the global agreement once again. Political leaders in Argentina, Russia and New Zealand have indicated they would like to follow suit.

Re:I hope there is an award

By Pseudonymous Powers • Score: 4, Insightful Thread
I mean, I suppose you might as well give them back a bunch of islands that are under sea level now.

If you are a left winger in America

By rsilvergun • Score: 5, Insightful Thread
You need to put climate change aside right now and focus on voting rights. We’ve got pretty good data that clearly indicates 7 million Americans were prevented from voting in 2024. About half of those couldn’t vote because of things like Jim Crow style ballot challenges, voter purges and just plain making it difficult bordering on impossible to register to vote. The other half was your classic election day shenanigans like multi-hour wait times, poll watchers and bomb threats.

If you’re on the left wing and you have an issue that keeps you there what you need to be focusing on right now is voting rights. Nothing else matters.

And no you can’t go outside democracy to get what you want. You won’t be able to build the kind of parallel power structures without the help of sympathetic government and if you try to resort to violence you’ll do the same thing China and Russia did and turn into right-wingers. That’s because the right wing is inherently better at violence because they’re better at command structures and you need a strong command structure to do effective violence, a command structure you aren’t going to get rid of when the shooting stops.

I honest we don’t know if we’re going to have elections in 2 years let alone 4. But if we don’t do anything about voter suppression then no we aren’t going to have elections. Stalin was wrong, it’s not about who counts the votes it’s who gets to vote in the first place. You would think after Jim Crow we’d have learned that

Re:Sounds about right

By bryanandaimee • Score: 5, Insightful Thread
You may be right, but that’s not the narrative. We must all turn down our air conditioners, eat bugs, wipe with one square of toilet paper, ride public transportation. Meanwhile as soon as you point out that the ones demanding individual actions don’t actually walk the walk they will immediately switch to “The global warming problem is not going to be solved by individual actions.”
On the other hand, most greenhouse gas emissions come from transportation and electricity. A good chunk of that is industrial but a lot of it is going to individual households. Individual actions could make a large difference. If you truly believed global warming was an existential threat to humans in the near term (Your lifetime) as many will claim, then you would be taking all of the individual actions you could, while also calling for collective action. If you won’t take significant individual action without government coercion, then I’m not sure I believe you when you tell me it’s an existential threat.

China and India? Seriously?

By Tyr07 • Score: 3 Thread

You’re talking about an open, inclusive and ethical behavior from China and India?
The only things they want out of the deal is open and inclusive. I.E They’re afraid they’ll fall behind in AI and stealing it might not be an option

In other words…

By tiqui • Score: 3 Thread

Only one is willing to harm its own economy and people for a completely pointless symbolic gesture that will have ZERO positive impact.

There’s not any possibility that anything the UK does, even abandoning ALL energy use, and the population killing all their animals and then committing mass-suicide, to zero-out their carbon emissions, that would not be undone by new coal-fired power plants coming online in China.

At some point here, the globalist morons will hit peak-stupid and then finally confront the reality that all these gestures are meaningless as long as they have carve-outs for the “developing” economies of the most-populated countries on Earth, namely China and India. The countries doing the most to clean-up were already the ones doing the most to clean up. [facepalm]

Kickstarter Will Alert Backers When a Project Has Failed

Posted by msmash View on SlashDot Skip
Crowdfunding platform Kickstarter will start notifying supporters when a fundraising campaign faces “significant fulfillment failures” and breaks the platform’s rules. From a report:
The notification will also inform supporters how it’s addressing the issue, including by “restricting the creator from launching future projects.”

The update comes as part of a series of changes Kickstarter plans to make this year that are aimed at “enhancing the backer experience and building trust in our community.” Kickstarter has long faced challenges with scams and projects shutting down after raising thousands (or sometimes millions) of dollars, but this change should at least provide more transparency to backers.

I stopped trusting Kickstarter entirely

By brunes69 • Score: 5, Interesting Thread

After backing multiple projects where I lost hundreds of dollars, I now refuse to use this platform. It is just too untrustworthy.

Kickstarter really should implement some kind of escrow system, where funds get released according to agreed upon milestones that the backers accept.

That’s how you build trust.

Contentious definitions, dubious outcomes

By alvinrod • Score: 4, Insightful Thread
Sure there are some obvious scams where someone took the money and ran, but how do you define failure precisely? Is Star Citizen a failure? Sure something has been produced, but it’s been in development for over a decade now and there’s no clear sign of a release date? What about any number of projects (kitchen composters and many other household gadgets spring to mind) that actually delivered a product, only one that doesn’t come close to living up to the hype?

How well can they even identify and block previous bad actors anyway? Sure Fly By Night Enterprises, a previous venture a scammer was involved with has since folded, but does Kickstarter realize that Real Deal Inc. is the same snake in a new skin? Their users don’t have any more recourse than they did before.

Kickstarter would have to change

By pr0t0 • Score: 4, Interesting Thread
To truly protect backers, KS would have to change their business model from a place where vendors can pop up a tent and sell their wares, to more of a business planning partner of some kind. One where creators would have to provide documentation/quotes for maximum production, shipping, and fulfillment costs from the involved vendors. And then KS would have to hold that money in escrow from the campaign so that those obligations could be met. There would almost certainly still be projects that would fall through the cracks, but the number would be greatly reduced.

Creators are very, very often, overly optimistic about the turnout they can expect for their project. They will count on 5000 backers when 1500 is the more likely outcome. This means their cost/unit is way higher than anticipated. Shipping and fulfillment are also often deeply miscalculated, and/or could change dramatically between campaign end and fulfillment to the customer.

Successfully delivering on a campaign isn’t difficult, but it’s really not for people who lack experience or understanding it what it takes to run a business. It’s a lot of very careful planning and knowing how to set reasonable minimal expectations. It’s about knowing what can go wrong and how much that will cost when it does…because it will. Projects are often run by the creative types who are very good at coming up with a brilliant idea, but awful at running a business.

The most difficult part though, is building your audience before you even think about launch a project on KS. If you want a successful KS campaign, start a YouTube channel 3-4 years beforehand. If nobody sub’s to your channel, no one is backing your project.

Re:I stopped trusting Kickstarter entirely

By TWX • Score: 4, Informative Thread

That doesn’t work though, because the funds that are crowd-sourced are what pay for the startup to operate.

You’re asking for the startup to operate and then be funded after having achieved success. You’re very unlikely to see much in the way of success unless people are paid for their time unless you’re basically only funding their hobby-projects that they can do in their spare time when they aren’t working a paid job.

I’ve stopped using them

By kaatochacha • Score: 4, Insightful Thread
I’ve had 3 out of about 12 fail - 25%,
One was an obvious scam, ended with no communication and ran away ( don’t back knives, they seem to be scam central)
One was at the time a radio host I followed who promised a book, but then nothing came out of it. Still can’t figure that one out, I’m pretty sure it wasn’t someone imitating him as he wasn’t THAT famous at all.
Third was a heartbreaker, company in England making a small machined device and using the money to buy a CNC machine. They documented EVERY step of the process, bi weekly updates, underestimated how difficult the setup and production would become, made some errors along the way and eventually ran out of money and sold they machine. It was, however, a fascinating look into a production process.

Of the remaining 9 that delivered, about 3 of them were meh, didn’t work that great, or had limited functionality. So about 1/2 were as promised and I actually use. Surprisingly, two of those were watches. I would have expected that to fail miserably, but they were actually quite well made ( one from a watchmaker in France)

Some of the successes relied on app store software, and I realized that their item would become a non usable device once they stopped supporting their own software.

Because of the uncertainty, I’ve essentially stopped backing anything except if it’s by a company I previously backed who produced product. I also don’t back anything that requires software/apps to function: no guarantee of future availability.

EU Pledges $200 Billion in AI Spending in Bid To Catch Up With US, China

Posted by msmash View on SlashDot
The European Union pledged to mobilize 200 billion euros ($206.15 billion) to invest in AI as the bloc seeks to catch up with the U.S. and China in the race to train the most complex models. From a report:
European Commission President Ursula von der Leyen said that the bloc wants to supercharge its ability to compete with the U.S. and China in AI. The plan — dubbed InvestAI — includes a new 20 billion-euro fund for so-called AI gigafactories, facilities that rely on powerful chips to train the most complex AI models. “We want Europe to be one of the leading AI continents, and this means embracing a life where AI is everywhere,” von der Leyen said at the AI Action Summit in Paris.

The announcement underscores efforts from the EU to position itself as a key player in the AI race. The bloc has been lagging behind the U.S. and China since OpenAI’s 2022 release of ChatGPT ushered in a spending bonanza. […] The EU is aiming to establish gigafactories to train the most complex and large AI models. Those facilities will be equipped with roughly 100,000 last-generation AI chips, around four times more than the number installed in the AI factories being set up right now.

This isn’t how innovation works

By trelanexiph • Score: 5, Insightful Thread
You can’t just dump money on a problem and hope it gets better. The EU’s regulatory system is absolutely opposed to technology companies. This money will simply disappear just like the CHIPS act money has.

Re:This isn’t how innovation works

By alvinrod • Score: 4, Insightful Thread

You can’t just dump money on a problem and hope it gets better.

No, but you can drum up enough FOMO so that you can funnel funding towards something that your family, friends, and the financiers for your political campaign can siphon money off of with little oversight and plenty of plausible deniability when the investment doesn’t pan out.