While AI could offer transformative benefits, without proper safeguards it could facilitate nuclear and biological threats and cause “potentially irreversible harms,” a new report commissioned by California Governor Gavin Newsom has warned. “The opportunity to establish effective AI governance frameworks may not remain open indefinitely,” says the report, which was published on June 17 (PDF). Citing new evidence that AI can help users source nuclear-grade uranium and is on the cusp of letting novices create biological threats, it notes that the cost for inaction at this current moment could be “extremely high.” […]
“Foundation model capabilities have rapidly advanced since Governor Newsom vetoed SB 1047 last September,” the report states. The industry has shifted from large language AI models that merely predict the next word in a stream of text toward systems trained to solve complex problems and that benefit from “inference scaling,” which allows them more time to process information. These advances could accelerate scientific research, but also potentially amplify national security risks by making it easier for bad actors to conduct cyberattacks or acquire chemical and biological weapons. The report points to Anthropic’s Claude 4 models, released just last month, which the company said might be capable of helping would-be terrorists create bioweapons or engineer a pandemic. Similarly, OpenAI’s o3 model reportedly outperformed 94% of virologists on a key evaluation. In recent months, new evidence has emerged showing AI’s ability to strategically lie (appearing aligned with its creators’ goals during training but displaying other objectives once deployed) and to exploit loopholes to achieve its goals, according to the report. While “currently benign, these developments represent concrete empirical evidence for behaviors that could present significant challenges to measuring loss of control risks and possibly foreshadow future harm,” the report says.
While Republicans have proposed a 10-year ban on all state AI regulation over concerns that a fragmented policy environment could hamper national competitiveness, the report argues that targeted regulation in California could actually “reduce compliance burdens on developers and avoid a patchwork approach” by providing a blueprint for other states, while keeping the public safer. It stops short of advocating for any specific policy, instead outlining the key principles the working group believes California should adopt when crafting future legislation. It “steers clear” of some of the more divisive provisions of SB 1047, like the requirement for a “kill switch” or shutdown mechanism to quickly halt certain AI systems in case of potential harm, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace and a lead writer of the report.
Instead, the approach centers around enhancing transparency, for example through legally protecting whistleblowers and establishing incident reporting systems, so that lawmakers and the public have better visibility into AI’s progress. The goal is to “reap the benefits of innovation. Let’s not set artificial barriers, but at the same time, as we go, let’s think about what we’re learning about how it is that the technology is behaving,” says Cuellar, who co-led the report. The report emphasizes this visibility is crucial not only for public-facing AI applications, but for understanding how systems are tested and deployed inside AI companies, where concerning behaviors might first emerge. “The underlying approach here is one of ‘trust but verify,’” Singer says, a concept borrowed from Cold War-era arms control treaties that would involve designing mechanisms to independently check compliance. That’s a departure from existing efforts, which hinge on voluntary cooperation from companies, such as the deal between OpenAI and Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute) to conduct pre-deployment tests. It’s an approach that acknowledges the “substantial expertise inside industry,” Singer says, but “also underscores the importance of methods of independently verifying safety claims.”
The announcements come amidst the escalating war between Iran and Israel, which broke out after Israel attacked the country on June 12th, and a rise in reported internet outages. Civilians have claimed that they’ve been unable to access basic but critical telecommunications services, such as messaging apps, maps, and sometimes the internet itself. Cloudflare reported that two major Iranian cellular carriers effectively went offline on Tuesday, and The New York Times reports that even VPNs, which Iranians frequently use to access banned sites like Facebook and Instagram, have become increasingly harder to access. […]
Israel’s role in the cyber outages has not been officially confirmed, but independent analysts at NetBlocks noticed a significant reduction of internet traffic originating from Iran on Tuesday, starting at 5:30 PM local time. According to Tasnim, a news network affiliated with the Iranian Revolutionary Guards, Iranians will still have access to the country’s state-operated national internet service, though two Iranian officials told the Times that the internal bandwidth could be reduced by up to 80 percent.
As written, the bill would set up guardrails around the approval and supervision of U.S. issuers of stablecoins, the dollar-based tokens such as the ones backed by Circle, Ripple and Tether. Firms making these digital assets available to U.S. users would have to meet stringent reserve demands, transparency requirements, money-laundering compliance and regulatory supervision that’s also likely to include new capital rules. “This is a win for the U.S., a win for innovation and a monumental step towards appropriate regulation for digital assets in the United States,” said Amanda Tuminelli, executive director and chief legal officer of the DeFi Education Fund, in a similar statement. […]
While this is the first significant crypto bill to clear the Senate, it’s also the first time a stablecoin bill has passed either chamber, despite years of negotiation in the House Financial Services Committee that managed to produce other major crypto legislation in the previous congressional session. The destiny of the GENIUS Act is also tied closely to the House’s own Digital Asset Market Clarity Act, the more sweeping crypto bill that would establish the legal footing of the wider U.S. crypto markets. The stablecoin effort is slightly ahead of the bigger task of the market structure bill, but the industry and its lawmaker allies argue that the two are inextricably connected and need to become law together. So far, the Clarity Act has been cleared by the relevant House committees and awaits floor action.
Like I’ve said before, this is just yet another financial system being created to have a minority of people manage the majority of the wealth, to their own advantage. This is just a new competing system created by the crypto bros to wrestle the current system away from the Wall St. bros.
And that “huge surge of Democrats joining their Republican counterparts”? Well whaddaya think that big money in the crypto world is buying up right now, besides new yachts? Money talks, baby, and remember, elections coming up again soon (it never fucking ends!).
“President Trump will sign an additional Executive Order this week to keep TikTok up and running,” White House Press Secretary Karoline Leavitt said in a statement. “As he has said many times, President Trump does not want TikTok to go dark. This extension will last 90 days, which the Administration will spend working to ensure this deal is closed so that the American people can continue to use TikTok with the assurance that their data is safe and secure.”
ByteDance was nearing a June 19 deadline to sell TikTok’s U.S. operations in order to satisfy a national security law that the Supreme Court upheld just a few days before Trump’s second presidential inauguration. Under the law, app store operators like Apple and Google and internet service providers would be penalized for supporting TikTok. ByteDance originally faced a Jan. 19 deadline to comply with the national security law, but Trump signed an executive order when he first took office that pushed the deadline to April 5. Trump extended the deadline for the second time a day before that April mark. Trump told NBC News in May that he would extend the TikTok deadline again if no deal was reached, and he reiterated his plans on Thursday.
Back in 2016 I was threatened with a “taco truck on every corner” and that threat was a complete lie. https://en.wikipedia.org/wiki/…
A taco truck near work would be amazing. Hell I will even settle for a tamale lady who sells them from her car.
I really don’t fear the CCP. They’re pretty transparent about their goals. Their goal is China first. They put massive investments into infrastructure and high end manufacturing. Don’t for a minute think that China only produces cheap garbage. Americans buy the cheapest shit that China sells. If you bought the absolute cheapest domestic products you’d get the same quality.
What I do fear is the current administration who doesn’t even hide the fact that favors are for sale. https://thehill.com/homenews/a…
The head of the military is a drunken former Fox News host who accidentally texts top secret plans to journalists on unsecure devices. https://www.npr.org/2025/04/22…
The head of the department of education is an elderly CEO who used to run a wrestling company and doesn’t know the difference between artificial intelligence and steak sauce. https://www.usatoday.com/story… Bonus for her husband Vince, whose legal troubles will likely quietly vanish.
As for the head of the department of health, RFK Jr.
I’ll present his own words: I don’t want to seem like I’m being evasive, but I don’t think people should be taking medical advice from me https://www.cbsnews.com/news/r…
The rest of the cabinet is a rogues gallery of billionaire assholes and other unqualified DEI hires. DEI in this context meaning they were not hired on the basis of being best qualified.
That is the stuff I fear.
He’ll ban them the instant he stops getting paid not to.
This is extortion, plain and simple.
[…] the rise in China of open technology, which relies on transparency and decentralisation, is awkward for an authoritarian state. If the party’s patience with open-source fades, and it decides to exert control, that could hinder both the course of innovation at home, and developers’ ability to export their technology abroad.
China’s open-source movement first gained traction in the mid-2010s. Richard Lin, co-founder of Kaiyuanshe, a local open-source advocacy group, recalls that most of the early adopters were developers who simply wanted free software. That changed when they realised that contributing to open-source projects could improve their job prospects. Big firms soon followed, with companies like Huawei backing open-source work to attract talent and cut costs by sharing technology.
Momentum gathered in 2019 when Huawei was, in effect, barred by America from using Android. That gave new urgency to efforts to cut reliance on Western technology. Open-source offered a faster way for Chinese tech firms to take existing code and build their own programs with help from the country’s vast community of developers. In 2020 Huawei launched OpenHarmony, a family of open-source operating systems for smartphones and other devices. It also joined others, including Alibaba, Baidu and Tencent, to establish the OpenAtom Foundation, a body dedicated to open-source development. China quickly became not just a big contributor to open-source programs, but also an early adopter of open-source software. JD.com, an e-commerce firm, was among the first to deploy Kubernetes.
AI has lately given China’s open-source movement a further boost. Chinese companies, and the government, see open models as the quickest way to narrow the gap with America. DeepSeek’s models have generated the most interest, but Qwen, developed by Alibaba, is also highly rated, and Baidu has said it will soon open up the model behind its Ernie chatbot.
Plasma is a popular desktop (and mobile) environment for GNU/Linux and other UNIX-like operating systems. Among other things, it also powers the desktop mode of the Steam Deck gaming handheld. The KDE community today announced the latest release: Plasma 6.4. This fresh new release improves on nearly every front, with progress being made in accessibility, color rendering, tablet support, window management, and more.
Plasma already offered virtual desktops and customizable tiles to help organize your windows and activities, and now it lets you choose a different configuration of tiles on each virtual desktop. The Wayland session brings some new accessibility features: you can now move the pointer using your keyboard’s number pad keys, or use a three-finger touchpad pinch gesture to zoom in or out.
Plasma’s file transfer notification now shows a speed graph, giving you a more visual idea of how fast the transfer is going and how long it will take to complete. When any application is in full-screen mode, Plasma will now enter Do Not Disturb mode and only show urgent notifications. When you exit full-screen mode, you’ll see a summary of any notifications you missed.
Now, when an application tries to access the microphone and finds it muted, a notification will pop up. A new feature in the Application Launcher widget will place a green New! tag next to newly installed apps, so you can easily find where something you just installed lives in the menu.
The Display and Monitor page in System Settings comes with a brand new HDR calibration wizard. Support for Extended Dynamic Range (a different kind of HDR) and the P010 video color format has also been added. System Monitor now supports usage monitoring for AMD and Intel graphics cards — it can even show the GPU usage on a per-process basis.
Spectacle, the built-in app for taking screenshots and screen recordings, has a much-improved design and more streamlined functionality. The background of the desktop or window now darkens when an authentication dialog shows up, helping you locate and focus on the window asking for your password.
There’s a brand-new Animations page in System Settings that groups all the settings for purely visual animated effects into one place, making them easier to find and configure. Aurorae, a newly added SVG vector graphics theme engine, enhances KWin window decorations.
You can read more about these and many other features in the Plasma 6.4 announcement and complete changelog.
Plasma launched with KDE 4 in 2008. 17 years later, it still doesn’t seem like a meaningful improvement over KDE 3.5.
Aaron Seigo, with Plasma, set KDE back years. Way too many years.
I don’t understand what you want to say here. I read “no meaningful improvement” as “consistent user experience”. Users who loved KDE 3.5 are still able to work the same way with recent Plasma. KDE could choose a different paradigm in hope of providing “meaningful improvement”, but for people who are looking for something else, there are already alternatives.
Look at Windows, people have been very vocal that XP or 7 provided the best user experience and more recent versions broke something that worked well. Look at GNOME, they sought to provide meaningful improvement with version 3, and fragmented their community with two very active forks. OTOH KDE managed to keep most of their community aligned (Trinity isn’t widely used).
There’s a brand-new Animations page in System Settings that groups all the settings for purely visual animated effects into one place, making them easier to find and configure. Aurorae, a newly added SVG vector graphics theme engine, enhances KWin window decorations.
Oh, good, that makes it easier to turn all of them frelling off.
Now, don’t get me wrong, I enjoy using KDE. It has been remarkably rock solid for my use cases. There are some settings that are always hard to find, but it mostly just works. Given that I can ignore some of the features that they try to push and have had better solutions for years (like Activities, which is better managed by having just a fixed number of desktops with simple keyboard shortcuts, which I’ve been doing for, literally, 30 years now, or KDE Wallet, or Dolphin, or …) and still have things work just fine, that says a lot. The idea of building a useful set of tools and not forcing one particular path through them … that idea resonates deeply for me.
The one aspect of KDE that drives me nuts, however, is that when a process opens a new window, the default should be to open that window on the desktop that the process has been assigned to rather than the current desktop (who, in their right mind, thinks the latter behavior is the right choice?). That, and there’s no setting for focus that matches what I want, and the descriptions, despite multiple revisions, remain opaque.
Jassy wrote that employees should learn how to use AI tools and experiment and figure out “how to get more done with scrappier teams.” The directive comes as Amazon has laid off more than 27,000 employees since 2022 and made several cuts this year. Amazon cut about 200 employees in its North America stores unit in January and a further 100 in its devices and services unit in May. Amazon had 1.56 million full-time and part-time employees in its global workforce as of the end of March, according to financial filings. The company also employs temporary workers in its warehouse operations, along with some contractors.
Amazon is using generative AI broadly across its internal operations, including in its fulfillment network where the technology is being deployed to assist with inventory placement, demand forecasting and the efficiency of warehouse robots, Jassy said. […] In his most recent letter to shareholders, Jassy called generative AI a “once-in-a-lifetime reinvention of everything we know.” He added that the technology is “saving companies lots of money,” and stands to shift the norms in coding, search, financial services, shopping and other areas. “It’s moving faster than almost anything technology has ever seen,” Jassy said.
“Sales are in a slump, so we’ll fire employees yet blame staff shrinkage on replacement by our wonderful shiny AI”
* Dilbert’s boss who is light on knowledge but heavy on buzzwords and BS.
Yesterday I had a need to extract a bunch of documents out of a web-based ERP system. The ERP system has a pretty extensive and quite good API. But, I wasn’t in the mood to study and learn a new API to write a simple script that would pull all customer numbers and then iterate through each customer and download their files.
I decided to try the vibe coding that the cool kids are supposedly doing. I tried Copilot, ChatGPT, and Claude. None of them could assemble even a basic, remotely working Python or JavaScript script. Attempt after attempt, they all returned completely incorrect and completely non-functional paragraphs of code. I could get nothing of value from AI.
Today, I spent three hours studying the API and finding the relevant calls. I had a fully functional Python script running in 30 minutes and the task completed two hours later.
AI 0 Weak Programmer 1
I just don’t see successful replacement of human coders with AI.
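For what it's worth, the loop the commenter describes (pull all customer numbers, then iterate through each customer and download their files) is simple enough to sketch generically. Everything below is hypothetical: the endpoint paths, the field names (`customer_number`, `filename`, `download_url`), and the assumption that the ERP returns JSON lists — the real API will differ. Passing the fetch and save steps in as functions keeps the traversal logic separate from any particular HTTP library.

```python
"""Generic sketch of the export loop: list customers, then fetch each
customer's documents.  All endpoint paths and field names here are
hypothetical stand-ins for whatever the real ERP API provides."""

BASE = "https://erp.example.com/api/v1"  # hypothetical base URL

def export_documents(fetch_json, fetch_bytes, save, base=BASE):
    """Walk every customer and hand each document's bytes to `save`.

    fetch_json(url)  -> decoded JSON (assumed to be a list of dicts)
    fetch_bytes(url) -> raw file contents
    save(customer_number, filename, data) -> store the file somewhere
    """
    for customer in fetch_json(f"{base}/customers"):
        number = customer["customer_number"]
        # One extra call per customer to enumerate that customer's files.
        for doc in fetch_json(f"{base}/customers/{number}/documents"):
            save(number, doc["filename"], fetch_bytes(doc["download_url"]))
```

In a real script, `fetch_json` and `fetch_bytes` would wrap `urllib.request` or `requests` with the API's auth header, and `save` would write into a per-customer directory; injecting them as callables means the iteration itself can be checked against canned data without a live server.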
The Spanish government has said that the national grid operator and private power generation companies were to blame for an energy blackout that caused widespread chaos in Spain and Portugal earlier this year. Shortly after midday on April 28, both countries were disconnected from the European electricity grid for several hours. Businesses, schools, universities, government buildings and transport hubs were all left without power, and traffic light outages caused gridlocks. While schoolchildren, students and workers were sent home for the day, many other people were stuck in lifts or stranded on trains in isolated rural areas.
In the immediate aftermath, the left-wing coalition government did not provide an explanation, instead calling for patience as it investigated. Nearly two months after the unprecedented outage, the minister for ecological transition, Sara Aagesen, has presented a report on its causes. She said there was no evidence of a cyberattack behind the blackout, and the government maintained that Spain's renewable energy output was not to blame. Aagesen said the partly state-owned grid operator, Red Electrica, had miscalculated the power capacity needs for that day, explaining that the “system did not have enough dynamic voltage capacity.” The operator should have switched on another thermal plant, she said, but “they made their calculations and decided that it was not necessary.”
Aagesen also blamed private generators for failing to regulate the grid’s voltage shortly before the blackout happened. “Generation firms which were supposed to control voltage and which, in addition, were paid to do just that did not absorb all the voltage they were supposed to when tension was high,” she said, without naming any of the companies responsible. The day after the outage, Prime Minister Pedro Sanchez suggested that private electricity companies might have played a role, saying that his government would demand “all the relevant accountability” from them. However, the new report on the blackout also raises questions about the role of Beatriz Corredor, president of Red Electrica and a former Socialist minister, who had previously insisted that the grid operator had not been at fault.
The company is also renaming the “Video” tab on its platform to the “Reels” tab. The update won’t change what videos are recommended to you, Facebook says. […] The idea behind the changes is to streamline the video-sharing format on the social network. It won’t be the first time that a Meta-owned platform has done so, as Instagram began automatically converting new video posts under 15 minutes into reels back in 2022.
“Previously, you’d upload a video to Feed or post a reel using different creative flows and tools for each format,” Facebook explained in a blog post. “Now, we’re bringing these experiences together with a simplified publishing flow that gives you access to even more creative tools. We’ll also give you control over your audience setting of who sees your reels.” […] The company says it will gradually roll out the changes globally over the coming months.
We’re just like TikTok! Please like us!
… “see less of these” (or whatever it says)
If this is less, I’d hate to see more …
Honda R&D, the research arm of Japan’s second-biggest carmaker, successfully landed its 6.3-meter (20.6-foot) experimental reusable launch vehicle after reaching an altitude of 271 meters (889 feet) at its test facility in northern Japan’s space town Taiki, according to the company. While “no decisions have been made regarding commercialization of these rocket technologies, Honda will continue making progress in the fundamental research with a technology development goal of realizing technological capability to enable a suborbital launch by 2029,” it said in a statement.
Honda in 2021 said it was studying space technologies such as reusable rockets, but it has not previously announced the details of the launch test. A suborbital launch may touch the verge of outer space but does not enter orbit. Studying launch vehicles “has the potential to contribute more to people’s daily lives by launching satellites with its own rockets, that could lead to various services that are also compatible with other Honda business,” the company added.
Anyone in the space industry know if there is much of a difference in successfully landing something that went up to 271 meters vs to the ISS?
Yes, there is. A booster that lands after a hop to 271 meters never comes close to orbital velocity, and even Falcon 9’s recovered first stages are suborbital: the stage that actually reaches orbit doesn’t have the tank capacity, or the mass budget, to fly back and land. All of SpaceX’s successful propulsive landings have been of suborbital first stages (Falcon 9 and Falcon Heavy boosters). A capsule returning from the ISS, by contrast, has to shed orbital velocity on re-entry; Dragon was theoretically designed to be capable of propulsive pad landings at Canaveral, but NASA insisted that the capsules splash down in the ocean instead.
OpenAI executives have discussed filing an antitrust complaint with US regulators against Microsoft, the company’s largest investor, The Wall Street Journal reported Monday, marking a dramatic escalation in tensions between the two long-term AI partners. OpenAI, which develops ChatGPT, has reportedly considered seeking a federal regulatory review of the terms of its contract with Microsoft for potential antitrust law violations, according to people familiar with the matter. The potential antitrust complaint would likely argue that Microsoft is using its dominant position in cloud services and contractual leverage to suppress competition, according to insiders who described it as a “nuclear option,” the WSJ reports.
The move could unravel one of the most important business partnerships in the AI industry — a relationship that started with a $1 billion investment by Microsoft in 2019 and has grown to include billions more in funding, along with Microsoft’s exclusive rights to host OpenAI models on its Azure cloud platform. The friction centers on OpenAI’s efforts to transition from its current nonprofit structure into a public benefit corporation, a conversion that needs Microsoft’s approval to complete. The two companies have not been able to agree on details after months of negotiations, sources told Reuters. OpenAI’s existing for-profit arm would become a Delaware-based public benefit corporation under the proposed restructuring.
The companies are discussing revising the terms of Microsoft’s investment, including the future equity stake it will hold in OpenAI. According to The Information, OpenAI wants Microsoft to hold a 33 percent stake in a restructured unit in exchange for foregoing rights to future profits. The AI company also wants to modify existing clauses that give Microsoft exclusive rights to host OpenAI models in its cloud. The restructuring debate attracted criticism from multiple quarters. Elon Musk alleges that OpenAI violated contract provisions by prioritizing profit over the public good in its push to advance AI and has sued to block the conversion. In December, Meta Platforms also asked California’s attorney general to block OpenAI’s conversion to a for-profit company.
I do appreciate though OpenAI declaring war on Microsoft for the unforgivable sin of… being right about their success and believing in them and making a wise investment? OpenAI was free to say no to that money, they were back then all about the non-profit public good, right? Right? No, seems like they always planned to cash in as hard as possible if they could? Imagine that.
“Nuclear” is not really a good description for a decade(s) long anti-trust case.
I’ve nearly already fallen asleep at the possibility this amazing option may get exercised.
Iran’s cybersecurity authority has banned officials from using devices that connect to the internet, apparently fearing being tracked or hacked by Israel. According to the state-linked Fars news agency, Iranian officials and their bodyguards have been told they are not allowed to use any equipment that connects to public internet or telecommunications networks.
Time to use all those pagers they bought a few years ago.
Obviously one of the problems with the use of any given technology during wartime is that if it isn’t local, there’s a reasonably good chance that there will be attempts by one’s adversary to use that technology against one in some capacity.
For traditional conflicts before the computer age this was often a matter of raw materials or finished products being denied delivery. In the IT age it means things like hidden subroutines to degrade performance or outright disable or damage systems, or to snoop or locate.
So yeah, Iran is using the same Internet protocols and other systems that the rest of the world uses, and there are lots of known issues with those open protocols, and that’s even before getting to the hardware itself, where it sources from, and what sort of backdoors or other penetration into that hardware might have been achieved by Israel. If Iran is mostly using commercial, off-the-shelf equipment that anyone, including Israel, could purchase same as they did, then I have no doubt that samples have been obtained and put through testing.
Obviously one of the problems with the use of any given technology during wartime is that if it isn’t local, there’s a reasonably good chance that there will be attempts by one’s adversary to use that technology against one in some capacity.
See the USA doesn’t have this problem. Even with home grown infrastructure the highest echelons of the armed forces will simply openly share war plans with journalists.
When someone is trying to kill you and has missiles and bombs to do the job if they know where you are, it tends to focus your mind a bit.
You’d think so, but the war in Ukraine shows otherwise. The Russians suffered heavily early on due to using cell phones—and they kept using them even after figuring out they’d lost something like four general officers due to them/staff/bodyguards using their phones, causing even more losses. I can’t help but agree with the GP, some assholes are always going to come to the conclusion that “everyone else shouldn’t do it, but it will be fine if it’s only me.”
they really have nothing, huh. they’re insisting that not only do you want their slop, they are going to make you pay more for it even if you don’t want it, because they know you want it so bad.
cannot wait for this all to blow up.
Pusher: “Take AI. It’s the best.”
User: “No thanks. I don’t want AI.”
Pusher: “Take AI. It’ll save you money.”
User: “No thanks. I don’t want AI.”
Pusher: “HERE’S AI IN THE PRODUCTS YOU ALREADY USE! Also, we’re increasing the base price to cover the new AI features you demanded.”
User: “Are you touched? Mentally defective? Deaf? Stupid? All of the above?”
Pusher: “Also, there’s additional tiers where you can pay us even more for more AI features, but the base price will continue to climb as we shove more of those optional features in due to lack of actual user demand.”
User: “Oh. So, you’re just an asshole then?”
Pusher: “Prices will continue to increase as we fully integrate AI into everything because you demanded it! You need it. You want it. You’re desperate for it.”
User: “I thought you said this shit would save me money?”
Pusher: “Pay us for the thing you must have because we told you you must have it!”
User: “Sigh.”
So both the Intelligence and the Cost Savings are a myth with AI. Think you should keep believing Marc Benioff? Should Salesforce be the most expensive software you buy?
The company’s annual work trends study, which is based on aggregated and anonymized data from Microsoft 365 users and a global survey of 31,000 desk workers, also found that almost 20% of employees actively working weekends are checking email before noon on Saturdays and Sundays [non-paywalled source], while over 5% are active on email again on Sunday evenings, gearing up for the start of the work week.
[…] Meetings are often spontaneous. Some 57% of the gatherings tallied by Microsoft came together without a calendar invite, and even 10% of scheduled meetings were booked at the last minute. […] Mass emails, those which loop in more than 20 participants, are on the rise, climbing 7% from last year.
Yeah good luck with that. You want me to attend a meeting at 8pm from my home? Fine then you’re getting billed for time.
Occasional meetings, planned well in advance, with people on the other side of the world, in the late evening are acceptable.
Genuine emergency meetings at short notice, also acceptable.
Effectively being on call every evening and at the weekend (without getting paid for being on call), not acceptable.
Except this is really about having meetings with India.
Is it going to be marketed like Banks
We have basically all the elements now needed for a great depression.
Climate change means we are going to have increased food prices and shortages. Bird flu isn’t going to help either.
The trade wars being used to counterbalance the incoming tax cuts for billionaires are here too, just like in the 20s and 30s.
And the Senate is ramming through 5 to 7 trillion in tax cuts for the 1%, so we have out of control trickle down economics.
And of course we’re gearing up for war, in this case with Iran maybe even China at some point.
Everything we need for a great depression. Good job guys good job.. we never fucking learn.