Friday afternoon, just as this interview was getting underway, a news alert flashed across my computer screen: the Trump administration was severing ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei and other former OpenAI researchers who left over safety concerns. Defense Secretary Pete Hegseth had invoked a federal security law, one designed to counter foreign supply chain threats, to blacklist the company from doing business with the Pentagon after Amodei refused to allow Anthropic’s tech to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input.
It was a jaw-dropping sequence. Anthropic now stands to lose a contract worth up to $200 million, as well as to be barred from working with other defense contractors, after President Trump posted on Truth Social directing every federal agency to “immediately cease all use of Anthropic technology.” (Anthropic has since said it will challenge the Pentagon in court, calling the supply-chain-risk designation legally unsound and “never before publicly applied to an American company.”)
Max Tegmark has spent the better part of a decade warning that the race to build ever-more-powerful AI systems is outpacing the world’s ability to govern them. The Swedish-American physicist and MIT professor founded the Future of Life Institute in 2014. In 2023, he famously helped organize an open letter, eventually signed by more than 33,000 people, including Elon Musk, calling for a pause in advanced AI development.
His view of the Anthropic situation is unsparing: the company, like its rivals, has sown the seeds of its own predicament. Tegmark’s argument doesn’t begin with the Pentagon but with a decision made years earlier: a choice, shared across the industry, to resist binding regulation. Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly. Earlier this week, Anthropic even dropped the central tenet of its own safety pledge, its commitment not to release increasingly powerful AI systems until the company was confident they wouldn’t cause harm.
Now, in the absence of rules, there’s not a lot to protect these players, says Tegmark. Here’s more from that interview, edited for length and clarity. You can hear the full conversation this coming week on TechCrunch’s StrictlyVC Download podcast.
When you saw this news just now about Anthropic, what was your first reaction?
The road to hell is paved with good intentions. It’s so interesting to think back a decade ago, when people were so excited about how we were going to make artificial intelligence to cure cancer, to grow prosperity in America and make America strong. And here we are now, where the U.S. government is pissed off at this company for not wanting AI to be used for domestic mass surveillance of Americans, and also for not wanting to have killer robots that can autonomously, without any human input at all, decide who gets killed.
Anthropic has staked its entire identity on being a safety-first AI company, and yet it was collaborating with defense and intelligence agencies [dating back to at least 2024]. Do you think that’s at all contradictory?
It is contradictory. If I can give a slightly cynical take on this: yes, Anthropic has been very good at marketing themselves as all about safety. But if you actually look at the facts rather than the claims, what you see is that Anthropic, OpenAI, Google DeepMind and xAI have all talked a lot about how they care about safety. None of them has come out in support of binding safety regulation the way we have in other industries. And all four of these companies have now broken their own promises. First we had Google, with the big slogan ‘Don’t be evil.’ Then they dropped that. Then they dropped another, longer commitment that basically promised they would do no harm with AI. They dropped that so they could sell AI for surveillance and weapons. OpenAI just dropped the word safety from their mission statement. xAI shut down their entire safety team. And now Anthropic, earlier in the week, dropped their most important safety commitment: the commitment not to release powerful AI systems until they were sure they weren’t going to cause harm.
How did companies that made such prominent safety commitments end up in this position?
All of these companies, especially OpenAI and Google DeepMind but to some degree also Anthropic, have persistently lobbied against regulation of AI, saying, ‘Just trust us, we’re going to regulate ourselves.’ And they’ve lobbied successfully. So right now we have less regulation on AI systems in America than on sandwiches. You know, if you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won’t let you sell any sandwiches until you fix it. But if you say, ‘Don’t worry, I’m not going to sell sandwiches, I’m going to sell AI girlfriends for 11-year-olds, and they’ve been linked to suicides in the past, and then I’m going to release something called superintelligence, which might overthrow the U.S. government, but I have a good feeling about mine,’ the inspector has to say, ‘Fine, go ahead, just don’t sell sandwiches.’
There’s food safety regulation and no AI regulation.
And this, I feel, is something all of these companies truly share the blame for. Because if they had taken all these promises they made back in the day about how safe and goody-goody they were going to be, and gotten together, and then gone to the government and said, ‘Please take our voluntary commitments and turn them into U.S. law that binds even our sloppiest competitors,’ that would have happened instead. We’re in a complete regulatory vacuum. And we know what happens when there’s complete corporate amnesty: you get thalidomide, you get tobacco companies pushing cigarettes on kids, you get asbestos causing lung cancer. So it’s kind of ironic that their own resistance to having laws saying what’s okay and not okay to do with AI is now coming back and biting them.
There is no law right now against building AI to kill Americans, so the government can just suddenly ask for it. If the companies themselves had come out earlier and said, ‘We want this law,’ they wouldn’t be in this pickle. They really shot themselves in the foot.
The companies’ counter-argument is always the race with China: if American companies don’t do this, Beijing will. Does that argument hold?
Let’s examine that. The most common talking point from the lobbyists for the AI companies (they’re now better funded and more numerous than the lobbyists from the fossil fuel industry, the pharma industry and the military-industrial complex combined) is that whenever anyone proposes any kind of regulation, they say, ‘But China.’ So let’s look at that. China is in the process of banning AI girlfriends outright. Not just age limits; they’re looking at banning all anthropomorphic AI. Why? Not because they want to please America, but because they feel this is screwing up Chinese youth and making China weak. Obviously, it’s making American youth weak, too.
And when people say we have to race to build superintelligence so we can win against China, even though we don’t actually know how to control superintelligence, meaning the default outcome is that humanity loses control of Earth to alien machines, guess what? The Chinese Communist Party really likes control. Who in their right mind thinks that Xi Jinping is going to tolerate some Chinese AI company building something that overthrows the Chinese government? No way. It’s clearly really bad for the American government, too, if it gets overthrown in a coup by the first American company to build superintelligence. This is a national security threat.
That’s a compelling framing: superintelligence as a national security threat, not an asset. Do you see that view gaining traction in Washington?
I think if people in the national security community listen to Dario Amodei describe his vision (he has given a famous speech in which he says we’ll soon have a country of geniuses in a data center), they might start thinking: wait, did Dario just use the word ‘country’? Maybe I should put that country of geniuses in a data center on the same threat list I’m keeping tabs on, because that sounds threatening to the U.S. government. And I think fairly soon, enough people in the U.S. national security community are going to realize that uncontrollable superintelligence is a threat, not a tool. This is completely analogous to the Cold War. There was a race for dominance, economic and military, against the Soviet Union. We Americans won that one without ever engaging in the second race, which was to see who could put the most nuclear craters in the other superpower. People realized that was just suicide. No one wins. The same logic applies here.
What does all of this mean for the pace of AI development more broadly? How close do you think we are to the systems you’re describing?
Six years ago, almost every AI expert I knew predicted we were decades away from having AI that could master language and knowledge at a human level, maybe by 2040 or 2050. They were all wrong, because we already have that now. We’ve seen AI progress quite rapidly from high school level to college level to PhD level to college professor level in some areas. Last year, AI won the gold medal at the International Mathematical Olympiad, which is about as hard as human tasks get. I wrote a paper together with Yoshua Bengio, Dan Hendrycks, and other top AI researchers just a few months ago giving a rigorous definition of AGI. According to it, GPT-4 was 27% of the way there. GPT-5 was 57% of the way there. So we’re not there yet, but going from 27% to 57% that quickly suggests it might not be that long.
When I lectured to my students at MIT yesterday, I told them that even if it takes four years, that means that when they graduate, they might not be able to get any jobs anymore. It’s certainly not too soon to start preparing for it.
Anthropic is now blacklisted. I’m curious to see what happens next. Will the other AI giants stand with them and say, we won’t do this either? Or does someone like xAI raise their hand and say, Anthropic didn’t want that contract, we’ll take it? [Editor’s note: Hours after the interview, OpenAI announced its own deal with the Pentagon.]
Last night, Sam Altman came out and said he stands with Anthropic and has the same red lines. I respect him for the courage of saying that. Google, as of when we started this interview, had said nothing. If they just stay quiet, I think that’s incredibly embarrassing for them as a company, and a lot of their staff will feel the same. We haven’t heard anything from xAI yet either. So it’ll be interesting to see. Basically, there’s this moment where everybody has to show their true colors.
Is there a version of this where the outcome is actually good?
Yes, and this is why I’m actually optimistic, in a strange way. There’s such an obvious alternative here. If we just start treating AI companies like any other companies and drop the corporate amnesty, they would clearly have to do something like a clinical trial before they released anything this powerful, and demonstrate to independent experts that they know how to control it. Then we get a golden age with all the good stuff from AI, without the existential angst. That’s not the path we’re on right now. But it could be.