The past two weeks have been defined by a clash between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth as the two battle over the military’s use of AI.
Anthropic refuses to allow its AI models to be used for mass surveillance of Americans or for fully autonomous weapons that conduct strikes without human input. Secretary Hegseth, meanwhile, has argued that the Department of Defense shouldn’t be limited by a vendor’s rules and that any “lawful use” of the technology should be permitted.
On Thursday, Amodei publicly signaled that Anthropic isn’t backing down, despite threats that his company could be designated a supply chain risk as a result. But with the news cycle moving fast, it’s worth revisiting exactly what’s at stake in the fight.
At its core, this fight is about who controls powerful AI systems: the companies that build them, or the government that wants to deploy them.
What is Anthropic worried about?
As we said above, Anthropic doesn’t want its AI models to be used for mass surveillance of Americans or for autonomous weapons with no human in the loop for targeting and firing decisions. Traditional defense contractors typically have little say in how their products will be used, but Anthropic has argued from its inception that AI technology poses unique risks and therefore requires unique safeguards. From the company’s perspective, the question is how to maintain those safeguards once the technology is being used by the military.
The U.S. military already relies on highly automated systems, some of which are lethal. The decision to use lethal force has historically been left to humans, but there are few legal restrictions on military use of autonomous weapons. The DoD doesn’t categorically ban fully autonomous weapons systems. According to a 2023 DoD directive, AI systems can select and engage targets without human intervention, as long as they meet certain standards and pass review by senior defense officials.
That’s exactly what makes Anthropic nervous. Military technology is secretive by nature, so if the U.S. military were taking steps to automate lethal decision-making, we might not know about it until it was operational. And if it used Anthropic’s models, it could count as “lawful use.”
Anthropic’s position isn’t that such uses should be permanently off the table. It’s that its models aren’t capable enough to handle them safely yet. Imagine an autonomous system misidentifying a target, escalating a conflict without human authorization, or making a split-second lethal decision that no one can reverse. Put a less-capable AI in charge of weapons, and you get a very fast, very confident tool that’s bad at making high-stakes calls.
AI also has the power to supercharge lawful surveillance of American citizens to a concerning degree. Under current U.S. law, surveillance of American citizens is already possible, whether through collection of texts, emails, or other communications. AI changes the equation by enabling automated large-scale pattern detection, entity resolution across datasets, predictive risk scoring, and continuous behavioral analysis.
What does the Pentagon want?
The Pentagon’s argument is that it should be able to deploy Anthropic’s technology for any lawful use it deems necessary, rather than be limited by Anthropic’s internal policies on things like autonomous weapons or surveillance.
More specifically, Secretary Hegseth has argued that the Department of Defense shouldn’t be limited by the rules of a vendor and that it would engage in “lawful use” of the technology.
Sean Parnell, the Pentagon’s chief spokesperson, said in a Thursday X post that the department has no interest in conducting mass domestic surveillance or deploying autonomous weapons.
“Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Parnell said. “This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions.”
He added that Anthropic has until 5:01 PM ET on Friday to decide. “Otherwise, we will terminate our business with Anthropic and deem them a supply chain risk for DOW,” he said.
Despite the department’s stance that it simply doesn’t believe it should be limited by a corporation’s usage policies, Secretary Hegseth’s concerns about Anthropic have at times seemed connected to cultural grievance. In a January speech at SpaceX and xAI offices, Hegseth railed against “woke AI” in remarks that some saw as a preview of his feud with Anthropic.
“Department of War AI will not be woke,” Hegseth said. “We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.”
So what now?
The Pentagon has threatened to either declare Anthropic a “supply chain risk,” which would effectively blacklist the company from doing business with the government, or invoke the Defense Production Act (DPA) to force it to tailor its model to the military’s needs. Hegseth has given Anthropic until 5:01 PM ET on Friday to respond. But with the deadline approaching, it’s anyone’s guess whether the Pentagon will make good on its threat.
This is not a fight either party can easily walk away from. Sachin Seth, a VC at Trousdale Ventures who focuses on defense tech, says a supply chain risk designation for Anthropic could mean “lights out” for the company.
However, he said, if Anthropic is dropped by the DoD, it could become a national security issue.
“[The department] would have to wait six to 12 months for either OpenAI or xAI to catch up,” Seth told TechCrunch. “That leaves a window of up to a year where they might be working from not the best model, but the second- or third-best.”
xAI is gearing up to become classified-ready and replace Anthropic, and it’s fair to say, given owner Elon Musk’s rhetoric on the matter, that the company would have no problem giving the DoD full control over its technology. Recent reports indicate that OpenAI may stick to the same red lines as Anthropic.