Will the Pentagon’s Anthropic controversy scare startups away from defense work?


In just over a week, negotiations over the Pentagon’s use of Anthropic’s Claude technology fell through, the Trump administration designated Anthropic a supply-chain risk, and the AI company said it would fight that designation in court.

OpenAI, meanwhile, quickly announced a deal of its own, prompting backlash that saw users uninstalling ChatGPT and pushing Anthropic’s Claude to the top of the App Store charts. And at least one OpenAI executive has quit over concerns that the announcement was rushed without proper guardrails in place.

On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups seeking to work with the federal government, particularly the Pentagon, as Kirsten wondered, “Are we going to see a changing of the tune a little bit?”

Sean pointed out that this is an unusual situation in a number of ways, in part because OpenAI and Anthropic make products that “no one can shut up about.” And crucially, this is a dispute over “how their technologies are being used or not being used to kill people,” so it’s naturally going to draw more scrutiny.

Still, Kirsten argued, this is a situation that should “give any startup pause.”

Read a preview of our conversation, edited for length and clarity, below.

Kirsten: I’m wondering if other startups are starting to look at what’s happened with the federal government, specifically the Pentagon and Anthropic, that dispute and wrestling match, and [take] pause about whether they want to be going after federal dollars. Are we going to see a changing of the tune a little bit?


Sean: I wonder about that, too. I think no, to some extent, in the near term, if only because when you really try to think about all the different companies, whether they’re startups or even more established Fortune 500s that do work with the government and in particular with the Department of Defense or the Pentagon, [for] a lot of them, that work flies under the radar.

General Motors makes defense vehicles for the Army and has done [that] for a very long time and has worked on all-electric versions of those vehicles and autonomous versions. There’s stuff like that that goes on all the time and it just never really hits the zeitgeist. I think the problem that OpenAI and Anthropic ran into within the past week is like, these are companies that make products that a ton of people use — and also more importantly, [that] no one can shut up about.

So there’s just such a spotlight on them, that naturally highlights their involvement to a level that I think most of the other companies that are contracting with the federal government — and, in particular, some of the war-fighting elements of the federal government — don’t necessarily have to deal with.

The only caveat I’ll add to that is a lot of the energy around this discussion between Anthropic and OpenAI and the Pentagon is very specifically about how their technologies are being used or not being used to kill people, or in parts of the missions that are killing people. It’s not just the attention that’s on them and the familiarity we have with their brands; there is an extra element there that I feel is more abstract when you’re thinking about General Motors as a defense contractor or whatever.

I don’t think we’re going to see, like, Applied Intuition or any of these other companies that have been framing themselves as dual use back off much, just because I don’t see the spotlight on it and there’s just not the kind of shared understanding of what that impact might be.

Anthony: This story is so unique and specific to these companies and personalities in a lot of ways. I mean, there have been a lot of really interesting thought pieces about: What is the role of technology in government? [Of] AI in government? And I think those are all good and worthwhile questions to ask and explore.

I think also, though, that this is a very interesting lens through which to examine some of those things because Anthropic and OpenAI are not really that different in a lot of ways or the stances they’re taking. It’s not like one company is saying, “Hey, I don’t want to work with the government” and one is saying, “Yes, I do.” Or one is saying, “You can do whatever you want,” and [the other is] saying, “No, I want to have restrictions.” Both of them, at least publicly, are saying, “We want restrictions on how our AI gets used.” It just seems like Anthropic is digging in their heels a lot more about: You cannot change the terms in this way.

And then on top of that, there also just seems to be a personality layer where the CEO of Anthropic and Emil Michael — who a lot of TechCrunch readers might remember from his Uber days, and is now [chief technology officer for the Department of Defense] — apparently, they just really don’t like each other. Reportedly.

Sean: Yes, there’s a very big “girls are fighting” element here that we should not overlook.

Kirsten: Yeah, a little bit. There is, but the implications are a little bit stronger than that. Again, to pull back a little bit, what we’re talking about here is the Pentagon and Anthropic coming into a dispute in which Anthropic appears to have lost, though I should say they are still very much being used by the military. They are considered an important technology, but OpenAI has kind of stepped in, and this is evolving and will likely change by the time this episode comes out.

The blowback has been interesting for OpenAI, where we’ve seen a lot of uninstalls of ChatGPT; I think uninstalls surged 295% after OpenAI locked in the deal with the Department of Defense.

To me, all of this is noise next to the really critical and dangerous thing, which is that the Pentagon was seeking to change existing terms on an existing contract. And that is really important and should give any startup pause, because the way the government is operating right now, particularly with the DoD, appears to be different. This isn’t normal. Contracts take forever to get baked in at the government level, and the fact that they’re seeking to change those terms is a problem.
