Image Credits: Thomas Fuller / SOPA Images / LightRocket / Getty Images
12:44 PM PST · March 7, 2026
Hardware executive Caitlin Kalinowski announced today that, in response to OpenAI’s controversial agreement with the Department of Defense, she has resigned from her role leading the company’s robotics team.
“This wasn’t an easy call,” Kalinowski said in a social media post. “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
Kalinowski, who previously led the team building augmented reality glasses at Meta, joined OpenAI in November 2024. In her announcement today, she emphasized that the decision was “about principle, not people” and said she has “deep respect” for CEO Sam Altman and the OpenAI team.
In a follow-up post on X, Kalinowski added, “To be clear, my contention is that the announcement was rushed without the guardrails defined. It’s a governance concern first and foremost. These are too important for deals or announcements to be rushed.”
An OpenAI spokesperson confirmed Kalinowski’s departure to TechCrunch.
“We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons,” the company said in a statement. “We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world.”
OpenAI’s agreement with the Pentagon was announced just over a week ago, after discussions between the Pentagon and Anthropic fell through as the AI company tried to negotiate for safeguards preventing its technology from being used in mass domestic surveillance or fully autonomous weapons. The Pentagon subsequently designated Anthropic a supply-chain risk. (Anthropic said it will fight the designation in court; in the meantime, Microsoft, Google, and Amazon said they will continue to make Anthropic’s Claude available to non-defense customers.)
Then, OpenAI quickly announced an agreement of its own allowing its technology to be used in classified environments. As executives attempted to explain the deal on social media, the company described it as taking “a more expansive, multi-layered approach” that relies not just on contract language, but also technical safeguards, to maintain red lines similar to Anthropic’s.
Nonetheless, the controversy appears to have damaged OpenAI’s reputation among some consumers, with ChatGPT uninstalls surging 295% and Claude climbing to the top of the App Store charts. As of Saturday afternoon, Claude and ChatGPT remain the U.S. App Store’s number one and number two free apps, respectively.
Anthony Ha is TechCrunch’s weekend editor. Previously, he worked as a tech reporter at Adweek, a senior editor at VentureBeat, a local government reporter at the Hollister Free Lance, and vice president of content at a VC firm. He lives in New York City.
You can contact or verify outreach from Anthony by emailing anthony.ha@techcrunch.com.