Silicon Valley spooks the AI safety advocates

Silicon Valley leaders including White House AI & Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon caused a stir online this week for their comments about groups promoting AI safety. In separate instances, they alleged that certain advocates of AI safety are not as virtuous as they appear, and are either acting in their own interest or on behalf of billionaire puppet masters behind the scenes.

AI safety groups that spoke with TechCrunch say the allegations from Sacks and OpenAI are Silicon Valley’s latest attempt to intimidate its critics, but certainly not the first. In 2024, some venture capital firms spread rumors that a California AI safety bill, SB 1047, would send startup founders to jail. The Brookings Institution labeled the rumor as one of many “misrepresentations” about the bill, but Governor Gavin Newsom ultimately vetoed it anyway.

Whether or not Sacks and OpenAI intended to intimidate critics, their actions have sufficiently frightened several AI safety advocates. Many nonprofit leaders TechCrunch reached out to in the past week asked to speak on the condition of anonymity to spare their groups from retaliation.

The controversy underscores Silicon Valley’s growing tension between building AI responsibly and building it to be a massive consumer product — a topic my colleagues Kirsten Korosec, Anthony Ha, and I unpack on this week’s Equity podcast. We also dive into a new AI safety law passed in California to regulate chatbots, and OpenAI’s approach to erotica in ChatGPT.

On Tuesday, Sacks wrote a post on X alleging that Anthropic — which has raised concerns over AI’s ability to contribute to unemployment, cyberattacks, and catastrophic harms to society — is simply fearmongering to get laws passed that will benefit itself and drown out smaller startups in paperwork. Anthropic was the only major AI lab to endorse California’s Senate Bill 53 (SB 53), a bill that sets safety reporting requirements for large AI companies, which was signed into law last month.

Sacks was responding to a viral essay from Anthropic co-founder Jack Clark about his fears regarding AI. Clark delivered the essay as a speech at the Curve AI safety conference in Berkeley weeks earlier. Sitting in the audience, it certainly felt like a genuine account of a technologist’s reservations about his products, but Sacks didn’t see it that way.

Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem. https://t.co/C5RuJbVi4P

— David Sacks (@DavidSacks) October 14, 2025

Sacks said Anthropic is running a “sophisticated regulatory capture strategy,” though it’s worth noting that a truly sophisticated strategy probably wouldn’t involve making an enemy out of the federal government. In a follow-up post on X, Sacks noted that Anthropic has positioned “itself consistently as a foe of the Trump administration.”

Also this week, OpenAI’s chief strategy officer, Jason Kwon, wrote a post on X explaining why the company was sending subpoenas to AI safety nonprofits, such as Encode, a nonprofit that advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI — over concerns that the ChatGPT-maker has veered away from its nonprofit mission — OpenAI found it suspicious how several organizations also raised opposition to its restructuring. Encode filed an amicus brief in support of Musk’s lawsuit, and other nonprofits spoke out publicly against OpenAI’s restructuring.

There’s quite a lot more to the story than this.

As everyone knows, we are actively defending against Elon in a lawsuit where he is trying to harm OpenAI for his own financial benefit.

Encode, the organization for which @_NathanCalvin serves as the General Counsel, was one… https://t.co/DiBJmEwtE4

— Jason Kwon (@jasonkwon) October 10, 2025

“This raised transparency questions about who was funding them and whether there was any coordination,” said Kwon.

NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofits that criticized the company, asking for their communications related to two of OpenAI’s biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.

One prominent AI safety leader told TechCrunch that there’s a growing divide between OpenAI’s government affairs team and its research organization. While OpenAI’s safety researchers frequently publish reports disclosing the risks of AI systems, OpenAI’s policy unit lobbied against SB 53, saying it would rather have uniform rules at the federal level.

OpenAI’s head of mission alignment, Joshua Achiam, spoke out about his company sending subpoenas to nonprofits in a post on X this week.

“At what is possibly a risk to my whole career I will say: this doesn’t seem great,” said Achiam.

Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. However, he argues this is not the case, and that much of the AI safety community is quite critical of xAI’s safety practices, or lack thereof.

“On OpenAI’s part, this is meant to silence critics, to intimidate them, and to dissuade other nonprofits from doing the same,” said Steinhauser. “For Sacks, I think he’s concerned that [the AI safety] movement is growing and people want to hold these companies accountable.”

Sriram Krishnan, the White House’s senior policy advisor for AI and a former a16z general partner, chimed in on the conversation this week with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to “people in the real world using, selling, adopting AI in their homes and organizations.”

A recent Pew study found that about half of Americans are more concerned than excited about AI, but it’s unclear what worries them exactly. Another recent survey went into more detail and found that American voters care more about job losses and deepfakes than the catastrophic risks posed by AI, which the AI safety movement is largely focused on.

Addressing these safety concerns could come at the expense of the AI industry’s rapid growth — a trade-off that worries many in Silicon Valley. With AI investment propping up much of America’s economy, the fear of over-regulation is understandable.

But after years of unregulated AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley’s attempts to fight back against safety-focused groups may be a sign that they’re working.