Scott Wiener on his fight to make Big Tech disclose AI’s dangers


This is not California state Senator Scott Wiener's first effort at addressing the dangers of AI.

In 2024, Silicon Valley mounted a fierce campaign against his controversial AI safety bill, SB 1047, which would have made tech companies liable for the potential harms of their AI systems. Tech leaders warned that it would stifle America’s AI boom. Governor Gavin Newsom ultimately vetoed the bill, echoing similar concerns, and a popular AI hacker house promptly threw an “SB 1047 Veto Party.” One attendee told me, “Thank god, AI is still legal.”

Now Wiener has returned with a new AI safety bill, SB 53, which sits on Governor Newsom’s desk awaiting his signature or veto sometime in the next few weeks. This time around, the bill is much more popular — or at least, Silicon Valley doesn’t seem to be at war with it.

Anthropic outright endorsed SB 53 earlier this month. Meta spokesperson Jim Cullinan tells TechCrunch that the company supports AI regulation that balances guardrails with innovation and says “SB 53 is a step in that direction,” though there are areas for improvement.

Former White House AI policy advisor Dean Ball tells TechCrunch that SB 53 is a “victory for reasonable voices,” and thinks there’s a strong chance Governor Newsom signs it.

If signed, SB 53 would impose some of the nation’s first safety reporting requirements on AI giants like OpenAI, Anthropic, xAI, and Google — companies that today face no obligation to reveal how they test their AI systems. Many AI labs voluntarily publish safety reports explaining how their AI models could be used to create bioweapons and other dangers, but they do this at will, and they’re not always consistent.

The bill requires leading AI labs — specifically those making more than $500 million in revenue — to publish safety reports for their most capable AI models. Much like SB 1047, the bill specifically focuses on the worst kinds of AI risks: their ability to contribute to human deaths, cyberattacks, and chemical weapons. Governor Newsom is considering several other bills that address other types of AI risks, such as engagement-optimization techniques in AI companions.


SB 53 also creates protected channels for employees working at AI labs to report safety concerns to government officials, and establishes a state-operated cloud computing cluster, CalCompute, to provide AI research resources beyond the big tech companies.

One reason SB 53 may be more popular than SB 1047 is that it’s less severe. SB 1047 would have made AI companies liable for any harms caused by their AI models, whereas SB 53 focuses more on requiring self-reporting and transparency. SB 53 also applies narrowly to the world’s largest tech companies, rather than startups.

But many in the tech industry still believe states should leave AI regulation up to the federal government. In a recent letter to Governor Newsom, OpenAI argued that AI labs should only have to comply with federal standards — which is a funny thing to say to a state governor. The venture firm Andreessen Horowitz wrote a recent blog post vaguely suggesting that some bills in California could violate the Constitution’s dormant Commerce Clause, which prohibits states from unfairly limiting interstate commerce.

Senator Wiener addresses these concerns: he lacks faith in the federal government to pass meaningful AI safety regulation, so states need to step up. In fact, Wiener thinks the Trump administration has been captured by the tech industry, and that recent federal efforts to block all state AI laws are a form of Trump “rewarding his funders.”

The Trump administration has made a notable shift away from the Biden administration’s focus on AI safety, replacing it with an emphasis on growth. Shortly after taking office, Vice President J.D. Vance appeared at an AI conference in Paris and said: “I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I’m here to talk about AI opportunity.”

Silicon Valley has applauded this shift, exemplified by Trump’s AI Action Plan, which removed barriers to building out the infrastructure needed to train and serve AI models. Today, Big Tech CEOs are regularly seen dining at the White House or announcing hundred-billion-dollar data centers alongside President Trump.

Senator Wiener thinks it’s critical for California to lead the nation on AI safety, but without choking off innovation.

I recently interviewed Senator Wiener to discuss his years at the negotiating table with Silicon Valley and why he’s so focused on AI safety bills. Our conversation has been edited lightly for clarity and brevity. My questions are in bold, and his answers are not.

Maxwell Zeff: Senator Wiener, I interviewed you when SB 1047 was sitting on Governor Newsom’s desk. Talk to me about the journey you’ve been on to regulate AI safety in the past few years.

Scott Wiener: It’s been a roller coaster, an incredible learning experience, and just really rewarding. We’ve been able to help elevate this issue [of AI safety], not just in California, but in the national and global discourse.

We have this incredibly powerful new technology that is changing the world. How do we make sure it benefits humanity in a way where we reduce the risk? How do we advance innovation while also being very mindful of public health and public safety? It’s an important — and in some ways, existential — conversation about the future. SB 1047, and now SB 53, have helped to foster that conversation about safe innovation.

In the past 20 years of technology, what have you learned about the importance of laws that can hold Silicon Valley to account?

I’m the guy who represents San Francisco, the beating heart of AI innovation. I’m immediately north of Silicon Valley itself, so we’re right here in the middle of it all. But we’ve also seen how the large tech companies — some of the wealthiest companies in world history — have been able to stop federal regulation.

Every time I see tech CEOs having dinner at the White House with the aspiring fascist dictator, I have to take a deep breath. These are all really brilliant people who have generated tremendous wealth. A lot of folks I represent work for them. It really pains me when I see the deals that are being struck with Saudi Arabia and the United Arab Emirates, and how that wealth gets funneled into Trump’s meme coin. It causes me deep concern.

I’m not someone who’s anti-tech. I want tech innovation to happen. It’s incredibly important. But this is an industry that we should not trust to regulate itself or make voluntary commitments. And that’s not casting aspersions on anyone. This is capitalism, and it can create tremendous prosperity but also cause harm if there are not sensible regulations to protect the public interest. When it comes to AI safety, we’re trying to thread that needle.

SB 53 is focused on the worst harms that AI could conceivably cause — death, massive cyberattacks, and the creation of bioweapons. Why focus there?

The risks of AI are varied. There is algorithmic discrimination, job loss, deepfakes, and scams. There have been various bills in California and elsewhere to address those risks. SB 53 was never intended to cover the field and address every risk created by AI. We’re focused on one specific category of risk: catastrophic risk.

That issue came to me organically from folks in the AI space in San Francisco — startup founders, frontline AI technologists, and people who are building these models. They came to me and said, ‘This is an issue that needs to be addressed in a thoughtful way.’

Do you feel that AI systems are inherently unsafe, or have the potential to cause death and massive cyberattacks?

I don’t think they’re inherently safe. I know there are a lot of people working in these labs who care very deeply about trying to mitigate risk. And again, it’s not about eliminating risk. Life is about risk. Unless you’re going to live in your basement and never leave, you’re going to have risk in your life. Even in your basement, the ceiling might fall down.

Is there a risk that some AI models could be used to do significant harm to society? Yes, and we know there are people who would love to do that. We should try to make it harder for bad actors to cause these severe harms, and so should the people developing these models.

Anthropic issued its support for SB 53. What are your conversations like with other industry players?

We’ve talked to everyone: large companies, small startups, investors, and academics. Anthropic has been really constructive. Last year, they never formally supported [SB 1047] but they had positive things to say about aspects of the bill. I don’t think [Anthropic] loves every aspect of SB 53, but I think they concluded that on balance the bill was worth supporting.

I’ve had conversations with large AI labs who are not supporting the bill, but are not at war with it in the way they were with SB 1047. It’s not surprising. SB 1047 was more of a liability bill; SB 53 is more of a transparency bill. Startups have been less engaged this year because the bill really focuses on the largest companies.

Do you feel pressure from the large AI PACs that have formed in recent months?

This is more evidence of Citizens United. The wealthiest companies in the world can just move endless resources into these PACs to try to intimidate elected officials. Under the rules we have, they have every right to do that. It’s never really impacted how I approach policy. There have been groups trying to destroy me for as long as I’ve been in elected office. Various groups have spent millions trying to blow me up, and here I am. I’m in this to do right by my constituents and try to make my community, San Francisco, and the world a better place.

What’s your message to Governor Newsom as he’s debating whether to sign or veto this bill?

My message is that we heard you. You vetoed SB 1047 and provided a very comprehensive and thoughtful veto message. You wisely convened a working group that produced a very strong report, and we really looked to that report in crafting this bill. The governor laid out a path, and we followed that path in order to come to an agreement, and I hope we got there.
