California Gov. Gavin Newsom vetoed an artificial intelligence safety bill on Sunday, a win for AI heavyweights like OpenAI and Big Tech companies that lobbied against it.
The bill, SB 1047, was introduced earlier this year by Sen. Scott Wiener and passed the California State Assembly last month. It aimed to force the development of safety measures at companies that spend $100 million or more training AI models so their tech could not be used to harm society — a loosely defined standard that the bill said could include creating dangerous weapons or launching cyberattacks.
"This veto is a setback for everyone who believes in oversight of massive corporations," Sen. Wiener said in a statement on Sunday.
About two weeks ago, Newsom said that he was concerned about the bill's potential "chilling effect" on AI development. He said he didn't want California to lose its dominance over the AI space.
"The bill applies stringent standards to even the most basic functions — so long as a large system deploys it," the governor said in a statement on Sunday. "I do not believe this is the best approach to protecting the public from real threats posed by the technology."
The now-killed bill also would have required companies operating in California to report any safety incidents stemming from their AI products to the government. It would have protected company whistleblowers and allowed third parties to test models for safety. It also required that developers be able to enact a full shutdown of their AI tools if necessary.
The debate in California reflects the challenge governments face in walking the fine line between allowing tech companies to innovate and protecting against new potential risks. Newsom may also want to signal that the state is open for business after a string of high-profile company exits, including Chevron, Tesla, Oracle, Charles Schwab, and CBRE.
In a release announcing the veto, Newsom's office also noted that the governor signed 17 bills over the last month relating to generative AI, which crack down on deepfakes and misinformation and aim to protect children and employees.
Newsom's veto will come as a relief to many in Silicon Valley who had criticized the bill, saying it would hurt innovation.
Meta's vice president of policy, Rob Sherman, praised Newsom for rejecting the bill. In an X post on Sunday, he said the bill would have "hurt business growth and job creation, and broken the state's long tradition of fostering open-source development."
Marc Andreessen, general partner at venture capital firm Andreessen Horowitz, applauded Newsom's decision, too. In a statement on X, he said the veto sided with California's growth and freedom over "safetyism, doomerism, and decline."
Jason Kwon, OpenAI's chief strategy officer, warned in an August letter to Sen. Wiener that the bill could stifle progress and drive companies out of California.
The ChatGPT maker joined tech giant Meta in lobbying against the bill. Meta said that the bill could discourage the open-source movement by exposing developers to significant legal liabilities.
Andreessen Horowitz also cited similar innovation concerns and paid for a petition campaign against the bill.
To be sure, not all of Big Tech opposed the bill.
Elon Musk, who founded AI company xAI last year, said last month that although it was "a tough call and will make some people upset," he thought "California should probably pass the SB 1047 AI safety bill."
Anthropic CEO Dario Amodei seemed to switch sides in the middle of the debate. In August, he said that the bill's "benefits likely outweigh the costs." However, he added that "we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us."
Several former OpenAI employees also supported the safety bill and said that OpenAI's opposition to the bill was disappointing.
"We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing," former OpenAI researchers William Saunders and Daniel Kokotajlo wrote in a letter. "But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems."