
California’s governor vetoes controversial AI safety bill

SACRAMENTO, Calif. — California Gov. Gavin Newsom on Sunday vetoed landmark legislation aimed at establishing the nation’s first safety measures for large artificial intelligence models.

The decision is a major blow to efforts to rein in a homegrown industry that is evolving rapidly with little oversight. The bill would have established some of the first regulations in the country for large-scale artificial intelligence models and paved the way for AI safety regulations across the country, supporters say.

Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead on regulating artificial intelligence in the face of federal inaction, but that the proposal “could create a chilling effect on the industry.”


California Governor Gavin Newsom speaks during a news conference in Los Angeles, September 25, 2024. AP/Eric Thayer

Newsom said the proposal, which faced fierce opposition from startups, tech giants and several Democratic House members, could have harmed the homegrown industry by imposing stiff requirements.

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or uses sensitive data,” Newsom said in a statement. “Instead, the bill imposes rigorous standards on even the most basic functions – provided it is implemented in a large system. I don’t think this is the best approach to protecting society from the real threats this technology poses.”

Instead, on Sunday, Newsom announced that the state would work with several industry experts, including artificial intelligence pioneer Fei-Fei Li, to develop guardrails around powerful artificial intelligence models. Li opposed the AI safety proposal.

The measure, intended to limit potential threats posed by artificial intelligence, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, destroy a state’s power grid or help build chemical weapons. Experts say such scenarios may be possible in the future as the industry continues to grow rapidly. It also would have given workers whistleblower protections.

The bill’s author, Democratic state Sen. Scott Wiener, called the veto “a failure for anyone who believes in oversight of the massive corporations that make critical decisions that impact the safety and well-being of society and the future of the planet.”

“Companies developing advanced artificial intelligence systems acknowledge that the risks these models pose to society are real and growing rapidly. While large AI labs have made admirable commitments to monitor and mitigate these threats, the truth is that voluntary commitments by industry are not enforceable and rarely work out well for society,” Wiener said in a statement Sunday.


Newsom talks about the AI safety bill with Salesforce CEO Mark Benioff at the Dreamforce conference in San Francisco on September 17. JOHN G MABANGLO/EPA-EFE/Shutterstock

Wiener said the debate around the bill has dramatically advanced the issue of artificial intelligence safety, and that he would continue pressing it.

This legislation is one of many bills passed by the Legislature this year to regulate artificial intelligence, combat deepfakes and protect workers. State lawmakers said California must take action this year, citing the hard lessons learned from failing to rein in social media companies when they might have had a chance.

Supporters, including Elon Musk and Anthropic, say the proposal could bring some level of transparency and accountability to large-scale AI models, as developers and experts say they still don’t have a full understanding of how AI models behave and why.

The bill targeted systems that cost more than $100 million to build. No current AI models have hit that threshold, but some experts say that could change within the next year.

“This is happening because of the huge increase in investment in the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he said was the company’s disregard for the dangers of artificial intelligence. “It is a crazy amount of power for any private company to control unaccountably, and it is also incredibly risky.”

The United States is already behind Europe in regulating AI to reduce risks. California’s proposal was not as comprehensive as regulations in Europe, but it would have been a good first step toward putting guardrails around a rapidly evolving technology that raises concerns about job losses, misinformation, invasions of privacy and automation bias, supporters say.

Last year, many leading artificial intelligence companies voluntarily agreed to follow safeguards outlined by the White House, such as testing and sharing information about their models. The California bill would have required AI developers to follow requirements similar to those commitments, supporters say.

But critics, including former U.S. House Speaker Nancy Pelosi, argued the bill would “kill California tech” and stifle innovation. They said it would have discouraged AI developers from investing in large models or sharing open-source software.