California Gov. Gavin Newsom Vetoes Landmark AI Safety Bill

SACRAMENTO, Calif. (AP) – California Gov. Gavin Newsom on Sunday vetoed a landmark bill aimed at establishing the nation’s first safety measures for large artificial intelligence models.

The decision is a major blow to efforts to rein in a homegrown industry that is evolving rapidly with little oversight. Supporters said the bill would have established some of the first regulations in the country for large-scale artificial intelligence models and paved the way for AI safety regulations across the country.

Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating artificial intelligence in the face of federal inaction, but that the proposal “could create a chilling effect on the industry.”

Newsom said the proposal, which faced fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by imposing rigid requirements.

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Instead, Newsom announced Sunday that the state will work with several industry experts, including artificial intelligence pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.

The measure, intended to limit potential threats posed by artificial intelligence, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, destroy the state’s power grid or help build chemical weapons. Experts say such scenarios could become possible as the industry continues to grow rapidly. It also would have provided whistleblower protections for workers.

The bill’s author, Democratic state Sen. Scott Wiener, called the veto “a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and well-being of the public and the future of the planet.”

“Companies developing advanced artificial intelligence systems confirm that the risk these models pose to society is real and growing rapidly. While large AI labs have made admirable commitments to monitor and mitigate these threats, the truth is that voluntary commitments by industry are not enforceable and rarely work well for society,” Wiener said in a statement Sunday.

Wiener said the debate around the bill has dramatically advanced the issue of artificial intelligence safety, and that he would continue to press it.

This legislation is one of many bills passed by the Legislature this year to regulate artificial intelligence, combat deepfakes and protect workers. State lawmakers said California must take action this year, citing the difficult lessons it has learned from its failure to police social media companies when it might have had a chance.

Supporters, including Elon Musk and Anthropic, said the proposal could have brought some level of transparency and accountability to large-scale AI models, as developers and experts say they still do not fully understand how AI models behave and why.

The bill targeted systems that require a high level of computing power and more than $100 million to build. No current AI models have reached that threshold, but some experts say that could change within the next year.

“This is happening because of the huge increase in investment in the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he said was the company’s disregard for the dangers of artificial intelligence. “That is a crazy amount of power for any private company to control unaccountably, and it is also incredibly risky.”

The United States is already behind Europe in regulating AI to reduce risks. California’s proposal was not as comprehensive as regulations in Europe, but it would have been a good first step toward putting guardrails around a rapidly evolving technology that raises concerns about job losses, misinformation, invasions of privacy and automation bias, supporters say.

Last year, many leading artificial intelligence companies voluntarily agreed to follow safeguards outlined by the White House, such as testing and sharing information about their models. The California bill would have required AI developers to follow requirements similar to those commitments, supporters say.

But critics, including former U.S. House Speaker Nancy Pelosi, argued the bill would “kill California tech” and stifle innovation. They said it would have discouraged AI developers from investing in large models or from sharing their software as open source.

Newsom’s decision to veto the bill marks another victory in California for big tech companies and artificial intelligence developers, many of whom have spent the past year lobbying alongside the California Chamber of Commerce to persuade the governor and lawmakers to make changes to artificial intelligence laws.

Two other sweeping artificial intelligence proposals, which also faced growing opposition from the tech industry and others, collapsed before last month’s legislative deadline. The bills would have required AI developers to label AI-generated content and banned discrimination from AI tools used to make employment decisions.

The governor announced earlier this summer that he wants to protect California’s status as a global leader in artificial intelligence, noting that 32 of the world’s 50 largest artificial intelligence companies are headquartered in the state.

He has promoted California as an early adopter, noting that the state could soon deploy generative artificial intelligence tools to address highway congestion, provide tax guidance and streamline homelessness programs. Last month, the state also announced a voluntary partnership with artificial intelligence giant Nvidia to help train students, college faculty, developers and data analysts. California is also considering new rules against AI discrimination in hiring practices.

Earlier this month, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and put in place measures to protect Hollywood workers from unauthorized use of artificial intelligence.

But even with Newsom’s veto, California’s safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.

“They are going to potentially either copy this or do something similar in the next legislative session,” Rice said. “So it won’t go away.”

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI to access portions of the AP’s text archives.
