
Newsom vetoed a bill aimed at creating the nation’s first artificial intelligence safety measures

The image shows the logo of the ChatGPT application developed by US artificial intelligence research organization OpenAI on a smartphone screen (L), and the letters AI appear on a laptop screen. (Photo by KIRILL KUDRYAVTSEV/AFP via Getty Images)

California Gov. Gavin Newsom on Sunday vetoed landmark legislation aimed at establishing first-in-the-nation safety measures for large artificial intelligence models.

The decision is a major blow to efforts to rein in a homegrown industry that is evolving rapidly with little oversight. The bill would have established some of the first regulations in the nation on large-scale artificial intelligence models and paved the way for AI safety regulations across the country, supporters said.

Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating artificial intelligence in the face of federal inaction, but that the proposal “could have a chilling effect on the industry.”

Newsom said the proposal, which faced fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements.

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions, so long as a large system deploys it. I don’t believe this is the best approach to protecting the public from the real threats posed by this technology.”

Instead, Newsom announced Sunday that the state will work with several industry experts, including artificial intelligence pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.

The measure, intended to limit potential risks posed by artificial intelligence, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out a state’s power grid or help build chemical weapons. Experts say such scenarios could become possible as the industry continues to grow rapidly. The measure also would have provided whistleblower protections to workers.


The bill’s author, Democratic state Sen. Scott Wiener, called the veto “a setback for everyone who believes in oversight of massive corporations that are making critical decisions affecting the safety and well-being of the public and the future of the planet.”

“The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public,” Wiener said in a statement Sunday.

Wiener said the debate around the bill has dramatically advanced the issue of AI safety and that he would continue to press it.

The legislation is among a host of bills passed by the Legislature this year to regulate artificial intelligence, fight deepfakes and protect workers. State lawmakers said California must act this year, citing the hard lessons it learned from failing to police social media companies when it might have had a chance.

Supporters, including Elon Musk and Anthropic, said the proposal could have brought some level of transparency and accountability to large-scale AI models, as developers and experts say they still don’t have a full understanding of how AI models behave and why.

The bill targeted systems that cost more than $100 million to build. No current AI models have hit that threshold, but some experts say that could change within the next year.

“This is happening because of the huge increase in investment in the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he said was the company’s disregard for the dangers of artificial intelligence. “It is an insane amount of power for any private company to control in an unaccountable way, and it is also extremely risky.”

The United States is already behind Europe in regulating AI to limit risks. The California proposal was not as comprehensive as regulations in Europe, but it would have been a good first step toward setting guardrails around a rapidly growing technology that is raising concerns about job losses, misinformation, invasions of privacy and automation bias, supporters say.

A number of leading AI companies over the past year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have required AI developers to follow requirements similar to those commitments, supporters say.

But critics, including former U.S. House Speaker Nancy Pelosi, argued the bill would “kill California tech” and stifle innovation. They said it would have discouraged AI developers from investing in large models or sharing their software as open source.

Newsom’s decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers away from advancing AI regulations.

Two other sweeping artificial intelligence proposals, which also faced mounting opposition from the tech industry and others, died ahead of last month’s legislative deadline. The bills would have required AI developers to label AI-generated content and banned discrimination by AI tools used to make employment decisions.

The governor announced earlier this summer that he wants to protect California’s status as a global leader in artificial intelligence, noting that 32 of the world’s 50 largest artificial intelligence companies are headquartered in the state.

He has promoted California as an early adopter, saying the state could soon deploy generative artificial intelligence tools to address traffic congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with artificial intelligence giant Nvidia to help train students, faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices.

Earlier this month, Newsom signed some of the toughest laws in the nation to crack down on election deepfakes and protect Hollywood workers from unauthorized use of artificial intelligence.

But even with Newsom’s veto, California’s safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.

“They are going to potentially either copy it or do something similar in the next legislative session,” Rice said. “So it’s not going away.”

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of the AP’s text archives.