
California Gov. Newsom vetoes an artificial intelligence bill considered the toughest in the nation

Gov. Gavin Newsom of California on Sunday vetoed a bill that would have introduced the most far-reaching regulations in the country on the booming artificial intelligence industry.

California lawmakers overwhelmingly passed a bill called SB 1047 that was seen as a potential model for national artificial intelligence legislation.

The measure would have held technology companies legally liable for damages caused by their artificial intelligence models. It would also have required companies to build a “kill switch” into AI technologies in case the systems were misused or went rogue.

Newsom described the bill as “well-intentioned,” but noted that its requirements would call for “stringent” regulations that would be burdensome on the state’s leading artificial intelligence companies.

In his veto message, Newsom said the bill focused too narrowly on the largest and most powerful AI models, arguing that smaller systems could prove just as dangerous.

“Smaller, specialized models may prove to be as dangerous or even more dangerous than the models targeted by SB 1047 — at the potential cost of limiting innovation that drives progress for the public good,” Newsom wrote.

California state Sen. Scott Wiener, the bill’s author, criticized the move, calling the veto a setback for AI accountability.

“This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’ continuing paralysis around regulating the tech industry in any meaningful way,” Wiener wrote on X.

The now-defeated bill would have compelled the industry to conduct safety tests on extremely powerful AI models. Without those requirements, Wiener wrote on Sunday, the industry will be left to police itself.

“While large AI labs have made admirable commitments to monitor and mitigate these threats, the truth is that voluntary commitments from industry are not enforceable and rarely work well for society.”

Many influential Silicon Valley players, including venture capital firm Andreessen Horowitz, OpenAI and trade groups representing Google and Meta, lobbied against the bill, arguing it would slow the development of artificial intelligence and stifle the growth of early-stage companies.

“SB 1047 would threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere,” Jason Kwon, OpenAI’s chief strategy officer, wrote in a letter sent to Wiener last month.

Other tech leaders, however, supported the bill, including Elon Musk and pioneering AI scientists such as Geoffrey Hinton and Yoshua Bengio, who signed a letter urging Newsom to sign it.

“We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure. It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks,” wrote Hinton and several dozen current and former employees of leading AI companies.


Other states, such as Colorado and Utah, have enacted narrower legislation addressing the potential for AI to perpetuate bias in employment and health care decisions, along with other consumer-protection concerns related to AI.

Newsom recently signed other artificial intelligence bills, including one aimed at combating the spread of false information during elections. Another protects actors from having their likenesses reproduced by artificial intelligence without their consent.

As billions of dollars pour into the development of artificial intelligence and the technology permeates more corners of everyday life, lawmakers in Washington have yet to pass a single piece of federal legislation to protect people from its potential harms or to provide oversight of its rapid development.

Copyright 2024 NPR