
A veto of California’s artificial intelligence bill opens the door to bank-unfriendly changes


The vetoed bill would have imposed several requirements on major AI model providers, including that they be able to immediately disable their models.

California Governor Gavin Newsom vetoed a bill that would have forced foundation or “frontier” model providers like OpenAI to test and audit their software, be liable for damages caused by their models, and create “kill switches” that would immediately stop the model from running.

Instead, Newsom said he has signed 17 bills in the last 30 days covering the implementation and regulation of generative artificial intelligence technology.

“This has all been a balancing act for Governor Newsom,” John Cunningham, a compliance and corporate investigations partner at Dickinson Wright, said in an interview. “It is a question of costs and benefits and the balance between continued innovation in AI and sound regulatory oversight. If we can reasonably control what we do with AI, it will be good for everyone.”

Explaining why he vetoed the bill, Newsom said the focus on the largest providers of artificial intelligence models was inappropriate. “SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or uses sensitive data,” he said.

Some of the most important decisions and sensitive data concern financial services. Banks’ use of artificial intelligence in lending and employment decisions has drawn close regulatory scrutiny, but was not covered by the vetoed bill. A revised bill focused on riskier use cases could reach banks.

California is one of several states trying to rein in advanced artificial intelligence in the absence of federal regulation. California, Pennsylvania, Massachusetts, New Jersey and the District of Columbia have had AI laws on the books for some time. An additional five states – Colorado, Illinois, Maryland, New York and Utah – have passed artificial intelligence legislation this year, according to a state law tracker run by the law firm Husch Blackwell. Eight states tried and failed to pass AI laws this year.

National AI legislation has also been introduced in the U.S. Congress. The AI Civil Rights Act, for example, was introduced in the Senate last week; it would prohibit discrimination by corporate algorithms, require independent testing of artificial intelligence models and ensure that consumers can have decisions made by humans rather than by AI.

Vetoed bill

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, would have been the toughest state AI law in the country. California has long been known for its rigorous consumer protection policies. It passed the nation’s first comprehensive data privacy law, for example, provisions of which have since been adopted by many other states.

“California, like New York, is often at the vanguard of regulation,” Cunningham said. “So a lot of people will turn to them and say, hey, how do we start to address this regulatory piece before AI goes too far, from a regulatory standpoint? A lot of people are relying on states like California and New York to think more deeply about this.”

SB 1047 would have required developers of large artificial intelligence models, such as OpenAI, Anthropic, Google and Meta, to put safeguards and policies in place to prevent catastrophic harm. For example, they would have had to provide a “kill switch” that could turn off their systems, and to submit safety plans and audit reports. The bill would also have provided protections for whistleblowers and established a state entity called the Board of Frontier Models to oversee the development of these models.

More than 125 Hollywood stars, including Mark Hamill, Jane Fonda and Alec Baldwin, signed a letter urging Newsom to sign the bill.

Many of the companies the bill would have affected, including OpenAI, Meta, Google and Anthropic, are based in California. In a letter to California Democratic state Sen. Scott Wiener, who authored the bill, OpenAI chief strategy officer Jason Kwon said the bill would hamper innovation and that regulation of artificial intelligence should be left to the federal government.

Newsom said the bill’s focus on only the most expensive and large-scale models could give the public a false sense of security, when smaller, specialized models could be just as dangerous.

Newsom said he was guided by several artificial intelligence experts in making this decision, including Fei-Fei Li, a professor of computer science at Stanford University; Tino Cuéllar, a member of the National Academy of Sciences’ Committee on the Social and Ethical Implications of Computing Research; and Jennifer Tour Chayes, dean of the College of Computing, Data Science and Society at the University of California, Berkeley. He has asked these advisers to help develop responsible guardrails for deploying generative AI.

Banking experts say it makes sense to move from regulating foundation model providers to narrower, more detailed rules governing generative AI.

“The regulations as written were too broad and risked driving innovative companies out of California without delivering as much specific consumer protection as they could have for their potential impact,” said Ian Watson, research director at Celent. “Handing this work to smaller panels of experts not only gives California more time for a national consensus to form, but also creates the opportunity to develop a pipeline of more targeted industry regulations that address tangible pain points for state politicians’ constituents.”

Some thought the California bill’s focus on the existential threats of artificial intelligence was misguided.

“Artificial intelligence can be very dangerous, but I firmly believe that the immediate threat is to consumers through predatory practices and surveillance, and to our democratic institutions through disinformation and surveillance, not to the survival of humanity,” said Patrick Hall, an assistant professor at George Washington University.

It is too early to say what any new law will look like.

“Newsom’s messaging sounds like he wants a tougher and better bill, but that doesn’t mean he’s going to get it,” Hall said. “My research and experience lead me to believe that regulating use cases and the people around them – such as appointing chief model risk officers – is much more effective than regulating the technology directly.”

Hall liked some aspects of the vetoed California bill, such as the kill switch requirement.

“I have been advocating for this for years because it is clear that AI systems can sometimes malfunction, and in some cases it is a good idea to turn them off quickly,” he said.

California has passed 17 bills

The 17 AI bills signed by Newsom aim to crack down on deepfakes, require AI watermarking, protect children and workers, and combat AI-generated disinformation.

Several of them apply to foundation model developers and to companies, such as banks, that use these models.

For example, one bill (AB 1008) clarifies that the California Consumer Privacy Act applies to personal information stored by artificial intelligence systems. Another bill (AB 1836) prohibits anyone from creating, distributing, or sharing a digital replica of a deceased person’s voice or likeness without prior consent.

The third, AB 2013, requires AI developers to publish information on their websites about the data used to train the AI system or service. The fourth, SB 942, requires developers of covered generative AI systems to include provenance information in the original content generated by their systems and to provide tools to identify generative AI content produced by their systems.