
California’s AI Safety Bill Is Under Fire. Making It Law Is the Best Way to Improve It

On August 29, the California Legislature passed Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, and sent it to Gov. Gavin Newsom for his signature. Newsom’s choice, due by Sept. 30, is binary: kill it or make it law.

Recognizing the potential harm that advanced AI can cause, SB 1047 requires technology developers to integrate safeguards into the development and deployment of what the bill calls “covered models.” The California attorney general can enforce these requirements by taking civil action against parties that fail to take “reasonable care” to ensure that 1) their models do not cause catastrophic harm or 2) their models can be disabled in the event of a failure.

Many prominent AI companies are opposing the bill individually or through trade associations. Their objections include concerns that the definition of covered models is too inflexible to account for technological advances, that it is unreasonable to hold them liable for malicious applications that others develop, and that the bill will generally stifle innovation and cripple small start-ups that do not have the resources to devote to compliance.

These objections are not trivial; they deserve consideration and very likely further amendments to the bill. But the governor should sign it anyway, because a veto would signal that no regulation of AI is acceptable now, and probably not until or unless catastrophic harm occurs. That is not the right position for governments to take on such a technology.

The bill’s author, Sen. Scott Wiener (D-San Francisco), worked with the AI industry on several iterations of the bill before it was finally passed. At least one major AI company, Anthropic, asked for specific and significant changes to the text, many of which were incorporated into the final bill. Since the bill passed the Legislature, Anthropic’s CEO has said that “the benefits likely outweigh the costs … [although] some aspects of the bill [still] seem troubling or ambiguous.” The public evidence so far suggests that most other AI companies have simply opposed the bill on principle rather than engaging in concrete efforts to modify it.

What should we make of this backlash, especially since the leaders of some of these companies have publicly expressed concerns about the potential risks of advanced AI? In 2023, for example, the CEOs of OpenAI and Google’s DeepMind signed an open letter comparing the risks of AI to those of pandemics and nuclear war.

The reasonable conclusion is that, unlike Anthropic, they oppose any mandatory regulation at all. They want to reserve for themselves the right to decide when the risks of an activity, a research effort, or any model they deploy outweigh the benefits. More important, they want those who build applications on their covered models to bear full responsibility for mitigating the risks. Recent court cases have suggested that parents who put weapons in the hands of their children bear some legal responsibility for the outcome. Why should AI companies be treated differently?

AI companies want the public to give them a free hand, despite the obvious conflict of interest — for-profit companies should not be trusted to make decisions that could hinder their profits.

We’ve been through this before. In November 2023, OpenAI’s board fired its CEO because it determined that, under his leadership, the company was headed down a dangerous technological path. Within days, various OpenAI stakeholders managed to reverse the decision, reinstating him and pushing out the board members who had advocated for his dismissal. Ironically, OpenAI had been specifically structured to allow the board to act as it did: despite the company’s potential to generate profits, the board was supposed to ensure that the public interest came first.

If SB 1047 is vetoed, the anti-regulation forces will claim victory as proof of the wisdom of their position, and they will have little incentive to work on alternative legislation. The absence of meaningful regulation works to their advantage, and they will build on the veto to preserve the status quo.

Alternatively, the governor could sign SB 1047 into law, adding an open invitation to its opponents to help fix its specific flaws. Facing what they see as an imperfect law, opponents would have a strong incentive to work, and to work in good faith, to fix it. But the basic approach would remain one in which industry, not government, puts forward its view of what constitutes reasonable care regarding the safety properties of its advanced models. The government’s role would be to make sure that industry does what industry itself says it should do.

The consequences of killing SB 1047 and preserving the status quo are substantial: companies could advance their technologies without any restrictions. Accepting an imperfect bill, by contrast, would be a significant step toward a better regulatory environment for all concerned. It would be the beginning, not the end, of the AI regulatory game. This first move sets the tone for what is to come and establishes the legitimacy of AI regulation. The governor should sign SB 1047.

Herbert Lin is a senior fellow at the Center for International Security and Cooperation at Stanford University and a fellow at the Hoover Institution. He is the author of “Cyber Threats and Nuclear Weapons.”