
Editorial: Why California Should Lead the Way in AI Regulation

The release of OpenAI’s ChatGPT in late 2022 was like a starting gun, setting off a race among big tech companies to develop increasingly powerful generative AI systems. Giants like Microsoft, Google and Meta rushed to roll out new AI tools as billions of dollars in venture capital flowed into AI startups.

At the same time, a growing chorus of AI researchers and workers began to sound the alarm: The technology was developing faster than anyone had anticipated, and they worried that in the rush to dominate the market, companies would release products before they were safe.

In spring 2023, more than 1,000 researchers and industry leaders called for a six-month pause in developing the most advanced artificial intelligence systems, saying AI labs were racing to deploy “digital minds” that even their creators couldn’t understand, predict or reliably control. The technology poses “grave risks to society and humanity,” they warned, and they urged lawmakers to develop rules to prevent harm.

It was in this environment that state Sen. Scott Wiener (D-San Francisco) began talking to industry experts about legislation that would become Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill is an important first step toward the responsible development of AI.

While state lawmakers have introduced dozens of bills taking aim at a variety of AI-related issues, including election disinformation and protecting artists’ work, Wiener has taken a different approach. His bill focuses on preventing disasters if AI systems are misused.

SB 1047 would require developers of the most powerful AI models to implement testing procedures and safeguards to prevent the technology from being used to disable the power grid, enable the development of biological weapons, conduct serious cyberattacks or cause other serious harm. If developers fail to take reasonable precautions to prevent catastrophic harm, the state’s attorney general could sue them. The bill would also protect whistleblowers at AI companies and create CalCompute, a public cloud computing cluster that would be available to help startups, researchers and scientists develop AI models.

The bill is supported by leading AI safety groups as well as some of the so-called godfathers of AI, who wrote in a letter to Gov. Gavin Newsom that, “relative to the scale of risks we face, it is a remarkably light-touch piece of legislation.”

But that hasn’t stemmed a wave of opposition from tech companies, investors and researchers who say the bill unfairly holds model developers accountable for anticipating harms that users might cause. They say such liability would make developers less willing to share their models, stifling innovation in California.

Last week, eight California members of Congress chimed in with a letter urging Newsom to veto SB 1047 if it passes the Legislature. They argued that the bill is premature, with a “misguided emphasis on hypothetical risks,” and that lawmakers should instead focus on regulating uses of AI that are causing harm right now, such as deepfakes in election ads and revenge porn.

There are many good bills that address immediate and specific AI misuses. That doesn’t negate the need to anticipate and try to prevent future harm, especially when experts in the field are calling for action. SB 1047 raises familiar questions for the tech sector and lawmakers. When is the right time to regulate new technology? What is the right balance between encouraging innovation and protecting the public that must live with its effects? And can the genie be put back in the bottle once the technology has been widely deployed?

Sitting on the sidelines for too long comes with risks. Right now, lawmakers are still playing catch-up on data privacy and still trying to limit the damage done by social media platforms. And this isn’t the first time that big tech leaders have publicly said they welcome regulation of their products, then lobbied hard to block specific proposals.

Ideally, the federal government would lead the way on AI regulation to avoid a patchwork of state policies. But Congress has proven unable, or unwilling, to regulate big tech: Over the years, proposed legislation to protect data privacy and reduce online threats to children has stalled, and House Republicans have already said they will not support any new AI legislation. In the absence of federal action, California, home to Silicon Valley, has chosen to lead with first-of-its-kind laws on net neutrality, data privacy and online safety for children. AI should be no exception.

By passing SB 1047, California can pressure the federal government to establish standards and regulations that would supersede state rules. Until then, the bill would provide an important safeguard.