
California’s AI Bill SB-1047 Sparks Fierce Debate Over Regulation of Powerful AI Models

A California state bill has become a flashpoint between those who believe AI should be regulated to ensure its safety and those who fear regulation could stifle innovation. The bill, which heads to a final vote in August, is drawing heated debate and fierce resistance from leaders across the AI industry, including some companies and executives who have previously called for regulation of the sector.

California Senate Bill 1047 has taken on added significance as efforts to regulate AI at the federal level have stalled in an election year. It aims to establish guardrails on the development and use of the most powerful AI models by requiring developers to comply with various safety requirements and report safety incidents.

The debate and lobbying over the California bill, which passed the state Senate 32-1 in May, has reached a fever pitch in recent weeks. The state senator who introduced the bill, Scott Wiener, recently told Fortune that the battle, which pits AI safety experts against some of the tech industry's top venture capitalists, resembles a Silicon Valley version of the Jets vs. Sharks rivalry in West Side Story.

“I underestimated how toxic this division is,” he said, days after publishing a public letter responding to “inaccurate, inflammatory statements” about the legislation from startup incubator Y Combinator and venture capital firm a16z. The letter came a week after a16z published its own open letter saying the bill “will stifle open-source AI development and have a chilling effect not only on AI investment and expansion, but also on the small business entrepreneurship that makes California what it is today.”

There is certainly plenty of bickering, arguing, and snide social media meme-making around SB-1047, whose full name is the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. At first glance, the debate might look like a popcorn-worthy clash between AI “doomers,” pessimists pushing for guardrails against AI’s supposed “existential” risks to humanity, and AI “accelerationists,” who advocate an unapologetic rush to develop AI because they believe the technology’s benefits far outweigh any harm it causes.

But Wiener’s framing, a gang war between two rival factions fighting for turf, belies the seriousness of the issues underlying both sides’ political positioning. Many believe AI regulation is essential not only to managing the known risks of AI, from bias and privacy violations to job displacement, but also to promoting ethical standards and building public trust. Others worry about regulatory capture: that regulation will ultimately advance the interests of a select few AI model makers like OpenAI, Google, Microsoft, Anthropic, and Meta at the expense of broader competition or the genuine interests of the public. Many grew suspicious when, for example, OpenAI CEO Sam Altman famously implored Congress to regulate AI at a May 2023 hearing. Yet Congress, which held numerous hearings on AI regulation last year, has largely punted action until after the 2024 election.

SB-1047 has so far moved quickly toward enactment. Its authors focused on what they saw as a fairly simple, narrow goal: only companies that spend more than $100 million and use certain high levels of computing power to train the largest, most advanced AI models, such as OpenAI’s in-the-works GPT-5, would be required to conduct safety testing and monitoring to prevent misuse of “dangerous capabilities.” Those capabilities include creating weapons of mass destruction or using AI to launch cyberattacks on critical infrastructure.

Supporters of the bill include AI pioneers Geoffrey Hinton and Yoshua Bengio, as well as nonprofits such as Encode Justice, a youth-led movement for “safe and fair AI.” Another supporter is xAI advisor Dan Hendrycks, whose nonprofit Center for AI Safety is funded by Open Philanthropy, a grantmaker known for its ties to the controversial Effective Altruism (EA) movement, which is heavily focused on AI’s “existential risk” to humanity.

In a recent tweet, Y Combinator CEO Garry Tan was dismissive of EA. Responding to a list of EA-affiliated organizations that support SB-1047, including some funded by Skype co-founder Jaan Tallinn, he wrote, “EA is just doing EA stuff.”

In addition to a16z and Y Combinator, SB-1047’s critics include a broad swath of Big Tech companies, venture capitalists, startups, and open-source organizations. AI heavyweights including Google Brain founder Andrew Ng, Stanford professor Fei-Fei Li, and Meta chief scientist Yann LeCun have also spoken out against it, arguing the bill is anything but simple or narrow. Opponents say its language is too vague, and that its focus on AI models themselves, rather than on how they are used, creates uncertainty about compliance and makes developers wary of legal liability for how customers deploy or modify their models. They also argue the bill could consolidate power over AI in the hands of a few deep-pocketed tech giants, stifle small startups and open-source developers, and allow China to take the lead in AI development.

“There are reasonable proposals for regulating artificial intelligence,” Ng told Fortune. “Unfortunately, SB-1047 is not one of them.”

Wiener and the bill’s other backers call that nonsense: “This is a lightweight, basic safety bill,” Wiener said. Sunny Madra, vice president of policy at Encode Justice, a co-sponsor of the bill, said he didn’t expect the opposition to mount such a massive counteroffensive. “We’re really trying to focus on what we think are common-sense issues,” he said.

That hasn’t reduced resistance to the bill, which says developers can’t release covered models if there’s an “unreasonable risk” of “critical harm.” It also requires developers to undergo annual model audits and submit a certificate of compliance, “under penalty of perjury,” to a new division within the state’s Government Operations Agency.

“I don’t think anyone really wants a small, unelected board implementing vague safety standards on a whim,” said Daniel Jeffries, CEO of AI startup Kentaurus AI. “Practical regulation is about use cases and safety,” he explained. “Let’s talk about autonomous weapons, self-driving cars, or using technology to clone your mom’s voice to scam you out of five thousand dollars.”

But Yacine Jernite, a researcher at the open-source AI platform Hugging Face, took a different tack, noting that SB-1047’s intent to hold AI developers more accountable is “definitely in line with positions we’ve expressed on regulatory proposals in the past.” The way the bill is written, however, reflects a misunderstanding of the tech ecosystem, he added. For example, the models covered by the bill could be trained not just by large companies looking to integrate them into their products, but also by public or philanthropic organizations, or by coalitions of researchers who have banded together to train them.

“While these models are less popular than those supporting mainstream AI systems, they play an indispensable role in the scientific understanding and informed regulation of this technology,” Jernite said.

Hendrycks did not respond to Fortune’s request for comment, but he recently insisted to Bloomberg that venture capitalists would likely oppose the bill “regardless of its content,” and that the bill is focused on national security, particularly protecting critical infrastructure. Most companies already conduct safety testing, he said, to comply with President Biden’s executive order on artificial intelligence, signed in October 2023. “That just makes it law, as opposed to an executive order that could be repealed by some future administration,” he said.

Ng maintained that the tech community is “putting a lot of work into trying to understand what apps can be harmful,” and he welcomed government involvement and funding for such efforts. “For example, I would like to see regulations that put a stop to unwanted deepfake porn,” he said.

But Wiener noted that other AI-related bills moving through the California Legislature focus on near-term, immediate AI threats, such as algorithmic discrimination, deepfakes, and AI-enabled revenge porn. “You can’t do everything in one bill,” he said, emphasizing his continued willingness to collaborate: “We’ve made significant changes in response to constructive feedback, and we continue to welcome it.”

Jeffries, however, said he has read every version of the bill along the way and that while there have been changes, “the substance of the bill remains the same,” including the requirement to sign a certificate of compliance under penalty of perjury. “The rules can change overnight,” he said. “They can raise or lower the threshold. And the standards are written in a frustratingly vague way.”

In a letter responding to a16z and Y Combinator, Wiener emphasized that the California attorney general can only file a lawsuit if “the creator of a covered model (which costs over $100 million to train) fails to conduct a safety assessment or take steps to mitigate the risk of a catastrophe, and a catastrophe subsequently occurs.”

Gov. Gavin Newsom has not indicated whether he will sign the bill if it passes the state Assembly. “We are reaching out to the administration, as we do with large or complex bills,” Wiener said. “We would certainly like to have the governor give us some feedback.”

Even some AI companies with reputations for prioritizing AI safety, such as Anthropic, have backed away from supporting the bill, perhaps fearing it would limit their own efforts to develop more advanced AI models. Anthropic CEO Dario Amodei said in a June interview on the In Good Company podcast that SB-1047’s regulation comes too early, and that industry consensus on “responsible scaling policies” should come first.

Ng told Fortune that he spoke with Senator Wiener and shared his views. “He didn’t say much during our conversation, and I don’t think the changes in the bill address the concerns that many of us in the tech world share,” he said.

In the meantime, Wiener insists his door remains open to discussing potential changes to SB-1047. “They’ve hired some great lobbyists on the bill … who are trying to engage constructively,” he said, as well as “people from the industry who, while taking a strong stand, have been constructive in proposing amendments, some of which we’ve incorporated, some of which we disagree with.” He emphasized that the process “has actually been a pretty collaborative process, and I really appreciate that.” His goal? “To get it right.”