
Policy

California AI Regulation Bill Heads to Assembly Vote with Key Amendments

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (Senate Bill 1047), authored by Sen. Scott Wiener (D-San Francisco), has cleared the California Assembly Appropriations Committee with several significant amendments. The bill, which aims to establish rigorous safety and security standards for large-scale artificial intelligence (AI) systems, is set for a vote on the Assembly floor on Aug. 20 and must pass by Aug. 31 to advance.

SB 1047 was designed to regulate the development of advanced AI models by establishing clear, enforceable safety requirements and oversight measures. It applies only to models that are particularly powerful and expensive to develop, an approach intended to balance innovation with public safety.

The bill establishes standards for AI models with significant computing power: specifically, models whose training uses more than 10²⁶ floating-point operations (FLOPs) and costs more than $100 million. These models are known as “frontier” AI systems.
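To give a sense of scale for the 10²⁶ FLOPs figure, the short Python sketch below applies the widely cited scaling-laws rule of thumb that training compute is roughly 6 × parameters × training tokens. The heuristic, the function names, and the example model size are illustrative assumptions; none of them comes from the bill itself.

```python
# A rough back-of-the-envelope sketch, not anything defined in SB 1047 itself.
# It uses the common scaling-laws heuristic that training compute is roughly
# 6 * parameters * training tokens to gauge whether a hypothetical model
# would cross the bill's 10^26 FLOPs threshold.

COVERED_MODEL_THRESHOLD_FLOPS = 1e26  # SB 1047's training-compute threshold


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute using the 6ND heuristic."""
    return 6.0 * num_parameters * num_tokens


def would_meet_compute_threshold(num_parameters: float, num_tokens: float) -> bool:
    """Check the compute test only (the bill also has a $100M training-cost test)."""
    return estimated_training_flops(num_parameters, num_tokens) >= COVERED_MODEL_THRESHOLD_FLOPS


# Hypothetical example: a 1-trillion-parameter model trained on 20 trillion tokens
print(estimated_training_flops(1e12, 20e12))       # 1.2e+26
print(would_meet_compute_threshold(1e12, 20e12))   # True
```

Under that heuristic, a hypothetical one-trillion-parameter model trained on 20 trillion tokens would land at about 1.2 × 10²⁶ FLOPs, just over the bill's compute threshold.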

The bill establishes, among other things, risk assessment, safety, security, and testing requirements that the developer of a covered AI model must meet before training the model, using it, or making it available for public or commercial use.

The act also requires the developer of a covered model to retain a third-party auditor annually to conduct an independent audit of its compliance with the act’s requirements.

The bill has undergone significant changes based on industry feedback, perhaps most notably from Anthropic, a leading AI research company known for developing advanced AI systems with a focus on safety and ethics. The amendments aim to balance innovation with safety, Wiener said in a statement.

“We can drive both innovation and security; the two are not mutually exclusive,” Wiener said. “While the fixes do not reflect 100% of the changes requested by all stakeholders, we addressed the underlying concerns of industry leaders and made adjustments to meet a variety of needs, including those of the open source community.”

Major changes in SB 1047

  • Elimination of perjury penalties: The bill now provides only civil penalties for making false statements to authorities, addressing concerns about the potential abuse of criminal penalties.
  • Elimination of the Frontier Model Division (FMD): The proposed new regulatory authority has been eliminated. Enforcement will continue through the Attorney General’s office, and some FMD functions will be transferred to the Government Operations Agency.
  • Adjusted legal standards: The compliance standard for developers has changed from “reasonable assurance” to “reasonable care,” a well-established common law standard that includes elements such as adherence to NIST security standards.
  • New threshold for fine-tuned models: Models fine-tuned at a cost of less than $10 million are exempt from the bill’s requirements, focusing the regulatory burden on larger-scale projects.
  • Limited enforcement before harm occurs: The Attorney General’s authority to pursue civil penalties is now limited to situations where actual harm has occurred or there is an imminent threat to public safety.

Support and criticism

SB 1047 has garnered support from prominent AI researchers, including Geoffrey Hinton and Yoshua Bengio, who emphasize the importance of balancing innovation with safety. Hinton praised the bill as a sensible approach, stressing the need for legislation that addresses the risks of powerful AI systems.

But the bill has also drawn criticism, particularly from startup founders and industry leaders. Critics say its thresholds and liability provisions could stifle innovation and disproportionately burden smaller developers. Anjney Midha, a general partner at the Silicon Valley venture capital firm Andreessen Horowitz, criticized the bill’s focus on regulating models themselves rather than specific abuses or malicious applications. He warned that the strict requirements could drive innovation abroad and hinder the growth of the open source community.

“It’s hard to [overstate] how surprised startups, founders and the investment community feel about this bill,” Midha said in an interview posted on his company’s website. “When it comes to policymaking, especially around technology at the frontier, our legislators should sit down and get feedback from their constituents — in this case, startup founders.”

“If passed in California, it would set a precedent for other states and have far-reaching implications both in the U.S. and beyond — it would essentially be a huge butterfly effect on the state of innovation,” he added.

In an open letter posted on their website (“Statement of Opposition to California SB 1047”), members of the AI Alliance, an organization that describes itself as “a community of creators, developers, and users of technologies working together to advance safe and responsible AI based on open innovation,” expressed their concerns about SB 1047.

“While SB 1047 is not specifically aimed at open source development, it will have a dramatic impact on the open source community. The bill requires that developers of AI models trained with 10²⁶ FLOPs or of similar performance (as determined by undefined benchmarks) implement a full shutdown control that would stop the model and all derivatives from running. Once a model is released as open source and then downloaded by a third party, the developers no longer have control over the model. Before such a ‘switch’ provision is enacted, we need to understand how this could be done in the context of open source; the bill does not answer that question. There are currently no 10²⁶-FLOPs models available, but the technology is evolving rapidly and the open ecosystem could evolve with it. However, the intent of this legislation appears to be to freeze open source AI software development at its 2024 level.”

Legislative context

The bill’s progress comes amid a backdrop of federal inaction on AI regulation. While the U.S. Congress has largely stalled on tech legislation, California’s initiative aims to preemptively address the risks posed by rapidly evolving AI technologies while fostering an environment conducive to innovation.

Gov. Gavin Newsom’s administration has also been proactive on AI. The governor issued an executive order last September to prepare for the impacts of AI, and his office released a report on the potential benefits and harms of AI.

SB 1047 represents a significant step in California’s regulatory approach to AI, and its outcome is expected to impact national and global AI policy. The Assembly’s vote on Aug. 20 will be a critical moment in shaping the future of AI regulation in the state.

About the author

John K. Waters is the editor-in-chief of several Converge360.com sites, focusing on high-end development, AI, and future technologies. He has been writing about cutting-edge technology and Silicon Valley culture for more than two decades and has written more than a dozen books. He also co-wrote the documentary Silicon Valley: A 100-Year Renaissance, which was broadcast on PBS. He can be contacted at [email protected].