
OpenAI says controversial California AI bill will hurt innovation

By Shirin Ghaffary | Bloomberg

OpenAI is opposing a California bill that would impose new safety requirements on artificial intelligence companies, joining a chorus of tech leaders and politicians who have recently come out against the controversial legislation.

The San Francisco-based startup said the bill would hurt innovation in the AI industry and argued that regulation on the matter should come from the federal government, not states, according to a letter sent to state Sen. Scott Wiener’s office on Wednesday.

The letter also raised concerns that the bill, if passed, could have “broad and significant” implications for U.S. competitiveness in artificial intelligence and national security.

SB 1047, introduced by Wiener, aims to establish what his office called “common sense safety standards” for companies that create large AI models above a certain size and cost threshold. The bill, which passed the state Senate in May, would require AI companies to take steps to prevent their models from causing “critical harm,” such as enabling the development of biological weapons that could cause mass casualties or contribute to financial losses exceeding $500 million.

Under the bill, companies would have to provide a way to turn off AI systems, take “reasonable care” to ensure AI models don’t cause disaster, and disclose a statement of compliance to the California attorney general. If companies fail to comply, they could be sued and face civil penalties.

The bill has faced fierce opposition from many big tech companies, startups and venture capitalists who say it goes too far for a technology that is still in its infancy and could stifle technological innovation in the state.

Some critics of the bill have expressed concerns that it could drive AI companies out of California. OpenAI echoed those concerns in a letter to Wiener’s office.

“The AI revolution is just beginning, and California’s unique status as a global leader in AI is driving the state’s economic momentum,” Jason Kwon, OpenAI’s chief strategy officer, wrote in the letter. “SB 1047 will threaten that growth, slow the pace of innovation, and cause California’s world-class engineers and entrepreneurs to leave the state for greater opportunities elsewhere.”

OpenAI has suspended talks to expand its San Francisco offices due to concerns about regulatory uncertainty in California, according to a person familiar with the company’s real estate plans who asked not to be identified because the discussions are private.

In a statement, Wiener defended the proposed legislation and said OpenAI’s letter “does not criticize a single provision in the bill.” He also said the argument that AI talent is leaving the state “doesn’t make sense” because the law would apply to all companies doing business in California, regardless of where their offices are located.

Wiener’s office cited two prominent national security experts who publicly supported the bill.

“In summary, SB 1047 is a very sensible bill that requires large AI labs to do what they have already committed to doing, namely testing their large models for catastrophic safety risks,” Wiener said. “SB 1047 is well-calibrated to what we know about foreseeable AI risks and deserves passage.”

Critics say the bill will stifle innovation by forcing companies to submit detailed information about their models to the state government, and will deter smaller open-source developers from launching startups for fear of being sued.

Last week, in response to some of that opposition, Wiener amended the proposed rules to eliminate criminal liability for tech companies that fail to comply, added protections for smaller open-source model developers and removed the proposed new “Frontier Model Division.”

Previously, the bill would have held developers criminally liable, under penalty of perjury, for intentionally providing false information to the government about their safety plans.

OpenAI competitor Anthropic, known for its stronger focus on safety than its rivals, had previously said it would support the bill if some of these amendments were made.

Even with the amendments, however, the bill still had opponents, including former House Speaker Nancy Pelosi, who issued a statement calling it “ill-informed.”

A group of Democratic members of Congress also publicly opposed the bill.

According to state filings, OpenAI and other tech companies hired lobbyists to work on the bill. In its letter, OpenAI said it had been working with Wiener’s office on the bill for several months but ultimately decided not to support it.

“We must protect America’s advantage in AI with a set of federal policies — not state policies — that can provide transparency and certainty to AI labs and developers while preserving public safety,” according to the letter. OpenAI also said that having a clear federal framework “would help the United States maintain its competitive edge over countries like China and promote democratic governance and values around the world.”

OpenAI argued that federal agencies like the White House Office of Science and Technology Policy, the Department of Commerce and the National Security Council are better equipped to manage critical AI risks than California’s state-level agencies. The company said it supports several proposed pieces of federal legislation, such as the Future of AI Innovation Act, which would give congressional backing to a new U.S. AI Safety Institute.

“As I have said many times, I agree that ideally Congress would address this,” Wiener said in a statement. “However, Congress has not done so, and we are skeptical that Congress will. Under OpenAI’s argument about Congress, California would never have passed its data privacy law, and given Congress’s inaction, Californians would have no protection for their data.”

SB 1047 is set to be voted on in the California Assembly this month. If passed, it would eventually head to Gov. Gavin Newsom’s desk. While Newsom has not indicated whether he would veto the bill, he has publicly spoken about the need to promote AI innovation in California while mitigating its risks.