
Anthropic says it’s closer to supporting California’s AI bill after lawmakers made some tweaks. Here’s what changed.

  • Anthropic says it is getting closer to supporting a bill in California to regulate artificial intelligence.

  • The bill, introduced by Senator Scott Wiener, aims to discourage the creation of unsafe AI models.

  • The changes include lighter penalties, clearer language and increased reporting requirements.

AI company Anthropic is now cautiously backing SB 1047, a controversial bill that would regulate AI development in California.

In a letter to California Gov. Gavin Newsom, Anthropic CEO Dario Amodei said the bill’s “benefits likely outweigh the costs.” But he added that “we don’t know for sure, and there are still aspects of the bill that we find troubling or unclear.”

Anthropic’s cautious approval came just a month after the company proposed a series of amendments to SB 1047 — which was first introduced in the state Legislature by Sen. Scott Wiener in February.

In a letter to state leaders in July, Anthropic called for a greater focus on deterring companies from building unsafe models, rather than enforcing strict laws before catastrophic incidents occur. The company also suggested that AI developers be allowed to set their own safety testing standards rather than following state-mandated regulations.

The revised version of the bill released Aug. 19 includes several modifications. First, it limits the scope of civil penalties for violations that do not result in harm or imminent risk, according to a post by Nathan Calvin, senior policy adviser at the Center for AI Safety Action Fund, which co-sponsored the bill and has worked with Anthropic since it was first introduced.

There are also some key language changes. While the bill originally called for companies to demonstrate “reasonable assurance” against potential harm, it now calls for them to demonstrate “reasonable care,” which “helps clarify the bill’s focus on testing and risk mitigation,” according to Calvin. It’s also “the most common standard that exists in tort liability,” he wrote.

The updated version also shrank the proposed government body that would enforce AI regulations: what was once a new agency called the Frontier Model Division is now a board, the Board of Frontier Models, housed within the existing Government Operations Agency. That board has nine members, up from five. At the same time, reporting requirements for companies have increased: companies must publicly publish safety reports and send unredacted versions to the state attorney general.

The updated bill removes the penalty for perjury, thereby eliminating criminal liability for companies and imposing only civil liability. Companies are now required to file "statements of compliance" with the state attorney general, rather than "certificates of compliance" with the Frontier Model Division.

Amodei said the bill "now seems to us to be halfway between our suggested version and the original bill." He wrote that the benefits of developing publicly available safety and security protocols, mitigating downstream harms, and forcing companies to seriously question the risks of their technologies would "significantly improve" the industry's ability to combat threats.

Anthropic bills itself as an "AI safety and research company" and has received about $4 billion in funding from Amazon. In 2021, a group of former OpenAI employees, including Dario Amodei and his sister Daniela, founded the company because they believed AI would have a dramatic impact on the world and wanted to build a company that would ensure it was aligned with human values.

Wiener was “really pleased to see the level of detail that Anthropic has put into their ‘letter of support, if amended,’” Calvin told Business Insider. “I really hope that this encourages other companies to engage meaningfully and try to approach some of these issues with nuance and understand that this kind of false trade-off between innovation and security is not going to be in the long-term best interest of this industry.”

Other companies that would be affected by the new legislation are more hesitant. OpenAI sent a letter to California state leaders this week opposing the bill. One of the main concerns was that the new rules would drive AI companies out of California. Meta also argued that the bill “actively discourages the release of open-source AI.”

Read the original article on Business Insider