Some industry players oppose California AI regulations

Some AI companies have asked for regulation. Now that it’s coming, some are angry.

Since last year, tech leaders from Meta CEO Mark Zuckerberg to OpenAI CEO Sam Altman have gone before Congress to debate the regulation of artificial intelligence, warning of the technology's potentially catastrophic consequences and, in some cases, asking to be regulated.

California lawmakers have responded with dozens of proposals, including bills to outlaw unfairly biased AI decision-making, to curb the technology's ability to sway elections with false and misleading information, and to require insight into how models are trained.

“As we’ve seen with the creation and growth of the internet and social media, we can’t count on this industry to self-regulate,” said Teri Olle, director of Economic Security California Action, which co-sponsored one of the bills. “They simply won’t put the public interest ahead of profits — and they’ve proven that time and time again.”

But now, as those proposals advance, the companies are protesting that holding them liable for how others use the technology they develop would stifle innovation and drive a billion-dollar industry out of California.

Nowhere is the opposition clearer than in a recent letter from Meta, Facebook's parent company, to state Sen. Scott Wiener, a San Francisco Democrat, protesting SB 1047, his signature attempt to impose some accountability on big AI developers.

The bill focuses on future versions of the largest AI models, those that would cost $100 million or more to train, a scale no current model has reached. It would allow the state attorney general to sue developers whose models cause mass harm, though it does not give private citizens the right to sue AI companies. It would also require safety testing of those models to prevent foreseeable harms, such as their use to create biological weapons or shut down the power grid.

“If a company decides to impose such a serious risk on society without good justification, they should be willing to take responsibility for the consequences,” Nathan Calvin, senior counsel at the Center for AI Safety Action Fund, a co-sponsor of the bill, said in an email. “That is an incredibly fair and reasonable thing to ask of big tech companies like Meta.”

But Meta said in its letter that Wiener’s bill places “disproportionate burdens on model developers,” who could be held accountable for how someone else uses their technology. Meta is the creator of the Llama family of AI models, released under an “open source” approach that allows any company or developer to reuse them free of charge as chatbots or for other purposes.

Another influential opponent of the bill, the Silicon Valley venture capital firm Andreessen Horowitz, launched a website warning of the potential harms of SB 1047, from chilling investment to destroying the open-source startup ecosystem.

Andreessen Horowitz partner Anjney Midha said in a published interview that proving a model is completely safe would be impossible, and that the bill would expose companies and developers to enormous risk because it does not clearly define what conduct would violate its provisions.

The firm’s co-founder, Marc Andreessen, recently took the stage at an AI event at Stanford University to criticize Wiener’s bill, saying the only way to truly regulate AI models, especially open-source software, would be to enforce such rules globally, at the risk of war.

Wiener said the goal is not to regulate every AI model everywhere, only the biggest ones yet to be developed. The liability the bill creates is unusually narrow, he said, and is aimed at the most disastrous uses of AI, outcomes for which developers would likely face legal liability anyway. The bill, he said, seeks to prevent those harms in the first place through safety testing.

He also said Andreessen Horowitz had spread “misinformation” about the bill, including the claim that developers could be thrown in jail, which he called “completely false,” noting that a company could face criminal liability only for lying about its safety testing.

Despite the focus on the largest AI models and companies, smaller firms such as Benchmark Labs in San Diego, which builds AI-powered weather forecasting technology, worry that they could face liability down the road. CEO Carlos Gaitan said at a news conference that his company is not currently covered by Wiener’s bill, but that its weather models could be as they grow in size and complexity.

“If an arsonist gets my forecast and, God forbid, decides to set a fire, I would be liable for that,” Gaitan said.

From Wiener’s perspective, his bill doesn’t affect any existing models and would regulate only large future ones. AI developers signed a White House pledge last year to develop the technology safely, he noted, and he is simply asking them to keep their word. “It can’t just be some opaque voluntary compliance,” Wiener said.

“Let’s do the safety assessment up front” to head off such harms in the first place, he said.

Meta and Andreessen Horowitz have warned that if the bill becomes law, companies would leave California to avoid having to comply.

Wiener said he has met with representatives of both companies and other tech industry players and has amended the bill in response, including clarifying that a company would no longer be liable once another developer had modified its model to the point of causing the harm.

The threats to leave the state ring hollow, he said, because the bill covers any company doing business in California, not just those headquartered there. Companies made similar threats when the state’s data privacy law and other regulations passed, but they didn’t follow through, Wiener said.

Whether Meta or other companies should be liable for what is done with their products mirrors the debate over whether social media companies should be liable for illegal activity on their platforms. In most cases, they aren’t. But technology that causes harm shouldn’t be treated any differently than a car company whose seat belts fail, said Ahmed Banafa, a professor at San Jose State University.

By sending the letter, Meta signaled that it was “trying to avoid spending more time and money testing” its models, he said. When a defective car causes injuries, the manufacturer, not the driver, is liable.

Meta, however, is effectively arguing that “we are not responsible if someone else uses it in a harmful way,” Banafa said.

Andreessen Horowitz’s Midha also reached for a car analogy, one that illustrates the divide between the two sides. In his view, Wiener’s bill amounts to “holding car manufacturers liable for any accident caused by a driver who modified their car.”

Wiener dismissed some of the objections raised by Andreessen Horowitz as “extreme and melodramatic,” describing his bill as light-touch regulation that prohibits nothing and does not require a license to train the large AI models of the future.

“We are simply requiring large labs to perform the safety tests they have publicly committed to,” he said.

(c)2024 San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.