
California’s AI Bill Will Put a Burden on Tech Startups

Compilations of quotations are full of sarcasm aimed at legislators, as deep thinkers bemoan the cowardice, venality, and opportunism of politicians. The journalist H.L. Mencken complained that “a good politician is as unthinkable as an honest burglar.” Napoleon Bonaparte is supposed to have quipped that “in politics, stupidity is no handicap.” I have known good, honest, and wise politicians, but my chief complaint is their general lack of humility.

Not so much personal humility as a sense of the limitations of what government can accomplish. California is notoriously absurd on this front, as our top politicians routinely make lofty statements. Their latest ban will change the trajectory of Earth’s climate patterns! They will stand up to greed and the Other Evil Forces! Every one of them aspires to sound like John F. Kennedy.

Sure, governments can sometimes accomplish something worthwhile, but those with the most elaborate promises seem the least capable of delivering basic services. My local utility company promises only to provide electricity, and it delivers on that promise almost every day. The government promises to end poverty but can’t deliver unemployment benefits without sending billions to fraudsters.

It is against this backdrop that I present the latest arrogance: Senate Bill 1047, which is sitting on the governor’s desk. It is the “first in the nation,” “groundbreaking” attempt by the legislature to take control of artificial intelligence before, as in the movie “Terminator,” the AI becomes self-aware. I will always remember gubernatorial candidate Arnold Schwarzenegger’s visit to the Orange County Register during a break in filming the 2003 sequel, but Hollywood isn’t usually a role model for Sacramento.

According to the Senate analysis, the bill “requires developers of powerful AI models and those who provide the computing power to train such models to put in place appropriate safeguards and policies to prevent critical harm” and “establishes a state entity to oversee the development of these models.”

While testifying on another state’s bill that would tax and regulate vaping devices, I once watched lawmakers examine sample devices and stare at them with obvious bewilderment. They had little understanding of how these relatively simple devices worked. How many California lawmakers really understand the nature of AI models, which are among the most complex (and rapidly evolving) technologies in existence?

Do you think lawmakers will protect us from unforeseen “critical harms” caused by an almost incomprehensibly complex technology in ways we haven’t yet grasped? If you think so, you may have too much trust in government—and too little understanding of the clumsy, backward way it almost always works. Government is sometimes effective at twisting new regulatory tools to abuse our rights, but it rarely manages to protect us.

Some tech groups (including my employer, the R Street Institute) sent a letter to Gavin Newsom urging a veto. “SB 1047 seeks to limit potential ‘critical harm,’ which includes ‘creating or using chemical, biological, radiological, or nuclear weapons in a manner that would result in mass casualties,’” it argued. “These harms are theoretical. There are no real-world examples of third parties abusing foundation models to cause mass casualties.”

Still, California lawmakers believe they are smart enough to stave off some fictional catastrophe they saw in a dystopian movie by imposing regulations that, say, require a “kill switch” (as if it were that easy!). They’ll create yet another bureaucracy in which regulators presumably understand the technology at the level of its designers. If they were that skilled, they’d be startup billionaires living in Mountain View, not government employees living in Orangevale.

While the benefits of such sweeping and vague regulation are hard to fathom, the downsides are pretty clear — especially in a state that relies so heavily on the tech industry. California has been losing tech companies and jobs to other states for several years, but AI has been a burgeoning bright spot. Is it smart to push the industry out? AI designers can easily build their businesses in other communities (Austin or Seattle) with large tech workforces.

To their credit, lawmakers amended the bill to remove some troubling provisions that could have exposed AI companies to state attorney general lawsuits and even potential criminal charges, but the bill will still leave the industry dazed, confused, and vulnerable to incalculable penalties. This is an industry that relies heavily on startups, yet the legislation will place particular burdens on startups that lack the compliance and legal resources to navigate the state-imposed thicket.

“The entire framework is based on the assumption that these advanced models will pose a threat, a highly contested assumption,” wrote Will Rinehart of the American Enterprise Institute. “Most importantly, if these AI models are truly dangerous … then California should not be regulating them at all—that should be the province of the federal government.” That analysis makes sense, but who believes Newsom has the humility to listen to it?

This column was first published in The Orange County Register.