
Lawmakers Think They’re Smart Enough to Control AI

SACRAMENTO – Compilations of quotations are filled with snark directed at lawmakers, as deep thinkers complain about politicians’ cowardice, venality and opportunism. Journalist H.L. Mencken complained that “a good politician is as unthinkable as an honest burglar.” Napoleon Bonaparte quipped that “in politics, stupidity is no obstacle.” I have known good, honest and wise politicians. My chief complaint is their general lack of humility.

Not so much personal humility as humility about the limits of what government can accomplish. California is notoriously absurd on this front, as our top politicians routinely make lofty pronouncements. Their latest ban will change the trajectory of Earth’s climate! They will stand up to greed and the Other Evil Forces! Every one of them aspires to sound like John F. Kennedy.

Sure, governments can sometimes accomplish something worthwhile, but those that make the most elaborate promises seem the least capable of delivering basic services. My local utility company promises only to provide electricity, and it delivers on that promise almost every day. The government promises to end poverty but can’t deliver unemployment benefits without sending billions to fraudsters.

It is against this backdrop that I present the latest arrogance: Senate Bill 1047, which is sitting on the governor’s desk. It is the “first in the nation,” “groundbreaking” effort by the Legislature to take control of artificial intelligence before, like in the movie “Terminator,” it becomes self-aware. I will always remember gubernatorial candidate Arnold Schwarzenegger’s visit to The Orange County Register during a break in filming the 2003 sequel, but Hollywood is not usually a model for Sacramento.

According to the Senate analysis, the bill “requires developers of high-performance AI models and those providing computing power to train such models to implement appropriate safeguards and policies to prevent serious harm” and “establishes a state entity to oversee the development of these models.”

Once, while testifying on a bill in another state that would have taxed and regulated vaping devices, I watched as lawmakers examined sample vaporizers with obvious surprise. They had little understanding of how these relatively simple devices worked.

How many California lawmakers really understand the nature of AI models, which are among the most complex (and rapidly evolving) technologies in existence? “I’ll admit I don’t know much about AI … very little when it comes to the facts … I like the idea of doing it badly better than having no one do anything,” Assemblyman Jim Wood, D-Healdsburg, said before voting for the bill.

Do you think lawmakers will protect us from unforeseen “critical harms” caused by technology that is almost incomprehensibly complex and evolving in ways we have not yet grasped? Government is sometimes effective at twisting new regulatory tools into means of abusing our rights, but it rarely does much to protect us.

Some tech groups (including my employer, the R Street Institute) sent a letter to Gov. Gavin Newsom urging a veto. “SB 1047 seeks to limit potential ‘critical harm,’ which includes ‘creating or using chemical, biological, radiological, or nuclear weapons in a manner that would result in mass casualties,’” it argued. “These harms are theoretical. There are no real-world examples of third parties abusing foundation models to cause mass casualties.”

Still, California lawmakers think they’re smart enough to stave off some fictional catastrophe they saw in a dystopian movie by imposing regulations that, say, require a “kill switch” (like an easy button!). They’ll create yet another bureaucracy whose regulators presumably understand the technology as well as its designers do. If they were that skilled, they’d be startup billionaires living in Mountain View, not government employees living in Orangevale.