
California moves to combat AI discrimination and deepfakes

SACRAMENTO, Calif. — As corporations increasingly weave artificial intelligence technologies into Americans’ daily lives, California lawmakers want to build public trust, fight algorithmic discrimination and ban deepfakes related to elections or pornography.

Efforts in California – home to many of the world’s largest artificial intelligence companies – could pave the way for artificial intelligence regulations nationwide. The United States is already behind Europe in regulating artificial intelligence to reduce risks, lawmakers and experts say, and the rapidly evolving technology is raising concerns about job losses, disinformation, privacy breaches and automation bias.

A number of proposals were introduced last week to address those concerns, but they must gain approval from the other chamber before reaching Gov. Gavin Newsom’s desk. The Democratic governor has promoted California as a first mover and regulator, saying the state could soon deploy generative artificial intelligence tools to ease traffic congestion, improve road safety and provide tax guidance, even as his administration considers new rules against artificial intelligence discrimination in hiring practices.

With strong privacy laws already in place, California is better positioned to enact effective regulations than other states with strong artificial intelligence interests, such as New York, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.

“In order to pass an AI bill, we need a data privacy law,” Rice said. “We still kind of pay attention to what New York is doing, but I would lean more towards California.”

California lawmakers said they are eager to act, citing hard lessons learned from failing to rein in social media companies when they might have had the chance. At the same time, they want to keep attracting artificial intelligence companies to the state.

Here’s a closer look at California’s proposals:

COMBATING AI DISCRIMINATION AND BUILDING PUBLIC TRUST

Some companies, including hospitals, are already using artificial intelligence models to make decisions about employment, housing and medical options for millions of Americans without much oversight. According to the U.S. Equal Employment Opportunity Commission, as many as 83% of employers use artificial intelligence when hiring employees. How these algorithms work remains largely a mystery.

One of the most ambitious AI efforts in California this year would pull back the curtain on these models by establishing an oversight framework aimed at preventing bias and discrimination. It would require companies that use AI tools to make consequential decisions to inform those affected when AI is used. AI developers would have to routinely conduct internal assessments of their models for bias. The state’s attorney general would have the authority to investigate reports of discriminatory modeling and impose fines of $10,000 per violation.

Artificial intelligence companies may soon be required to disclose what data they use to train their models.

WORK AND LIKENESS PROTECTION

Inspired by a months-long strike by Hollywood actors last year, a California lawmaker wants to protect workers from being replaced by artificial intelligence-generated clones, a major sticking point in contract negotiations.

The proposal, backed by the California Federation of Labor, would allow performers to opt out of existing contracts if vague language allowed studios to freely use artificial intelligence to digitally clone their voices and likenesses. It would also require performers to be represented by a lawyer or union representative when signing new “voice and likeness” contracts.

California could also introduce penalties for digitally cloning the dead without the consent of their estate, citing the case of a media company that produced a fake, hour-long AI-generated comedy special intended to recreate the style and material of the late comedian George Carlin without the consent of his estate.

REGULATING POWERFUL GENERATIVE AI SYSTEMS

Generative AI poses real-world threats as it creates new content such as text, audio and photos in response to prompts. That’s why lawmakers are considering guardrails for “extremely large” artificial intelligence systems that could potentially provide instructions for causing disasters, such as building chemical weapons or aiding in cyberattacks, resulting in at least $500 million in damage. Such models would be required, among other things, to have a built-in “kill switch.”

The proposal, backed by some of the world’s most renowned artificial intelligence researchers, would also create a new state agency to oversee developers and ensure best practices, including for even more powerful models that don’t yet exist. The attorney general would also have the ability to take legal action in the event of violations.

NO DEEPFAKES INVOLVING ELECTIONS OR PORNOGRAPHY

A bipartisan coalition is seeking to make it easier to prosecute people who use artificial intelligence tools to create images of child sexual abuse. Current law does not allow district attorneys to prosecute people who possess or distribute artificial intelligence-generated child sexual abuse images if the material does not depict a real person, law enforcement officials said.

Many Democratic lawmakers also support a bill targeting election deepfakes, citing concerns after artificial intelligence-generated robocalls mimicked President Joe Biden’s voice ahead of the recent presidential primary in New Hampshire. The proposal would ban “materially misleading” election-related deepfakes in political mail, robocalls and television ads 120 days before and 60 days after Election Day. Another proposal would require social media platforms to label any election-related posts created by artificial intelligence.