
Government delays in regulating deepfakes like Elon Musk’s Harris ad

Elon Musk recently made headlines when he posted a deepfake video of Vice President Kamala Harris with audio manipulated to make it sound like she was calling herself “the top diversity pick” who doesn’t know “the first thing about running a country.” A month earlier, Anthony Hudson, a Republican congressional candidate in Michigan, posted a TikTok using the AI-generated voice of Dr. Martin Luther King Jr., making it sound as if King had returned from the dead to endorse him. In January, President Joe Biden’s voice was replicated using AI to send a fake robocall to thousands of people in New Hampshire, urging them not to vote in the state’s primary the following day.

AI experts and lawmakers are sounding the alarm, demanding more regulation as AI is used to fuel disinformation and misinformation. Now, three months before the presidential election, the United States is ill-prepared for the potential influx of false content headed our way.

Digitally altered images—also known as deepfakes—have been around for decades, but thanks to generative AI, they’re now exponentially easier to make and harder to detect. As the threshold for creating deepfakes has been lowered, they’re now being produced at scale and increasingly difficult to regulate. To make matters even more difficult, government agencies are at odds over when and how to regulate the technology—if at all—and AI experts fear that failure to act could have devastating effects on our democracy. Some officials are proposing basic regulations that would disclose when AI is being used in political ads, but Republican political appointees are standing in the way.

“Whenever you have disinformation or misinformation interfering in elections, we have to imagine that this is a form of voter suppression,” says Dr. Alondra Nelson. Nelson was deputy director and acting director of Joe Biden’s White House Office of Science and Technology Policy and led the creation of the AI Bill of Rights. She says AI disinformation “is keeping people from having a reliable information environment in which to make decisions about pretty important issues in their lives.” Rather than keeping people from going to the polls to vote, she says, this new type of voter suppression is “the insidious, slow erosion of people’s trust in the truth,” which affects their confidence in the legitimacy of institutions and government.

Nelson says the fact that Musk’s deepfake post is still online is proof that we can’t count on companies to follow their own policies on disinformation. “There have to be clear guardrails, clear boundaries of what is acceptable and what is not acceptable for individual actors and companies, and the consequences of that behavior.”

Many states have passed laws targeting AI-generated deepfakes in elections, but federal regulations are harder to come by. This month, the Federal Communications Commission is accepting public comments on the agency’s proposed rules that would require advertisers to disclose when AI technology is used in political ads on radio and television. (The FCC has no jurisdiction over online content.)

Since the 1930s, the FCC has required TV and radio stations to keep records of who buys campaign ads and how much they paid. Now, the agency is proposing to add a question asking whether artificial intelligence was used in the production of the ad. The proposal would not ban the use of AI in ads; it would simply require disclosure of whether it was used.

“We have this nationwide tool that’s been around for decades,” FCC Chairwoman Jessica Rosenworcel told Rolling Stone in a telephone interview. “We decided now was a good time to try to modernize it in a really simple way, when I think a lot of voters just want to know: Are you using this technology? Yes or no?”

Rosenworcel says there’s a lot of work to be done when it comes to AI and disinformation. She points to the fake Biden robocall, which the FCC responded to by invoking the Telephone Consumer Protection Act of 1991, which restricts the use of artificial voices in telephone calls. The FCC then cooperated with the New Hampshire attorney general, who brought criminal charges against the man who created the robocall.

“You have to start somewhere, and I don’t think we should let perfection be the enemy of good,” Rosenworcel says. “I think building on a foundation that’s been around for decades is a good place to start.”

Republican Federal Election Commission Chairman Sean Cooksey opposes the FCC’s latest proposal, saying it will “create chaos” as the election approaches.

“Every American should be alarmed that the Democratic-controlled FCC is moving forward with its radical plan to change the rules on political ads just weeks before the general election,” Cooksey said in a written statement to Rolling Stone. “These vague rules would not only violate the jurisdiction of the Federal Election Commission, but they would also sow chaos in political campaigns and confuse voters before they head to the polls. The FCC should abandon this misguided proposal.”

The FEC has been at an impasse on various issues for years, as Republicans on the commission have sought to block new regulations on just about everything.

The watchdog group Public Citizen has petitioned the FEC to take regulatory action on AI, and Cooksey previously said the agency would provide an update in early summer.

Cooksey told Axios that the FEC will not take action to regulate AI in political ads this year, and the commission is scheduled to vote to close Public Citizen’s petition on Aug. 15. “The FEC would be better off waiting for guidance from Congress and studying how AI is actually being used on the ground before considering any new regulations,” Cooksey told the outlet, adding that the agency “will continue to enforce its existing rules against deceptive misrepresentations of campaign authority, regardless of the medium.”

AI experts say action is urgently needed. “We’re not going to be able to solve all of these problems,” Nelson says, adding that there’s no one-size-fits-all solution to AI deepfakes. “I think we often approach the AI problem with that lens, instead of saying, unfortunately, there’s always going to be crime and we can’t stop it, but we can add friction. We can make sure that people face consequences on the other side of their bad behavior, which we hope can mitigate it.”

Rep. Yvette Clarke (D-N.Y.) has been calling for Congress to pass AI legislation for years. A bipartisan bill targeting nonconsensual AI deepfake porn recently passed the Senate.

“It was inevitable that these new technologies, particularly artificial intelligence that can distort images and voices, would at some point be weaponized to mislead, disinform and deceive the American people,” Clarke said.

“[There’s] no way to really differentiate between a made-up image and something that’s fact and reality, [which] puts Americans at a disadvantage, especially in these hard-hitting campaigns.”

Clarke introduced the REAL Political Ads Act in May 2023 to require campaign ads to disclose and digitally tag videos or images in ads created by generative AI. “We got quite a few cosponsors on the bill, but it didn’t get through the [Republican] majority on the Energy and Commerce Committee,” Clarke says.

“It’s a wide open field for those who want to create disinformation and misinformation right now because there’s nothing to regulate it,” Clarke says. She emphasizes that she’s also working with the Congressional Black Caucus on this, given that marginalized communities and minorities are often disproportionately targeted by disinformation. “We’re behind here in the United States, and I’m doing everything I can to move us into the future as quickly as we can.”

Dr. Rumman Chowdhury led ethical AI for X (formerly Twitter) before Musk took over and is now the U.S. science envoy for AI. She says the broader problem is that American trust in government, elections, and communications institutions is at a dangerously low point in history. She says the FEC could further damage its credibility by failing to act.

“We’re in a crisis about the institutions and the government that we should trust, and they’re going to sit on their hands and say, ‘We don’t know, should we do something?’” Chowdhury says. “If they’re not seen to be doing something about deepfakes, it could further tarnish their image in the eyes of the American people.”

As for Musk specifically sharing the Harris deepfake, Chowdhury says she doesn’t know why people are so surprised he did it. Musk has turned X (formerly Twitter) into a disinformation machine since he took over the platform.

“Is it scary? Absolutely,” Chowdhury says. “But it’s a bit like we’re people at the Leopards Eating People’s Faces Party. Are you going to be mad because this guy is doing exactly what he said he would do? If you’re mad, then literally don’t be on Twitter. Or know that if you’re on the platform, you’re complicit in this guy manipulating the course of democracy.”