
FCC to consider rules for AI-generated political ads on TV and radio, but they may not apply to streaming

FILE PHOTO: The Federal Communications Commission (FCC) logo is seen before an FCC hearing on net neutrality in Washington, February 26, 2015. Photo: Yuri Gripas/Reuters

NEW YORK (AP) – The head of the Federal Communications Commission introduced a proposal Wednesday that would require political advertisers to disclose when they use artificial intelligence-generated content in broadcast television and radio ads.

If adopted by the five-member commission, the proposal would add a layer of transparency that many lawmakers and artificial intelligence experts have called for as rapidly advancing generative AI tools produce lifelike images, videos and audio clips that threaten to mislead voters in upcoming U.S. elections.

But the nation’s top telecommunications regulator has authority only over television, radio and some cable providers. The new rules, if adopted, would not cover the surge of political advertising on digital and streaming platforms.

“As artificial intelligence tools become increasingly available, the Commission wants to ensure that consumers are fully informed about the use of this technology,” FCC Chair Jessica Rosenworcel said in a statement Wednesday. “Today I shared a proposal with my colleagues that makes clear that consumers have the right to know when artificial intelligence tools are used in the political ads they see, and I hope they will act quickly on this issue.”

The proposal marks the second time this year that the Commission has taken significant steps to address the growing use of artificial intelligence tools in political communication. The FCC earlier confirmed that existing law prohibits the use of AI voice-cloning tools in robocalls. That ruling came after robocalls during the New Hampshire primary used voice-cloning software to imitate President Joe Biden in an effort to discourage voters from going to the polls.

READ MORE: Net neutrality restored as FCC votes to regulate internet providers

If adopted, the proposal would require broadcasters to verify with political advertisers whether their content was generated using AI tools, such as text-to-image generators or voice-cloning software. The FCC has authority over political advertising on broadcast channels under the Bipartisan Campaign Reform Act of 2002.

Commissioners still must discuss several details of the proposal, including whether broadcasters would have to disclose AI-generated content in an on-air message or only in a TV or radio station’s political files, which are public. They also will have to agree on a definition of AI-generated content, a challenge that has grown more difficult as retouching tools and other AI enhancements become embedded in all kinds of creative software.

Rosenworcel hopes to have the rules in place before the November election.

Jonathan Uriarte, a spokesman and policy adviser for Rosenworcel, said she wants to define AI-generated content as content generated using computational technology or machine-based systems, “including, but not limited to, AI-generated voices that sound like human voices, and AI-generated actors that appear to be human actors.” He said her draft definition will likely change through the regulatory process.

The proposal comes as political campaigns already are experimenting heavily with generative AI, from building chatbots for their websites to generating videos and images with the technology.

Last year, for example, the Republican National Committee released an entirely AI-generated ad meant to show a dystopian future under another Biden administration. It used fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets and waves of immigrants spreading panic.

READ MORE: FCC fines insurance telemarketers $225 million over 1 billion robocalls

Political campaigns and rogue actors have also used highly realistic images, videos and audio content to trick, mislead and disenfranchise voters. In India’s recent elections, AI-generated videos misrepresenting Bollywood stars as criticizing the prime minister exemplify a trend that AI experts say is emerging in democratic elections around the world.

Rob Weissman, president of the advocacy group Public Citizen, said he was glad to see the FCC “increasing efforts to proactively respond to threats from artificial intelligence and deepfakes, especially to election integrity.”

He urged the FCC to require on-air disclosures for the public’s benefit and chided another agency, the Federal Election Commission, for delays as it also weighs whether to regulate AI-generated deepfakes in political ads.

As generative AI has become cheaper, more accessible and easier to use, bipartisan groups of lawmakers have called for legislation to regulate the technology in politics. With just over five months until the November election, however, no legislation has yet passed.

A bipartisan bill introduced by Sen. Amy Klobuchar, a Democrat from Minnesota, and Sen. Lisa Murkowski, a Republican from Alaska, would require political ads to include a disclaimer if they are created or significantly altered using artificial intelligence. It would require the Federal Election Commission to respond to violations.

Uriarte said Rosenworcel recognizes that the FCC’s ability to respond to AI threats is limited, but that she wants to do what she can before the 2024 election.

“This proposal offers the maximum transparency standards that the Commission can enforce within its jurisdiction,” Uriarte said. “We hope government agencies and lawmakers can build on this important first step in establishing a transparency standard for the use of artificial intelligence in political advertising.”