NAB: FCC’s AI Rules for Political Ads Are ‘Burdensome’ for Broadcasters

On Friday, the NAB submitted comments on the FCC’s proposed rules requiring broadcasters to disclose the use of artificial intelligence in political ads. Radio Ink looked at the lengthy, 74-page document to explain the organization’s objections.

The crux of the argument is that the FCC has no authority under the Communications Act to impose the proposed disclosure requirements. Specifically, the act “imposes certain requirements on federal applicants to qualify for the lowest unit fee but does not mandate (or permit the FCC to impose) any disclosures by broadcasters.”

The NAB emphasizes that “the fact that Congress has granted carefully defined, limited authority over certain aspects of political broadcasting does not mean that the Commission may act alone and adopt additional requirements regarding political broadcasting if it deems it best.”

The filing said that even if the FCC had such authority, the proposed rule would be “arbitrary and capricious under the Administrative Procedure Act.” The APA requires agencies to “examine the relevant data and provide a satisfactory explanation for their actions, including a reasonable relationship between the facts found and the choice made.” The rule would require disclosures for all political advertising that uses AI, regardless of whether it is deceptive. The NAB argues that such an approach could lead recipients to distrust all ads labeled as AI or to ignore the disclosures altogether, and thus would not effectively solve the problem the FCC seeks to address.

The NAB also argues that the rules would violate First Amendment protections by imposing content-based regulation on political speech. The filing states that “the proposed rules are content-based because, on their face, they apply only to political ads with AI-created content and not to any other ads or programs with or without AI content.”

Because the regulations are content-based and mandate speech, NAB says the “proposed regulations will be subject to strict scrutiny that requires the government to prove that they ‘advance a compelling interest and are narrowly tailored to achieve that interest.’”

“Labeling a candidate or issue-related ad as AI-generated will automatically make that ad more suspicious in the public’s eyes than other political ads or other content without such a designation, regardless of the veracity of the ad or the use of AI in its creation.”

Citing Supreme Court precedent, the NAB argues that the only permissible basis for restricting political speech is to prevent quid pro quo corruption or the appearance of it. The NAB points out that the FCC has not identified instances where AI-generated deepfake political ads have aired on broadcast stations, and thus has not demonstrated a concrete harm that must be addressed.

NAB says the proposed rules will impose significant operational burdens on broadcasters, including having to verify AI content in ads, often without sufficient information. It explained in a statement: “It would be very burdensome and time-consuming for broadcasters to try to discover the person or people with personal knowledge of how much advertising has been produced and whether AI has been used.”

These requirements could delay the airing of political ads, which would jeopardize the right of candidates and political speakers to reach voters during key election periods.

NAB emphasizes that existing mechanisms already address concerns about deceptive political ads, including AI-generated content. It notes that “Broadcasters have decades of experience dealing with political ads… These station-specific processes effectively — and usually quickly — resolve complaints about problem ads.”

In addition, several states have enacted or are considering legislation to address the deceptive nature of AI-generated content in political advertising, targeting the creators of such content rather than the broadcasters. The NAB notes that “many states have already enacted legislation to regulate the use of AI or other synthetic media to mislead audiences in political communications, and other states and the U.S. Congress are considering legislative action.”

It also suggests that the FEC, not the FCC, has the authority under the Federal Election Campaign Act to police false political ads, a point that has previously been disputed between the two agencies.

As stated in the NAB’s opening comments, “NAB strongly encourages the Commission to close this proceeding without moving forward. While the Commission’s staff has worked diligently to grapple with the deepfake problem it has identified, the agency is severely limited by the complete or near-complete lack of congressional authority. NAB urges the Commission to pursue only holistic solutions that do not create new problems by trying to solve others. To the extent that there is a problem to solve, Congress, not the FCC, can and should take the lead.”