Deepfakes: Federal and state regulations aim to curb growing threat

From Pope Francis to Taylor Swift, deepfakes have already caused confusion in the public sphere; but while there are no federal laws specifically overseeing the technology, a patchwork of federal and state laws seek to regulate its use

Is it real, or is it a so-called deepfake? Deepfakes are simulated images, audio recordings or videos that have been convincingly altered or manipulated to falsely portray someone as saying or doing something that the person did not actually say or do.

For example, in March 2023, a fake photo of Pope Francis wearing a white down coat went viral on social media, confusing millions of viewers. In January 2024, fake sexually explicit photos of Taylor Swift circulated on social media, causing confusion among her millions of fans and the media.

These images and the artificial intelligence (AI) tools that create deepfakes have raised public awareness of the significant risks posed by the unauthorized creation, disclosure and dissemination of these digital forgeries, which may result in defamation, infringement of intellectual property (IP), violation of publicity rights, harassment, fraud, blackmail, election interference, and incitement to violence and social and civil unrest.

Deepfakes are typically created with a generative adversarial network (GAN), a deep learning technique in which two generative artificial intelligence (GenAI) neural networks compete against each other. One network, the generator, analyzes the input, extracts its key features and produces a synthetic image or recording; the other network, the discriminator, tries to detect whether that output is artificial, such as a manipulated voice recording.

The generator and the discriminator form a feedback loop: the generator produces increasingly convincing artificial output, and the discriminator becomes increasingly adept at detecting it. The loop repeats until the deepfake image or recording reaches the desired quality.
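The adversarial feedback loop described above can be sketched in miniature. The toy setup below is purely illustrative (the target distribution, one-parameter models, learning rate and step count are assumptions, not from the article): a one-parameter generator learns to turn random noise into samples resembling "real" data, while a logistic-regression discriminator learns to tell the two apart.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for "real" data: samples from N(4, 0.5).
def real_batch(n):
    return rng.normal(4.0, 0.5, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: an affine transform of Gaussian noise (parameters g_w, g_b).
g_w, g_b = 1.0, 0.0
# Discriminator: logistic regression on a scalar input (parameters d_w, d_b).
d_w, d_b = 0.1, 0.0

lr = 0.01
for step in range(4000):
    z = rng.normal(size=32)          # noise fed to the generator
    fake = g_w * z + g_b             # generator's synthetic samples
    real = real_batch(32)

    # Discriminator update: push scores on real data toward 1, on fakes toward 0.
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    grad_d_w = np.mean((p_real - 1) * real) + np.mean(p_fake * fake)
    grad_d_b = np.mean(p_real - 1) + np.mean(p_fake)
    d_w -= lr * grad_d_w
    d_b -= lr * grad_d_b

    # Generator update: push the discriminator's score on fakes toward 1
    # (the non-saturating generator loss), improving the forgeries each round.
    p_fake = sigmoid(d_w * fake + d_b)
    grad_fake = (p_fake - 1) * d_w   # d(loss)/d(fake), then chain rule below
    g_w -= lr * np.mean(grad_fake * z)
    g_b -= lr * np.mean(grad_fake)
```

After training, the generator's offset drifts toward the real data's mean: the feedback loop has taught it to produce output the discriminator can no longer reliably distinguish from real samples, which is exactly the dynamic that makes deepfakes progressively harder to detect.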

Federal legislation to combat deepfakes

There is currently no comprehensive federal legislation in the United States that prohibits or even regulates deepfakes. However, the Identifying Outputs of Generative Adversarial Networks Act requires the director of the National Science Foundation to support research on generative adversarial networks, including standards for measuring and detecting GAN outputs and any comparable techniques developed in the future.

Congress is considering additional legislation that, if passed, would regulate the creation, disclosure and spread of deepfakes, including:

    • The Deepfake Report Act of 2019, which would require the Science and Technology Directorate at the U.S. Department of Homeland Security to report at specified intervals on the state of digital content forgery technology.
    • The DEEPFAKES Accountability Act, which aims to protect national security against the threats posed by deepfake technology and to provide legal remedies for victims of harmful deepfakes.
    • The DEFIANCE Act of 2024, which would improve rights to relief for individuals affected by non-consensual activities involving intimate digital forgeries, and for other purposes.
    • The Consumer Protection from Fraudulent Artificial Intelligence Act, which would require the National Institute of Standards and Technology to establish task forces to facilitate and inform the development of technical standards and guidelines for identifying GenAI-created content, and to ensure that audiovisual content created or significantly modified by GenAI includes a disclosure attesting to its GenAI origin, and for other purposes.

States are enacting deepfake legislation

Additionally, several states have passed laws regulating deepfakes, including:

    • Texas SB 751 – Makes it a crime to fabricate a deceptive video intended to harm a candidate or influence the outcome of an election.
    • Florida SB 1798 – Criminalizes images created, altered or adapted by electronic, mechanical or other means to depict an identifiable minor engaging in sexual conduct.
    • Louisiana Bill 457 – Criminalizes deepfakes involving minors engaging in sexual conduct.
    • South Dakota SB 79 – Amends laws relating to the possession, distribution, and production of child pornography to include computer-generated child pornography, defined as any visual depiction of an actual minor that is created, adapted, or modified to depict that minor engaging in a prohibited sexual act; of an actual adult that is created, adapted, or modified to portray that adult as a minor engaging in a prohibited sexual act; or of a person indistinguishable from an actual minor, created using artificial intelligence or other computer technology capable of processing and interpreting specific input data to create a visual depiction.
    • New Mexico HB 182 – Amends and enacts sections of the New Mexico Campaign Reporting Act to add disclaimer requirements for advertisements containing misleading media and creates a crime of distributing or contracting with another person to disseminate misleading media.
    • Indiana HB 1133 – Requires certain campaign communications containing fabricated media to include a disclaimer. The legislation also allows a candidate featured in fabricated media that does not include the required disclaimer to bring a civil action against certain individuals.
    • Washington HB 1999 – Addresses fabricated intimate or sexually explicit images and depictions, providing civil and criminal remedies for victims of sexually explicit deepfakes.
    • Tennessee Ensuring Likeness, Voice and Image Security (ELVIS) Act – Updates and replaces the Personal Rights Protection Act of 1984 to protect a person’s name, photograph, voice or likeness; provides for civil liability for the unauthorized creation and dissemination of a person’s photograph, voice or likeness; and covers the liability of persons who distribute, transmit or otherwise make available technology whose primary purpose is the unauthorized use of a person’s photograph, voice or likeness.
    • Oregon SB 1571 – Requires disclosure of the use of synthetic media in election campaign messages.
    • Mississippi SB 2577 (effective July 1) – Establishes criminal penalties for the unlawful dissemination of a digitization, defined as a realistic alteration of an image or sound using the image or sound of a person other than the person depicted, or a computer-generated image or sound, commonly called a deepfake; or the creation of an image or sound using software, machine learning, artificial intelligence or other computer-generated or technological means.

Florida, Virginia, California, and Ohio have additional laws regulating deepfakes, and they are being considered in other states.

Additional steps to reduce the risk of deepfakes

In addition to relying on the government to enact and enforce comprehensive deepfake legislation, and on the courts to apply it, companies can take several additional steps to reduce their exposure to the risks posed by deepfakes.

These steps include knowing how to defend against increasingly sophisticated phishing and AI-driven social engineering attacks; preventing AI-enabled harassment and impersonation through the responsible use of social media; ensuring the company has comprehensive employee and vendor policies to protect against threats related to AI and social media; and educating employees on the proper use of social media and AI tools.