
Attack of the (Voice) Clones: Protecting the Right to Speak

In January 2023, AI speech synthesis company ElevenLabs, Inc. released a beta platform for its natural-sounding voice cloning tool. With the platform, anyone could upload a short snippet of a person’s voice and generate audio files of that person saying anything the uploader wanted. The release led to a surge in misappropriated vocal clones, from viral rap songs to parodies of political figures. Recognizing that its software was being widely misused, ElevenLabs added safeguards to ensure that the company could trace generated audio back to its creator. But it was too late. Pandora’s box had already been opened.

Since then, a wide range of similar voice-cloning tools has emerged, making deepfake voices a common vehicle for fraud and disinformation. These problems have only been exacerbated by the lack of adequate laws and regulations to limit the misuse of AI and protect the individual’s right to speak.

AI Voice Deepfakes Are Coming to Mainstream Media