
Public policy could force AI platforms to stop deepfakes before they are created

Cyber fraud has evolved dramatically in recent years, moving from simple email scams to the use of advanced GenAI technologies to create deepfakes that can replicate a real person's face, body movements, voice and accent. Lawsuits are being filed around the world seeking to block or remove these deepfakes under copyright infringement laws. However, using the courts to prevent such abuse is not enough; the state must enforce policy at the level of the AI platforms themselves.

Recently, the CFO of an international company was impersonated on a video call using deepfake technology. The AI-generated likeness was used to authorize the transfer of a significant sum, almost $25 million, to multiple local bank accounts. An initially suspicious employee was convinced after a video call with her CFO and several colleagues, demonstrating the dangerous persuasive power of synthetic media in committing financial crimes.

Until recently, innovations such as synthesized music, edited images and voice assistants, while artificial, still carried a human touch, a critical aspect when it came to assigning ownership and responsibility for a synthetic output. Moreover, each such creation was tedious, expensive, and required human involvement.

However, Generative Artificial Intelligence (GenAI) is radically changing this landscape, bypassing the human touch and creating works almost entirely in software: deepfakes. These convincing forgeries are powered by advanced AI tools, many freely available online, that combine machine learning and neural networks.

Deepfakes are generally combated on the grounds of copyright and intellectual property infringement, or as a financial crime if they involve a financial transaction and digital spoofing.

Since such remedies are usually post facto, the reputational, financial or social damage caused by these deepfakes has already been done; they neither prevent nor reduce deepfakes. There is also no deterrent for the AI platforms whose tools are used to create them. Most actions are limited to getting deepfakes removed from social media.

Detecting the criminal is difficult because most AI tools do not authenticate users. In some ways, it is like gun control. These tools can be accessed over the Internet, and most are free because they monetize user data rather than charge for usage. It is as if everyone has access to a gun without needing to buy the weapon or the ammunition. The only way to control deepfakes is to control access to the AI engines and to the data they feed on, just as access to weapons and ammunition is controlled.

Weapons and ammunition: AI platforms and data

One way to stop deepfakes is to make the platform a co-defendant in every proven deepfake case. Another is for public policy to require AI platforms to authenticate users and maintain a log of them. The outputs they create must be watermarked, and the creators' digital and real identities must be available for lawful access. This is already a requirement for social media platforms operating in India, which must also appoint a nodal compliance officer. A similar obligation should apply to all AI platforms.
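
To make this concrete, here is a minimal sketch, in Python, of what an authenticate-log-watermark pipeline could look like. Everything in it is hypothetical: the `synthesize` and `embed_watermark` stubs and the log format stand in for whatever a real platform and regulator would actually specify.

```python
import hashlib
import json
import time
import uuid


def synthesize(prompt: str) -> bytes:
    """Stub for the platform's actual generative model."""
    return prompt.encode()


def embed_watermark(media: bytes, mark: str) -> bytes:
    """Stub embedder; a real system would embed the mark imperceptibly."""
    return media + mark.encode()


def generate_with_provenance(user_id: str, id_proof_hash: str, prompt: str) -> dict:
    """Refuse anonymous use, watermark the output, and keep an audit record
    linking the artifact's digital identity to a verified real identity."""
    if not user_id or not id_proof_hash:
        raise PermissionError("unauthenticated users may not generate media")

    output_id = str(uuid.uuid4())
    media = synthesize(prompt)

    # The watermark binds this specific output to the verified account.
    watermark = hashlib.sha256(f"{output_id}:{user_id}".encode()).hexdigest()
    media = embed_watermark(media, watermark)

    record = {
        "output_id": output_id,
        "user_id": user_id,               # digital identity
        "id_proof_hash": id_proof_hash,   # real identity, hashed at rest
        "prompt_digest": hashlib.sha256(prompt.encode()).hexdigest(),
        "watermark": watermark,
        "timestamp": time.time(),
    }
    with open("generation_audit.log", "a") as log:  # available for lawful access
        log.write(json.dumps(record) + "\n")
    return {"media": media, "watermark": watermark}
```

The ordering is the point of the sketch: identity first, generation second, watermark and audit record before anything is released.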

It is even more important to control the data (the ammunition) through 'data rights' laws that ensure individuals can own their data. At a basic level, a video or image is data; an AI engine must scrape several thousand images to create a near-exact replica, the deepfake. If ownership of that data is clear, scraping it off the public Internet can be deterred; this would control deepfakes at the very creation stage.
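
As a hedged illustration of what creation-stage control could mean in practice: before ingesting an image into a training corpus, a crawler would consult an ownership registry. The registry, its URL keys and the `default_allow` flag are all assumptions made for illustration; no such registry exists today.

```python
# Hypothetical ownership registry: URL -> (owner, training licence granted).
OWNERSHIP_REGISTRY = {
    "https://example.com/photos/executive.jpg": ("user-1042", False),
}


def may_ingest(url: str, default_allow: bool = True) -> bool:
    """Return True only if the image may enter a training corpus.

    default_allow=True mirrors today's status quo, where unregistered data
    is scraped freely; a data-rights law would flip the default to False."""
    record = OWNERSHIP_REGISTRY.get(url)
    if record is None:
        return default_allow
    _owner, licensed = record
    return licensed


# The registered owner never licensed training use, so ingestion is refused.
assert may_ingest("https://example.com/photos/executive.jpg") is False
# Under a data-rights regime, even unregistered data would be off-limits.
assert may_ingest("https://example.com/photos/unknown.jpg", default_allow=False) is False
```

The `default_allow` flag marks exactly the policy choice the article argues over: who bears the burden when ownership has not been asserted.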

However, under India's Digital Personal Data Protection Act, the right to data ownership is not clearly defined. Currently, platforms effectively own their users' data, and AI engines can freely download and use this data to build their models.

Privacy and data protection rights cannot be fully applied until data ownership is settled in favour of the people who create the data. First, ownership must be defined as distinct from privacy and protection, covering not only data directly linked to identity, as privacy law does, but also the 'indirect' digital traces left on the network: the images, video and text created by any user action.

Privacy laws only define what data must be kept private; they do not cover all data. For AI platforms, all data matters, personal and even non-personal. Data may not be personal in nature, yet it can be combined with personally identifiable information to digitally impersonate users. A spoofing engine needs only enough data to create videos like the one used to impersonate the company CFO on that video call.

Users, not the platform, should define data ownership

The next step is to define consent to the use of data. The law governing data fiduciaries and account aggregators clearly defines consent for the use of financial data. Why should this definition of consent not apply to video data created and posted on social media? Consent should not be presumed by the platform, and the user should not be forced to consent as part of default registration to a huge list of "terms of use" that no one reads. The default must be no consent; consent must be explicit; and if data is used without consent, or is scraped or intercepted by an AI engine, the fiduciary responsibility for maintaining its sanctity should rest with the platform.
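
As a sketch of what "default no consent, explicit and purpose-bound" could look like as a data structure (the class and the purpose strings are illustrative assumptions, not any existing law's schema):

```python
from dataclasses import dataclass, field


@dataclass
class ConsentLedger:
    """Purpose-bound consent: nothing is granted by default."""
    grants: dict = field(default_factory=dict)  # (user, purpose) -> True

    def grant(self, user: str, purpose: str) -> None:
        # Only an explicit, recorded act creates consent; a blanket
        # "terms of use" checkbox would not appear in this ledger.
        self.grants[(user, purpose)] = True

    def allowed(self, user: str, purpose: str) -> bool:
        # Default-deny: absence of a grant means no consent.
        return self.grants.get((user, purpose), False)


ledger = ConsentLedger()
ledger.grant("user-1042", "display_in_feed")
assert ledger.allowed("user-1042", "display_in_feed")
assert not ledger.allowed("user-1042", "ai_training")  # never granted
```

Because grants are purpose-bound, a grant for display in a feed says nothing about training an AI model; any other use fails the check.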

If an AI algorithm extracts data from a platform, the platform should have a fiduciary duty, imposed by the regulator, to restore that data to the user. Platforms must not share user data with AI algorithms, whether internal or external, that use the data for purposes other than those for which it was transferred.

Data, whether obtained by hook or by crook, is the ammunition for GenAI's weapons. If data rights are recognized and granted to individuals, their agency over their data will be established ex ante, and the inappropriate use of deepfakes will be combated at the creation stage.

K Yatish Rajawat and D. Chandrashekar work at the Gurgaon-based Center for Innovation in Public Policy. The views expressed in the above text are the personal views of the authors alone. They do not necessarily reflect the views of News18.

First published: May 24, 2024, 12:05 pm EST