
The Race Against Deepfake Ads: Will AI Regulation Catch Up With Us?

Following the Supreme Court's order, the Indian Ministry of Information and Broadcasting has released detailed guidelines outlining the procedure for obtaining a self-declaration certificate, which is now mandatory before the release of any new advertisement from June 18.

While there is a broader debate about how this change will impact advertisers, this article focuses specifically on ads using GenAI-created deepfakes on social media platforms like Instagram, Facebook, and YouTube.

In an editorial last year titled “Urgently Needed: A Law to Protect Consumers from Deepfake Ads,” I highlighted the growing threat of deepfake ads that contain misleading or deceptive claims, negatively impacting the rights of consumers and public figures.


That warning drew on a 12-month study by McAfee, in which 75% of Indians surveyed said they had come across some form of deepfake content, 38% reported encountering a deepfake scam, and 18% said they had been directly affected by such fraud. Alarmingly, 57% of those targeted mistook celebrity deepfakes for authentic content.

The deepfake threat: In my editorial, I argued that although action against deepfake advertisements can be taken under the Consumer Protection Act 2019 (Sections 2(9), 2(28) and 2(47)) and its guidelines on misleading advertisements and dark patterns, the Digital Personal Data Protection Act 2023 (Section 6), the Information Technology Act 2000 (Sections 66C, 66D, 66E and 79), and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 (Rules 3(1)(b) and 4(2)), regulators have little scope to impose penalties when the identity of the advertisers is unknown, which is often the case.

I therefore suggested that the government implement preventive measures to ensure that advertisers cannot deploy, for example, unauthorised deepfakes, and also direct online platforms to develop effective mechanisms to combat such deceptive practices.

While the ministry's latest guidelines do not explicitly require disclosure of any use of AI in the self-certification, they are a step in the right direction: self-certification will require authorised representatives of advertisers to submit reliable supporting data along with the final ad copy to substantiate their claims.

The measure promises to solve the problem of identifying and locating advertisers, making it easier to trace them once complaints are filed. It also enables courts to impose significant fines on offenders.

However, industry bodies such as the Internet and Mobile Association of India (IAMAI), the Indian Newspaper Society (INS) and the Indian Society of Advertisers (ISA) have raised concerns over the newly adopted pre-publication rules, arguing that the additional compliance requirements place a heavy burden on advertisers, especially smaller ones.

While these concerns are legitimate, the advertising industry has an opportunity to press its case in the Supreme Court for a less burdensome compliance mechanism. The online spread of deepfakes of unknown origin has exposed the ineffectiveness of the current regulatory machinery in policing misleading advertising in any medium.

The advertising industry could argue that while the idea of self-certification has its merits, the process needs to be simplified so as not to limit the role of advertising as a legitimate business tool.

The impossible challenge: Can any screening system be effective against AI-enabled deepfakes? The challenge lies in the sheer volume of digital ads, which would impose an additional burden on regulators if they decided to review every ad submitted. It is also unclear how easily even experts can distinguish maliciously motivated deepfakes from legitimate ads that comply with the regulations.

Therefore, a possible solution, at least for online advertising, is to require social media platforms to filter deepfake ads, as they may have the technology and resources to do so effectively.
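To make the idea concrete, here is a minimal sketch of what platform-side screening could look like. It assumes a hypothetical deepfake classifier; the names (Ad, score_deepfake, the two thresholds) are illustrative and do not correspond to any platform's actual system.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser_id: str
    video_path: str

def score_deepfake(video_path: str) -> float:
    """Stub for a real detector (e.g., a face-forgery classifier).
    Returns the estimated probability that the creative is synthetic."""
    return 0.0  # placeholder: a deployed system would run a trained model here

REVIEW_THRESHOLD = 0.5  # uncertain creatives go to human review
BLOCK_THRESHOLD = 0.9   # high-confidence deepfakes are blocked and logged

def screen_ad(ad: Ad) -> str:
    """Route an ad based on the detector's score."""
    score = score_deepfake(ad.video_path)
    if score >= BLOCK_THRESHOLD:
        return "blocked"       # withhold the ad and preserve evidence for regulators
    if score >= REVIEW_THRESHOLD:
        return "human_review"  # escalate borderline cases to moderators
    return "approved"

if __name__ == "__main__":
    print(screen_ad(Ad(advertiser_id="acme-123", video_path="creative.mp4")))
```

The two-threshold design reflects the trade-off the paragraph above implies: automation handles the volume, while borderline cases still reach human reviewers rather than being blocked outright.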

Industry bodies have suggested that the task of checking advertising violations be left to the industry's long-standing self-regulatory model rather than imposing a new compliance burden. However, this model has not proven effective and has in the past served mainly to forestall regulatory action.

Since it is important to ensure that Internet users are not exposed to fraud, social media intermediaries must take shared responsibility.

This was also highlighted by the Ministry of Electronics and Information Technology in its March 2024 advisory, which drew attention to the negligence of social media intermediaries in fulfilling their due-diligence obligations under Rule 3(1)(b) of the Information Technology Rules 2021.

While not legally binding, the advisory states that an intermediary must not “allow its users to host, display, upload, modify, publish, transmit, store, update or make available any unlawful content.”

The Supreme Court will revisit the case on July 9, at which time industry representatives are likely to present their views on the new guidelines.

The country is facing a growing threat from dark patterns in online advertising. The apex court’s intervention could not only address the shortcomings of current regulatory approaches but also set a precedent for robust measures against deceptive advertising practices.

Nayan Chandra Mishra is a Research Assistant to Dr. C. Raja Mohan at the Council for Strategic and Defence Research and is currently working on the global governance of emerging technologies.