
The impact of AI regulation on the advertising industry | Partner content

The rapid development of artificial intelligence (AI) has generated both excitement and concern about its potential ethical implications. AI has moved beyond supporting back-end operations and is now taking center stage, enabling hyper-personalized advertising as well as predictive ad targeting and optimization.

As the industry continues to realize the potential of AI, there is a need to balance the development of the technology against the risk of its misuse – whether through regulation, codes of conduct or self-regulation.

The EU is playing a leading role in trying to regulate artificial intelligence

Recognizing this, the European Union (EU) has taken a leading role by adopting the world’s first comprehensive AI legislation, the EU Artificial Intelligence Act, which aims to strike a balance between enabling the development of AI in Europe and mitigating the risks associated with AI systems, while protecting the EU’s fundamental values.

This is a significant step towards the governance of AI and the establishment of a robust AI regulatory regime across the EU, which is expected to be fully applicable in 2026. The act adopts a risk-based classification system, categorizing AI applications according to the level of risk they pose to individuals or society.

The EU’s Artificial Intelligence Act will promote transparency for limited-risk applications such as AI-enabled chatbots, emotion recognition, biometric categorization and deepfakes. However, stricter rules apply to high-risk applications, such as certain uses of AI in the health or immigration sectors. Additionally, some applications of AI for social scoring are banned outright due to fundamental rights concerns.

Consequences for the advertising industry

While the EU’s Artificial Intelligence Act does not directly regulate the advertising industry and its services – advertising is not on the list of high-risk AI systems – it does serve as a reminder to marketers to be mindful of ethical AI practices. It provides a solid foundation and common ground for the responsible development of AI-based systems and the sound management of data, which the advertising ecosystem can use to drive positive change in the industry.

One such change affects providers of generative AI services such as ChatGPT or Midjourney, which are now required to disclose any copyrighted material used in the development of their AI, including copyrighted material used in private algorithmic training. This ensures transparency and protects the rights of original content creators.

Companies could also be encouraged to re-evaluate the use of AI in ad targeting, audience profiling and decision-making processes, as proactively identifying and correcting potential biases and abuses not only promotes fairer and more inclusive advertising experiences, but also increases trust among consumers. For example, the EU Artificial Intelligence Act’s emphasis on the responsible development of AI may influence consumer preferences and advertising trends. Consumers may begin to pay more attention to the types of advertising they engage with, favoring brands that prioritize transparency and ethical AI practices.
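
For illustration, here is a minimal sketch of one way a team might surface potential targeting bias: comparing ad-delivery shares across audience segments. The segment labels, sample data and the 0.8 ratio threshold are illustrative assumptions, not requirements of the EU Artificial Intelligence Act or of any particular advertising platform.

```python
# Minimal sketch: compare ad-delivery shares across audience segments
# and flag campaigns where one segment is served far less than another.
from collections import Counter

def delivery_rates(impressions: list[dict]) -> dict[str, float]:
    """Share of impressions delivered to each audience segment."""
    counts = Counter(imp["segment"] for imp in impressions)
    total = sum(counts.values())
    return {segment: n / total for segment, n in counts.items()}

def flag_disparity(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag the campaign if the least-served segment receives less than
    `threshold` times the share of the most-served segment
    (a simple disparate-impact style ratio check)."""
    lowest, highest = min(rates.values()), max(rates.values())
    return lowest < threshold * highest

# Illustrative data only.
impressions = [
    {"segment": "18-34"}, {"segment": "18-34"}, {"segment": "18-34"},
    {"segment": "35-54"}, {"segment": "55+"},
]
rates = delivery_rates(impressions)
print(rates)                  # {'18-34': 0.6, '35-54': 0.2, '55+': 0.2}
print(flag_disparity(rates))  # True: review targeting before scaling the campaign
```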

Ultimately, the goal of the ethical use of AI is to support a more transparent, trustworthy and consumer-centric advertising ecosystem, respect consumer rights and promote fair competition. Advertisers must adapt to these changes with ethical AI practices that prioritize consumers’ well-being and respect their autonomy.

Shaping AI ethical standards in the advertising industry

While Southeast Asian (SEA) countries are prioritizing innovation and economic growth through AI, supporting innovation without an ethical framework could create risks and undermine public trust. Rather than viewing the lack of stringent regulations as a challenge, companies in SEA can seize the opportunity to take a leading role in shaping the ethical landscape of AI in the region.

Companies may consider establishing a cross-functional team to oversee ethical AI practices across all operations. This can help bring in diverse perspectives from inside and outside the company, ensuring trust and continuous improvement through regular reviews and adjustments. Additionally, companies can actively contribute to industry-wide AI ethics initiatives and research by supporting regional collaboration among SEA countries to collectively raise ethical standards while supporting innovation.

It is imperative that companies implement strong data management and privacy-by-design practices, such as pseudonymization, as well as security measures, as these are fundamental elements of ethical AI. These foundations enable enterprises to use AI tools responsibly, predicting consumers’ interests without compromising their privacy.
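
As a rough sketch of what pseudonymization can look like in practice, the example below replaces raw user identifiers with salted hashes before events reach analytics or AI pipelines. The salt handling, field names and event structure are illustrative assumptions; a production system would also need key rotation, access controls and a documented re-identification policy.

```python
# Minimal sketch: pseudonymize user identifiers before downstream use.
import hashlib
import hmac
import os

# In practice the salt would come from a secrets manager, not a hard-coded default.
SALT = os.environ.get("PSEUDONYMIZATION_SALT", "change-me").encode()

def pseudonymize(user_id: str) -> str:
    """Derive a stable, non-reversible token from a raw identifier."""
    return hmac.new(SALT, user_id.encode(), hashlib.sha256).hexdigest()

event = {"user_id": "user-12345", "action": "ad_click", "campaign": "spring_sale"}
safe_event = {**event, "user_id": pseudonymize(event["user_id"])}
print(safe_event)  # the event now carries a token instead of the raw identifier
```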

By integrating these practices into their operations, marketers can actively shape AI ethical standards, balancing regulatory compliance with opportunities for innovation. This holistic approach not only benefits the industry, but also increases public trust in AI technology, opening up new opportunities for the ethical development of AI.

Written by Diarmuid Gill, Criteo’s Chief Technology Officer

Learn more about how Criteo is leading the way in commerce and retail media with its advanced AI engine here.