Meta’s AI withdrawal may signal EU regulatory minefield

The decision by Facebook parent company Meta to withhold its latest multimodal artificial intelligence (AI) model from the European Union highlights the growing gap between Silicon Valley innovation and European regulation.

As The Verge reports, Meta, citing an “unpredictable” regulatory environment, joins Apple in withholding its AI offerings from the region.

The decision comes as Brussels prepares to introduce new rules on artificial intelligence, raising concerns about the potential impact on the innovation and competitiveness of the EU’s digital economy.

Meta’s withdrawal is due to uncertainty over compliance with the General Data Protection Regulation (GDPR), specifically regarding training AI models using user data from Facebook and Instagram.

“Under GDPR, a person generally has the right to challenge any automated decision. However, as AI advances exponentially, human knowledge and understanding cannot keep up,” David McInerney, chief commercial officer at Cassie, a consent and preference management platform, told PYMNTS.

A significant issue that companies like Meta face is their ability to explain AI decision-making processes.

“Companies can say they’ve trained their AI and it’s made an automated decision. But if companies can’t properly explain how that decision was made, they can’t meet their legal obligation under GDPR,” McInerney said.

Some experts say the withdrawal of big tech companies like Meta and Apple from offering advanced AI services in the EU could significantly impact trade by limiting the availability of cutting-edge tools for companies operating in the region. This regulatory-induced technological gap could make it harder for EU companies to compete internationally, potentially slowing down innovation in areas like personalized marketing, customer service automation, and AI-driven business analytics that are key to modern trade.

EU Artificial Intelligence Act: A New Regulatory Landscape

On July 12, EU lawmakers published the EU Artificial Intelligence Act (AI Act), a groundbreaking regulation aimed at unifying the rules for AI models and systems across the EU. The act prohibits certain AI practices and sets out rules for “high-risk” AI systems, AI systems that pose transparency risks, and general purpose AI models (GPAI).

The AI Act will be implemented in phases: the prohibited-practices provisions take effect on February 2, 2025, the GPAI model obligations on August 2, 2025, and the transparency obligations and high-risk AI system provisions on August 2, 2026. High-risk AI systems and GPAI models already on the market are exempt in the interim and subject to extended compliance deadlines.

This regulatory uncertainty could have far-reaching consequences for the EU’s technology landscape. Despite these challenges, it also presents an opportunity for technology industry leadership.

“Meta has the opportunity to change the narrative and set the tone for big tech companies by putting consumer privacy first in a way that many big tech companies have not,” McInerney noted.

The Future of Artificial Intelligence in Europe

The tech industry is watching closely as the EU continues to grapple with balancing innovation and regulation. The outcome of this regulatory tug-of-war could shape the future of AI development and deployment in Europe, with potential knock-on effects across the global tech ecosystem.

EU officials say the AI rules are designed to support technological innovation with clear regulations. They highlight the dangers of human-AI interactions, including risks to safety and security and potential job losses. The push for regulation also comes amid concerns that public distrust of AI could hamper technological progress in Europe, leaving the bloc lagging behind superpowers such as the US and China.

In a similar spirit, European Commission President Ursula von der Leyen called for a new approach to competition policy, emphasising the need for EU companies to scale up their operations on global markets.

The change is intended to create a more enabling environment for European companies to compete globally, potentially easing some of the regulatory pressures on tech companies. However, it remains to be seen how this will balance out with the strict AI regulations already in place.

As implementation of the AI Act approaches, the Commission is tasked with developing guidance and secondary regulations on various aspects of the Act. The technology industry awaits this guidance, particularly on the definition of an AI system and on prohibited practices, expected within the next six months.