
Meta is halting plans to train artificial intelligence using European user data as it bows to regulatory pressure

Meta has confirmed that it will pause plans to start training its artificial intelligence systems using data from users in the European Union (EU) and the UK

The move follows a request from Ireland’s Data Protection Commission (DPC), the main regulator for Meta in the EU, which acts on behalf of more than a dozen data protection authorities (DPAs) across the EU. The UK’s Information Commissioner’s Office (ICO) has also asked Meta to put its plans on hold until its concerns are addressed.

“DPC welcomes Meta’s decision to halt plans to train its multilingual model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said in a statement today. “This decision followed intense cooperation between DPC and Meta. DPC, in cooperation with other data protection authorities in the EU, will continue to cooperate with Meta on this matter.”

While Meta already uses user-generated content to train its AI in markets such as the US, stringent European GDPR regulations have created obstacles for Meta – and other companies – looking to improve their AI systems with user-generated training materials.

However, last month the company began notifying users of an upcoming change to its privacy policy that it says will give it the right to use public content on Facebook and Instagram to train its artificial intelligence, including content from comments, interactions with companies, status updates, photos and associated captions. The company argued that it needed to do this to reflect “the diversity of languages, geographical locations and cultural references of the people of Europe”.

These changes were due to take effect on June 26, 2024, just 12 days from now. However, the plans prompted the privacy nonprofit NOYB (“it’s none of your business”) to file 11 complaints with EU countries, arguing that Meta violates various aspects of the GDPR. One of these concerns the issue of opt-in versus opt-out: where the processing of personal data takes place, users should be asked for their consent first rather than being required to take action to refuse.

Meta, for its part, invoked the GDPR provision called “legitimate interest”, claiming that its actions are compliant with the regulations. This is not the first time Meta has used this legal basis in its defense, having done so previously to justify the processing of European users’ data for targeted advertising purposes.

It always seemed likely that regulators would at least put a hold on Meta’s planned changes, especially given how difficult it was for users to “opt out” of having their data used. The company says it has sent more than 2 billion notifications informing users about the upcoming changes, but unlike other important public messages that are plastered at the top of users’ feeds, such as prompts to get out and vote, these notifications appear alongside standard user notifications: friends’ birthdays, photo notifications, group announcements and more. So if someone doesn’t check their notifications regularly, this could be easy to miss.

Even those who do see the notice will not automatically know there is a way to object or opt out. It simply encourages users to click through to learn how Meta will use their data; nothing suggests that objecting or opting out is even an option.

Meta: AI notification
Image credits: Meta

In today’s updated blog post, Meta’s Director of Global Privacy Engagement, Stefano Fratta, stated that he was “disappointed” with the request received from the DPC.

“This is a step backwards for European innovation, competition in AI development and further delays in making the benefits of AI available to European citizens,” Fratta wrote. “We are confident that our approach is compliant with European laws and regulations. AI training is not a feature of our services and we are more transparent than many of our industry peers.”