Google and Meta criticize UK and EU AI laws

Both Google and Meta openly criticized European artificial intelligence regulations this week, suggesting they would limit the region’s innovation potential.

Representatives from Facebook’s parent company, as well as Spotify, SAP, Ericsson, Klarna and others, have signed an open letter to Europe expressing their concerns about “inconsistent regulatory decision-making.”

It says that interventions by European data protection authorities have created uncertainty about what data the companies can use to train their AI models. The signatories call for consistent, rapid decisions on data regulations that enable the use of European data in AI training, in line with the GDPR.

The letter also warned that, without clearer rules, the bloc will miss out on the latest “open” AI models, which are made available for free to everyone, and on “multimodal” models, which accept input and generate output in the form of text, images, speech, video and other formats.

By preventing innovation in these areas, regulators “are depriving Europeans of the technological advances that the US, China and India enjoy.” Furthermore, without free rein over European data, models “will not understand or reflect European knowledge, culture or languages.”

SEE: Companies seek balance between AI innovation and ethics, according to Deloitte

“We want Europe to succeed and flourish, including in cutting-edge AI research and technology,” the letter reads. “But the reality is that Europe has become less competitive and less innovative compared to other regions and now risks falling even further behind in the AI era due to inconsistent regulatory decision-making.”

Google suggests copyrighted data could be used to train commercial models

Google also spoke out separately about UK rules that prevent AI models from being trained on copyrighted material.

“If we don’t take proactive action, we risk being left behind,” Debbie Weinstein, Google’s UK managing director, told The Guardian.

“The unresolved copyright issue is blocking development, and the way to unblock it, of course from Google’s perspective, is to go back to the situation the government was in, in 2023, when TDM was approved for commercial use.”

TDM, or text and data mining, involves copying and analyzing large volumes of material, which can include copyrighted works. In the UK, it is currently permitted only for non-commercial purposes. Plans to allow it for commercial use were abandoned in February 2023 after widespread criticism from creative industries.

Google also published a paper this week titled ‘Unlocking the potential of AI in the UK’, in which it made a number of policy suggestions, including enabling commercial TDM, creating a publicly funded mechanism for providing compute resources and launching a national AI Skills Service.

SEE: 83% of UK companies are increasing pay for AI skills

As reported by The Guardian, it also calls for a “pro-innovation regulatory framework” based on a risk-based and context-specific approach, managed by public regulators such as the Competition and Markets Authority and the Information Commissioner’s Office.

EU rules have affected Big Tech’s AI plans

The EU, with 448 million people, is a huge market for the world’s largest technology companies. However, the implementation of the stringent AI Act and Digital Markets Act has led them to delay or withhold the launch of their latest AI products in the region.

In June, Meta paused the training of its large language models on public content shared by adults on Facebook and Instagram in Europe after Irish regulators objected. Meta AI, the company’s AI assistant, has yet to be released in the bloc due to “unpredictable” regulations.

Apple will not initially make its new suite of generative AI capabilities, Apple Intelligence, available on devices in the EU, citing “regulatory uncertainty caused by the Digital Markets Act,” Bloomberg reports.

SEE: Apple Intelligence EU: Potential Mac Release Due to DMA Rules

According to a statement Apple spokesman Fred Sainz gave to The Verge, the company “is concerned that DMA interoperability requirements could force us to compromise the integrity of our products in a way that puts user privacy and data security at risk.”

Thomas Regnier, a spokesperson for the European Commission, told TechRepublic in an emailed statement: “All companies are free to offer their services in Europe, as long as they comply with EU rules.”

Google’s chatbot Bard was released in Europe four months after its US and UK launch, after the Irish Data Protection Commission raised privacy concerns. Similar regulatory pushback is believed to have delayed the arrival of its second iteration, Gemini, in the region.

This month, the Irish DPC launched a new investigation into Google’s AI model, PaLM 2, over a potential breach of the GDPR. Specifically, it is examining whether Google completed an adequate assessment of the risks involved in processing the personal data of Europeans to train the model.

X also agreed to permanently stop processing personal data from EU users’ public posts to train its Grok AI model. The DPC took Elon Musk’s company to the Irish High Court after finding that it had failed to provide mitigation measures, such as an opt-out option, until months after it began collecting the data.

Many tech companies have their European headquarters in Ireland, as the country has one of the lowest corporate tax rates in the EU at 12.5%. As a result, the Irish data protection authority plays a leading role in regulating the technology market across the bloc.

UK AI laws remain unclear

The UK government’s stance on AI regulation has been mixed, partly due to a change in leadership in July. Some officials also fear that over-regulation could push out the biggest tech players.

On July 31, Peter Kyle, the Secretary of State for Science, Innovation and Technology, told executives from Google, Microsoft, Apple, Meta and other leading technology companies that the forthcoming AI bill would focus on large, ChatGPT-style foundation models created by just a handful of companies, according to the Financial Times.

He also assured them that it would not become a “Christmas tree bill,” with more rules added as it passes through the legislative process. He added that the bill would focus primarily on making voluntary agreements between companies and the government legally binding and on transforming the AI Safety Institute into an “independent government body.”

As seen in the EU, AI regulations are delaying the introduction of new products. While the intention is to keep consumers safe, regulators risk limiting consumers’ access to the latest technologies that could bring tangible benefits.

Meta has taken advantage of the lack of immediate regulation in the UK by announcing that it will train its AI systems on public content shared on Facebook and Instagram in the country, something it does not currently do in the EU.

SEE: Delaying UK AI adoption by five years could cost economy more than £150bn, Microsoft report finds

On the other hand, in August the Labour government shelved £1.3 billion in funding that the Conservatives had earmarked for artificial intelligence and technology innovation.

The UK government has also consistently indicated it plans to take a tough approach to regulating AI developers, saying in July’s King’s Speech that the government “will seek to establish appropriate regulations to impose requirements on those working to develop the most powerful AI models.”