
EU tightens grip on Big Tech with sweeping digital regulations

Following the publication of the EU Artificial Intelligence Act in the Official Journal, the bloc now has a full series of rules in place to curb abuses by big tech companies across the EU.

With the publication of the AI Act in the EU’s Official Journal on Friday (July 12), it’s worth recognising that the bloc has been at the global forefront of regulating the digital space, passing a series of laws aimed at creating a fairer, more competitive and safer online environment for its citizens. These laws have had a significant impact on big tech companies such as Google, Apple and Meta, forcing them to change their policies and practices.

The GDPR gives users rights over their personal data

The EU’s groundbreaking privacy legislation, the General Data Protection Regulation (GDPR), was passed in 2016 and came into effect in 2018. Retained in UK law after Brexit as the UK GDPR, it gives users more control over their personal data, defined as “any information relating to an identified or identifiable natural person.” This includes names, email addresses, IP addresses, home addresses, location data, and health information. Notably, the GDPR applies not only to companies operating in the EU, but also to those outside the bloc that serve EU residents.

Under the GDPR, EU citizens have the right to access their personal data, request its deletion (“right to be forgotten”), and easily transfer it to other service providers. Companies must obtain explicit and informed consent before collecting or processing user data, and that consent can be withdrawn at any time. In addition, companies are required to notify data protection authorities within 72 hours of a data breach and inform affected users “without undue delay.” Failure to comply with the GDPR can result in hefty fines of up to 4% of the company’s global annual turnover or €20 million, whichever is higher. Facebook parent company Meta experienced this firsthand in May 2023, when it was fined €1.2 billion for unlawfully transferring user data from the EU to the US.

Another key requirement concerns data transfers: personal data must be stored and processed securely, and transfers outside the EU are only permitted to countries that provide an adequate level of data protection or under other approved safeguards.

The Digital Markets Act levels the playing field

The Digital Markets Act (DMA), passed in October 2022 and effective from May 2023, aims to ensure fair competition in the digital sector. It applies to large online platforms designated as “gatekeepers” for services such as search engines, app stores and messaging applications. The DMA mandates that users have the freedom to choose and install apps from alternative sources and app stores, a practice commonly known as “sideloading”. Users can also uninstall pre-installed apps and choose their preferred browser or search engine, with gatekeepers obliged to present multiple options on a “choice screen”. Simplified access to platforms, data ownership, seamless data portability and unbiased search results are further guaranteed by the DMA, while the use of third-party cookies to track user activity outside a company’s own website is prohibited unless the user provides explicit consent.

The DMA promotes a level playing field by allowing alternative methods of app distribution and interoperability between gatekeeper services; WhatsApp, for example, could be required to work with third-party messaging apps in certain situations. Failure to comply with the DMA can result in hefty fines of up to 10% of a company’s global turnover, rising to 20% for repeated infringements, and Apple has the dubious distinction of being the first gatekeeper charged with breaching the rules, over App Store terms that restrict developers from steering users towards alternative purchasing options.

Digital Services Act creates a safer online environment

The Digital Services Act (DSA), passed in July 2022 and fully applicable from February 2024, aims to create a safer online environment. It classifies online platforms by size and reach, with tech giants including Facebook, Instagram and the Google Play Store falling under the “very large online platform” (VLOP) designation because they have more than 45 million monthly active users in the EU. The DSA requires platforms to establish mechanisms to remove unlawful content, while giving users the ability to flag such content. Targeted advertising based on sensitive data such as sexual orientation, religion or ethnicity is prohibited, and companies must publish annual transparency reports detailing content moderation activities, including content removals, user complaints, government orders and the algorithmic parameters used to recommend content.

VLOPs have additional obligations, such as establishing a point of contact for users and authorities, allowing users to opt out of recommendation systems, addressing potential crisis situations, maintaining a public library of advertisements, and undergoing independent audits. VLOPs are also required to share data with the European Commission and vetted researchers so that compliance with the DSA can be monitored, and the Act gives the Commission and designated Digital Services Coordinators (DSCs) the power to require immediate action from VLOPs to address “very serious harm”. Failure to comply with the DSA can result in fines of up to 6% of a company’s global turnover, and repeated violations can lead to temporary bans in the EU.

The EU Artificial Intelligence Act regulates new technologies

The recently approved EU Artificial Intelligence Act is the first major piece of legislation to address the rise of generative AI tools such as ChatGPT, Gemini and Copilot, and it serves as a model for other jurisdictions. The Act takes a risk-based approach, with stricter requirements for higher-risk AI systems. User-facing generative AI tools are considered minimal risk, although developers must ensure their models do not generate illegal content. Developers are also required to clearly label AI-generated content such as deepfakes and to publish summaries of the copyrighted material used to train their models. The most advanced generative models will undergo rigorous screening for systemic risk.

The Act bans AI systems that pose unacceptable risks, such as those that manipulate people into dangerous behaviour or enable social scoring based on socioeconomic characteristics. Real-time facial recognition and other biometric identification systems in public spaces are generally banned, with limited exceptions for law enforcement. High-risk AI systems, including autonomous vehicles, medical devices and profiling systems, require prior risk assessments, activity logging and built-in kill switches.

The Act sets tiered penalties for non-compliance, reaching up to 7% of global annual turnover or €35 million for the most serious violations, while recognising the need not to stifle innovation. National authorities are required to set up regulatory sandboxes: controlled test environments in which startups and small companies can train and test their AI models before public deployment.