
Artificial intelligence | UK Regulatory Outlook for May 2024 – Osborne Clarke

UK regulators publish their strategic approaches to artificial intelligence | Launch of the AI and Digital Hub | The DRCF's views on fairness in artificial intelligence

Updates from the UK

UK regulators publish their strategic approaches to artificial intelligence

As requested by the government in its response to the AI White Paper consultation, various UK regulators published their strategic approaches to AI and submitted them to the Department for Science, Innovation and Technology (DSIT) in late April.

Not surprisingly, there is wide variation in the extent to which different regulators engage with AI. Some, in particular the four lead regulators of the Digital Regulation Cooperation Forum (DRCF), have already invested significant time and resources in building an understanding of how the technology intersects with their respective remits. Others (such as Ofgem) are less advanced, with work only just starting. We understand that these reports will feed into a "gap analysis" of UK regulation and the challenges posed by AI.

We have read the MHRA report in detail (see our Insight). We also hosted a webinar on the financial services regulators' strategic approach to AI – you can watch the recording here.

Details of the ICO’s response can be found in the Data Law section.

Launch of the AI and Digital Hub

DSIT announced the launch of the AI and Digital Hub, run by the DRCF. This multi-regulator sandbox was promised in the UK government's February 2024 response to the AI White Paper.

The hub is an online portal offering businesses free, informal advice on how regulation (within the remits of the four DRCF members) applies to their business proposals. Queries can be submitted online, and the relevant regulators will provide a single, joint response. The aim is to help companies check compliance and bring products to market faster. It is a one-year pilot.

Queries must relate to products, services or processes that are:

  • innovative – a new or adapted way of doing business;
  • focused on artificial intelligence and/or digital technology;
  • beneficial to consumers, businesses and/or the UK economy;
  • within the remit of at least two of the DRCF regulators.

DRCF’s views on fairness in artificial intelligence

The DRCF has published its views on the principle of fairness – one of the five "high-level" principles set out in the UK government's AI White Paper. This follows the government's initial guidance to regulators, which sets out its expectation that existing regulators will interpret and apply the principles within their respective regulatory remits. The Equality and Human Rights Commission (EHRC) contributed to the discussion.

The main fairness challenge identified by the DRCF is algorithmic bias in the deployment of AI. Regulators may find it difficult to determine whether an algorithmic decision-making process was biased, given the indirect nature of bias, the complexity of the models used and the links between the different data points involved.

Different regulators have different powers regarding fairness in AI. For example:

  • The ICO has issued guidance on fairness as it is an important data protection principle;
  • The FCA has various regulatory integrity requirements that may apply when firms use AI in connection with the provision of financial services;
  • The CMA addresses fairness in terms of consumer vulnerability and the requirement for effective competition, as recently reflected in the CMA's review of AI foundation models;
  • Ofcom, however, has no direct powers to regulate fairness in AI.

This is a good example of how the approach of relying on the powers of existing UK regulators to regulate AI creates an uneven patchwork of enforcement.

The ICO publishes its fourth consultation on generative AI and data protection

The ICO has published the fourth consultation in its series on generative AI and data protection. An overview of the previous ICO consultations on generative AI can be found in this Regulatory Outlook.

This consultation focuses on how organisations deploying generative AI ensure that individuals can exercise their rights:

  • to be informed about whether their personal data is being processed;
  • to access a copy of their personal data;
  • to have information about them erased, where applicable; and
  • to restrict or stop the use of their information, where applicable.

The consultation closes on 10 June 2024, and you can respond using this form.

Automated Vehicles Act

The Automated Vehicles Bill received Royal Assent on 20 May 2024. More details can be found in this Regulatory Outlook.

Artificial intelligence legislation on the horizon?

The Artificial Intelligence (Regulation) Bill lapsed on the dissolution of Parliament ahead of the general election scheduled for 4 July 2024 (although it had been very unlikely to become law in any event). During a recent debate in the House of Lords, a Labour Party spokesperson commented that "A Labour government would urgently introduce binding regulation and create a new Office for Artificial Intelligence Regulatory Innovation". We will be watching for more detail on Labour's plans in this area in the coming weeks. Our working assumption is that, if re-elected, the Conservative Party will continue the current government's approach to regulating AI.

EU updates

AI Act Schedule

The revised final text of the AI Act was adopted by the European Parliament at the end of April and by the Council of the EU on 21 May. The final text is available here.

The last legislative step is publication of the act in the Official Journal of the EU, expected around the end of May or beginning of June; the act will then enter into force 20 days later, around the end of June or beginning of July.

Our Insight explains the various compliance deadlines, starting with the prohibitions on certain types of AI, which will take effect at the end of this year.

International updates

OECD updates AI Principles

The Organisation for Economic Co-operation and Development (OECD) has updated its AI Principles to address issues such as privacy, intellectual property rights, AI safety and information integrity more specifically. Key changes include:

  • where AI systems risk causing undue harm or exhibiting undesired behaviour, there should be robust mechanisms and safeguards to allow them to be safely overridden, repaired or decommissioned;
  • mechanisms should be put in place to strengthen information integrity while respecting freedom of expression;
  • AI risks and responsibilities should be addressed at every stage of the AI system lifecycle through responsible business conduct, including cooperation with suppliers of AI knowledge and resources, AI system users and other stakeholders;
  • information about AI systems needed to ensure transparency and responsible disclosure should be clear;
  • environmental sustainability should form part of the responsible stewardship of AI; and
  • jurisdictions should work together to promote an interoperable AI governance and policy environment.

AI Seoul Summit

Following the AI Safety Summit at Bletchley Park in the UK in November 2023 (see this Regulatory Outlook), the second summit in the series took place in South Korea on 21-22 May 2024.

On the first day, South Korea was joined by representatives of France, Germany, Italy, Canada, the US, Australia, Japan, Singapore, the EU and the UK. The "Seoul Declaration for Safe, Innovative and Inclusive AI" was agreed, along with a statement committing to international cooperation on the science of AI safety. In addition, leading AI companies agreed to voluntary safety commitments for frontier AI, focusing on the responsible development and deployment of these AI systems.

On the second day, a broader group of 28 countries (including China) and the EU discussed AI safety, and also considered how to increase inclusion and innovation (releasing this statement). The group also considered how trustworthy and sustainable AI could boost productivity.

Ahead of the summit, the UK published the interim "International Scientific Report on the Safety of Advanced AI". The report aims to inform the debate on AI safety rather than make recommendations, and its scope is limited to the current state of knowledge about general-purpose AI and its risks. Its conclusions are, in essence, that there are many unknowns and a wide divergence of opinion, which means the future of general-purpose AI is highly uncertain, but nothing is inevitable. The final report is due to be published before the next AI summit, in France.

And finally …

We have launched a series of Insights examining the implications of the EU AI Act for life sciences and healthcare companies. Over the coming months, the series will cover AI supply chains, product logistics, R&D, SMEs, compliance monitoring, accountability and more. Here are the Insights we have published so far: