Where should the next UK government introduce regulation of artificial intelligence?

A recent parliamentary report, the last published before the upcoming UK general election, concluded that we need to “fundamentally change the way we think” about artificial intelligence.

Should we? The report, from the House of Commons Science, Innovation and Technology Committee, provides a compelling analysis of the UK’s current ‘wait and see’ strategy and argues persuasively that the UK is charting a third route between the EU’s rigorous approach and the US’s hands-off approach to managing artificial intelligence in the private sector. But it leaves open some important questions that we believe the next government will need to address if the UK is to deliver on its ambition to become a world leader in artificial intelligence.

Here is a summary of some of those key takeaways.

USP for the UK

In the US, AI legislation is currently gaining momentum at state level, with the first comprehensive AI statute passed in Colorado on May 17. We expect this to follow the same pattern as the development of data protection laws, where the US has built up a patchwork of differing state laws with sometimes unpredictable effects – such as the famous Illinois biometrics laws – while the federal law that many hope for remains elusive.

This is where the UK can gain a competitive advantage by offering a unified approach, where both the EU and the US will face divergence between their respective member states and states (again, much as with privacy laws). However, to take full advantage of this, the UK will need to send clearer signals about its governance intentions, and in this respect the report (which reflects a wide range of cross-party views) clearly shows a preference for legislation in the next parliament. The three steps that we believe would benefit any emerging legislative framework are: (1) creating a dedicated AI regulator, (2) setting clear rules on the highest-risk categories of processing, and (3) creating more definitive rules on the key data protection and intellectual property issues raised by AI model training.

With the UK election campaign now well underway, attention will inevitably focus on whether a change in administration could herald a different approach to UK regulation. Not long before the Prime Minister fired the starting gun on the election, Labour was poised to publish its long-awaited artificial intelligence strategy. Time will tell whether it appears before July 4, but previous statements by Shadow DSIT Secretary Peter Kyle suggest that a Labour government would likely be more willing to pursue a statutory solution, potentially taking up the baton of Lord Holmes’ Private Member’s Bill on artificial intelligence regulation, which gained more traction than many expected in the final months of the last parliament.

Lessons from history

Some say the UK risks giving away brilliant technology at an early stage – for example, through the safety evaluation platform recently released by the AI Safety Institute. For some, this carries uncomfortable echoes of the web being given away for free in the 1990s. The incoming government will face a difficult balancing act between showcasing the UK’s AI capabilities by opening AI models to international scrutiny and preserving the benefits of keeping them confidential.

The report highlights the difficulty of getting developers (mostly cash-rich US big tech firms) to submit their models for scrutiny, as promised at the AI Safety Summit at Bletchley Park last autumn. As the report notes, previous parliamentary reports on AI have identified a potential strategic advantage for UK AI businesses in striking a balance between open and closed source approaches, but the risk of giving away the crown jewels remains.

The elephant in the room

The significant potential environmental costs of the infrastructure underpinning AI technologies are raised only in passing, citing concerns even within the AI industry about the water and electricity consumption inherent in supporting the ever-increasing computing power needed to sustain AI platforms. The report does not, however, propose any convincing strategy to mitigate these effects, and the environmental impact is often lost in the noise created by the much-vaunted ‘existential threat’ of artificial intelligence.

Our work with leaders in the data centre and GPU manufacturing sectors suggests there will be an appetite to address these issues if the government can set a clear and focused framework covering matters such as planning reform, green energy and environmental standards. We also know that the work undertaken at the Bletchley summit was ‘inspired by the work of the Intergovernmental Panel on Climate Change’, so we hope this will encourage similarly joined-up thinking here.

The dangers of spreading too thinly

The report notes the likely costs of each regulator getting to grips with potentially complex AI issues in its own sector, which is certainly true, and calls for a big tech levy to fund AI regulation (similar to the recent call for a levy on banks to tackle online fraud). However, overarching concerns about regulatory overlap may overlook the subtler but equally serious threat of regulatory ‘underlap’, where some AI deployments escape effective oversight altogether, given that many AI models are sector-agnostic.

An example of this has already emerged in the ICO’s discussion of legitimate interests in its draft guidance on data protection and web scraping for generative AI, which does not fully address whether illegality in one regulatory sphere (such as contract terms or intellectual property rights) can fatally infect another (data protection) – see the Shoosmiths response to the ICO consultation on generative AI. This creates potential gaps where difficult problems may go unsolved under the ‘light touch’ approach described so far. The government has suggested that these gaps will be addressed by a non-statutory ‘steering committee’ it intends to establish between government and regulators, but the composition and remit of this committee are not yet known. Against this backdrop, we will be intrigued to see whether the incoming administration sees the appeal of an overarching regulatory authority, which many (including some AI vendors) regard as the better solution.

Leadership and the data challenge

Technology companies in the US have a huge advantage in artificial intelligence thanks to the wealth of data accumulated in their advertising ecosystems. This leaves everyone else playing catch-up – not only financially, but also in terms of raw material. As the report notes, the data the UK holds uniquely is that produced by national services such as the NHS and the BBC. Exploiting this is a potential winner for the UK, but it will be extremely difficult to achieve without undermining trust and transparency (and assumes the data can be effectively collated from the often disparate public bodies that currently hold it).

On intellectual property rights, the report calls for a ‘sustainable framework that recognises the inevitable trade-offs’. The treatment of intellectual property by AI providers – now the subject of increasingly high-profile litigation – remains an issue for which there is no clear legislative solution (in the UK or elsewhere), and without a detailed and potentially far-reaching review of existing copyright and related laws, it is difficult to see how the intellectual property problems associated with training AI models can be resolved.

On these and broader issues, we hope that whoever takes up the baton on regulating AI after the general election – regardless of who forms the next government – will get to grips with the questions that remain unresolved.