New UK government downplays AI regulation in next year’s agenda

As Britain’s King Charles III addressed the House of Lords on Wednesday to outline the proposed legislative programme of the new Labour government, technology experts were braced for any mention of artificial intelligence (AI).

In the event, amid the colourful pomp and ceremony for which the State Opening of Parliament is famous, the King’s speech was essentially a promise of future legislation, devoid of any detail about what form it would take.

Talking head

In the King’s Speech, the government of the day, in this case the newly elected Labour government, sets out the bills it intends to introduce over the coming year.

The monarch gives a speech, but it is written for him by the government. His role is purely constitutional and ceremonial.

It’s hard to imagine a greater contrast than that between a ceremony whose origins date back hundreds of years and a theme such as artificial intelligence, a technology that embodies the promise and peril of the 21st century.

The government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models,” King Charles announced.

Beyond the focus on regulating the most powerful models used for generative AI, that leaves the government’s plans and timeline open to interpretation. Even the willingness to act, however, marks a shift in policy direction from the ousted Conservative administration, which was reluctant to legislate on AI beyond narrow constraints.

Everyone wants to regulate AI

The new government had been expected to go further, given the broad statements of intent in the Labour Party’s 2024 manifesto.

“We will ensure that our industrial strategy supports the development of the artificial intelligence (AI) sector and removes planning barriers for new data centres,” the manifesto stated, before noting the need for regulation.

“Labour will ensure the safe development and use of AI models by introducing binding regulation for the handful of companies that develop the most powerful AI models and banning the creation of sexually explicit deepfakes.”

The watering down of even this modest ambition may indicate that the government has not yet settled on what “binding regulation” should look like, at a time when other legislation seems more urgent.

The previous government feared that too much regulation risked stifling development. Conversely, too little regulation risks leaving it too late to act by the time rules become necessary.

The European Union, of course, already has its AI Act, while the United States is still working on a package of proposed regulations, underpinned by executive orders from the Biden administration outlining the ground rules.

Still too early?

A comment from open source industry advocate OpenUK ahead of the King’s Speech sums up this dilemma.

“The UK can learn lessons from the EU’s AI Act, which is likely to prove an overly stringent and unwieldy cautionary tale of regulatory capture, one that only the largest companies can comply with and that is stifling innovation in the EU,” said the organisation’s chief executive, Amanda Brock.

It was still too early, she argued, to enact a law that would create legal barriers and restrictions.

“For the UK to remain relevant in the world and build successful AI companies, openness is key. This will allow the UK ecosystem to build on its status as a world leader in open-source AI, behind only the US and China,” she added.

However, not everyone is convinced that waiting is the right approach.

“Regulation is not just about setting limits on the development of AI. It’s about providing the transparency and guidance needed to promote safe and sustainable innovation,” said Bruna de Castro e Silva of AI governance firm Saidot.

“As the EU prepares to publish its official AI Act, UK businesses are looking for clear guidance on how to develop and deploy AI safely and ethically.”

That’s why regulating AI is seen as a thankless task. Take an interventionist approach, and experts will line up to say you’re stifling a technology with enormous economic and social potential. Take a more cautious approach, and others will say you’re not doing enough.

Last November, the previous Conservative government of Rishi Sunak took up the cause of AI by hosting a global AI Safety Summit at the symbolically resonant World War II code-breaking centre Bletchley Park, near London.

At the event, several major AI companies (OpenAI, Google DeepMind, Anthropic) committed to giving the government’s new Frontier AI Taskforce early access to their models to conduct safety assessments.

The new government is keeping that promise, even if many may feel that certainty about the UK’s legal regime for AI is no closer now than it was then.