
UK, US and EU sign first international treaty on AI

The UK has signed the world’s first international treaty on artificial intelligence, along with the European Union, the United States and seven other countries.

The agreement commits signatories to adopt or maintain measures to ensure that the use of AI is consistent with human rights, democracy and the law. These measures should protect society from the inherent risks of AI models, such as biased training data, and the risks of their misuse, such as the spread of disinformation.

The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was opened for signature at the Council of Europe Justice Ministers’ Conference in Vilnius, Lithuania, on 5 September. Current signatories include:

  • Andorra
  • European Union
  • Georgia
  • Iceland
  • Israel
  • Norway
  • Republic of Moldova
  • San Marino
  • United Kingdom
  • United States

The treaty adds to a growing body of international rules aimed at mitigating the risks of AI, including the Bletchley Declaration, which was signed by 28 countries in November 2023.

More signatories are expected from other countries that negotiated the treaty. These include the 39 other Council of Europe member states and nine non-member states: Argentina, Australia, Canada, Costa Rica, the Holy See, Japan, Mexico, Peru and Uruguay.

Lord Chancellor Shabana Mahmood signed the treaty on behalf of the UK. She said in a statement: “AI has the ability to radically improve the responsiveness and efficiency of public services and turbocharge economic growth. But we can’t let AI shape us – we must shape AI.

“This convention is an important step towards ensuring that these new technologies can be used without undermining our most ancient values, such as human rights and the rule of law.”

SEE: UK and G7 countries to use AI to improve public services

Council of Europe Secretary General Marija Pejčinović Burić said in a press release: “We must ensure that AI growth upholds our standards, not undermines them. The Framework Convention is designed to do just that.

“I hope these will be the first of many signatures and that ratifications will follow soon so that the treaty can enter into force as soon as possible.”

The Treaty was adopted by the Committee of Ministers of the Council of Europe on 17 May. To enter into force, it must be ratified by five signatories, including at least three member states of the Council of Europe. Entry will take place three months after the fifth ratification, on the first day of the following month.

It is separate from the EU’s law on artificial intelligence, the AI Act, which came into force last month: the Council of Europe is a 46-member organisation distinct from the EU, and non-EU countries can sign the treaty.

The feasibility of an AI treaty was first examined in 2019. Drafting was taken over by the Council’s Committee on Artificial Intelligence in 2022, and the treaty was formally adopted on 17 May 2024.

What does the treaty require from signatories?

To protect human rights, democracy and the rule of law, the Framework Convention requires signatories to:

  1. Ensure that AI systems respect human dignity, autonomy, equality, non-discrimination, privacy, transparency, accountability and reliability.
  2. Provide information about decisions made using AI and allow people to question those decisions or the use of AI itself.
  3. Provide procedural safeguards, including mechanisms for filing complaints and notifications about interactions with AI.
  4. Conduct ongoing human rights risk assessments and establish protective measures.
  5. Allow authorities to block or pause certain AI apps if necessary.

The treaty covers the use of AI systems by public bodies, such as the NHS, and private companies operating in the jurisdictions of the parties. It does not cover activities related to national security, national defence matters or research and development, unless they could potentially interfere with human rights, democracy or the rule of law.

The UK government says the treaty will strengthen existing laws and measures, such as the Online Safety Act, and that it will work with regulators, devolved administrations and local authorities to implement the treaty’s requirements.

SEE: UK government announces £32m AI projects

It is the responsibility of the “Conference of the Parties”, a group composed of official representatives of the Parties to the Convention, to determine the scope of implementation of the treaty’s provisions and to make recommendations.

UK moves towards safe AI

The treaty states that it aims to regulate AI without stifling its progress and innovation. The UK government has sought to strike that balance in its own actions.

In some respects, the government has signalled that it will clamp down on AI developers. In July’s King’s Speech, it was announced that the government “will seek to establish appropriate regulations to impose requirements on those working to develop the most powerful AI models.”

This echoes Labour’s pre-election manifesto, which pledged to introduce “binding regulation for the handful of companies developing the most powerful AI models”. After the speech, Prime Minister Keir Starmer also told the Commons that his government would “harness the power of AI as we work to strengthen the security framework”.

SEE: Delaying UK AI adoption by five years could cost economy more than £150bn, Microsoft report finds

The UK also established the first national AI Safety Institute in November 2023, with the primary goals of assessing existing AI systems, conducting fundamental AI safety research, and sharing information with other domestic and international entities. Then, in April this year, the UK and US governments agreed to work together to develop safety tests for advanced AI models, implementing plans developed by their respective AI Safety Institutes.

On the other hand, the UK government has promised tech companies that the upcoming AI Bill will not be overly restrictive, and it appears to be in no hurry to introduce it. The bill was expected to be among the named legislation announced in the King’s Speech, but it was not included.