
Protecting human rights online: technological regulation and artificial intelligence for good

What are the key trends towards enabling accountability and #AIforGood while protecting against adverse impacts on human rights? The EU Delegation organized an innovative event together with OHCHR, the Global Network Initiative (GNI) and Humane Intelligence to discuss this important issue in more detail.

To effectively address and prevent violations and abuses of human rights online, often facilitated by increasingly powerful artificial intelligence systems, many technology regulations have emerged to establish safeguards at every stage of the technology’s lifecycle. Companies are required to conduct human rights due diligence and risk assessments, and to meet related transparency and audit requirements for digital technologies, including artificial intelligence.

The event brought together over 70 experts from international organizations, diplomatic missions, private technology companies and non-governmental organizations dealing with the intersection of human rights and technology.

“It is through a multi-stakeholder approach that we can most effectively not only address the potential harms of these new technologies, but also ensure that they truly empower individuals. Today we heard how important it is to establish guardrails around artificial intelligence without stifling it. We do not have to choose between safety and innovation. They should go hand in hand! Scaling these technologies up will only be possible if society trusts artificial intelligence and other new technologies.”
– Ambassador Lotte Knudsen, Head of the EU Delegation

The EU’s Digital Services Act (DSA) builds on risk assessment, mitigation, audit and data transparency practices to hold large digital services accountable in a way that protects fundamental rights. Also guided by a risk-based approach, the recently adopted EU Artificial Intelligence Act, the world’s first comprehensive legal framework for artificial intelligence, establishes principles to support trustworthy AI by ensuring that AI systems respect fundamental rights, safety and ethics, and by addressing the risks posed by very powerful and influential AI models. Similar efforts have also intensified in other regions, including Latin America, where countries have begun to prepare their own AI regulations, and Africa, with the African Union Commission’s ongoing work on AI.

Ideally, this new regulatory framework will build on decades of voluntary practices – transparency reporting, human rights risk assessment and audits – designed to encourage responsible business conduct in line with the UN Guiding Principles on Business and Human Rights (UNGPs). However, these regulatory changes require the convergence of traditional audit and assessment processes with technical audits. For oversight and enforcement purposes, companies are now often required to share data and code, which allows auditors to evaluate algorithms and data sets. This is a promising step towards enabling accountability and AI for good, while safeguarding against adverse impacts on human rights.
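Purely as illustration of what one step of such a technical audit might look like in practice, the sketch below computes a simple demographic parity gap over a model’s decisions. It is a minimal sketch under stated assumptions: the function name, the sample records and the 0.2 tolerance are invented for this example, and neither the DSA nor the AI Act prescribes this particular metric.

```python
# Hypothetical audit step: given model decisions labelled with a protected
# attribute, compare approval rates across groups (demographic parity).
# All data and the 0.2 tolerance below are illustrative assumptions.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved) pairs, approved in {0, 1}.
    Returns (max rate - min rate, per-group approval rates)."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative records an auditor might derive from data a company shares.
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(sample)
print(f"approval rates by group: {rates}")
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.2:  # hypothetical tolerance, chosen only for this example
    print("flag for human rights review")
```

A real audit would of course involve far more than a single metric, but even this toy example shows why auditors need access to disaggregated data: the gap cannot be computed from aggregate approval rates alone.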

However, many questions and challenges remain about how these regulatory changes will be implemented, verified and enforced in practice in a way that protects fundamental human rights and complies with technical requirements. In particular, there is a lack of guidance on how companies and assessors should implement risk assessment and audit mechanisms under the UNGPs and how civil society and academia can most meaningfully engage in these processes.

The UN Human Rights B-Tech Project, together with BSR, GNI and Shift, has helped produce several papers analyzing and explaining how international frameworks on human rights and responsible business should guide the management of risks associated with generative AI. Further work is needed to understand how business and human rights practices can inform and integrate with AI-focused risk assessments in the context of regulations such as the DSA and the EU Artificial Intelligence Act, and to engage the technical community on the implications.

The following issues were discussed during the event:

  • What are the key global regulatory trends requiring technology companies to assess threats to human rights?
  • How can stakeholders (including engineers) encourage comparable AI risk assessment and auditing benchmarks?
  • What might appropriate AI audit methodologies look like, and what data is needed to conduct responsible AI audits?
  • What is the role of enforcement/oversight mechanisms?
  • How can civil society and academia engage most meaningfully in these processes?
  • How can AI risk assessments and audits be used by companies and external stakeholders to ensure accountability and catalyze change?

Speakers included:

  • Juha Heikkilä, Artificial Intelligence Adviser at the European Commission’s Directorate-General for Communications Networks, Content and Technology (CNECT)
  • Rumman Chowdhury, CEO of Humane Intelligence
  • Lene Wendland, Chief of Business and Human Rights, UN Human Rights (OHCHR)
  • Mariana Valente, Deputy Director of InternetLab, Brazil / Professor of Law, University of St. Gallen, and member of the Lawyers’ Committee for the Artificial Intelligence Law, Brazil
  • Alex Walden, Global Head of Human Rights at Google
  • Jason Pielemeier, Executive Director of the Global Network Initiative