
(Student policy brief) Explainability in artificial intelligence

(source: NicoElNino/Shutterstock)

Authors: Janine Ecker, Paul Kleideinam, Claudia Leopardi, Anna Padiasek, Benjamin Saldich

The Department of Digitization, Governance and Sovereignty regularly publishes the best essays and articles written by Sciences Po students during their studies.

This policy brief was recognized as one of the best works produced in Prof. Suzanne Vergnolle’s spring 2024 course “Comparative Approach to Big Tech Regulation”, part of the “Digital, Emerging Technologies and Public Policy” major at the School of Public Affairs.


In an era where artificial intelligence (AI) will play an increasingly important role in our society, it is essential to maintain a level of human control over AI systems. Explainability – broadly defined as the degree to which humans can understand how AI systems function and reach their decisions – is a key element of this human control. Yet scientists, ethicists and policymakers have so far failed to converge on a common strategy for regulating explainability in the field of AI. This policy brief, developed by our European advisory team, synthesizes academic insights and international regulatory approaches to propose actionable recommendations for U.S. policymakers. Our goal is to strike a balance between ethical imperatives and practical considerations, ensuring transparency, accountability and public trust in AI technologies.

After reviewing the current understanding of transparency in “white box” and “black box” AI systems, the article examines how organizations and countries have attempted to define and regulate the explainability of AI, with a particular focus on the EU, China, and the United States. Three main policy approaches emerge from this analysis, and their strengths and limitations are considered.

Drawing inspiration from recent regulatory efforts in the EU, this paper recommends a balanced approach to AI explainability that regulates AI governance according to levels of risk, recognizing technical limitations while ensuring accountability and transparency. We propose four key policies for the U.S. Congress to consider when developing AI legislation:

  1. Implement a risk-based approach: Adopt a structured framework similar to the EU Artificial Intelligence Act to ensure consistency, transparency and proportionality in AI regulation.
  2. Mandate binding obligations for high-risk systems: Enforce transparency and a human-centric approach for high-risk AI systems, ensuring accountability and mitigating risk.
  3. Establish clear liability rules: Introduce rules that facilitate redress for those harmed by AI systems, balancing preventive measures with damage-response mechanisms.
  4. Establish an FTC task force: Create a dedicated task force within the FTC to oversee AI governance, ensure compliance, and support collaboration among stakeholders.

The article also highlights the complexity and evolving nature of the AI sector, which poses unique challenges in designing and implementing regulations focused on explainability. Obtaining credible explanations for AI decision-making remains a significant challenge that must be addressed by future research.