
The EU’s Artificial Intelligence Act is about to become law: the compliance steps you need to know

The EU Artificial Intelligence Act has ushered in a new era of AI governance. After three years of deliberation over how to regulate artificial intelligence so as to protect citizens, businesses and government agencies from potential harms, the bill will soon become law, setting a new standard for AI policy around the world.

IBM welcomed the Act and its risk-based approach to regulating artificial intelligence. That approach is consistent with our work on AI ethics, which holds that openness, transparency and accountability are hallmarks of best practice in AI deployment.

Although the law will be published shortly in the Official Journal of the European Union and will enter into force 20 days later, it will take up to three years for all aspects of the legislation to take full effect. During this period, policymakers and businesses share a collective responsibility for the successful implementation of the Act. That starts with ensuring compliance, then encouraging the adoption of AI and, ultimately, driving innovation across Europe.

Preparing for the regulation

The main goal of the Act is to make the development and use of artificial intelligence safer and more transparent. By providing guidance and safeguards for the developers and deployers of AI, the Act aims to increase trust and confidence in the use of AI technologies in Europe. This clarity will ease regulatory compliance and help organizations make better-informed decisions about their AI investments and strategies. While the Act provides for a phased transition and implementation period, IBM recommends that all customers take AI governance seriously and prepare for compliance today.

It is crucial to understand the Act’s risk-based approach. The Act sorts AI systems into four tiers according to the level of risk their use poses: “unacceptable”, “high”, “limited” and “minimal”. AI practices that pose unacceptable risks to society – such as the use of deceptive or manipulative techniques, or social scoring – are banned outright. High-risk use cases face stricter obligations to mitigate risks such as safety and bias, across all sectors of the economy from critical-infrastructure management to employment. Notably, generative AI is not itself classified as high-risk, although specific requirements apply to its use.
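To make the tiering concrete, here is a minimal sketch in Python. The tier names come from the Act, but the enum, the example mapping and the helper function are hypothetical illustrations of how an organization might label its own systems – nothing here is drawn from the Act’s annexes or from any IBM tooling.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g. social scoring)
    HIGH = "high"                  # strict obligations (e.g. employment tools)
    LIMITED = "limited"            # lighter transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping of example use cases to tiers, for illustration only;
# the Act's own text and annexes are the authoritative source.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "manipulative_techniques": RiskTier.UNACCEPTABLE,
    "critical_infrastructure_management": RiskTier.HIGH,
    "employment_screening": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def is_prohibited(use_case: str) -> bool:
    """True when the example use case falls in the banned tier."""
    return EXAMPLE_USE_CASES.get(use_case) is RiskTier.UNACCEPTABLE

if __name__ == "__main__":
    for name, tier in EXAMPLE_USE_CASES.items():
        flag = " (prohibited)" if is_prohibited(name) else ""
        print(f"{name}: {tier.value}{flag}")
```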

Basic steps to ensure compliance

To achieve compliance, organizations must take three key steps:

  • First, organizations need to conduct a comprehensive inventory of their AI applications. This gives a clear picture of how AI is already being used across the organization.
  • Second, companies should conduct a risk assessment to determine their level of responsibility under the Act and to ensure compliance with essential requirements such as human oversight, privacy and accountability (a minimal sketch of these first two steps follows this list).
  • Third, conformity with the technical standards envisioned by the Act will be the next key step in demonstrating compliance. European standardization organizations are currently developing these standards, and more details will become available in the coming months.
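To make the first two steps concrete, here is a minimal Python sketch of an AI inventory with a simple assessment step. Everything in it – the AISystemRecord class, the assess helper and the control names – is a hypothetical illustration of one way an organization might structure this exercise, not a format prescribed by the Act or by IBM.

```python
from dataclasses import dataclass, field

# Illustrative baseline controls per tier; the names echo the essential
# requirements mentioned above but are not quoted from the Act's text.
TIER_CONTROLS = {
    "high": ["human oversight", "privacy review", "accountability log"],
    "limited": ["transparency notice"],
    "minimal": [],
}

@dataclass
class AISystemRecord:
    """One entry in the organization-wide AI inventory (step 1)."""
    name: str
    owner: str             # accountable team or individual
    purpose: str           # what the system is used for
    risk_tier: str = "unassessed"
    controls: list = field(default_factory=list)

def assess(record: AISystemRecord, tier: str) -> AISystemRecord:
    """Step 2: record the assessed tier and the baseline controls it implies."""
    record.risk_tier = tier
    record.controls = list(TIER_CONTROLS.get(tier, []))
    return record

if __name__ == "__main__":
    inventory = [
        AISystemRecord("resume-screener", owner="HR", purpose="employment screening"),
        AISystemRecord("spam-filter", owner="IT", purpose="e-mail filtering"),
    ]
    assess(inventory[0], "high")
    assess(inventory[1], "minimal")
    for rec in inventory:
        print(f"{rec.name}: tier={rec.risk_tier}, controls={rec.controls}")
```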

Reaping the benefits of responsible AI

Regulatory compliance will undoubtedly require an upfront increase in investment. However, companies that make every effort to ensure their AI solutions are governed responsibly will be able to adapt quickly to changing regulations while building more trustworthy AI and gaining a competitive advantage.

In parallel with compliance efforts, organizations should focus on strengthening their AI governance strategies. This includes establishing cross-company governance tooling and building automated workflows to ensure consistency and transparency across departments. That’s why IBM released watsonx.governance, an end-to-end platform that combines trusted, pre-trained AI models with advanced governance controls to help companies innovate with confidence in their regulatory compliance.

Finally, establishing an AI ethics board and defining ethical guidelines for the use of AI are important steps. They ensure that ethical considerations are built into AI development and deployment, which fosters trust and reduces reputational risk.

Monitoring the evolving AI policy landscape

This is not the end of the road for the EU’s AI rulebook. Companies, governments and other organizations whose activities fall within its scope will need to pay close attention to developments in the coming months. For example, the EU is expected to publish codes of practice on transparency obligations and general-purpose AI models, provide templates for fundamental-rights impact assessments, set out information requirements covering the training data of foundation models, issue further guidance on the definition of high-risk AI and establish governance bodies. Businesses that keep pace with these changes will be well positioned to ensure compliance and future-proof themselves for further innovation and regulation.

We have known for years that artificial intelligence will touch every aspect of our lives. The EU Artificial Intelligence Act is a significant step towards balancing those impacts with responsible AI governance. By prioritizing compliance and corporate responsibility, organizations can take advantage of regulatory clarity, build trust in AI systems, and foster a culture of open and responsible innovation.

–Christina Montgomery, Chief Privacy and Trust Officer at IBM

–Jean-Marc Leclerc, EU Director, IBM Government and Regulatory Affairs