
Timeline of the EU Artificial Intelligence Act (and a look into the future) | Mitratech Holdings, Inc

The EU’s Artificial Intelligence Act marks a new milestone in the AI governance landscape, and businesses are taking notice.

In March 2024, the EU adopted new regulations on artificial intelligence in the EU Artificial Intelligence Act, which had been in development for years. Let’s start with a quick look at the timeline:

While some parts are already enforced, other aspects have a longer tail before companies will have to comply. Let’s dive in.

How does the EU Artificial Intelligence Act work?

The EU Artificial Intelligence Act takes a risk-based approach to AI: regulation focuses on how a company uses the technology and for what purposes, rather than restricting the technology itself. Some uses, however, are deemed to pose “unacceptable” risk and are banned outright. The Act prohibits AI systems that categorize people based on sensitive characteristics such as political views or sexual orientation, although there is no blanket ban on law enforcement’s use of sensitive biometric information.

“High-risk” AI technologies will receive greater scrutiny in the form of “risk mitigation systems, high-quality data sets, activity logging, detailed documentation, clear user information, human oversight” and more. This tier covers AI systems that pose a risk to the health, safety, or fundamental rights of individuals, such as CV-screening tools, credit-scoring tools, and remote biometric identification systems. Limited-risk AI systems must meet certain transparency requirements, referred to as “specific transparency risks,” which are managed through appropriate labeling. In other words, when you talk to an AI bot, the company is responsible for identifying the bot as such. The EU has concluded that most AI technologies pose minimal or no risk, and these will not be subject to additional obligations.

How is the EU Artificial Intelligence Act enforced?

The EU Artificial Intelligence Act will be enforced both at Member State level and through the European AI Office. The Act is not fully enforceable immediately; its provisions take effect in stages. For example, the obligations for high-risk AI systems become fully effective 36 months after the Act enters into force.

As with GDPR non-compliance, breaches of the EU Artificial Intelligence Act carry stiff penalties and fines. These range from EUR 7.5 million, or 1.5% of global annual turnover, up to EUR 35 million, or 7% of turnover, whichever is higher in each tier. Beyond regulatory penalties, companies that fail to comply with the Act may face civil liability and reputational damage. Citizens have the right to submit complaints about AI systems and to “receive explanations about decisions based on high-risk AI systems that affect their rights.” How effective those complaints are will depend on citizens’ AI literacy and their ability to recognize when these technologies have harmed them.
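To make the penalty arithmetic concrete, here is a minimal, illustrative sketch of the “whichever is higher” rule the Act applies to fine caps for undertakings. The function name and the example turnover figure are hypothetical; the tier amounts mirror the range quoted above.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Upper bound of a fine tier: the greater of the fixed cap
    or the stated percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# A company with EUR 2 billion in global turnover, top tier (EUR 35M or 7%):
# 7% of EUR 2B is EUR 140M, which exceeds the EUR 35M fixed cap.
top_tier = max_fine(35_000_000, 0.07, 2_000_000_000)
print(f"Top-tier exposure: EUR {top_tier:,.0f}")  # EUR 140,000,000
```

For smaller companies the fixed cap dominates instead: at EUR 100 million turnover, 7% is only EUR 7 million, so the EUR 35 million figure sets the ceiling.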

How are companies preparing for the EU Artificial Intelligence Act?

Before you can govern AI, you first need to understand what you are governing. Strategic companies preparing for the EU AI Act are prioritizing a comprehensive risk inventory: a central location where all AI-enabled technologies can be tracked and routinely assessed for risk.

Ask yourself the following questions:

  1. Risk identification – Have you cataloged all AI systems used or being developed in your organization and documented their goals and potential risks?
  2. Risk assessment – Do you know whether your AI technologies are subject to EU unacceptable, high, limited or minimal risk requirements?
  3. Artificial Intelligence Validation – Is there a formal system for validating AI applications before they are used in your company?
  4. Artificial Intelligence Overview – Are all stakeholders aware of the use cases, existing security measures, and key risks associated with deploying AI applications?
  5. AI Risk Mitigation – Have you formalized a system for establishing appropriate controls, in some cases increasing documentation, in others increasing human oversight?
  6. Live monitoring – Can you continuously monitor changes to AI regulations, as well as changes to your internal AI systems, to ensure your business remains compliant with the EU AI Act?
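The six steps above can be sketched as a minimal risk inventory in code. This is a hypothetical illustration only; the class names, tier labels, and checks are our own, not prescribed by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations: oversight, logging, docs
    LIMITED = "limited"            # transparency/labeling duties
    MINIMAL = "minimal"            # no additional obligations

@dataclass
class AISystem:
    name: str
    purpose: str                   # step 1: document goals
    tier: RiskTier                 # step 2: classify against EU risk tiers
    validated: bool = False        # step 3: formal validation status
    controls: list = field(default_factory=list)  # step 5: mitigations in place

@dataclass
class RiskInventory:
    systems: list = field(default_factory=list)

    def register(self, system: AISystem) -> None:
        # Step 1: catalog every AI system in use or in development.
        self.systems.append(system)

    def needing_attention(self) -> list:
        # Steps 2-6: surface high-risk systems lacking controls,
        # plus anything not yet formally validated.
        return [s for s in self.systems
                if (s.tier is RiskTier.HIGH and not s.controls) or not s.validated]

inventory = RiskInventory()
inventory.register(AISystem("cv-screener", "resume ranking", RiskTier.HIGH))
inventory.register(AISystem("chat-labeler", "bot disclosure", RiskTier.LIMITED,
                            validated=True, controls=["AI label on replies"]))
print([s.name for s in inventory.needing_attention()])  # ['cv-screener']
```

In practice the inventory would live in a governance platform rather than a script, but the shape is the same: one record per AI system, a risk tier, and enough status fields to drive the validation and monitoring questions above.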

Learn more about the Act’s requirements and how implementing strong compliance measures will not only help you avoid penalties, but also increase the trust and reliability of your AI systems.
