
Business reporter – Risk management

EU rules on artificial intelligence have finally been approved. Nima Montazeri of Liberis argues that there is still much to consider if the regulations are not to stifle innovation

As widely expected, the European Parliament has approved the world’s first comprehensive framework for mitigating the potential risks of artificial intelligence (AI).

The Artificial Intelligence Act is expected to become law in May or June 2024, once the final formalities have been completed. The rules will then come into force in stages, with EU member states obliged to outlaw prohibited AI systems within six months of the rules entering the statute books.

The Act works by classifying products according to risk and applying controls accordingly: the greater the perceived risk, the stricter the rules. Businesses operating across Europe will undoubtedly now be busy considering how best to adapt to the incoming regulations.

From a financial services perspective, it is worth noting that the Act makes provision for the risks posed by generative AI tools and chatbots, which embedded finance providers increasingly use to improve user experience and customer service levels. These rules will require the developers of general-purpose AI systems to be more transparent about the data used to train their models, to ensure full compliance with EU copyright law.

The ultimate goal is to make technology more human-centric, and the new law should be seen as a starting point for new technology-based governance, says MEP Dragos Tudorache.

It will be interesting to see how other jurisdictions respond. The UK, for example, has committed to researching and understanding the risks of AI before introducing legislation. China has introduced a patchwork of regulations and guidelines in recent years, while US President Joe Biden has issued an executive order requiring AI developers to share safety test results with the US government.

A cautious welcome

The Artificial Intelligence Act is a landmark moment in the application of artificial intelligence across sectors, including the financial services industry, and demonstrates the EU’s commitment to addressing the complex ethical, social and economic implications of artificial intelligence. It recognizes the undeniable need to set boundaries to protect individual rights without stifling technological progress.

The Act’s requirement to incorporate human accountability and oversight mechanisms into AI processes is also commendable. This will help ensure that emerging AI technologies support, rather than replace, human decision-making, making AI a tool through which financial services providers can augment the capabilities of their human talent.

However, despite its advantages, the act is not without flaws.

One of the main concerns is the underlying assumption that AI is inherently dangerous. While it is wise to approach new technologies with caution, this presumption risks stifling innovation by imposing overly restrictive measures on the development and application of artificial intelligence.

The technology sector is a significant growth area for the EU bloc, and regulations must be carefully crafted to ensure they do not inadvertently limit the potential for innovation and economic expansion.

The Act is normative, not adaptive, and may impede the dynamic evolution of artificial intelligence technology. The rapid pace of technological innovation requires a regulatory approach that can adapt to new changes and challenges, ensuring that regulation remains relevant and effective without hindering progress.

Stifling competition

Another significant concern is that early and stringent regulation tends to favor incumbents with the resources to navigate a complex legal environment.

These entities can afford the legal and technical expertise required to ensure compliance, creating barriers to entry for start-ups and smaller companies. That risks choking off competition and innovation, since smaller players are essential to driving technological progress and diversification in the market.

While we should all welcome the EU’s Artificial Intelligence Act for its attempt to regulate the ethical use of AI, we also need to recognize and address concerns about its potential to hinder technological innovation and favor established companies.

It is imperative that policymakers continue the regulatory journey now and address these concerns, creating an environment in which AI can be developed and applied ethically and effectively, without limiting the dynamic innovation that characterizes the technology sector.

To use Tudorache’s language, the Artificial Intelligence Act is indeed a starting point for new governance, but the next steps require further thought if we are to strike the right balance between regulation and innovation.


Nima Montazeri is the Chief Product Officer at Liberis
