
How the Rush to Regulate AI Could Bring New Cybersecurity Challenges


Since the emergence of generative AI, the privacy and cybersecurity challenges it could create have become a major concern. As a result, governments and industry experts are hotly debating how to regulate the AI industry.

So where are we headed, and how is the intersection of AI and cybersecurity likely to play out? Judging by the lessons learned from efforts to regulate cybersecurity over the past few decades, achieving something similar for AI is a daunting prospect. But change is necessary if we are to create a regulatory framework that protects against the negative potential of AI without blocking the positive applications it already provides.

Part of the challenge is that the existing compliance environment is becoming increasingly complex. For example, for UK multinationals, the work required to comply with regulations such as GDPR, PSN, DORA and NIS, to name just a few, is significant. This does not include customer or government requirements to comply with information standards such as ISO 27001, ISO 22301, ISO 9001 and Cyber Essentials.

Add to that the policies put in place by individual companies, such as technology vendors and their customers conducting cybersecurity audits of each other. In both cases, organizations have specific and sometimes unique questions they want to ask, and some require evidence and proof. As a result, the overall compliance task becomes even more nuanced and complex, a challenge that is only likely to intensify.

Needless to say, these rules and regulations are extremely important to ensure minimum standards of performance and protect the rights of individuals and businesses. However, the lack of international coordination and uniformity of approach threatens to make the task of compliance untenable.

New rules at home and abroad

Take, for example, the EU Artificial Intelligence Act, which was passed in March this year and aims to ensure "safety and compliance with fundamental rights, while boosting innovation." It covers a wide range of important cybersecurity points, from restrictions on the use of biometric identification systems by law enforcement and a ban on the use of social scoring and AI to manipulate or exploit user vulnerabilities, to the rights of consumers to file complaints and receive meaningful explanations.

Violations of the regulations can result in significant financial penalties of up to €35 million or 7% of global annual turnover for prohibited AI applications, €15 million or 3% of turnover for violating obligations under the AI Act and €7.5 million or 1.5% of turnover for providing false information.

It also seeks to address cybersecurity risks faced by AI system developers. Article 15 states that “high-risk AI systems must be resistant to attempts by unauthorized third parties to alter their use, output, or performance by exploiting vulnerabilities in the system.”

While this also applies to UK organisations trading in the EU, there are also moves to pass additional legislation that would localise regulation even further. In February, the UK government published its response to a White Paper consultation process aimed at guiding the regulation of AI in the country – including cybersecurity. How this will pan out remains to be seen and may depend on the election outcome, but regardless of who is in power, further regulation is inevitable. Elsewhere, lawmakers are also busy preparing their own approaches to how AI should be governed, and from the US and Canada to China, Japan and India, new regulations are emerging as part of a rapidly evolving environment.

Regulatory challenges

As these various local and regional regulations come into effect, the level of complexity for organizations building, using, or securing AI technologies also increases. The practical difficulties are considerable, not least because AI decision-making processes are opaque, making it difficult to explain or audit how decisions are reached, even though such explainability is already a requirement in some regulatory environments.

Some also fear that strict AI regulations could stifle innovation, particularly for smaller companies and open-source initiatives, while larger entities may support regulation precisely because of its anti-competitive effect. There has also been speculation that AI startups could move to countries with fewer regulatory requirements in such circumstances, potentially leading to a "race to the bottom" in terms of regulatory standards and the security risks this could pose.

Add to this the fact that AI is resource-intensive, which raises concerns about sustainability and energy consumption and creates the potential for further regulatory oversight, and the list may seem long. Ultimately, however, one of the most important requirements for effective AI regulation is that governments should, wherever possible, work together to develop uniform and consistent rules. For example, existing privacy laws and considerations vary by region, but the basic principles of security should remain the same.

If these issues are not addressed, there is a high probability that organizations will continue to break the rules. Equally worrying is the prospect that gaps in AI-related cybersecurity will open up loopholes that attackers will be more than happy to exploit.

Richard Starnes is Chief Information Security Officer at Six Degrees.