
What New Zealand can learn from EU regulations

Face recognition. Credit: Pixabay/CC0 Public Domain

AI now underpins many of our public and private activities, including work, education, travel and leisure. Yet its rapid development and increased use have been driven primarily by commercial interests rather than prudent policy or legislative choices.

As a result, there is growing concern about the impact on individual and collective human rights, such as privacy and freedom of speech, as well as intellectual property.

Recently, these concerns have translated into a wave of global, regional and national regulatory initiatives. The European Union’s Artificial Intelligence Act, due to come into force next month, is groundbreaking in its scope and scale.

The Act expands existing global and national initiatives (such as Responsible AI Charters and AI Ethics Statements) with a comprehensive regulatory framework, including a system of enforcement and penalties.

Some New Zealand-based businesses may be subject to specific compliance requirements (if their product is available on the EU market or affects EU citizens) but the broader impact of the Act on global standards will be significant.

The EU is the world’s largest single market and a global standard-setter. Given the closer links being created through the New Zealand-EU Free Trade Agreement, New Zealand should monitor regulatory developments closely.

The EU AI Act can be seen as product safety legislation that aims to protect people from harm and promote the trustworthy and safe use of AI.

Its basic structure is a risk-based framework for AI, with requirements based on risk levels. High-risk AI systems (for example, those used in employment, education, and critical infrastructure) will be subject to a compliance assessment, where the provider must demonstrate compliance with requirements such as transparency and cybersecurity.

Most high-risk systems will need to be registered in a public database. Governance structures are being established at the EU level and within member states to monitor compliance, set implementation standards and provide expert advice.

Global interest has been sparked by decisions to ban certain types of AI (via the categorization of “unacceptable risk” in the Act). These decisions are likely to shape the evolving global discussion on whether some AI systems should be banned altogether as incompatible with fundamental human rights and dignity.

The Act prohibits the use of, among other things, social scoring, image scraping, and most types of emotion recognition, biometric categorization, and predictive policing applications.

However, despite concerted efforts by civil rights groups, the regulation has not led to an outright ban on real-time remote biometric surveillance – the most high-profile example being live facial recognition technology.

While this type of surveillance falls into the category of "unacceptable risk," there are important exceptions for law enforcement purposes, as detailed in research published in the journal Laws. In addition, national security, defense, and military uses are outside the scope of the Act altogether. This means the technology will continue to be used.

There is also a serious risk that, by defining the cases in which real-time remote biometric surveillance is justified, the EU could be seen as endorsing the technology as acceptable. That could lead to its increased use in national jurisdictions without the necessary community buy-in, or without engagement with the groups most affected.

In the global race to host and develop technology hubs, there is considerable debate about what regulations best promote AI innovation. Some argue the EU’s approach could stifle innovation, but others would say certainty and clear lines provide a solid basis for investment and innovation.

In general, the EU approach rests on the assumption that if consumers trust AI systems and are confident of their quality, they will be more willing to use AI in both commercial settings and public services.

So what can New Zealand learn from the EU?

New Zealand has serious regulatory gaps that could stifle innovation and undermine public trust in AI.

The privacy and data protection regime is relatively weak. Individuals and communities lack accessible ways to learn when AI is being used, or to raise complaints about the impact of AI and similar technologies on their rights and interests. Robust enforcement and penalty mechanisms are missing, and the legal framework for state oversight is outdated.

Without a people-centric, trustworthy and robust regulatory framework, adoption and trust in AI will likely suffer. While Aotearoa New Zealand has a unique social and cultural context requiring a tailored approach, the concepts and framework in the EU AI Act provide a solid foundation for people-centric regulation of AI.

More information:
Nessa Lynch, Facial Recognition Technology in Policing and Security Services – Case Studies in Regulation, Laws (2024). DOI: 10.3390/laws13030035

Brought to you by Victoria University of Wellington

Citation: Reining in AI: What New Zealand Can Learn from EU Regulation (2024, July 12) retrieved July 12, 2024, from https://techxplore.com/news/2024-07-reining-ai-nz-eu.html

This document is subject to copyright. Apart from any fair use for private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.