
New Zealand develops AI regulation and digital identity strategy with Europe in mind

As governments around the world come to terms with the need for AI regulation and a digital identity strategy, many are looking to the EU as a model. For New Zealand, the EU AI Act provides a comprehensive regulatory framework against which to assess its own regulatory limitations and rights protections, and the EU Digital Identity (EUDI) Wallet programme offers a concrete roadmap for implementing digital ID as part of a broader digital transformation of the economy and social services.

EU Artificial Intelligence Act offers “a solid foundation for regulating human-centric AI”

In an article for Newsroom, Dr Nessa Lynch, a researcher specialising in the regulation of emerging technologies, argues that because the rapid development of AI has so far been driven by commercial rather than public interests, it is time to introduce legal and policy safeguards to ensure that risks to privacy and public data do not spiral out of control.

“The EU is the world’s largest single market and a global standard-setter,” Lynch writes. “And with the closer links being created through the New Zealand-EU Free Trade Agreement, New Zealand should monitor regulatory developments closely.”

She says the EU AI Act is best described as “product safety legislation that aims to protect people from harm and promote the trustworthy and safe use of AI.” Its risk-based structure imposes obligations of varying stringency, with the most demanding requirements reserved for applications deemed high risk.

It will also inform global thinking about which AI applications pose unacceptable risks and should be banned outright. “The systems the legislation would ban include social scoring, image scraping, and most types of emotion recognition, biometric categorisation, and predictive policing applications.”

Still, Lynch notes controversial exceptions for law enforcement, particularly regarding the use of facial recognition for real-time biometric surveillance, as well as exemptions for national security, defence, and military purposes.

As always, the balancing act is between regulation and innovation. Innovation too often means pushing the boundaries of the law, but regulation that is too tight can be stifling. Lynch says that, broadly speaking, the stabilising element is trust. “The EU’s approach is based on the premise that where consumers have confidence in the systems and certainty about the quality, they are more likely to use AI commercially and engage with public services,” she writes.

For New Zealand, that means there’s work to do. Lynch says significant gaps in its regulatory system could undermine innovation and public trust in AI. She notes that New Zealand has relatively weak privacy and data protection laws, and little clarity about when AI is being used or how misuse can be prevented.

“Without a human-centric, trustworthy and robust regulatory framework, the level of acceptance and trust in AI will likely suffer,” Lynch says.

Digital identity “just one aspect of the broader digital economy”

As in the EU, New Zealand’s AI regulation is evolving alongside the country’s digital identity strategy. This week, the government launched the New Zealand Trust Framework Authority, which will accredit the organisations vetted to provide digital identity services.

Writing for The Conversation, Lynch’s colleague at Te Herenga Waka – Victoria University of Wellington, computer science professor Markus Luczak-Roesch, argues that the transformative potential of digital ID and digital credentials is real – but it must be weighed against the “principles on which our digital economy is based”, which shape the much broader context in which digital ID will be introduced.

“While digital ID is key to access and trust in digital services, it must be protected and managed in accordance with our values, including personal, community, and national perspectives,” Luczak-Roesch writes. “Digital ID is just one aspect of the broader digital economy. We need to think more systemically about how we develop new digital services and who develops them.”

Big tech corporations, he notes, are probably not the best entities to entrust all our personal data to. Citing a recent report on how digital ID is linked to data governance, national data infrastructure and national values, he points to models provided by Estonia – long a world leader in digitising public services and home to a robust national data infrastructure – and Norway, where the national AI Innovation Research Centre (NorwAI) “develops and maintains a set of Norwegian large language models based on Norwegian data and values”.

In summary, says Luczak-Roesch, whether it’s AI or digital identity, it’s important to try to minimise “the risk of creating technology that unwittingly imports components that may have been developed unethically or that embed values that are incompatible with the local context.”

Article Topics

AI | biometrics | digital economy | digital identity | facial recognition | New Zealand | regulation
