
Omdia research highlights global consensus on regulating high-risk AI scenarios

Analysis from Omdia’s new report, AI Regulation: Comparing Global Policies and Regulatory Frameworks, shows that several countries have already made efforts to regulate AI through public consultations, market reviews, global discussions, and draft regulations and policies. However, there is still a long way to go before many of these are finalized and implemented. Overall, some form of consensus is beginning to emerge on the need to regulate the use of AI in high-risk situations, such as healthcare settings.

“The implementation of new technologies often sparks debate, but the debate around artificial intelligence has developed particularly rapidly, with discussions quickly becoming mainstream among the general public,” said Sarah McBride, Chief Regulatory Analyst at Omdia. AI technologies are proving so controversial that even the AI companies themselves are calling for regulation to provide them with clear boundaries. “Some form of regulation is inevitable, but in the absence of guidance, companies are creating their own frameworks of standards for developing, assessing, and implementing responsible AI. However, without knowing exactly what regulations will look like in the future, there is a risk that these companies may make poor choices now that could be difficult to reverse later,” McBride suggests.

Singapore has already published its second national AI strategy, while the European Commission was the first regulator to publish a draft AI regulatory framework, in April 2021. The Commission has adopted a risk-based approach that imposes restrictions or bans on AI systems according to their level of security risk, while regulatory roadmaps in some other countries, such as the UK, envisage a sector-specific approach to AI regulation. Meanwhile, in the US, the initial approach to AI regulation is focusing on specific AI use cases, with the National Institute of Standards and Technology developing standards for the design, testing, and implementation of AI technologies. China has also made progress on its AI regulatory framework, prioritizing aspects such as generative AI and focusing on sovereignty over AI development, deployment, and security.

Seven key regulatory challenges for artificial intelligence

Omdia’s report identifies seven key challenges that regulators must address to ensure the many opportunities arising from the development of artificial intelligence are realized: safety, privacy, ethics, controllability, transparency and accountability, security, and copyright and intellectual property rights. Regulators have begun to address some of these issues, with most of the guidance issued to date focusing largely on the ethical and legal questions surrounding the implementation of AI. This process is expected to gain momentum in the coming year as regulators turn to sector-specific issues and to areas where AI applications may conflict with existing regulatory policies, such as data protection rules, copyright, outdated user consent methods, licensing or authorization agreements, and sector-specific regulations.

“The first step should be to identify sectors or use cases where AI adoption is being significantly held back by inappropriate legislation, and to modify the existing, or even outdated, regulations to avoid stifling innovation,” McBride said. Another challenge facing the sector is the potential conflict between areas where AI is regulated directly, such as under the EU Artificial Intelligence Act, and areas where it is also regulated through other existing and new legislation, such as the EU Digital Services Act (DSA) and Digital Markets Act (DMA).

“The broad nature of AI creates a problem for regulation and for protecting end users from harm. Not only does it blur the traditional definition of markets, which poses a challenge for enforcement; it also crosses administrative boundaries internationally. AI likewise poses a challenge for regulators in predicting when a particular outcome of an AI service will be harmful. This has led governments and regulators around the world to accelerate efforts to assess the level of regulatory involvement required,” McBride said.