
Can regulation of artificial intelligence help build trust in technology?

Some advocate voluntary codes of conduct as a way to create a climate of trustworthy AI, arguing that it is in AI developers’ own interest to provide systems that protect the health, safety and fundamental rights of users.

However, voluntary codes of conduct lack accountability for ensuring that the potential negative impacts of AI systems are mitigated. This inherent weakness has been acknowledged by the signatories of the EU-US voluntary code of conduct themselves, who described it as a temporary, stop-gap measure while awaiting the adoption and implementation of European legislation.

Balancing innovation and harm

There is also considerable debate about the nature and scope of the AI regulation needed to govern the responsible use of AI systems. Should regulators favor the pro-innovation approach of the Artificial Intelligence (Regulation) Bill proposed in the British House of Lords? The principle embodied in the bill is not to create an “overarching, one-size-fits-all regulatory body” but rather a principles-based approach built on “safety, security and robustness; appropriate transparency and explainability.”

At the other end of the spectrum is the proposed EU Artificial Intelligence Act, which recently received approval from the European Parliament. The act is expected to be the first and most comprehensive artificial intelligence regulation in the world. It sets out risk-based, prescriptive rules that impose obligations on different tiers of AI systems, covering prohibited, high-risk and minimal-risk AI systems.

Although the ambition of the EU act is to balance innovation and harm, some commentators argue that its scope is excessive and that its complexity makes it impractical to enforce. Moreover, regulation often lags behind innovation: witness the time elapsed between the law’s initial introduction in 2021 and its final ratification in 2024. Meanwhile, rapid advances in artificial intelligence, especially large language models (LLMs), prolonged negotiations in the trilogue process and led to last-minute amendments.

The United States has seen a patchwork of state and federal efforts to address AI, particularly in light of the rise of LLMs. At the federal level, the focus has been on promoting best practices in AI governance, such as those outlined in the Blueprint for an “AI Bill of Rights” and the executive order issued by the Biden administration. These initiatives, although a step in the right direction, lack concrete solutions. Proposals have been floated to strengthen AI enforcement, including more stringent AI safety standards such as those set out in the NIST AI Risk Management Framework. However, given the current political climate, it is unlikely that the United States will adopt significant AI regulation at the federal level.

Finally, there is a push for globally harmonized AI regulation, as demonstrated by the G7 Hiroshima Process on AI, which aims to foster an environment for the “common good throughout the world.” Given AI’s transformative and disruptive impact around the world, the pursuit of legal certainty governing the use of AI systems is a welcome ambition. According to Stanford University’s 2023 AI Index, 127 countries have proposed legislation focused on AI regulation.

Globally harmonized AI regulations would remove trade barriers, support innovation and create a framework for economic opportunity. If, on the other hand, AI is treated as a source of geopolitical advantage, more insular regulatory frameworks are likely to become the norm, potentially resulting in a zero-sum, “winner-takes-all” game.

Companies will not wait for AI regulation

Companies will not wait patiently for regulators to tell them how to manage the development, use and performance monitoring of AI. They have a self-interested motivation to get products to market as quickly as possible and gain a first-mover advantage. The prevailing mentality is “ship now, fix later,” and it has already produced evidence of harm to consumers’ health, safety and fundamental rights. Consumers are also at a distinct disadvantage when it comes to exercising their rights: AI systems are opaque, unchallengeable and unregulated, as Cathy O’Neil, a leading advocate of responsible AI, noted in her groundbreaking book Weapons of Math Destruction.

Leaving AI regulation in the hands of developers is clearly not the solution. The bottom line is that operationalizing trustworthy AI requires a “whole of society” effort combining voluntary codes of AI ethics best practice, AI standards and risk management frameworks, reinforced by practical regulation that supports innovation while protecting against its adverse effects.

Moreover, there is a need for independent audits of AI systems, similar to the financial audits conducted by certified auditors, and there is growing momentum toward such independent oversight by certified AI auditors. Like compliance and financial auditors, subject-matter experts who can objectively assess AI systems will become invaluable in ensuring the responsible use of AI in business. ForHumanity, a non-profit public charity, is at the forefront of this effort, supporting individuals and organizations in developing regulator-approved audit criteria and in becoming certified to conduct independent audits.

Operationalizing trustworthy AI

In addition to common-sense AI governance, AI risk management, regulation and independent audit criteria, organizations can implement measures that mitigate the risks associated with AI while still reaping its benefits.

First, much of the concern about LLMs stems from their susceptibility to hallucinations, their significant environmental footprint, and the vast amounts of data they require.

An effective and proven alternative is to deploy “purpose-built” AI platforms known as small language models (SLMs), which are likely to become the focus of innovation leaders as companies strive to deliver positive business outcomes while mitigating compliance risks.

By adopting SLMs, companies can reduce the risk of the harmful inaccuracies and biases that can permeate models at scale, enabling more secure and ethical outcomes. This strategy also builds familiarity and dialogue between business and development professionals as AI development becomes more closely tied to core business needs and processes. Such collaboration bodes well for a purposeful AI strategy that weighs ethics, compliance and efficiency as much as comprehensiveness.

Second, process mining technologies have been shown to help organizations mitigate compliance risk by providing detailed insight into how compliance processes actually operate, revealing potential vulnerabilities and identifying their root causes, thereby demonstrating the auditability and traceability of compliance with mandatory AI risk management frameworks.

Business benefits

Implementing regulatory compliance best practices that enable organizations to proactively navigate complex and rapidly changing regulatory frameworks is just one clear and compelling business benefit. Perhaps more importantly, such investments foster a culture of trustworthy AI both internally and externally, among customers and consumers. Effective AI governance and best practices build greater brand loyalty and repeat business. That is why implementing trustworthy AI best practices and governance frameworks is simply good business: it inspires trust and creates a lasting competitive advantage.

Describing trust as “an extraordinary force that pulls you across the chasm between certainty and uncertainty, a bridge between the known and the unknown,” author and trust expert Rachel Botsman reminds us that, as AI innovation continues to expand, closing this gap through regulation will become even more important to ensuring trust in the technology.