
The best way to regulate AI may not be to specifically regulate it. Here’s why

The new wave of artificial intelligence (AI) brings both promises and threats.

By helping workers, it can increase productivity and real wages. By leveraging big, untapped data, it can improve outcomes in services, including retail, healthcare, and education.

Threats include deepfakes, privacy violations, irrevocable algorithmic decisions, intellectual property infringements, and massive job losses.

Both the risks and the potential rewards seem to be growing by the day. On Thursday, OpenAI released new models that it claims can reason, perform complex calculations and draw conclusions.

However, as a competition and consumer protection specialist, I have come to the conclusion that calls for new, AI-specific regulation are largely unfounded.

Most AI applications are already regulated

A Senate committee is due to report soon on the opportunities and impacts of AI adoption. I helped develop the Productivity Commission’s submission to that inquiry.

The government is running a separate consultation on mandatory guardrails for AI in high-risk settings. These would act as a kind of checklist of what developers should consider, alongside a voluntary safety standard.

Here is my thinking: most potential applications of AI are already covered by existing rules and regulations designed to protect consumers, safeguard privacy and outlaw discrimination.

These laws are far from perfect. But where they fall short, the better response is to fix or extend them rather than to introduce special extra rules for AI.

AI certainly poses challenges for existing regulation – for example, by making it easier to mislead consumers, or by enabling algorithms that help companies fix prices.

But the key point is that laws already govern these issues, and regulators already have experience enforcing them.

The best approach is to make existing rules work

One of Australia’s great advantages is the strength and experience of its regulators, among them the Australian Competition and Consumer Commission, the Australian Communications and Media Authority, the Office of the Australian Information Commissioner, the Australian Securities and Investments Commission and the Australian Energy Regulator.

Their job should be to work out how far existing laws cover AI, to assess the ways AI might breach those laws, and to bring test cases that show clearly how those laws apply.

This approach would help build trust in AI, as consumers see they are already protected, while also giving businesses clarity about their obligations.

AI may be new, but the long-standing consensus on what is and is not acceptable behavior has not changed much.

Some rules will have to be changed

In some situations, existing regulation will need to be amended or extended to capture conduct facilitated by AI. Approval processes for vehicles, machinery and medical devices are among those that will increasingly need to take account of AI.


And in some cases, new regulations will be needed. But that should be where we end, not where we start. Trying to regulate AI because it’s AI will be ineffective at best. At worst, it will stifle the development of socially desirable uses of AI.

Many AI applications will pose little risk, if any. Where harm is possible, it needs to be weighed against the potential benefits of the application. And both risks and benefits need to be judged against the real-world, human-based alternatives, which are themselves far from risk-free.

New rules will only be needed if existing rules – even if clarified, amended or extended – prove insufficient.

Where new rules are needed, they should be technology-neutral wherever possible. Rules written for specific technologies are likely to become outdated quickly.

Last-mover advantage

Finally, there is much to be said for Australia becoming an international “regulation taker”. Other jurisdictions, such as the European Union, are leading the way in designing AI regulation.

Product makers around the world, including in Australia, will need to comply with the new rules if they want access to the EU and other major markets.

If Australia developed its own specific AI rules, developers could ignore our relatively small market and go elsewhere.

This means that in those limited situations where AI regulation is necessary, existing foreign regulations should be the starting point.

Being a late or last mover has its advantages. This does not mean Australia cannot help shape international standards. It simply means we should design those standards with other countries in international forums, rather than going it alone.

The landscape is still evolving. Our goal should be to give ourselves the best chance of maximizing AI gains while providing safety nets to protect us from negative consequences. Our existing rules, not new AI-specific rules, should be the starting point.