Navigating the hype and fair regulation cycles around AI

When OpenAI launched ChatGPT in November 2022, it sparked extreme reactions: “Oh my God! That’s amazing – expand it!” versus “Oh no, that’s terrible – ban it.”

Since then, the hype surrounding artificial intelligence has been enormous. For example, Elon Musk made several bold claims last year: that Tesla vehicles would be fully autonomous within a year or two, that artificial intelligence would surpass human intelligence by next year, and that by 2040 an army of a billion AI-powered robots could replace human workers.

Such predictions suggest that artificial intelligence is developing exponentially and unstoppably, and that we humans are unable to control it.

But many experts say this is far from the truth, pointing to concerns that AI progress is stagnating as larger datasets and greater computing power deliver diminishing returns.

Modern artificial intelligence systems rely on deep learning and neural networks, trained on massive datasets, to identify patterns and make predictions.

However, the benefits of larger datasets and greater computing power are diminishing. In one example, improving an AI model’s recognition accuracy from 60% to 67.5% required quadrupling the amount of training data.
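To make this concrete, here is a rough back-of-the-envelope sketch, assuming the error rate follows a simple power law of dataset size and calibrating it to the figures above (60% to 67.5% accuracy after quadrupling the data). The power-law form and the 75% target used below are illustrative assumptions, not figures from any particular model.

```python
import math

# Illustrative assumption: the error rate falls as a power law of dataset size,
#   error(D) = error_0 * (D / D_0) ** (-alpha)
# Calibrated to the figures quoted above: 60% -> 67.5% accuracy after 4x more data.
err_before = 1 - 0.60    # error at the baseline dataset size
err_after = 1 - 0.675    # error after quadrupling the data

# Solve for the scaling exponent alpha from those two observed points.
alpha = math.log(err_before / err_after) / math.log(4)

# Under the same power law, how much more data would a further jump
# to an assumed 75% accuracy require, on top of the quadrupled set?
target_err = 1 - 0.75
extra_factor = (err_after / target_err) ** (1 / alpha)

print(f"implied scaling exponent alpha ≈ {alpha:.2f}")              # ≈ 0.15
print(f"extra data needed for 67.5% -> 75% ≈ {extra_factor:.1f}x")  # ≈ 5.8x
```

Under these assumptions, the next 7.5-point gain would need roughly six times more data on top of the already-quadrupled set – each step costs more than the last.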

Additionally, the computational burden of training increases exponentially with each additional data point, making further progress costly and energy-intensive.

This trend is illustrated by the marginal improvements in newer AI models compared to their predecessors, such as the move from GPT-3.5 to GPT-4, despite massive increases in data volumes.

Is “winter” approaching?

So could we be witnessing the onset of an “AI winter”? To answer that, it helps to understand what an AI winter actually is.

According to Professor Luciano Floridi, an AI winter is a period in which enthusiasm for the technology cools significantly, often because overhyped projects have failed to deliver, compounded by economic downturns.

The first AI winter occurred in the 1970s; a second followed in the late 1980s and early 1990s.

During these winters, many AI projects, especially those dependent on government funding and venture capital, experienced significant cuts. The cause was often a combination of overambitious projects that fell short of expectations and broader economic troubles.

This led to skepticism and a general decline in enthusiasm around AI technology.


Read more: Robot breaks seven-year-old’s finger: A lesson in the need for stronger artificial intelligence regulations


While some experts are predicting an imminent “AI winter”, I don’t think the world will experience one anytime soon. On the contrary, AI investment is booming, and AI technologies are being integrated widely across sectors.

However, like any seasonal change, winter will surely return, and we had better be prepared. When it does, it will bring significant financial and socio-political costs.

Over the past two years, revenues for AI companies have skyrocketed, as have the enormous computational costs associated with running increasingly complex AI models.

If AI technologies fail to deliver on their promises, startups could fail, larger companies could shed jobs, and overall financial instability could ensue.

Despite the hype, many AI companies, including OpenAI, are currently losing money.

This unprofitability is fueling the fear of an AI winter, and among the many challenges, a new problem is emerging – AI regulation.

The issue of AI regulation

As the European Union and other governments and regulators race to regulate AI, a question arises: could a rushed regulatory response – imposing ever more stringent rules on AI development and deployment – itself cause expectations to go unmet?

Many experts believe that upcoming regulations on artificial intelligence could stifle innovation and lead to conflicts between government bodies and technology industries, creating a socio-political dispute over the direction of future technologies.

So what should we do about regulating artificial intelligence? Should we not regulate it at all?

While we definitely need to regulate AI, any regulation should be based on principles of fairness – distributive, procedural and recognitional. By following these principles, we can adopt a sustainable approach that promotes innovation while protecting societal interests.


Read more: What does the rise of artificial intelligence in agriculture mean for the future of agriculture?


First, the principle of distributive justice focuses on the fair distribution of the benefits and burdens of artificial intelligence. Regulations should ensure that AI technologies do not exacerbate inequalities, but instead help close the gaps between different socio-economic groups.

For example, implementing artificial intelligence in public services such as health care and education should improve access and quality for all, not just the privileged few.

If AI technologies mainly benefit certain sectors or demographics, the wider public may become skeptical and withdraw support, leading to reduced funding and interest.

Second, the principle of procedural justice covers transparent and fair processes in the development, implementation and management of artificial intelligence.

Developers need to be transparent

Regulations should enforce accountability by requiring AI developers to be transparent about the functions of their algorithms and the data they use. This includes open audits, ethics reviews and the involvement of various stakeholders in the regulatory process.

Moreover, investments in artificial intelligence should be transparent and benefit everyone, not just a particular section of society.

Trust is key to continued investment and innovation in AI, as stakeholders are more likely to support and engage in technologies they believe are developed and used responsibly.

Finally, the principle of recognitional fairness involves acknowledging and addressing the potential negative impacts of AI on human life.

This means that regulations should require AI systems to respect and protect individual identities and cultural diversity. Artificial intelligence should not perpetuate stereotypes or violate privacy or personal dignity.


Read more: AI, we need to talk: The gap between the humanities and objective truth


A risk-based approach to AI regulation is required to ensure that AI systems respect and protect diverse human values and cultural norms. Adapting to different social contexts can prevent the backlash – and the potential stalling of AI development – that can follow ethical concerns or public outcry over insensitivity or bias.

Basing AI regulation on these principles of fairness not only addresses immediate ethical issues, but also strategically positions AI development for long-term viability and support.

This approach can mitigate the risk factors associated with AI winters, such as loss of public trust, backlash against unintended consequences, and unequal benefits leading to disappointment.

By fostering an environment of trust, equality and adaptability, such regulations can help maintain the momentum needed for sustainable AI development.