
AI advances bring economic benefits, but also potential risks

Swiss Re is assessing the potential risks associated with artificial intelligence, now and in the future, for several key industry sectors, a risk landscape that is creating potential demand for insurance solutions.


Artificial intelligence (AI) has the potential to add trillions of dollars to the global economy annually – one estimate suggests that generative AI alone could add as much as $4.4 trillion per year. However, while AI offers enormous opportunities, it also comes with risks that need to be carefully managed, creating an important role for the insurance industry in delivering new risk-hedging products to customers, according to a report by Swiss Re.

“Where there are opportunities, there are also risks. Artificial intelligence, like any technology, can fail. AI may not meet performance tests; may unintentionally perpetuate discrimination; may become the subject of a malicious attack; or possibly cause actual harm,” the report states.

Swiss Re has developed a model for assessing the risks posed by artificial intelligence in 10 different industries. The model uses a combination of historical data on past AI incidents and forward-looking patent data to provide a comprehensive view of the AI risk landscape.

A key aspect of the model is that it considers both the probability, or frequency, of an AI-related incident and the potential severity of the resulting losses. This enables a more nuanced assessment of AI risk, reflecting not only how often problems occur but also the scale of their effects, the report explains.

The model focuses on six main risk categories:

  • Data bias or lack of integrity: The risk that AI systems will unintentionally discriminate against certain groups based on characteristics such as gender, race, age or geographic location.
  • Cyber: Vulnerabilities in artificial intelligence systems that can be exploited by malicious actors, as well as the possibility of using artificial intelligence for malicious purposes.
  • Algorithmic and performance: The possibility that AI will not meet the required performance criteria.
  • Lack of ethics, accountability and transparency: AI systems failing to adhere to necessary ethical standards and accountability measures, exacerbated by a lack of transparency in their internal workings.
  • Intellectual Property (IP): Issues related to the use of third-party intellectual property in AI training and the risk of AI violating intellectual property rights.
  • Privacy: Exposure of sensitive personal data during AI training and the potential for AI to compromise individuals’ privacy through unintentional disclosure or identification.

By quantifying risk along these key dimensions, the Swiss Re model provides a framework for understanding and managing the complex challenges posed by artificial intelligence as it becomes increasingly embedded across industries. The insights can help insurers, companies and policymakers develop more robust strategies to harness the power of AI while mitigating its risks.
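The report does not disclose the model’s internal mechanics, but the core idea it describes, scoring each risk category by combining incident frequency with loss severity and then aggregating across categories, can be sketched in a few lines of Python. The sketch below is purely illustrative: the category names follow the list above, while the function, the figures and the example industry profile are invented for this illustration and are not Swiss Re’s.

    # Illustrative sketch only, not Swiss Re's actual model: a minimal
    # frequency x severity scoring scheme. All figures are invented placeholders.
    from dataclasses import dataclass

    RISK_CATEGORIES = [
        "data_bias_or_integrity",
        "cyber",
        "algorithmic_and_performance",
        "ethics_accountability_transparency",
        "intellectual_property",
        "privacy",
    ]

    @dataclass
    class CategoryExposure:
        frequency: float  # expected incidents per year (hypothetical)
        severity: float   # expected loss per incident, USD millions (hypothetical)

    def industry_risk_score(exposures: dict) -> float:
        """Aggregate expected annual loss: sum of frequency * severity per category."""
        assert set(exposures) <= set(RISK_CATEGORIES), "unknown risk category"
        return sum(e.frequency * e.severity for e in exposures.values())

    # Hypothetical near-term exposure profile for a single industry.
    it_sector = {
        "intellectual_property": CategoryExposure(frequency=4.0, severity=2.5),
        "cyber": CategoryExposure(frequency=3.0, severity=5.0),
        "algorithmic_and_performance": CategoryExposure(frequency=1.5, severity=8.0),
    }

    print(f"Illustrative IT-sector score: {industry_risk_score(it_sector):.1f}")

Ranking industries then amounts to computing such a score for each sector and sorting; the report’s separate likelihood and severity views correspond to looking at the frequency and severity components on their own.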

Key AI risk factors over time

According to Swiss Re, while artificial intelligence offers enormous potential, the threats posed by this technology are expected to evolve in worrying ways in the coming years. In the near term, the most serious category appears to be intellectual property risk, likely resulting from issues related to generative AI models and copyright infringement. The report notes that because these AI systems are trained on vast amounts of online data, the risk of them reproducing copyrighted text, images or code in their outputs is high.

Looking ahead, the risk of AI perpetuating social biases and discrimination may become increasingly severe unless proactive corrective measures are taken. Algorithmic biases have the potential to unfairly skew results in high-stakes fields such as loan approvals and pharmaceutical research. Historical biases embedded in training data, if not corrected, may be amplified as AI systems are deployed more widely.

The biggest losses in the future, however, are expected to come from algorithmic and performance risks.

“However, in the longer term, as AI becomes entrenched in many industries, we expect that the single most significant risk will become performance risk, whether it relates to vehicles, manufacturing plants, crop modeling, consumer chatbot interfaces or any number of other uses,” the report states.

Short-term AI risk rankings by industry

Swiss Re assessed the likelihood and severity of AI-related risks faced by specific industries to develop an overall risk ranking for the near-term timeframe of 2024-2025.

The IT sector currently has the highest overall risk ranking, driven by its position as a “first mover” in developing and adopting AI technologies, which gives it the greatest likelihood of incidents. The analysis found that 55% of the total near-term AI risk probability falls within the IT sector.

Government/education is the second most likely source of AI risk in the near term, reflecting the broad scope of AI use in the public and education sectors. Media/Communications ranks third on likelihood, reflecting the high potential use of AI and legacy intellectual property (IP) issues in the sector.

Although incidents there are less likely, the energy/utility sector has the highest severity ranking for short-term AI risk, given the critical nature of its infrastructure. Health/pharma currently ranks second on severity, given the potential for harm when AI is used in this highly regulated industry.

Rankings of future AI threats by industry

Looking ahead 8-10 years, when AI will be widely used across industries, Swiss Re expects the likelihood of AI-related risks to be much more evenly distributed across sectors than in the near term.

However, the health/pharma sector is expected to face the greatest overall risk in that timeframe, with severity remaining high while the frequency of incidents increases, Swiss Re predicts. This reflects the many healthcare delivery processes that AI can improve, such as pharmaceutical product development and AI-based diagnosis.

“The use cases for AI across the spectrum of healthcare delivery are exhaustive, from improving and streamlining administration, to patient monitoring, diagnosis, drug development and more. In summary, with so many touchpoints, the potential incidence of adverse impacts from AI is high,” the report notes. “The other half of the equation is that the risk potential is serious. Healthcare is a highly regulated industry where approval processes are closely scrutinized, with the risk of personal injury or even death.”

The mobility/transportation sector ranks second in terms of future AI risk, driven by the severity of potential incidents related to automation such as autonomous cars. Over the next decade, AI risks will also increase in the energy/utility sector as AI-based smart grid technologies increasingly come online to support the transition to net zero emissions.

Consequences for insurers

Providing products and services that protect against AI-related risks represents a significant business opportunity for insurance companies. However, it can also become a source of hidden exposure if AI risks accumulate unnoticed in insurers’ portfolios.

Insurers are already providing protection against some AI-related risks, particularly in the rapidly growing cyber insurance market. The Swiss Re Institute estimates that $13 billion in cyber insurance premiums were written globally in 2022, a three-fold increase in just five years.

While cyberattacks targeting AI systems have been limited so far, Swiss Re warns that the risk could increase significantly in the future: “If cybercriminals start attacking AI systems in the same way they attack non-AI digital systems, the risk may become much greater. You can imagine the damage that could be caused by, for example, an AI hack into a fleet of autonomous cars, let alone using AI as a weapon for a hostile attack.”

In addition to cybersecurity, other categories of AI-related risks may fall partially or completely under insurers’ existing coverage. For example, AI performance issues leading to property damage may be covered by property insurance policies. Infringement of intellectual property by artificial intelligence may be dealt with under professional liability. Data privacy breaches involving AI may be covered by cybersecurity policies.

As AI becomes more widely used, insurers have an important role to play in assessing AI systems for risks related to ethics, liability and transparency, Swiss Re says. Insurers that develop expertise and solutions in these areas can help their customers mitigate AI risks.

However, insurers must remain vigilant about potential “silent AI risks” as the technology becomes ubiquitous across industries. If AI-related risks are not explicitly included or excluded in traditional insurance policies, it could lead to unexpected losses and risk accumulation in insurers’ portfolios, notes Swiss Re.

View the full report on the Swiss Re website.