
How investors can navigate the maze

From potentially brand-damaging ethical risks to regulatory uncertainty, AI poses challenges for investors. But there is a way forward.

NORTHAMPTON, MA / ACCESSWIRE / June 18, 2024 / AllianceBernstein
Authors: Saskia Kort-Chick, Director of Social Research and Engagement Accountability, and Jonathan Berkow, Director of Data and Action Analytics

Artificial intelligence (AI) poses many ethical issues that can translate into risks for consumers, companies and investors. And AI regulation, which is developing unevenly across multiple jurisdictions, adds to uncertainty. We believe the key for investors is to focus on transparency and explainability.

The ethical issues and risks associated with AI start with the technology's creators. From there, they extend to developers’ clients – companies that integrate artificial intelligence into their enterprises – and then to consumers and society more broadly. By holding stakes in AI developers and AI-enabled companies, investors are exposed to both ends of the risk chain.

Artificial intelligence is developing rapidly, far ahead of most humans’ understanding. Among those trying to catch up are global regulators and lawmakers. At first glance, their AI activity has kept pace: many countries have published AI strategies over the past few years, and others are close to implementing them (Display).

In reality, progress is uneven and far from complete. There is no uniform approach to regulating AI across jurisdictions, and some countries introduced their regulations before ChatGPT launched in late 2022. As AI spreads, many regulators will need to update, and perhaps expand on, the work already done.

For investors, regulatory uncertainty compounds the other risks associated with AI. To assess and manage these threats, it is worth becoming familiar with the business of AI, its ethics and the regulatory environment.

Data threats can harm brands

Artificial intelligence encompasses a range of technologies designed to perform tasks typically done by humans, in a human-like manner. AI and business can intersect through generative AI, which covers various forms of content generation, including video, voice, text and music; and through large language models (LLMs), a subset of generative AI focused on natural language processing. LLMs serve as foundation models for a variety of AI applications – such as chatbots, automated content creation, and analyzing and summarizing large volumes of information – that companies increasingly use to engage with customers.

But as many companies have discovered, AI innovation can come with potentially brand-damaging risks. These can stem from biases inherent in the data on which LLMs are trained. For example, banks have unintentionally discriminated against minorities when approving home loans, and a US health insurer faced a class-action lawsuit alleging that its use of an AI algorithm resulted in the unlawful rejection of extended-care claims for elderly patients.

Bias and discrimination are just two of the risks that regulators are targeting and that should concern investors; others include intellectual-property rights and data privacy. Investors should also explore risk-mitigation measures, such as developers testing the performance, accuracy and robustness of their AI models, and providing enterprises with transparency and support in implementing AI solutions.

Dive deep to understand AI regulations

The AI regulatory environment is evolving in different ways and at different speeds across jurisdictions. Recent developments include the European Union’s (EU) AI Act, which is expected to come into force around mid-2024, and the UK government’s response to the consultation process launched last year by its AI regulation white paper.

Both efforts illustrate differences in regulatory approaches to AI. The UK is adopting a principles-based framework that existing regulators can apply to AI issues in their respective fields. By contrast, the EU act introduces a comprehensive legal framework, including risk-based compliance obligations for developers, companies, and importers and distributors of AI systems.

We believe investors should do more than delve into the details of each jurisdiction’s AI regulations. They should also become familiar with how jurisdictions handle AI issues through laws that predate and extend beyond AI-specific regulation – for example, copyright law where data use is concerned, and employment law where AI affects labor markets.

Fundamental analysis and commitment are key

A good rule of thumb for investors trying to assess AI risk is that companies that proactively disclose full information about their AI strategies and policies are likely to be well prepared for new regulations. More broadly, fundamental analysis and issuer engagement – the cornerstones of responsible investing – are central to this area of research.

Fundamental analysis should consider AI risk factors not only at the enterprise level, but also along the business chain and across the regulatory landscape, weighing insights against the fundamental principles of responsible AI (Display).

Engagement conversations can be structured to discuss AI issues not only in terms of their impact on business operations, but also from an environmental, social and governance perspective. The questions investors should ask boards and management include:

  • Artificial intelligence integration: How has the company incorporated AI into its overall business strategy? What are specific examples of AI applications in the company?
  • Board oversight and expertise: How does management ensure sufficient expertise to effectively oversee the company’s AI strategy and implementation? Are there dedicated training programs or initiatives?
  • Public commitment to responsible AI: Has the company published a formal policy or framework for responsible AI? How does this policy align with industry standards, AI ethical considerations, and AI regulations?
  • Proactive transparency: Has the company implemented any proactive transparency measures to withstand future regulatory impacts?
  • Risk management and responsibility: What risk management processes does the company have in place to identify and mitigate AI-related risks? Is there delegation of responsibility for overseeing these risks?
  • Data challenges in LLMs: How does the company address privacy and copyright challenges related to the inputs used to train large language models? What measures ensure that those inputs comply with privacy and copyright laws, and how does the company handle restrictions or requirements related to them?
  • Bias and fairness in generative AI systems: What steps does the company take to prevent or mitigate biased or unfair outcomes from its AI systems? How does it ensure that the outputs of any generative AI systems it uses are fair, unbiased, and do not perpetuate discrimination or harm against any individual or group?
  • Incident tracking and reporting: How does the company track and report incidents related to the development or use of AI, and what mechanisms are in place to respond to and learn from these incidents?
  • Metrics and reporting: What metrics does the company use to measure the performance and impact of its AI systems, and how does it report these metrics to external stakeholders? How does the company exercise due diligence in monitoring regulatory compliance of its AI applications?

Ultimately, the best way for investors to find their way through the maze is to remain grounded and skeptical. Artificial intelligence is a complex and rapidly developing technology. Investors should insist on clear answers and not be swayed by elaborate or opaque explanations.

The authors would like to thank Roxanne Low, an ESG analyst on AB’s responsible investing team, for her research contributions.

The views expressed herein do not constitute research, investment advice or trading recommendations and do not necessarily reflect the views of all AB’s portfolio management teams. Views may change over time.

More information about AB’s approach to responsibility can be found here.

View additional multimedia and more ESG stories from AllianceBernstein at 3blmedia.com.

Contact info:
Spokesperson: AllianceBernstein
Website: https://www.3blmedia.com/profiles/alliancebernstein
E-mail: (email protected)

SOURCE: AllianceBernstein