Global Financial Services: The Industry’s Current Regulatory Landscape for Artificial Intelligence

As part of our Tech Marathon webinar series, partners Kristin Lee, Mike Pierides, and Steven Stone recently discussed financial regulators’ increasing focus on artificial intelligence (AI). Here are some of their key findings.

GIVEN THE CHANGING AI REGULATORY POLICY LANDSCAPE, WHAT ACTIONS SHOULD FINANCIAL ENTITIES CONSIDER TAKING AT THIS TIME?

Regulators expect financial entities to have appropriate controls, policies and procedures, and supervision and monitoring in place to ensure compliance with existing regimes, including the prudent management of operational risk. In a similar vein, US regulators will not shy away from raising issues in examinations and investigations as their policy approaches to AI continue to evolve. Financial entities should look to build on what they already have in place under existing prudential requirements to document and effectively manage the risks arising from the use of AI.

A significant (and growing) portfolio of applications used by financial services companies already uses AI, and most, if not all, companies will have some form of AI built into their software and services supply chain.

HAVE YOU OBSERVED ANY PARTICULAR TRENDS OR USE CASES FOR AI IN THE FINANCIAL SERVICES SECTOR?

Spending on AI solutions is likely to increase significantly. Some estimates project total annual spending on AI solutions in financial services to reach ~$97 billion in 2027, up from ~$35 billion in 2023.

Additionally, most asset managers are already using some form of generative AI across a variety of business cases. According to a recent study by Ignites, an asset management trade publication, 59% of asset managers are implementing generative AI for IT applications such as code generation and debugging, and 56% are implementing it for marketing applications such as developing customized marketing materials.

One emerging use case for AI is investment research. Buy-side firms such as asset managers want to personalize research content and its delivery using AI (e.g., bite-sized snippets, podcasts, charts, expert call transcripts, data feeds, machine-readable content to support quantitative use cases, and aggregator and workflow solutions). Some large buy-side firms are already building internal large language models (LLMs) for research and are seeking permission from research providers to train these LLMs on the content they receive from them.

While AI in research raises concerns among research service providers about IP ownership and the disintermediation of customer relationships by research aggregators, it also holds promise for reducing the “mundane” parts of manual research such as maintenance studies and earnings summaries.

WHAT DOES THE REGULATORY FOCUS ON AI LOOK LIKE?

There are regulatory expectations around technology and third-party risk management that financial entities and their service providers should be familiar with. In the United States, the Interagency Guidance on Third-Party Relationships: Risk Management (Interagency Guidance on TPRM), published in June 2023 by the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, and the Office of the Comptroller of the Currency, sets out robust risk management principles for banking organizations across all stages of the third-party relationship lifecycle, including planning, due diligence, contract negotiation, ongoing monitoring, and termination, as well as oversight and accountability. These expectations will apply to arrangements between third-party AI providers and banks, and they have also been used by non-bank financial entities as a frame of reference.

In Europe, the European Banking Authority’s Guidelines on outsourcing arrangements and the European Securities and Markets Authority’s (ESMA) Guidelines on outsourcing to cloud service providers set expectations at both the enterprise and transaction levels, which may, depending on the context, capture the purchase and use of AI tools provided by third-party vendors. As with the Interagency Guidance on TPRM, the key principles arising from these expectations relate to planning and due diligence, security and privacy, governance and accountability, business resilience and continuity, exit strategy, and, in the case of critical services, mandatory contract terms (many of which will be familiar to EU and UK financial institutions).

From January 2025, the EU’s Digital Operational Resilience Act (DORA) will extend many of these principles to all technology services, cloud services, and applications (ICT services) provided to EU financial entities, and in particular will require specific contractual provisions with providers of ICT services (both external and intra-group providers).

There has been a significant number of publications on artificial intelligence that financial entities should be aware of. The US Department of the Treasury’s report Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, released in March 2024 in response to the October 2023 US presidential Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, summarizes AI use cases and risk trends and identifies opportunities, challenges, and best practices relating to AI operational risk, cybersecurity, and fraud risks. The Securities and Exchange Commission’s (SEC) Examination Priorities for 2024 flagged artificial intelligence as a key emerging technology area, and recent SEC enforcement has targeted “AI washing” by issuers, broker-dealers, and advisers, as well as technology governance more broadly.

In the UK, sector-specific regulatory guidance on the use of AI tools under existing legislation (rather than regulation of the technology itself) remains the likely approach. As Nikhil Rathi, Chief Executive of the UK’s Financial Conduct Authority (FCA), highlighted last year: “[W]hile the FCA does not regulate technology, we do regulate its impact and use in financial services.” Meanwhile, the FCA has re-emphasised its technology-agnostic and outcomes-focused approach in its latest update on its approach to artificial intelligence.

As for the European Union, the EU Artificial Intelligence Act (AI Act) has attracted considerable attention and is expected to enter into force shortly, following its approval by the Council of the European Union on 21 May. Non-retail use cases in financial services will likely fall within the scope of “general purpose” AI, subject to a more limited set of requirements centred mainly on transparency. EU regulatory guidance on such use cases will most likely arise under existing regimes, and transparency will be key. As ESMA highlighted in its February 2023 report Artificial Intelligence in EU Securities Markets:

Complexity and a lack of transparency, while likely not inherent to AI, may in practice constitute barriers to the deployment of innovative tools, given the need to maintain effective human oversight and to upskill management. Some firms appear to be limiting or abandoning the use of AI and machine-learning algorithms due to operational issues such as compatibility between AI and their existing technology.

WITH THIS IN MIND, WHAT ARE THE KEY ISSUES FACING FINANCIAL SERVICES FIRMS?

Starting with governance and risk management, it is critical that AI systems include measures to ensure data security and integrity, auditability, and mechanisms to verify data provenance (e.g., by properly tagging data). This is to ensure the ability to manage, among other things, the risk of training on incomplete, outdated, or unverified sources, as well as the risks of bias and hallucination.
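For illustration only, the sketch below shows one way such provenance tagging could look in code. The record fields (source, as_of, verified) and the cutoff rule are hypothetical assumptions, not drawn from the webinar or from any regulatory text:

```python
# A minimal, illustrative sketch of tagging training data with provenance
# metadata and filtering out unverified or stale records before training.
# All field names and thresholds here are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class TaggedDocument:
    text: str
    source: str     # where the record came from (e.g., a vendor feed or internal system)
    as_of: date     # when the record was produced or last refreshed
    verified: bool  # whether the source passed the firm's due-diligence checks

def eligible_for_training(doc: TaggedDocument, cutoff: date) -> bool:
    """Exclude unverified or outdated records, per the provenance controls above."""
    return doc.verified and doc.as_of >= cutoff

corpus = [
    TaggedDocument("Q4 earnings summary ...", source="research-vendor-A",
                   as_of=date(2024, 2, 1), verified=True),
    TaggedDocument("Unattributed forum post ...", source="web-scrape",
                   as_of=date(2019, 6, 30), verified=False),
]

# Only verified, current records survive; source and as_of travel with each
# document, giving an audit trail for what the model was trained on.
training_set = [d for d in corpus if eligible_for_training(d, cutoff=date(2023, 1, 1))]
```

The design point is that provenance metadata travels with every record, so excluding outdated or unverified sources, and evidencing that exclusion for audit purposes, becomes a simple, testable filter.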

Another major concern is protecting the confidentiality of company and customer information – some publicly available AI solutions can be trained on queries and feedback received from staff at financial entities, which, if not managed carefully, may contain sensitive information.
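As one hypothetical control (again, a sketch rather than anything discussed in the webinar), a firm might screen prompts for obvious client identifiers before they leave its environment for a public AI service. The patterns below are illustrative assumptions only:

```python
# Illustrative redaction of likely-sensitive tokens before a prompt is sent
# to an external AI service that may train on submitted queries.
import re

# Toy patterns only; real deployments would use far more robust detection
# (named-entity recognition, client reference databases, etc.).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ACCOUNT_NO": re.compile(r"\b\d{8,12}\b"),  # assumes 8-12 digit account numbers
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive tokens with placeholders before external submission."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Summarize holdings for jane.doe@example.com, account 123456789."))
# -> Summarize holdings for [EMAIL], account [ACCOUNT_NO].
```

Pattern-based redaction is only a first line of defence; contractual terms restricting a provider’s use of query data for training remain the primary safeguard.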

Companies must also ensure that disclosures about the use of AI are appropriately reviewed for precision, accuracy and balance.

In some cases, financial entities may act as AI providers when AI is used in their services, which will impact how regulations apply, the contractual positions they take with customers, and the policies and procedures they must implement.

Finally, we see companies struggling to develop AI policies and procedures. Companies should take a holistic approach: consider what policies and procedures they already have in place around procurement and the use of AI, and then adapt them rather than starting from scratch. As noted above, AI will likely already be embedded in a company’s software and services supply chain.

Learn more about our Tech Marathon webinar series >>
