
Center for Audit Quality Comes to the Aid of Audit Committees Responsible for AI Oversight | Cooley LLP

In a 2023 Fortune article, a survey of 2,800 executives and directors by consulting firm Aon found that business leaders were “not very concerned about AI… Not only is AI not the biggest risk they cited for their companies, it doesn’t even make the top 20. AI was ranked as the 49th biggest risk to companies.” Has “the AI threat been exaggerated,” Aon asked, or could “survey participants be wrong”? If the latter, it didn’t last long. Fast forward less than a year, and another Fortune article, citing a report by research firm Arize AI, revealed that 281 of the Fortune 500 companies had identified AI as a risk, which is “56.2% of companies and a 473.5% increase from the previous year,” when only 49 companies identified AI as a risk. “If the Fortune 500 annual reports make one thing clear, it’s that the impact of generative AI is being felt across a wide range of industries—even those that have not yet adopted the technology,” the report stated. This widespread recognition of the potential risks associated with generative AI will likely force companies to focus on risk oversight, and that will almost certainly involve audit committee oversight. To help audit committees in this process, the Center for Audit Quality (CAQ) has released an excellent new resource, its report Audit Committee Oversight in the Era of Generative Artificial Intelligence.

As noted on thecorporatecounsel.net, the CAQ recently released this new report aimed at assisting audit committees in these activities. According to the report, “a recent CAQ survey found that one in three audit partners sees companies in their primary industry sector implementing or planning to implement AI in their financial reporting process… The rise of genAI raises important questions about when and how to invest in the right technologies that can impact the finance organization and the speed of transformation.” However, 66% of survey respondents said their audit committees “have spent insufficient time in the past 12 months discussing AI governance.”

Given the many potential risks associated with the use of genAI, it will be important for audit committees to improve their understanding of AI to enable and facilitate oversight of these risks. While the report focuses largely on the use of genAI in processes relevant to financial reporting and internal control over financial reporting (ICFR), it also includes background information on AI, intended to provide audit committees with “a basic understanding of some of the fundamental principles of genAI, including key features of the technology and how it differs from other technologies that firms may use,” along with other general-purpose guidance. For example, for those whose knowledge of AI, like mine, is virtually zero, the report discusses the differences between AI, machine learning, deep learning, and genAI. The report also explains in very basic terms how genAI works, noting that because genAI technologies are

“predictive technologies,…the results are based on what the genAI technology has determined to be a likely answer, a key difference from other technologies that may have been historically used in a company’s financial reporting processes. If a user asks the same question multiple times, they may get different answers each time. The different answers may be because genAI technologies are designed to generate diverse answers and are trained on different data sets, leading to a wide range of likely answers to a single question. As such, genAI technologies are particularly helpful for tasks that require creativity or diversity of answers, including generating new content or information, but genAI cannot always provide reliable or repeatable information. GenAI technologies do not act like search engines that find facts in their training data, but instead create new coherent, human-like text.”

The report also discusses the challenge that genAI can be a “black box,” meaning that the process of arriving at a particular outcome is not easily explainable or interpretable, due to the inherent complexity of AI algorithms and the nonlinearity of the relationship between the underlying data and the outcomes or decisions made. In relation to financial reporting, the report acknowledges that “explainability and interpretability may become increasingly important for effective human oversight of technology,” especially as the use of genAI becomes more sophisticated over time.

In the context of financial reporting, the report notes that in general, companies “will initially use it to augment processes (rather than fully automate them), which enables efficiency but does not eliminate human judgment and decision-making. Particularly in financial reporting and ICFR processes, humans are still involved in overseeing, understanding, and assessing the validity and reliability of genAI technology outputs. In the future, companies may evolve to implement more advanced and complex use cases or reduce the level of human involvement.”

Among other concerns, the report advises that companies will need to consider privacy and security needs, including determining whether using publicly available genAI technologies (such as some genAI chatbots) is appropriate, given that data may be saved for use by a third-party technology provider to further develop the genAI model. In the case of genAI technologies used in financial reporting and ICFR processes, “companies may want to ensure that information fed into the genAI technology is not tracked, saved, or used by third parties” to ensure that the company maintains control “over how information fed into the genAI technology is managed and saved.” The report also warns that “GenAI technologies may also be susceptible to cyberattacks that could impact the reliability of the results provided by the technology or put the company’s confidential data at risk.” Additionally, according to the report, “the use of genAI may pose increased fraud risks to companies, including the risk of fraud by management and the risk of the company falling victim to fraud by outside parties.”

The report advises that strong oversight and governance, including through an audit committee, will be critical to the successful implementation of AI technologies. Among the key issues highlighted in the report are determining who within the company is responsible for oversight; developing a framework and principles for the responsible, acceptable, and ethical use of genAI, along with a process for monitoring compliance; and identifying those uses of genAI that are subject to oversight, frameworks, and principles. The report advises that “it is important for companies to track and monitor the use of genAI across the company, including use by third-party service providers, to understand the impact of these technologies on processes and to identify, assess, and manage risks arising from their use.” Companies will also want to “establish processes to monitor the ongoing effectiveness of genAI technologies to verify that they continue to operate effectively and as intended.” Other issues to consider include “the knowledge and skills of employees who will operate genAI technologies, training provided to employees on the use of prompts, reliance on results, and other relevant topics, and policies and procedures established to promote human review of the results of genAI technologies.” Companies will also need to be knowledgeable about the “regulatory environment and any agreements, laws or regulations that impact how a company may use genAI.”

The report (see Appendix A) also includes a series of questions that the audit committee should ask management and the auditor about a range of important issues, including governance, data privacy and security, selection and design of genAI technologies, implementation and monitoring of genAI technologies, fraud, and the regulatory environment. For example, the report advises that audit committees should seek to understand “where genAI is being deployed and why management has selected a particular genAI technology to deploy,” including “how management decides whether to build or purchase genAI technologies that have the right capabilities to meet the needs of the company.” Accordingly, the report suggests the following questions for management:

  • “How does management identify processes that are suitable for augmentation with genAI?
  • How does management design genAI technologies, including determining which genAI technologies to use (e.g., selecting an existing genAI technology, using a base model with additional customizations, or developing the company’s own model) and what data is needed for those technologies?
  • How does management select which third-party genAI technologies to use?”

In connection with this topic, the audit committee may wish to ask the auditor, “How does the company’s use of a base model or development of its own model affect the auditor’s risk assessment?”

The report concludes with a reminder that the AI regulatory environment is rapidly evolving, and “there are increasing calls for stronger regulation of the safe and responsible development and use of AI, including genAI. While existing laws in many countries already govern the use and protection of data or emerging technologies and apply to AI, many countries have also begun adopting new laws and frameworks specifically to mitigate AI security risks and promote ethical and responsible use of AI. It is important for audit committees to provide oversight and understand whether management is engaging the appropriate parties to monitor, assess, and comply with applicable laws and regulations,” including compliance departments, legal counsel, and other outside counsel.

There is much more useful information in this resource, so be sure to check out the CAQ report!
