Companies need to update TPRM programs ahead of AI regulations

While much of the current focus is on internal compliance with emerging AI regulations, Prevalent’s Alastair Parr argues that companies shouldn’t overlook an important external factor: third parties.

Artificial Intelligence (AI) is rapidly transforming the modern world, and governments are rushing to build safeguards to ensure these tools are used responsibly. Rapid technological development has also led companies in almost every industry to embrace AI for its gains in productivity and efficiency, with the ultimate goal of increasing profits.

However, with these opportunities come significant responsibilities for companies to implement AI in an ethical and legal manner. This responsibility should extend not only to their own practices, but also to the practices of all third parties with which they work, including suppliers and service providers.

Navigating the many moving parts involved in safely and responsibly implementing AI will be especially difficult for companies based in regions with leading AI regulation, including the US, Canada, the EU and the UK.

These regions are developing unique frameworks to regulate this rapidly evolving technology. Understanding and complying with these regulations will be crucial for businesses operating in these regions to avoid legal consequences and maintain stakeholder trust.

The road lies ahead

Regulators around the world are deciding how to govern artificial intelligence, and companies should pay close attention as these proposals become binding regulations. While requirements will vary from country to country, most of the proposed regulations focus on privacy, security and ESG issues around how companies can ethically and legally use artificial intelligence.

For example, in the US, the NIST AI Risk Management Framework was introduced in January 2023 to “provide resources to organizations designing, developing, deploying or using artificial intelligence systems to help manage the many risks associated with artificial intelligence and to promote the trustworthy and responsible development and use of artificial intelligence systems.” This voluntary framework offers organizations comprehensive guidance on developing an AI governance strategy.

Organizations should apply risk management principles to mitigate the potential negative impacts of AI systems, such as:

  • Vulnerabilities in AI applications: Without the right management and security controls, your organization may be at risk of system or data breaches.
  • Lack of transparency in AI risk methodologies or measurements: Inadequate measurement and reporting practices may result in underestimation of the impact of potential AI threats.
  • Inconsistent AI security policies: If AI security policies are not consistent with existing risk management procedures, the result can be complex and time-consuming audits, potentially leading to negative legal, regulatory or compliance consequences.

All of the above applies not only to companies themselves, but also to the partners, suppliers and other third parties with whom they do business. Increasingly, companies should expect to be held responsible for how their vendors, suppliers and other third-party partners use AI, especially in how they handle their customers’ data.

The coming years will illuminate how organizations around the world must adapt their AI strategies, and managing third-party risk will likely become an increasingly important part of the equation.

As new regulations enter into force, businesses in every industry will face a new reality. It’s time to start preparing now, including by establishing rules for the acceptable use of AI and communicating those rules to third parties.

Mitigate risks associated with third-party AI

Regardless of location, a cautious approach and proactive supplier engagement are essential strategies to manage these risks. Companies must recognize that responsible AI governance extends beyond their internal operations and encompasses the practices of all parties involved in their AI ecosystem.

Each company has unique goals and challenges, which means relationships with external partners will vary significantly. However, there are a few basic steps that every company can take to proactively reduce the risks associated with artificial intelligence in its relationships with third parties:

  • Identify which external partners use AI and how they use it. Conduct a thorough inventory to determine which third-party vendors and suppliers are using AI and the extent of its use. This process involves asking the right questions to understand the inherent risks associated with AI applications, including data privacy, bias and accountability.
  • Develop a system to rate and evaluate third-party use of AI. Update your tiering system for external partners based on their use of AI and the associated risks; a simple scoring sketch follows this list. Consider factors such as the sensitivity of the data being processed, the impact of AI applications on stakeholders and business processes, and the level of transparency and accountability in AI decision-making processes.
  • Assess the risks in detail. It is essential to move beyond surface-level assessments by conducting detailed analyses of third-party AI practices. This includes assessing their governance structures, data security protocols, transparency of AI use, and the extent of human oversight and intervention in AI decision-making. Use established compliance frameworks and industry best practices, such as the NIST framework, to guide your due diligence process.
  • Recommend mitigation strategies when possible. Based on what you discover from risk assessments and tiered scoring, recommend specific remediation measures to external partners. These measures may include improving data security protocols, implementing strategies to detect and mitigate bias, ensuring transparency in AI decision-making, and establishing contractual clauses to enforce ethical AI practices.
  • Implement ongoing monitoring. Mitigating third-party risk is a continuous process that requires constant monitoring and evaluation. For this reason, develop mechanisms to monitor third-party AI practices on an ongoing basis, including regular audits, reviews of policy and control changes, and staying up to date on emerging AI issues that may impact your business.
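
To make the tiering and scoring step above more concrete, here is a minimal sketch of how a third-party AI risk score and tier assignment might be expressed in code. The factor names, weights, and tier thresholds are hypothetical assumptions for illustration only, not Prevalent’s methodology or a prescribed standard; each organization would calibrate them against its own risk appetite and the frameworks it follows.

```python
from dataclasses import dataclass

@dataclass
class VendorAIProfile:
    """Per-vendor answers from an AI-use inventory, scored 1 (low risk) to 5 (high risk).

    The factor names below are illustrative assumptions, not a prescribed model.
    """
    name: str
    data_sensitivity: int    # how sensitive is the data the vendor's AI processes?
    stakeholder_impact: int  # how much do its AI outputs affect stakeholders and processes?
    opacity: int             # how opaque is its AI decision-making? (5 = black box)
    oversight_gap: int       # how little human oversight of AI decisions? (5 = none)

# Hypothetical weights; they sum to 1.0 so the score stays on the 1-5 scale.
WEIGHTS = {
    "data_sensitivity": 0.35,
    "stakeholder_impact": 0.30,
    "opacity": 0.20,
    "oversight_gap": 0.15,
}

def risk_score(v: VendorAIProfile) -> float:
    """Weighted average of the factor scores."""
    return (
        v.data_sensitivity * WEIGHTS["data_sensitivity"]
        + v.stakeholder_impact * WEIGHTS["stakeholder_impact"]
        + v.opacity * WEIGHTS["opacity"]
        + v.oversight_gap * WEIGHTS["oversight_gap"]
    )

def tier(score: float) -> str:
    """Map a score to a review cadence; thresholds are illustrative only."""
    if score >= 4.0:
        return "Tier 1: detailed assessment + continuous monitoring"
    if score >= 2.5:
        return "Tier 2: annual detailed assessment"
    return "Tier 3: standard questionnaire"

if __name__ == "__main__":
    vendor = VendorAIProfile(
        name="Example Analytics Co",  # hypothetical vendor
        data_sensitivity=5, stakeholder_impact=4, opacity=3, oversight_gap=2,
    )
    s = risk_score(vendor)
    print(f"{vendor.name}: score {s:.2f} -> {tier(s)}")  # score 3.85 -> Tier 2
```

Weighting data sensitivity and stakeholder impact most heavily mirrors the factors called out above; in practice, the resulting tier would drive the depth of the assessments and the monitoring cadence described in the remaining steps.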

As governments introduce new regulatory and legal frameworks for AI, enterprises must increasingly view their suppliers and external partners as another source of risk to be mitigated and managed. Taking these important steps requires AI risk management expertise, which is currently in high demand. Companies that lack dedicated AI risk management teams can seek outside help from organizations that specialize in navigating this complex environment.