close
close

Does AI deserve a seat at the conference table?

Will AI integration create a powerful new player or a risky interloper?

The pervasiveness of AI in the enterprise and its impact on corporate governance and strategic decision-making is only getting deeper. The integration of AI, particularly predictive models and large language models (LLMs), into high-level business operations comes at a time when corporate boards are increasingly calling on CEOs to develop comprehensive AI strategies and implement systems that facilitate real-time, data-driven decision-making. This pressure is being further fueled by investors who are demanding more rigorous and transparent models, particularly in areas such as revenue forecasting, where AI can provide strategic insights. While AI has demonstrated the ability to offer strategic recommendations at the C-suite level when provided with the right data, its integration is not without its challenges. The implications of AI-based recommendations for CEOs, CFOs, and other senior executives are profound, potentially changing the dynamics within the C-suite.

I pulled insights from Andy Byrne, CEO of Clari, and Rak Garg, Partner at Bain Capital Ventures, to explore how AI is reshaping boardroom dynamics, increasing transparency, and influencing C-suite responsibilities. This shift raises important questions about the balance between AI-driven insights and human judgment in high-level corporate decision-making and offers two perspectives on the future of AI in corporate leadership and its implications for business strategy and operations.

Strong pressure to integrate AI

AI has made significant inroads into the enterprise, with predictive AI becoming deeply embedded in decision-making. We’re seeing AI’s transformative impact, with the advent of large language models (LLMs) and the integration of human-generated unstructured data with predictive AI taking capabilities to new heights. Byrne says this shift has changed the dynamic between corporate boards and executives. He says, “If you look at the interface between boards and executives, the traditional way of doing things was to share PDFs of 90-day retrospective performance metrics and hunch-based forecasts. To me, that seems like a breach of fiduciary duty.” He likens this outdated approach to “using a rotary phone instead of a smartphone, not to mention it’s increasingly archaic.”

The landscape is rapidly evolving, with investors demanding more rigorous, transparent and real-time business models and processes. Byrne notes: “Gone are the days when markets focused on ‘growth at all costs.’ The new trend is efficient growth and operational rigor, and the value that comes with it, and investors will pay for it.”

Modern boards expect real-time, comprehensive data and forward-looking metrics. Byrne explains, “They know that through APIs (Application Program Interfaces), historical tracking, predictive models, and now generative AI (GenAI), they can gain insight into specific and granular financial metrics—whether it’s by product line, segment, geography, or anything else—to accelerate decision-making and action.”

Garg, a partner at Bain Capital Ventures, agrees that AI is now a strategic imperative for large corporations, explaining that companies are seeing AI not only as a cost-cutting tool but also as a way to increase productivity with existing resources. Garg notes, “Part of the larger companies are afraid that if they don’t implement an AI strategy, their competitors will and they will be forced out of the market. And part of the companies are afraid that without an AI strategy, they are losing money and productivity.”

As a common starting point, most companies will gather knowledge internally to identify a set of use cases that can be impacted by generative AI in the short term. As Garg notes, “These initial use cases often focus on areas like developer productivity, customer service and support, and general productivity improvements like note-taking and follow-up meetings.”

Employee Recruitment in the Face of Challenges

Among Bain Capital portfolio companies, Garg reports, “…90%+ are using co-pilots for coding, 85%+ are using meeting transcription and follow-up tools, and 60%+ are experimenting with generative AI for customer service and support.”

However, in a study conducted by Morning Consult on behalf of IBM, the Global Adoption Index published in January 2024, which surveyed 8,500 people in 20 countries, IT professionals highlighted that 42% of IT professionals in large organizations have implemented AI, and another 40% are exploring the technology. Generative AI was particularly highlighted, with 38% of enterprises actively implementing it and 42% exploring its use. However, barriers such as limited AI skills and ethical concerns remain significant challenges.

Transparency is the new imperative

Executives who must make multi-billion dollar decisions must trust the results of AI systems. Bryne acknowledges that decision-makers need a deep understanding of AI’s inner workings to effectively evaluate its recommendations, explaining,

“Ensuring robust AI governance, security measures, data privacy, and compliance are key to closing the trust gap and enabling scalable AI adoption across enterprises… AI cannot be a black box. To trust AI, executives need complete transparency into how AI arrives at its conclusions and recommendations, from the underlying data to the algorithms and logic.”

He says there’s skepticism in conversations with C-suite executives about forecasts and projects in the absence of real-time data. To validate AI results, Byrne suggests companies need to align AI recommendations with their organization’s goals and values.

Balancing innovation with risk management and upskilling

If the implementation of AI in operational activities is inevitable, it is necessary to develop clear rules and frameworks regulating the key issues of responsibility, governance and auditing of AI systems, as well as recommendations in this regard.

Garg acknowledges the need for a comprehensive approach to risk mitigation that takes into account the interests of customers, partners and employees, stating, “I completely agree. It’s important for companies to understand not only the technology but also the potential risks and trade-offs associated with LLM and define a plan to mitigate those risks.”

Garg, who has extensive experience in identity and security, outlines some key points:

1. Risk assessment: “The first step for an engineering organization is to work with the security and compliance team to map and understand the risks that exist within the company. If you don’t know your risks, you won’t know how to mitigate them when the time comes.”

2. Solid rating: “Invest in robust evaluation, testing, and adversarial testing. LLMs from every major vendor have been proven to have a non-zero probability of disclosing sensitive or dangerous material when properly monitored. Extend the principles of zero trust to AI and believe that every user is malicious. How would you test and evaluate your AI applications in this world?

3. Explainability: “Clearly document the data sources, model architectures, and processes that are implemented to facilitate accountability.”

4. Communication: “Communicate with customers, partners and stakeholders. Don’t pretend that the system is bulletproof when it isn’t. Clear communication of risks and intended behaviors can go a long way to maintaining reputation.”

Garg suggests that this overhaul should encompass not only the technological aspects but also the organizational culture, emphasizing the importance of upskilling and training employees to work effectively with AI systems. He adds, “LLMs are nondeterministic. This means that the same input or action can yield a variety of likely outcomes. There are AI engineers and AI product managers today who are particularly adept at prompting and prompting these LLMs, making them perform when they are quite slow out of the box, and building experiences around them.”

He further emphasizes the need for new roles to address emerging challenges: “At the same time, new risks and governance measures require AI data curators and cleaners, and expert training to ensure nothing biased gets into the model.”

The Future: AI-Enabled Leadership

As AI becomes increasingly integrated into enterprise decision-making, will AI recommendations be viewed as valuable inputs or final decisions? Byrne and Garg emphasize the continued reliance on human judgment, experience, and leadership to evaluate and interpret AI-generated insights in the broader context of organizational strategy and the competitive landscape.

Byrne envisions a future where AI augments, rather than replaces, executive decision-making, illustrating this: “Imagine a company’s executive team deciding whether to build a new product or acquire the capabilities it needs. AI can analyze all the relevant data, the company’s cash position, its human capital, and its skill sets, and weigh all the trade-offs, giving executives the ability to make much smarter decisions.”

For executives to interpret and effectively evaluate AI recommendations, Garg emphasizes the importance of developing new skills. He says that “C-suite executives need to additionally develop critical thinking skills about AI… the types of data that exist in the company and how to best leverage that data.”

He adds that these recommendations must be aligned with the organization’s strategic goals and must take into account ethical, legal and regulatory issues.

Garg concludes with a powerful insight into the synergy between AI and human judgment: “AI has solved the blank slate problem, which is that whenever we need to make a decision, we can reasonably get a set of initial suggestions from AI. But it’s still up to us to use our judgment, ethics, and values ​​to turn those suggestions into something we’re proud to present to employees, customers, and partners.”