
Teams in India are driving our global AI ambitions, says IBM Research head of AI

“Our research teams are actually at the heart of most of our global code work. The Indian development teams are also leading the development of watsonx Orchestrate, a digital workflow automation tool, and watsonx.data, the data component of IBM’s watsonx platform,” Sriram Raghavan, vice president of AI at IBM Research, told Mint in an interview.

Raghavan, who leads a global team of over 750 scientists and engineers across IBM Research operations, including India, was in Mumbai to attend the company’s flagship annual event, which was held in the city this year.

“India is like a microcosm for IBM. Every part of IBM is represented here – research labs, software labs, systems labs – and we are still growing,” he said.

For example, IBM, a partner in India’s AI Mission and the country’s Semiconductor Mission, has installed watsonx technology on the AIRAWAT graphics processing unit (GPU) infrastructure at the Centre for Development of Advanced Computing (C-DAC), which “can be used by startups and ecosystem partners.”

On September 23, Prime Minister Narendra Modi met top technology leaders, including IBM CEO Arvind Krishna and Google CEO Sundar Pichai, in New York to discuss topics including AI, quantum, biotechnology and life sciences, and semiconductor technologies.

Read also: What made IBM go from tech titan to cautionary tale

Raghavan noted that IBM has a strong public-private ecosystem in New York, where its Albany lab works closely with the State University of New York and the New York Nanotechnology Center. “We’re taking that and helping the Indian government build similar ecosystems,” he explained.

Closer to home, IBM is working with L&T Semiconductor Technologies Ltd, combining its semiconductor intellectual property (IP) expertise with L&T’s industry knowledge to drive innovation in semiconductor solutions.

The Evolution of Artificial Intelligence

Raghavan emphasized that AI is gaining serious attention in hardware, software and enterprise applications. “Companies want fit-for-purpose models that are efficient, scalable and affordable, which is also what IBM is focused on,” he said.

He said IBM’s approach to AI has three key components: the Granite series (IBM’s flagship family of open and proprietary large language models, or LLMs); the InstructLab open-source project for customizing models; and the watsonx platform for integrating, managing, and securely deploying AI models across environments, including on-premises, public, and IBM clouds.
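As a rough illustration of what working with an open Granite model can look like, the sketch below loads one with the open-source Hugging Face transformers library. The model identifier is assumed for illustration and is not drawn from IBM documentation; check the ibm-granite organization on Hugging Face for current releases.

```python
# Minimal sketch: loading an openly released Granite model via the
# Hugging Face transformers library. The model ID is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.0-2b-instruct"  # assumed/illustrative ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "List two benefits of open-source language models."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```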

Like Meta Platforms Inc., IBM believes in making open-source models publicly available. “The real value is in managing and optimizing those models, much like we did with Red Hat and Linux,” Raghavan said.

The emergence of Gen AI has raised concerns that these models will be closed, proprietary, and unsafe.

“Hence, we (IBM and Meta) launched the AI Alliance (in December 2023) to highlight the value of an open approach, and many Indian companies have also joined the movement, recognising that AI is too important to be developed behind closed doors,” Raghavan said.

Read also: Quantum supercomputers will soon become a reality: IBM’s Dario Gil

The AI Alliance currently includes IIT-Bombay, AI4Bharat (IIT-Madras), IIT-Jodhpur, Infosys Ltd, KissanAI, People+AI and Sarvam AI.

“By keeping models open, we invite more eyes to help innovate and build better safeguards. It’s not the model that creates the risk, but how it’s used,” Raghavan insisted.

He stressed that the U.S. government, in recent executive orders, recognized that overly restrictive measures would limit innovation, especially in academia and among start-ups.

According to Raghavan, the underlying technologies should be open in order to support collaboration and drive new ideas, even if customers still pay for enterprise-grade support, security, and management. “The monetization will come from managing AI applications,” he explained.

But are enough companies moving from pilot to production, and how do they realize a return on investment (ROI) from GenAI? “Our priorities are cost, performance, security, and skills as customers move from proof-of-concept (POC) to production,” Raghavan said. He cited an IBM study that found 10% to 20% of companies had scaled at least one AI use case. He acknowledged that the number is growing, but challenges remain, especially in regulated industries.

Read also: Let’s see if artificial intelligence can work wonders and eliminate the gaps in education

“Successful companies focus on key areas with clear ROI potential, rather than spreading their efforts too thinly across multiple POCs. This focused approach allows them to scale effectively and realize significant gains. As companies scale their AI use cases, they are discovering the importance of balancing technology, process, and culture,” he said.

Examples of Artificial Intelligence Applications

He said IBM sees three key use case categories: customer service, application modernization, and digital work and business automation. “Customer care is a natural fit, even before Gen AI. Everyone wants better customer service at a lower cost. The real value comes from building models that are tailored to specific needs,” he explained, adding that, for example, a customer service model doesn’t have to solve complex problems, which helps keep costs low.

Application modernization is also critical, especially when companies are dealing with large legacy code bases. “For example, IBM’s watsonx code assistant for COBOL (a legacy programming language) helps modernize mainframe code by allowing developers to work more easily with legacy languages. We’re extending this to Java, another key language for enterprises. Digital work, or business automation, encompasses processes like supply chain, finance, and HR. Our watsonx Orchestrate suite is designed to streamline these operations using AI,” he explained.

But Raghavan acknowledged that as companies adopt AI, they face challenges in three areas: skills, trust, and cost. “That’s where watsonx.governance comes in—it helps automate model governance, ensuring proper usage, tracking data, and conducting risk assessments.”
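As a conceptual sketch only (not the watsonx.governance API), the snippet below illustrates the kind of record-keeping such a governance layer automates: model metadata, data lineage, and dated risk assessments. All names in it are hypothetical.

```python
# Conceptual illustration of model-governance record-keeping; every class,
# field, and value here is hypothetical and not tied to any IBM product.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelGovernanceRecord:
    model_name: str
    version: str
    intended_use: str
    training_data_sources: list[str]
    risk_level: str                          # e.g. "low", "medium", "high"
    assessments: list[dict] = field(default_factory=list)

    def log_assessment(self, reviewer: str, finding: str) -> None:
        """Append a timestamped risk-assessment entry for audit purposes."""
        self.assessments.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "reviewer": reviewer,
            "finding": finding,
        })

record = ModelGovernanceRecord(
    model_name="customer-care-assistant",
    version="1.2.0",
    intended_use="Tier-1 customer support responses",
    training_data_sources=["internal support tickets (anonymized)"],
    risk_level="medium",
)
record.log_assessment("risk-team", "No PII leakage found in sampled outputs.")
```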

Asked about the debate surrounding AI acquiring enhanced “reasoning” abilities, Raghavan acknowledged that it’s a “nuanced topic.” Models don’t reason the way humans do, by applying logic, he explained; instead, they learn from examples. “While current AI can reason about specific domains, like IT systems or code, general-purpose reasoning remains out of reach.”

Read also: GenAI has a killer app. It’s coding, says Databricks AI boss Naveen Rao

Domain-specific reasoning is also “extremely useful,” according to Raghavan. For example, AI can improve IT automation or help fix code problems by learning from examples, making it a practical and valuable approach.

He concluded: “We also see a shift from models that simply provide answers to those that ‘think’ before they answer. Models that engage in System 2 behavior (to borrow Daniel Kahneman’s analogy) can self-critique and refine their answers. This will enable more complex AI tasks, but it will increase costs because inference times grow with deeper reasoning.”
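A minimal sketch of the generate-critique-refine pattern Raghavan describes is shown below, assuming a placeholder llm_call function rather than any specific IBM or watsonx API.

```python
# Illustrative generate-critique-refine loop ("System 2"-style behavior).
# `llm_call` is a hypothetical placeholder for any chat-completion function.
def llm_call(prompt: str) -> str:
    raise NotImplementedError("Plug in a model or provider of your choice.")

def answer_with_self_critique(question: str, max_rounds: int = 2) -> str:
    draft = llm_call(f"Answer the question:\n{question}")
    for _ in range(max_rounds):
        critique = llm_call(
            f"Question: {question}\nDraft answer: {draft}\n"
            "List any factual or logical problems with the draft."
        )
        if "no problems" in critique.lower():
            break  # the model judges its own draft acceptable
        draft = llm_call(
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
            "Rewrite the answer, fixing the issues above."
        )
    return draft
```

Each critique round is an additional inference pass, which is why deeper reasoning of this kind raises cost, as Raghavan notes.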