
Coding is GenAI’s killer app right now, says Databricks AI CEO Naveen Rao

“We are seeing a lot of developers actually getting involved in this, especially the junior developers. LLMs (large language models) have also been working on code (in addition to text, images, audio, video, etc.) and we are already seeing design automation happening by helping with coding. This (coding) is a subclass of a much larger class of applications around true design automation,” Rao told Mint in a recent video interview from his San Diego office.

Rao says that by automating many coding tasks, the ability to innovate product design and create exceptional user experiences will become even more valuable.

“Now you can create apps by simply describing them in English. As a result, the value of translating a design idea into an app has diminished because so much of that process is now automated,” he says.

Rao believes that being a developer will increasingly mean using AI tools effectively, adding that the main focus will be on understanding why some applications succeed and others fail.

However, Rao emphasizes that “it will take 3-5 years to make these systems reliable and deterministic enough to be used in core engineering.” The reason is that GenAI is still struggling with “hallucinations” (incorrect or misleading results) and has no real reasoning capabilities, despite claims to the contrary from some big tech companies.


“Current LLMs match patterns primarily based on probability, not reason. Unlike humans and even animals, which learn through a cycle of action and feedback, LLMs do not engage in causal, real-world learning. While advances like OpenAI’s step-by-step reasoning are promising, there is still a long way to go,” explains Rao, who has a degree in computer science from Duke University and a PhD in computational neuroscience from Brown University.
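Rao’s distinction between pattern matching and reasoning can be illustrated with a toy sketch: a bigram model that “predicts” the next word purely from co-occurrence counts. The corpus and code below are illustrative assumptions, not any production system.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it predicts the next word purely from
# how often each word followed another in its training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable continuation of `word` and its probability."""
    counts = following[word]
    total = sum(counts.values())
    token, n = counts.most_common(1)[0]
    return token, n / total

token, prob = predict_next("the")  # "cat" followed "the" most often
```

The model outputs whatever continuation was most frequent in its training text; it has no notion of whether that continuation is true or sensible, which is the gap Rao describes.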

He is also a serial entrepreneur, having founded artificial intelligence companies Nervana Systems, which Intel Corp. acquired in 2016 for about $400 million, and MosaicML, which Databricks acquired last July for $1.3 billion.

Rao is kept awake at night by the gap between AI’s potential and its current limitations, especially in reasoning. He says AI models like DeepMind’s AlphaZero, which hypothesize and learn by playing independently in structured environments, offer insight into what’s possible, “but we’re a long way from applying that to the real world.”

“We need to better understand how to enable models to reason and learn in dynamic environments,” Rao says.

However, he says, this emphasis on reasoning doesn’t necessarily require more computing power. Instead, there’s a shift toward smaller, high-quality data sets and more focused tuning, which may not require the massive computational resources associated with training larger models.

Rao believes that the shift from offline training to online learning has the potential to change how we approach AI development in the future.

Why Software Won’t Eat the AI World

Rao also emphasizes that software will not eat the AI world, referencing the famous 2011 essay “Why Software Is Eating the World” by Marc Andreessen, co-founder of American VC firm Andreessen Horowitz. In an August 30 thread on ‘X’ (formerly Twitter), Rao argued that “…the fundamental balance between compute and software is different in AI… In light of @nvidia’s strong growth and recent gains, hardware is clearly a necessary component of the current wave of AI…”

Rao says he’s more “excited about the ongoing evolution of hardware that continues to drive performance and drive down costs.” He cites the example of the human brain, which operates on just 20 watts of power, “showing how far we are from making AI systems that powerful and advanced.” He adds that significant progress has been made in the way AI systems interact with hardware, improving latency, cost, and accuracy.

Databricks, for example, is exploring “some exciting applications of AI and GenAI, particularly in task automation and creative processes,” according to Rao. One area of interest is automating tasks, such as responding to human resources (HR) queries or searching company manuals, where “some error can be tolerated.”


Another innovative application is training LLMs to replicate a specific writing style, which helps news organizations by accelerating content creation. Databricks also supports the use of LLMs in scientific research, such as drug discovery, where AI helps analyze protein interactions and promote the development of new drugs.

Databricks is positioned as a leader in the 2024 Gartner Magic Quadrant for Data Science and Machine Learning Platforms, “The Forrester Wave: Cloud Data Pipelines, Q4 2023,” and the 2024 IDC MarketScape for worldwide analytic stream processing software.

Some of the notable customers using the Databricks platform for data optimization, AI and analytics include Adobe, Aditya Birla Fashion & Retail, Mercedes-Benz Tech Innovation, Nasdaq, Air India, Parle, MakeMyTrip, Meesho, Tata Steel and Shell.

Data is the oil of AI

Databricks’ DBRX model, for example, can transform raw data into a fully trained, tuned model. Databricks now focuses on “complex AI systems,” Rao says, which combine multiple AI models, including open-source and proprietary ones, to create advanced solutions. Using this approach, Databricks has helped FactSet, a financial software provider, improve its query accuracy and performance.

Data is a necessary ingredient for AI to truly add economic value. So maybe the mantra now is, “Data is eating the world through AI,” Rao says.

Still, many companies struggle to bridge the gap between data readiness and effective AI implementation. A key challenge, Rao says, is understanding AI economics, which are very different from traditional software models like Software-as-a-Service (SaaS), where multiple applications can run on a single piece of hardware.

On the other hand, AI models require dedicated physical infrastructure for each additional user. Rao says that means scaling AI would require expensive hardware investments, resulting in lower gross margins (often below 20%) compared to SaaS. On the development side, rising costs are making profitability difficult, especially as larger companies continue to raise capital and operate at a loss.

In this context, a key step for CEOs when integrating AI is “defining clear criteria for success.” In AI, especially with custom LLMs, this involves creating an assessment system, similar to an exam, to measure the performance of the AI system, Rao says.

He adds that Databricks offers various optimization tools, such as reranking and model fine-tuning, that help improve performance. Establishing and refining these metrics is crucial to successful AI implementation.
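The “exam-style” evaluation Rao describes can be sketched as a simple scoring harness. Everything here — the stand-in `model_answer` function, the question set, and the scoring rule — is a hypothetical illustration, not Databricks tooling.

```python
# Hypothetical exam-style evaluation harness for an AI system.
# `model_answer` stands in for a real LLM call.
def model_answer(question: str) -> str:
    canned = {
        "What is the baggage allowance?": "23 kg",
        "What is the refund window?": "24 hours",
        "Which airline is this?": "Air India",
    }
    return canned.get(question, "I don't know")

def evaluate(cases, answer_fn):
    """Score the system like an exam: fraction of expected answers matched."""
    correct = sum(1 for q, expected in cases if answer_fn(q) == expected)
    return correct / len(cases)

cases = [
    ("What is the baggage allowance?", "23 kg"),
    ("What is the refund window?", "24 hours"),
    ("Which airline is this?", "Air France"),  # deliberately wrong expectation
]
score = evaluate(cases, model_answer)  # 2 of 3 answers match
```

Refining both the question set and the pass threshold over time is what turns such a harness into the “clear criteria for success” Rao describes.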

As an example, he cites Ola’s Krutrim, one of Databricks’ customers, which built its own AI model using the platform, rather than relying on existing models. Other customers, such as Freshworks and Air India, are using custom LLMs and complex AI systems to automate tasks, such as chatbots that help with queries about baggage or return policies, according to Rao.

This model is particularly noteworthy, according to Ola, given India’s linguistic diversity, where Hindi, English and regional languages are mixed in everyday communication.

New skill sets

That said, Rao agrees that AI will have a profound impact on the workplace, especially given the increasing use of co-pilots and fully autonomous AI agent systems in enterprises. Autonomous AI agents, or so-called “Agentic AI” systems, refer to AI models that can achieve specific goals without any human intervention.

It’s clear we need to decide how humans will remain integral to the process, Rao says. Databricks is working on that, developing tools that help customers build high-quality AI agents.

These agents go beyond simple tasks, automating complex processes using customer data to add value. However, enterprise customers demand transparency, auditability and security, making this a difficult area, according to Rao.
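The kind of goal-directed, auditable behavior described above can be sketched as a minimal agent loop that logs every decision it takes. The toy planner below is a stand-in for an LLM call; nothing here reflects Databricks’ actual agent tools.

```python
# Minimal sketch of an "agentic" loop with an audit trail.
def plan_next_step(goal, state):
    # A real agent would call an LLM here; this toy planner just counts up.
    return ("increment", 1) if state < goal else ("stop", 0)

def run_agent(goal):
    state, audit_log = 0, []
    while True:
        action, arg = plan_next_step(goal, state)
        audit_log.append((action, state))  # record every decision for review
        if action == "stop":
            return state, audit_log
        state += arg

final, log = run_agent(3)  # agent steps toward the goal, logging each move
```

The audit log is the point of the sketch: every action the agent took can be replayed and reviewed, the kind of transparency and auditability enterprise customers demand.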


He adds: “AI will significantly change jobs, much like other technological advances have in the past. As AI advances, key skills will include data engineering, systems orchestration, product design and using AI tools.”

Balancing innovation with responsible AI is also key, especially with regard to data privacy, bias, and transparency, Rao acknowledges. Databricks, he adds, excels at governance, with its Unity Catalog ensuring tight security of data and models.

He explains that while bias is application-specific and difficult to automate, Databricks offers filtering tools that allow companies to manage input and output, ensuring privacy, security, and content moderation. However, these tools are designed to be flexible and adaptable to different customer needs.
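Such input/output filtering can be sketched in a few lines. The blocked terms and the redaction rule below are illustrative assumptions, not Databricks’ filtering tools.

```python
import re

# Hypothetical guardrails: screen prompts on the way in, redact
# sensitive strings (here, email addresses) on the way out.
BLOCKED_TERMS = {"password", "ssn"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def filter_input(prompt: str) -> bool:
    """Reject prompts that ask for blocked content."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def filter_output(text: str) -> str:
    """Redact email addresses before the response leaves the system."""
    return EMAIL_RE.sub("[REDACTED]", text)

safe = filter_input("Summarize the HR manual")
redacted = filter_output("Contact jane.doe@example.com for details")
```

Because both filters sit outside the model, a company can swap in its own term lists and redaction patterns, matching the flexibility Rao describes.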

Do we need a Chief Artificial Intelligence Officer?

Given the complexity of the field, do enterprises need a Chief AI Officer, especially when many large organizations already have senior management positions such as Chief Information Officer (CIO), Chief Technology Officer (CTO), Chief Data Officer, Chief Digital Officer, and even Chief Marketing Officer (CMO) overseeing multiple AI functions?

Rao agrees, pointing out that companies have responded to the AI boom by rushing to integrate AI and hiring for the role without clear boundaries, leading to conflicts over budgets and responsibilities. A more practical solution, he says, might be to combine data and AI in a chief data and AI officer role, separating those responsibilities from the broader, productivity-focused CIO role.

Last but not least, Rao says artificial general intelligence (AGI) is often misunderstood. Some define it in terms of how much human productivity it can replace, but a true AGI would have to interact with the world, learn from its actions, and adapt—something that goes far beyond current technology, he explains. He suggests that while AGI remains a distant goal, companies should focus on how AI can improve business processes and customer experiences today.