DataStax seeks to help companies stuck in AI ‘development hell,’ with a little help from Nvidia




DataStax has continued to expand its data platform in recent years to meet the growing needs of enterprise AI developers.

Today, the company takes the next step with the launch of the DataStax AI platform, built with Nvidia AI. The new platform integrates existing DataStax database technology, including DataStax Astra for cloud-native deployments and DataStax Hyper-Converged Database (HCD) for self-managed ones. It also includes the company’s Langflow technology, used to help create agentic AI workflows. On the Nvidia side, the stack incorporates enterprise AI components designed to help organizations build and deploy models faster, including NeMo Retriever, NeMo Guardrails, and NIM Agent Blueprints.

According to DataStax, the new platform can reduce AI development time by 60% and handle AI workloads 19 times faster than current solutions.

“Time to production is one of the things we talk about, it takes a lot of time to build these things,” Ed Anuff, chief product officer at DataStax, told VentureBeat. “What we found is that a lot of people are stuck in development hell.”

How Langflow enables businesses to benefit from agentic AI

Langflow, DataStax’s AI visual orchestration tool, plays a crucial role in the new AI platform.

Langflow allows developers to visually build AI workflows by dragging and dropping components onto a canvas. These components represent various features of DataStax and Nvidia, including data sources, AI models, and processing steps. This visual approach significantly simplifies the process of creating complex AI applications.

“What Langflow allows us to do is surface all of DataStax’s capabilities and APIs, and all of Nvidia’s components and microservices as visual components that can be connected together and run interactively,” said Anuff.

Langflow is also the core technology that enables agentic AI in the new DataStax platform. According to Anuff, the platform facilitates the development of three main types of agents:

Task-oriented agents: These agents can perform specific tasks on behalf of users. For example, in a travel app, an agent could create a vacation package based on the user’s preferences.

Automation agents: These agents operate behind the scenes, handling tasks without direct user interaction. They often involve APIs communicating with other APIs and agents, facilitating complex automated workflows.

Multi-agent systems: This approach consists of breaking down complex tasks into subtasks managed by specialized agents.
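The multi-agent pattern described above can be sketched in plain Python. This is an illustrative sketch only, not DataStax or Langflow code; the agent functions and the `plan_vacation` coordinator are hypothetical stand-ins for model-backed agents.

```python
# Illustrative sketch of a multi-agent system: a coordinator breaks a
# complex task ("plan a vacation") into subtasks, each delegated to a
# specialized agent. All names here are hypothetical -- in a real
# deployment each agent would call a model or an external API.

def flight_agent(destination: str) -> str:
    return f"flight booked to {destination}"

def hotel_agent(destination: str) -> str:
    return f"hotel reserved in {destination}"

def activities_agent(destination: str) -> str:
    return f"activities planned for {destination}"

def plan_vacation(destination: str) -> list[str]:
    """Coordinator: decomposes the task into subtasks and
    delegates each one to a specialized agent."""
    subtasks = [flight_agent, hotel_agent, activities_agent]
    return [agent(destination) for agent in subtasks]

print(plan_vacation("Lisbon"))
```

The point of the pattern is that each agent stays narrow and testable, while the coordinator owns the decomposition.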

What the Nvidia DataStax combination enables for enterprise AI

Combining Nvidia’s capabilities with data from DataStax and Langflow will help enterprise AI users in several ways, according to Anuff.

He explained that the Nvidia integration will make it easier for enterprise users to invoke custom language models and integrations through a standardized NIM microservices architecture. By using Nvidia’s microservices, users can also leverage Nvidia’s hardware and software capabilities to efficiently run these models.
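NIM microservices expose models behind an OpenAI-compatible HTTP API, so "invoking a custom model" amounts to a standard chat-completions request. A minimal sketch using only the Python standard library follows; the endpoint URL and model name are placeholders for whatever a given deployment actually serves.

```python
import json
from urllib import request

# Sketch of calling a model served behind a NIM microservice.
# NIM endpoints follow the OpenAI-compatible chat-completions format;
# the URL and model name below are placeholders, not values from the
# article.
NIM_URL = "http://localhost:8000/v1/chat/completions"  # placeholder

def build_request(prompt: str,
                  model: str = "meta/llama-3.1-8b-instruct") -> request.Request:
    """Construct (but do not send) a chat-completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return request.Request(
        NIM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Sending it requires a running NIM service:
# with request.urlopen(build_request("Summarize our Q3 numbers.")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the interface is standardized, swapping one NIM-hosted model for another is a change of the `model` string rather than a code rewrite.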

Guardrail support is another key addition that will help DataStax users filter unsafe content from both user inputs and model outputs.

“The guardrail capability is one of the features that I think probably has the most impact on developers and end users,” Anuff said. “Guardrails are essentially a sidecar model, capable of recognizing and intercepting dangerous content originating either from the user, from ingestion, or from content retrieved from databases.”
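The "sidecar" pattern Anuff describes can be sketched as a separate check that runs on content at every boundary before it reaches the main model. This is a conceptual illustration only, not the NeMo Guardrails API; the keyword blocklist stands in for what would be a dedicated safety model in practice.

```python
# Conceptual sketch of the sidecar guardrail pattern: a separate
# check intercepts content at every boundary -- user input,
# ingested documents, retrieved rows -- before the main model sees
# it. Illustrative only; a real guardrail would be a model, not a
# keyword blocklist.

BLOCKED_TERMS = {"credit card number", "ssn"}  # placeholder policy

def guardrail(text: str) -> bool:
    """Return True if the content is safe to pass through."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def answer(user_input: str, retrieved_docs: list[str]) -> str:
    # Intercept unsafe content from the user...
    if not guardrail(user_input):
        return "Request blocked by guardrails."
    # ...and from retrieved content, before the model sees either.
    safe_docs = [d for d in retrieved_docs if guardrail(d)]
    # (A real pipeline would now pass user_input + safe_docs to the LLM.)
    return f"Answering with {len(safe_docs)} safe document(s)."
```

Running the check on retrieval results as well as user input matters because, as the quote notes, dangerous content can arrive from the database side, not just from the user.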

The integration of Nvidia will also help enable continued improvement of the model. Anuff explained that NeMo Curator allows enterprise AI users to determine additional content that can be used for fine-tuning purposes.

The overall impact of the integration is to help businesses benefit from AI more quickly and cost-effectively. Anuff noted that this is an approach that doesn’t necessarily have to rely entirely on GPUs either.

“The Nvidia enterprise stack is really capable of running workloads on CPUs as well as GPUs,” Anuff said. “GPUs will be faster and will generally be placed where you want to put these workloads, but if you want to offload some of the stuff to the CPUs to save money in areas where it doesn’t matter, it allows you to do that too.”