
Liquid AI introduces new LFM-based models that appear to outperform most traditional large-language models

Artificial intelligence startup Liquid AI Inc., a spinoff from the Massachusetts Institute of Technology, today launched its first set of generative AI models, which differ significantly from competing models because they’re built on a fundamentally new architecture.

The new models are called “Liquid Foundation Models,” or LFMs, and they’re said to deliver impressive performance that’s comparable to, or even better than, some of the best large language models available today.

The Boston-based startup was founded by a team of researchers from the Massachusetts Institute of Technology, including Ramin Hasani, Mathias Lechner, Alexander Amini and Daniela Rus. They are said to have pioneered the concept of “liquid neural networks,” a class of AI models that differs significantly from the generative pre-trained transformer-based models we know and love today, such as OpenAI’s GPT series and Google LLC’s Gemini models.

The company’s mission is to create highly capable and efficient general-purpose models that can be used by organizations of all sizes. To achieve this, it’s building LFM-based AI systems that can operate at every scale, from edge devices to enterprise-grade deployments.

What are LFMs?

According to Liquid, its LFMs represent a new generation of AI systems that are designed with both performance and efficiency in mind. They use a minimal amount of system memory while providing exceptional computing power, the company explains.

They’re grounded in dynamical systems, numerical linear algebra and signal processing, which makes them well-suited to handling various types of sequential data, including text, audio, images, video and other signals.
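To make that dynamical-systems framing concrete, below is a minimal sketch of a linear state-space recurrence, the kind of sequence-processing primitive such architectures build on. The matrices, dimensions and input here are arbitrary assumptions for illustration, not Liquid AI’s actual design.

```python
import numpy as np

def run_state_space(inputs, A, B, C):
    """Run a discretized linear dynamical system over a sequence:
    x[t] = A @ x[t-1] + B @ u[t],  y[t] = C @ x[t]."""
    state = np.zeros(A.shape[0])
    outputs = []
    for u in inputs:                   # one update per sequence element
        state = A @ state + B @ u     # fixed-size state carries the history
        outputs.append(C @ state)     # readout at each step
    return np.stack(outputs)

rng = np.random.default_rng(0)
d_state, d_in, d_out = 8, 4, 2        # toy sizes, chosen for illustration
A = 0.9 * np.eye(d_state)             # stable dynamics (spectral radius < 1)
B = 0.1 * rng.normal(size=(d_state, d_in))
C = 0.1 * rng.normal(size=(d_out, d_state))
seq = rng.normal(size=(100, d_in))    # a 100-step input sequence
print(run_state_space(seq, A, B, C).shape)   # -> (100, 2)
```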

Liquid AI first made headlines in December, when it raised $37.6 million in seed funding. It explained at the time that its LFMs are based on a newer liquid neural network architecture that was originally developed at MIT’s Computer Science and Artificial Intelligence Laboratory. LNNs are built on the concept of artificial neurons, or data-processing nodes.

While traditional deep learning models require thousands of neurons to perform computational tasks, LNNs can achieve the same performance with significantly fewer of them. They do this by combining those neurons with innovative mathematical formulas, enabling them to do much more with less.
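For intuition on what those formulas look like, here’s a rough single-neuron sketch of the liquid time-constant idea from the published LNN literature, integrated with simple Euler steps. The parameter values and input signal are illustrative assumptions, not anything from Liquid AI’s models.

```python
import numpy as np

def ltc_step(x, u, dt=0.01, tau=1.0, A=1.0, w=1.5, b=0.0):
    """One Euler step of a single liquid time-constant (LTC) neuron."""
    f = np.tanh(w * u + b)                  # input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A     # the effective time constant
    return x + dt * dxdt                    # varies with the input "f"

x = 0.0
for t in range(1000):
    u = np.sin(t * 0.01)                    # toy input signal
    x = ltc_step(x, u)
print(f"final neuron state: {x:.4f}")
```

The key design point is that the neuron’s time constant depends on its input, so the dynamics adapt to the data as it streams in rather than being fixed after training.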

The startup claims its LFMs retain this adaptability and efficiency, which enables them to perform real-time adjustments during inference without the massive computational overhead associated with traditional LLMs. As a result, they can efficiently handle up to 1 million tokens without any noticeable impact on memory usage.
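Some back-of-the-envelope arithmetic shows why a fixed-size state matters at long context lengths. All of the model dimensions below are assumptions chosen for illustration, not published LFM internals.

```python
# Illustrative memory arithmetic; dimensions are assumed, not published.
n_layers, n_kv_heads, head_dim = 32, 8, 128
bytes_per_value = 2                         # fp16

def kv_cache_bytes(tokens):
    # A transformer's KV cache grows linearly with context length:
    # two tensors (keys and values) per layer, per token.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_value * tokens

def fixed_state_bytes(state_dim=4096):
    # A recurrent/state-space model keeps one fixed-size state per layer,
    # no matter how many tokens it has already processed.
    return n_layers * state_dim * bytes_per_value

for tokens in (4_096, 128_000, 1_000_000):
    print(f"{tokens:>9,} tokens: KV cache {kv_cache_bytes(tokens) / 1e9:7.2f} GB"
          f" vs. fixed state {fixed_state_bytes() / 1e6:.2f} MB")
```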

Liquid AI is starting with the launch of a family of three models. The LFM-1B is a dense, 1.3 billion-parameter model designed for resource-constrained environments. Slightly more powerful is the LFM-3B, which has 3.1 billion parameters and is aimed at edge deployments such as mobile applications, robots and drones. Finally, there’s the LFM-40B, a much more powerful “mixture of experts” model with 40.3 billion parameters, designed for deployment on cloud servers to handle the most complex use cases.
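As a sketch of the general mixture-of-experts technique the LFM-40B is described as using, here’s a minimal top-k routing example. The expert count, top-k value and layer shapes are arbitrary assumptions, since the LFM-40B’s actual internals aren’t detailed here.

```python
import numpy as np

def moe_forward(x, experts, router_w, top_k=2):
    """Route input x to the top-k experts and mix their outputs."""
    logits = x @ router_w                    # router score per expert
    top = np.argsort(logits)[-top_k:]        # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                 # softmax over chosen experts
    # Only the chosen experts execute, so the active parameter count per
    # token stays far below the model's total parameter count.
    return sum(w * experts[i](x) for i, w in zip(top, weights))

rng = np.random.default_rng(0)
d, n_experts = 16, 8                         # toy sizes for illustration
experts = [lambda x, W=0.1 * rng.normal(size=(d, d)): np.tanh(x @ W)
           for _ in range(n_experts)]
router_w = 0.1 * rng.normal(size=(d, n_experts))
print(moe_forward(rng.normal(size=d), experts, router_w).shape)  # -> (16,)
```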

The startup says its new models have already demonstrated “state-of-the-art performance” on a number of important AI benchmarks, and it believes they’re emerging as formidable competitors to existing generative AI models such as ChatGPT.

While traditional LLMs see a sharp increase in memory consumption when processing long contexts, the LFM-3B model in particular maintains a much smaller memory footprint, making it an excellent choice for applications that require processing large amounts of sequential data. Example use cases include chatbots and document analysis, the company said.

Strong benchmark results

In terms of performance, the LFMs have delivered impressive results, with the LFM-1B outperforming transformer-based models in the same size category. Meanwhile, the LFM-3B compares well with models such as Microsoft Corp.’s Phi-3.5 and Meta Platforms Inc.’s Llama family. As for the LFM-40B, it can even outperform models larger than itself while maintaining an unrivaled balance between performance and efficiency.

According to Liquid AI, the LFM-1B put in a particularly dominant performance on benchmarks such as MMLU and ARC-C, setting a new standard for models in the 1 billion-parameter class.

The company is making its models available in early access through platforms such as Liquid Playground, Lambda – via its chat and application programming interfaces – and Perplexity Labs. This will give organizations the opportunity to integrate the models into their own AI systems and see how they perform across various deployment scenarios, including on edge devices and on-premises.

One thing the startup is currently working on is optimizing its LFMs to run on specific hardware built by Nvidia Corp., Advanced Micro Devices Inc., Apple Inc., Qualcomm Inc. and Cerebras Systems Inc., so users can squeeze even more performance out of them before they reach general availability.

The company says it will publish a series of technical blog posts detailing the mechanics of each model ahead of their official launch. It’s also encouraging red-teaming, inviting the AI community to test its LFMs to their limits and see what they can and cannot yet do.

Photo: SiliconANGLE/Microsoft Designer
