ChatGPT 4o, Increased Performance and Reduced Costs: CIO Insights from VirtualExpo

OpenAI released a new version of GPT-4 a few weeks ago: model 4o, short for “Omni”. This latest model improves the processing of text, images and audio, both in input (prompt data) and output (generated content). It also offers better support for non-English languages, all with reportedly better performance and halved costs. We asked Sébastien Landeau, CIO at VirtualExpo, what ChatGPT 4o can bring to an e-commerce company.

Last week, OpenAI made headlines again with the launch of its newest artificial intelligence model, GPT-4o, whose name derives from the Latin word “omnis,” meaning “all.” It is designed to understand and respond in real time to “all” inputs, including text, audio and images. In practice, it can hold a conversation with users rather than simply answering questions.

Increased efficiency

The key technological advance of GPT-4o is its full multimodality: the new model can accept and generate text, audio and images.

It also stands out for its fast response time: it can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is comparable to human reaction time in a conversation.

Additionally, GPT-4o offers improved performance in non-English languages, outperforming GPT-4 Turbo on English text and code while making significant progress in other languages.

GPT-4o also outperforms existing models in audio and video understanding. In video mode, it can comment on a live video stream in real time, even adding humorous remarks.

Early user testing has been impressive. When analyzing the same photo with the same prompt, the new model shows a more detailed understanding of the image than GPT-4: it can accurately identify the surface an object is resting on, or even correctly guess where a photo was taken.

Cost optimization

But what can this new model bring to a B2B company? We asked Sébastien Landeau, CIO of VirtualExpo Group, about the potential positive impact of such changes. VirtualExpo is developing several online stores and aims to streamline and scale the process of adding new products to its e-commerce sites. The company already uses ChatGPT 4.

According to Mr. Landeau, ChatGPT 4o opens up many opportunities to optimize the costs and efficiency of the onboarding process for VirtualExpo’s stores. He also stresses that, for any company using artificial intelligence, keeping costs under control is a strategic necessity.

What do you think of the new OpenAI model?

Sébastien Landeau: “This is the culmination of successive improvements to their models. They claim it improves language processing for languages other than English and doubles performance. What’s really interesting for a company like ours, which uses these tools primarily to onboard products for our growing e-commerce business, is that it cuts costs in half. AI tools are fantastic, but we should not forget that they are expensive. For example, OpenAI charged us $10 per million tokens for input and $30 per million tokens for output. To categorize products, we need to include our store’s taxonomy in the prompt, which OpenAI also charges for. Thanks to 4o, we reduced costs by almost 50%: they now charge us about $5 per million tokens for input and $10 per million tokens for output. It used to cost us 50 cents to onboard a product; now the price has dropped to 30 cents per product.”
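As a back-of-the-envelope illustration of this arithmetic, the sketch below estimates the per-product cost from the per-million-token prices quoted in the interview; the token counts are hypothetical placeholders, since real prompts embed the full store taxonomy and so are much larger than the generated output.

```python
# Rough per-product cost estimate based on the per-million-token prices
# quoted above. The token counts used below are hypothetical placeholders,
# not VirtualExpo's real prompt sizes.

PRICES = {
    "gpt-4":  {"input": 10.00, "output": 30.00},  # USD per 1M tokens (as quoted)
    "gpt-4o": {"input": 5.00,  "output": 10.00},  # USD per 1M tokens (as quoted)
}

def cost_per_product(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of onboarding one product with the given model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical example: a taxonomy-heavy prompt of ~40k input tokens
# and ~3k generated tokens per product.
for model in PRICES:
    print(model, round(cost_per_product(model, 40_000, 3_000), 2))
```

With these illustrative token counts, the estimate lands close to the fifty-cent and thirty-cent figures Mr. Landeau cites; the exact numbers depend on the actual prompt and response sizes.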

How do you explain this cost reduction?

Sébastien Landeau: “I think they are optimizing their model. They have rationalized their infrastructure and reduced the number of models they run. They are democratizing it. I think they need to stay competitive to avoid losing customers to rivals such as Anthropic or Mistral.”

Is it difficult to keep up with updates?

Sébastien Landeau: “This evolution is both exciting and challenging. OpenAI is constantly improving, both in performance and in its programming interfaces. I think we have had to refactor our code for the third time to keep up with the constant updates and improvements they make. That is inherent in any development that comes out of research: it’s a fast-moving field that evolves every day, and that’s normal. That’s why we have set up an internal technical unit to monitor these changes. After 4o was released, my teams updated the code immediately. OpenAI is good at providing up-to-date documentation on using the GPT assistants and knowledge files in agents. Each OpenAI release comes with updated conversational clients, APIs and excellent documentation.”

As for reliability, what are your impressions so far?

Sébastien Landeau: “These tools rely on Large Language Models (LLMs), so there will always be some level of ‘hallucination’ or unexpected results. Achieving perfect industrial processes is an idealistic goal, because these are statistical models with inherent randomness. We therefore always apply a quality-control step, either testing manually or having one AI check the results of another. For example, we use GPT-4 to categorize products and GPT-3.5 Turbo to verify those results. We then run manual acceptance tests, and the results are quite reliable.”
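A minimal sketch of this cross-check pattern, assuming the current OpenAI Python SDK; the taxonomy, prompts and helper names are hypothetical illustrations, not VirtualExpo’s actual pipeline.

```python
# Sketch of the cross-check pattern described above: one model proposes a
# category, a second (cheaper) model verifies it. Taxonomy, prompts and
# function names are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TAXONOMY = ["Industrial pumps", "Valves", "Compressors"]  # placeholder taxonomy

def categorize(description: str) -> str:
    """Ask the primary model to pick one category from the taxonomy."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"Classify the product into exactly one of: {', '.join(TAXONOMY)}. "
                        "Reply with the category name only."},
            {"role": "user", "content": description},
        ],
    )
    return resp.choices[0].message.content.strip()

def verify(description: str, category: str) -> bool:
    """Ask a cheaper model to confirm or reject the proposed category."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer YES if the category fits the product, otherwise NO."},
            {"role": "user", "content": f"Product: {description}\nCategory: {category}"},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

description = "Centrifugal pump for corrosive fluids, 50 Hz, stainless steel body"
category = categorize(description)
if not verify(description, category):
    category = None  # flag the product for manual acceptance testing
print(category)
```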

Is it strategic to rely solely on OpenAI?

Sébastien Landeau: “Putting all your eggs in one basket is complicated. In the event of maintenance or a service interruption, how do we keep our processes running? One option is to combine SaaS offerings such as Anthropic or Mistral. Another strategy for reducing dependencies is to integrate our own neural networks and train them on our data; the advantage is the low unit cost compared to online services. We are experimenting with neural networks for image analysis, training models on our store images. Although not perfect yet, the results are promising. Integrating in-house LLM solutions, especially open-source models such as Mistral, is also something we are looking at. Training requires significant computing resources, but it is necessary to keep costs under control. Using OpenAI for categorization, approval, image vision and attribute evaluation means multiple calls per product, which quickly becomes expensive and hard to manage when you deploy millions of products. So, strategically, we need to keep costs reasonable and therefore use OpenAI sparingly.”
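As a rough illustration of the provider-mixing idea mentioned above, the sketch below tries OpenAI first and falls back to Anthropic if the call fails; the model names, prompt and error handling are assumptions made for the example, and the same pattern would extend to Mistral or any other SaaS LLM.

```python
# Illustrative multi-provider fallback: try OpenAI first, fall back to
# Anthropic if the call fails. Model names, prompt and error handling are
# assumptions for this sketch, not a description of VirtualExpo's setup.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()

def classify(description: str, taxonomy: list[str]) -> str:
    prompt = (f"Classify this product into one of: {', '.join(taxonomy)}. "
              f"Reply with the category only.\n\n{description}")
    try:
        resp = openai_client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip()
    except Exception:
        # Primary provider unavailable: fall back to a second SaaS LLM.
        msg = anthropic_client.messages.create(
            model="claude-3-haiku-20240307",
            max_tokens=50,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text.strip()

print(classify("Stainless steel gate valve, DN50, PN16",
               ["Valves", "Pumps", "Compressors"]))
```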

Would you say that artificial intelligence enables time optimization or cost optimization?

Sébastien Landeau: “So far, AI has not reduced our budget; it has only saved us time. The target ratio of manual processing to AI is 20 to 80 in favor of AI. With the new GPT-4o model, however, we are definitely seeing cost reductions. And from a technical point of view, thanks to these tools, a developer no longer spends three days stuck on a technical problem.”
