
OpenAI warns against making ‘emotional connections’ with new chat technology

An illustration shows the introduction page of ChatGPT, an interactive AI chatbot model trained and developed by OpenAI, on its website in Beijing in March 2023. The developer warns against making “emotional connections” with its latest version. EPA-EFE/WU HAO


August 11 (UPI) — Artificial intelligence company OpenAI is concerned that users may develop emotional bonds with its chatbots, changing social norms and creating false expectations for the software.

AI companies are working to make their software as human-like as possible, but they worry that humans might become emotionally involved in conversations with AI-powered chatbots.

OpenAI said in a blog post that it plans to further investigate users’ emotional dependence on its GPT-4o model, the latest version of its chatbot product, after observing early testers saying things like “This is our last day together” and other messages that “may indicate they are forming connections with the model.”

“While these cases appear mild, they point to the need for further research into how these effects may manifest over a longer period of time,” the company concluded.

The company theorized that socializing with AI could affect human-to-human interactions and reduce the need to connect with other people, which it described as potentially beneficial for “lonely people” but potentially detrimental to healthy relationships.

Describing its human-like characteristics, OpenAI said GPT-4o can respond to audio input in an average of 320 milliseconds, roughly the same as human response time in a conversation.

“It matches the performance of GPT-4 Turbo for English text and code, with significant improvements for non-English text, and is also significantly faster and 50% cheaper in the API,” the company said. “GPT-4o is especially better at understanding video and audio compared to existing models.”

The company uses a scorecard to assess risk and mitigation across several elements of AI technology, including voice technology, speaker identification, attribution of sensitive traits, and other factors. Each factor is rated on a scale of Low, Medium, High, and Critical. Only factors with a score of Medium or lower can be deployed, and only those with a score of High or lower can be developed further.

The company said it is infusing GPT-4o with knowledge gained from previous ChatGPT models to make it as human-like as possible, but it is aware of the risks of the technology becoming “too human.”