An illustration picture shows the introduction page of ChatGPT, an interactive AI chatbot model trained and developed by OpenAI, on its website in Beijing in March 2023. The developer has warned against making ‘emotional connections’ with its latest version. EPA-EFE/WU HAO

Aug. 11 (UPI) — The artificial intelligence company OpenAI is concerned that users may form emotional connections with its chatbots, potentially altering social norms and fostering false expectations of the software.

AI companies have been working to make their software as human as possible, but are now concerned that people could become emotionally invested in their conversations with chatbots.

OpenAI said in a blog post that it intends to further study users’ emotional reliance on its GPT-4o model, the latest iteration of the technology behind its ChatGPT chatbot, after observing early testers saying things like “This is our last day together” and sending other messages that “might indicate forming connections with the model.”

“While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time,” the company concluded.

The company theorized that human-like socialization with AI models might affect people’s interactions with one another, reducing their need for human connection, which it framed as potentially beneficial to “lonely individuals” but possibly damaging to healthy relationships.

In describing its human-like qualities, OpenAI said GPT-4o can respond to audio inputs in an average of 320 milliseconds, similar to human response time in a conversation.

“It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API,” the company said. “GPT-4o is especially better at vision and audio understanding compared to existing models.”

The company uses scorecard ratings to grade risk evaluation and mitigation across several elements of the AI technology, including voice technology, speaker identification, sensitive trait attribution and other factors. Each factor is rated on a scale of Low, Medium, High or Critical. Only models rated Medium or below can be deployed, and only those rated High or below can be developed further.
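
As a rough sketch of how that gating rule works, consider the following example. This is purely illustrative, not OpenAI’s actual evaluation code; the factor names and scores are hypothetical, and the only assumption carried over from the article is the four-level scale and its two thresholds.

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# Hypothetical post-mitigation ratings for a few evaluated factors.
ratings = {
    "voice_generation": Risk.LOW,
    "speaker_identification": Risk.MEDIUM,
    "sensitive_trait_attribution": Risk.MEDIUM,
}

overall = max(ratings.values())  # the riskiest factor sets the overall score

can_deploy = overall <= Risk.MEDIUM   # Medium or below: eligible for deployment
can_develop = overall <= Risk.HIGH    # High or below: further development allowed

print(f"overall={overall.name} deploy={can_deploy} develop_further={can_develop}")
```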

The company said it is folding what it has learned from previous ChatGPT models into GPT-4o to make it as human as possible, but said it is aware of the risks associated with technology that could become “too human.”