OpenAI’s Concerns About AI Voice Technology

Actors Scarlett Johansson (R) and Joaquin Phoenix starred in the 2013 science-fiction film ‘Her’, in which a man falls in love with a human-sounding AI assistant – Copyright AFP/File Amy Osborne

Glenn CHAPMAN

OpenAI has expressed concerns that its realistic voice feature for artificial intelligence may lead users to form emotional bonds with the AI, potentially at the expense of human interactions.

The San Francisco-based company referenced literature indicating that conversing with AI as one would with a person can lead to misplaced trust, a concern that may be intensified by the high-quality voice of GPT-4o.

“Anthropomorphization involves attributing human-like behaviors and characteristics to nonhuman entities, such as AI models,” OpenAI stated in a report regarding the safety measures being taken with the ChatGPT-4o version of its AI.

The company noted that the audio capabilities of GPT-4o facilitate more human-like interactions, which could heighten the risk of emotional attachment.

OpenAI observed testers engaging with the AI in ways that suggested shared bonds, such as expressing sadness about their last interaction.

While these instances may seem harmless, OpenAI said they warrant further study to understand how they might evolve over time.

Moreover, OpenAI speculated that socializing with AI could diminish users’ skills or willingness to engage in human relationships.

“Extended interaction with the model might influence social norms,” the report indicated.

For instance, the AI’s deferential nature lets users interrupt and ‘take the mic’ at any time, behavior that would be considered rude in conversation between people.

The AI’s ability to remember details during conversations and manage tasks could lead to over-reliance on technology, according to OpenAI.

“The recent concerns shared by OpenAI around potential dependence on ChatGPT’s voice mode indicate what many have already begun asking: Is it time to pause and consider how this technology affects human interaction and relationships?” questioned Alon Yamin, co-founder and CEO of AI anti-plagiarism detection platform Copyleaks.

He emphasized that AI should never replace genuine human interaction.

OpenAI plans to conduct further tests to explore how voice capabilities in its AI might lead to emotional attachments.

Teams testing ChatGPT-4o voice capabilities also found that they could prompt the AI to repeat false information and generate conspiracy theories, raising concerns about the model’s ability to convincingly disseminate misinformation.

In June, OpenAI had to apologize to actress Scarlett Johansson for using a voice that closely resembled hers in its latest chatbot, bringing attention to voice-cloning technology.

Although OpenAI denied that the voice used was Johansson’s, the situation was complicated by CEO Sam Altman’s social media post labeling the new model with the word “Her.”

Johansson voiced an AI character in the film “Her,” which Altman has previously described as his favorite film about technology.

The 2013 film features Joaquin Phoenix as a man who falls in love with an AI assistant named Samantha.