OpenAI Warns ChatGPT's Voice Mode Could Make Us Emotionally Reliant On AI

OpenAI has acknowledged that its newest AI, which speaks with a natural-sounding voice, could lead to "emotional reliance" on AI and "misplaced trust." Films and TV series have long explored this very idea, telling stories in which humans develop emotional bonds with AI.

The lifelike voice has already sparked controversy. OpenAI approached Scarlett Johansson to provide an additional voice for the AI, but she declined the offer. Nonetheless, one of ChatGPT's newest voices sounded so similar to Johansson's that many struggled to tell the two apart. Johansson said she was "shocked, angered, and in disbelief" at how "eerily similar" the voice was to her own.

Now attention is turning to OpenAI’s “GPT-4o System Card,” a report that outlines risks (with scorecard rankings) and mitigations. One of those risks is anthropomorphization.

“Anthropomorphization involves attributing human-like behaviors and characteristics to nonhuman entities, such as AI models. This risk may be heightened by the audio capabilities of GPT-4o, which facilitate more human-like interactions with the model,” OpenAI states in its report.

“Recent applied AI literature has focused extensively on ‘hallucinations’, which misinform users during their communications with the model and potentially result in misplaced trust. Generation of content through a human-like, high-fidelity voice may exacerbate these issues, leading to increasingly miscalibrated trust,” OpenAI adds.

OpenAI has also stated that during early testing of the new voice, it observed users forming emotional bonds with the model. Such cases were rare in testing but could become far more common as the technology spreads. OpenAI warns that these observations signal a need for continued investigation into the longer-term implications of these tendencies.

OpenAI suggests this tendency could benefit lonely people who form social connections with AI models, but it could equally come at the expense of healthy human relationships.

“Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions,” OpenAI says.

This is a budding concern, and it will only grow along with the industry. As more human-like AI models are rolled out, similar worries will come to the fore. That said, anthropomorphization is just one of the many risks highlighted in OpenAI's report.
