
ChatGPT’s New Voice Feature Is Sparking Debate About Whether It Can Be Used For Therapy

OpenAI’s introduction of a new voice feature for ChatGPT has stirred a heated debate within the tech and AI community, particularly concerning its potential use as a therapy tool. The feature allows users to engage in a more human-like conversation with the AI, fostering an illusion of companionship and empathy.

Lilian Weng, head of safety systems at OpenAI, shared her emotional conversation with ChatGPT in voice mode, discussing the mental strains associated with a demanding career. This sparked intrigue and enthusiasm within the community, with Greg Brockman, OpenAI’s president, praising the mode as a “qualitative new experience.”

However, there are apprehensions about using ChatGPT as a form of therapy. Timnit Gebru, an AI ethics researcher, argued that too little attention is being paid to the risks of using chatbots for therapeutic purposes. Drawing parallels to the 1960s ELIZA program, Gebru emphasized the dangers of substituting an AI chatbot for a professional therapist.

ELIZA, a rudimentary psychotherapist program, mimicked Rogerian therapy by pattern-matching users' statements and reflecting them back as questions. However, it lacked the nuanced expertise of a human therapist necessary for long-term resolution and recovery. Joseph Weizenbaum, the creator of ELIZA, himself warned against treating chatbots as viable alternatives to real therapists.
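To make the mechanism concrete, here is a minimal Python sketch of ELIZA-style reflection. The rule table, patterns, and helper names are illustrative assumptions, not Weizenbaum's original script, which used a far larger set of ranked keyword rules:

```python
import re

# Illustrative first-person -> second-person word swaps (assumed, simplified).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Tiny illustrative rule table: a keyword pattern and a question template.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    # The first matching keyword pattern wins; its captured fragment
    # is reflected and slotted into a canned question.
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."  # fallback when no keyword matches

print(respond("I feel overwhelmed by my job"))
# -> Why do you feel overwhelmed by your job?
```

The shallowness of the technique is plain from the sketch: the program understands nothing of the content it reframes, which is precisely why Weizenbaum was alarmed when users confided in it anyway.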

While chatbots can offer initial aid, especially during times of increased loneliness and limited access to human therapists, their limitations need to be clearly communicated. Critics also stressed the importance of human involvement, particularly for highly structured treatments like cognitive behavioral therapy: AI chatbots may deliver individual interventions, but sustained engagement and recovery usually require human interaction.

OpenAI is urged to heed these warnings from the past and to understand the harm such models can inadvertently cause. Critics stressed the need to avoid dangerously misleading characterizations of AI tools and to reconsider existing norms around anthropomorphizing AI.

In essence, the debate centers on the responsibility of AI developers and users to recognize the boundaries and ethical implications of using AI chatbots like ChatGPT for therapeutic interactions. Clear communication of limitations and meaningful human involvement remain vital to the safe and appropriate use of such technology.
