Could artificial-intelligence chatbots replace therapists in the future? This long-running debate has been revived by ChatGPT, a text-generating conversational chatbot built with OpenAI’s powerful third-generation language model, GPT-3.
GPT-3, one of many large language models, was released in 2020, and its forerunners appeared years earlier. ChatGPT attracted broader public interest when OpenAI made it available as a free public preview during its research phase. GPT-3 (Generative Pre-trained Transformer, third generation) is a neural-network machine learning model trained on vast volumes of conversational text from the internet and refined with feedback from human reviewers.
GPT-3 has been applied in a variety of fields, including writing a play that was staged in the United Kingdom, powering a text-based adventure game, building apps for non-coders, and producing phishing emails as part of an investigation of dangerous use cases. In 2021, a game developer built a chatbot that impersonated his deceased fiancée, though OpenAI eventually shut down the project.
Certain forms of therapy that are more structured, concrete, and skills-based (e.g., cognitive behavioural therapy, dialectical behavioural therapy, or health coaching) may benefit from the use of AI chatbots.
Research suggests that chatbots can coach users to give up smoking, eat better, and exercise more. SlimMe AI, an artificial-empathy chatbot, has helped people who wanted to lose weight. The World Health Organization created chatbots and virtual humans to help smokers quit during the COVID-19 pandemic. Numerous companies have produced mental health chatbots, such as Woebot, developed in 2017 and based on cognitive behavioural therapy. Others offer guided meditation or track a person’s mood.
However, other forms of therapy, such as psychodynamic, relational, or humanistic therapy, may be more difficult to deliver via chatbots, since it is not certain how effective they can be without a human component.
Potential Benefits of Robotic Therapists
Accessibility, affordability, and scalability. If done well, virtual chatbot therapy could let more people receive mental health services on their own schedule and in the comfort of their own homes.
Greater openness and less reticence. Some studies suggest that people may feel more at ease sharing private or embarrassing information with a chatbot.
Standardised, uniform, and trackable delivery of care. Chatbots can provide a uniform and more predictable set of responses, and these interactions can be reviewed and studied later.
A range of modalities. Chatbots can be trained to deliver a wider variety of therapy modalities than a single human therapist might offer, and an algorithm might determine what kind of therapy would be most effective in each case.
Treatment personalisation. ChatGPT can act as a customised therapist because it responds to text prompts with conversational text and can recall prior prompts within a conversation (a short sketch of this kind of conversational memory follows this list).
Access to a variety of psychoeducational materials. Chatbots could connect users to a wide range of freely available digital resources, such as websites, books, or online tools.
Collaboration with or augmentation of professional therapists. Chatbots could enhance treatment in real time by offering therapists feedback or recommendations, such as prompts for conveying empathy.
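As a rough illustration of the conversational memory mentioned above, the sketch below shows one way an application could let a chatbot "recall prior prompts": by resending the running conversation history with every request. It is a minimal sketch assuming the OpenAI Python client and a chat-style endpoint; the model name, system prompt, and helper function are illustrative, not a description of any product’s actual implementation.

```python
# Minimal sketch: a chatbot that "remembers" earlier turns by resending
# the running conversation history with every request.
# Assumes the OpenAI Python client; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A system prompt frames the assistant's role; everything after it is history.
history = [
    {"role": "system",
     "content": "You are a supportive CBT-style coaching assistant. "
                "You are not a licensed therapist and should suggest "
                "professional help for anything serious."}
]

def chat_turn(user_text: str) -> str:
    """Append the user's message, request a reply, and store both in history."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # illustrative model name
        messages=history,        # resending the full history is what gives "memory"
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# The second turn can refer back to details from the first,
# because both turns travel with the request.
print(chat_turn("I've been anxious about a job interview next week."))
print(chat_turn("What did I say I was anxious about?"))
```

The design point worth noting is that the model itself is stateless: "personalisation" comes only from whatever history the application chooses to store and resend, which also makes that history a confidentiality liability of the kind discussed under data security below.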
Potential Drawbacks and Issues with Chatbot Therapists
Chatbot therapists face barriers unique to the human-AI connection.
Genuineness and compassion. How will people perceive chatbots, and will those perceptions stand in the way of healing? Will patients miss the interpersonal connection of therapy? Even if chatbots can use empathic language and the appropriate vocabulary, that may not be enough on its own. Research suggests that people prefer interpersonal engagement when venting or expressing strong emotions such as anger or frustration. A 2021 study found that whether consumers preferred a human or a chatbot depended on their emotional state; when people were angry, for example, they were less inclined to prefer a chatbot.
Knowing that the other party is not a real person may leave people feeling less listened to or understood at the end of the interaction. The human-to-human connection, having another person bear witness to one’s struggles or suffering, may be the “active component” of therapy. AI replacement will most likely not be effective in all circumstances.
Synchronicity and subtle interactions. Beyond empathy, many treatment approaches call for a delicately balanced combination of challenge and support. Chatbots can respond only in words and cannot communicate through eye contact or body language. This might become possible with AI-powered “virtual human” or “human avatar” therapists, but it is not yet clear whether virtual humans can offer the same level of comfort and trust.
Accountability and retention. People may be more likely to show up for and follow through with a human therapist than with a chatbot. User engagement is a major concern for mental health apps: by some estimates, only 4% of people who download a mental health app are still using it after 15 days, and only 3% after 30 days. Will clients check in with their chatbot therapist regularly?
Complex, high-risk situations. Suicide assessment and crisis management are two circumstances that benefit from human judgement and oversight. In high-risk situations, AI augmentation under human supervision is less dangerous than AI substitution. There are also unresolved ethical and legal questions about liability for faulty AI. For example, who bears responsibility if a chatbot therapist fails to accurately identify or manage an urgent crisis, or gives incorrect advice? Will the AI be taught to recognise situations where there may be an immediate risk to self or others and to alert professionals (a minimal escalation sketch follows this list of concerns)?
Data security, privacy, transparency, and informed consent are more important than ever. Mental health data requires a high level of confidentiality and protection, yet many mental health apps are opaque about what happens to user information, even when it is used for research. Transparency, security, and unambiguous informed consent will need to be critical components of any chatbot platform.
Potentially undetected bias. It is critical to be aware of the biases inherent in these chatbots’ training data and to work to mitigate them.
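To make the idea of “AI augmentation under human supervision” concrete, the sketch below shows a deliberately crude routing pattern: a rule-based screen flags possibly high-risk messages for human review rather than letting the chatbot handle them alone. The phrase list and the escalation and reply functions are hypothetical placeholders, not a validated risk-assessment model.

```python
# Minimal sketch of human-in-the-loop escalation for a therapy chatbot.
# The phrase list and hooks below are hypothetical placeholders, not a
# validated clinical risk model.
RISK_PHRASES = (
    "hurt myself",
    "kill myself",
    "end my life",
    "suicide",
    "hurt someone",
)

def needs_human_review(message: str) -> bool:
    """Return True if the message contains a phrase suggesting immediate risk."""
    text = message.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

def notify_on_call_clinician(message: str) -> None:
    """Hypothetical hook: a real system would page a clinician and log the event."""
    print("[ALERT] Escalating to human reviewer:", message)

def generate_chatbot_reply(message: str) -> str:
    """Hypothetical hook standing in for the chatbot's normal response path."""
    return "Chatbot reply to: " + message

def route_message(message: str) -> str:
    """Send risky messages to a human; let the chatbot handle the rest."""
    if needs_human_review(message):
        notify_on_call_clinician(message)
        return ("I'm concerned about your safety, so I'm connecting you with a "
                "person who can help right now. If you are in immediate danger, "
                "please contact local emergency services.")
    return generate_chatbot_reply(message)

if __name__ == "__main__":
    print(route_message("I had a rough day at work."))
    print(route_message("I want to end my life."))
```

Simple keyword matching both over-flags and under-flags, so the value here is only the routing pattern: the model assists with everyday exchanges, but a human makes the call whenever there may be an immediate risk to self or others.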
As human-AI interaction becomes more commonplace, further study is needed to determine whether chatbot therapists can effectively deliver therapy beyond behavioural coaching. Studies comparing human and chatbot therapists across therapy modalities will clarify the potential benefits and constraints of chatbot therapists.