When Nicholas Jacobson and his team test their mental health chatbot, nine out of 10 of its responses are contextualized and clinically appropriate. The rest, he told The Daily Beast, are “weird and lack human-ness.”
This means Therabot is moving in the right direction. It’s better than when it said “I want my life to be over” after Jacobson and his team trained the chatbot on language from self-help internet forums, or when it picked up the bad habits of human therapists after being trained on psychotherapy transcripts, like quickly attributing problems to the user’s relationship with their parents.
But now, its creators say, Therabot’s responses are grounded in evidence-based treatments. It can assess what’s the matter, then offer an intervention.
Therabot—which is about to be tested in a randomized controlled trial for the first time—demonstrates how the field of chatbots is moving forward, said Jacobson, who studies biomedical data science and psychiatry at Dartmouth College. In a departure from previous efforts, Therabot is powered by generative artificial intelligence, meaning it learns from existing data and documentation to compose original actions and responses rather than picking from a fixed script. Another example of generative AI is DALL-E 2, the popular program that creates images from user prompts.
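Therabot’s model and training pipeline aren’t public, but the basic generative loop can be sketched in a few lines. The code below is an illustration under stated assumptions: it uses the open-source Hugging Face transformers library with GPT-2 as a stand-in model, not anything Therabot actually runs.

```python
# A minimal sketch of a generative chatbot turn. Assumes the open-source
# Hugging Face "transformers" library; GPT-2 is a stand-in model, not
# Therabot's (which isn't public).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def generate_reply(user_message: str) -> str:
    # The model composes a novel reply token by token, rather than
    # selecting from a fixed menu of pre-written responses.
    prompt = f"User: {user_message}\nCounselor:"
    result = generator(prompt, max_new_tokens=60, do_sample=True)[0]["generated_text"]
    # Keep only the text produced after the prompt.
    return result[len(prompt):].strip()

print(generate_reply("I've been feeling anxious about work lately."))
```

Out of the box, a general-purpose model like this would produce exactly the kind of unvetted replies Jacobson’s team is training against, which is why the data a generative chatbot learns from matters so much.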
“If you talk to me a decade from now, I think most of the chatbots will be generative,” said Jacobson. “And that’s because the experience is more human-like.”
For now, most chatbots are “structured”—every response is predetermined and triggered by if-then rules. Many also include a point-and-click experience: The chatbot may ask you about your mood, provide some options, then offer advice based on how you respond.
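A structured chatbot is essentially a lookup table. Here’s a minimal sketch; the mood options and advice strings are hypothetical, not drawn from any real app.

```python
# A minimal sketch of a "structured" chatbot: every reply is pre-written,
# and if-then rules decide which one the user sees. The moods and advice
# lines are hypothetical examples, not taken from any real product.

CANNED_RESPONSES = {
    "anxious": "Let's try a grounding exercise: name five things you can see right now.",
    "sad": "That sounds hard. Would you like to write down what's weighing on you?",
    "okay": "Glad to hear it. Want to set a small goal for today?",
}

def structured_reply(mood_choice: str) -> str:
    # If the user picked a known mood, then serve its canned response;
    # otherwise re-prompt. No text is ever generated on the fly.
    return CANNED_RESPONSES.get(
        mood_choice, "How are you feeling? (anxious / sad / okay)"
    )

print(structured_reply("anxious"))
```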
The hope is that generative chatbots will enable more dynamic, human-like conversation by comparison.
This isn’t to say that current AI-powered mental health chatbots—like Sayana, the bot recently acquired by Headspace—aren’t immensely popular, or that they don’t provide some kind of therapeutic benefit to users. Many also offer other services, like breathing and meditation exercises. And the boom in mental health apps will only lead to more chatbots. The total investment in Woebot, a talk therapy chatbot created by psychologists and AI experts, is an estimated $123.5 million, and Woebot’s president told The New York Times it has “tens of thousands” of users. Wysa, a competing mental health chatbot company, closed $20 million in new funding this past July and claims 4.5 million users.
And that rapid growth comes at a critical time. Though much attention has been paid to what chatbots could do for the mental health of people in extreme future conditions (like living on Mars), they may actually help alleviate a crisis we’re already facing. Mental health care is in higher demand than ever, yet there is a huge global shortage of mental health care workers. The United States has enough specialists to meet the needs of just 28 percent of its population. And while research suggests working with a human therapist improves mental health more than using a mental health app alone does, it’s generally agreed that access to therapy through tech is better than no therapy at all.
Saira Arif, 36, has used Wysa since 2019 and has recommended the app to her friends, family, and colleagues. Though Wysa uses AI to comprehend what a user’s message actually means, its therapeutic responses are not AI-generated. Instead, they are clinician-approved responses grounded in cognitive behavioral therapy, delivered via a chatbot avatar that looks like a little penguin.
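That split, AI to interpret the message and clinicians to write the words, can be sketched roughly like this. The keyword matcher and sample lines below are hypothetical stand-ins; Wysa’s actual language model and clinical script are not public.

```python
# An illustrative sketch of the split described above: a model interprets
# what the user means, but the reply comes from a fixed bank of
# clinician-approved lines. The keyword "classifier" and the responses
# here are hypothetical stand-ins for a trained model and a real script.

APPROVED_RESPONSES = {
    "anxiety": "When anxiety spikes, try slowing your breath: in for 4, out for 6.",
    "low_mood": "Low days happen. Can you name one small activity that usually helps?",
}

def classify_intent(message: str) -> str:
    # Real systems use trained language models here; keyword matching is
    # only a placeholder to show where the "understanding" step sits.
    text = message.lower()
    if any(word in text for word in ("anxious", "panic", "worried")):
        return "anxiety"
    return "low_mood"

def reply(message: str) -> str:
    # The AI decides which response fits; it never writes the response.
    return APPROVED_RESPONSES[classify_intent(message)]

print(reply("I'm worried about a meeting tomorrow."))
```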
Arif has tried traditional talk therapy before, but she wanted a mental health service she could use whenever she needed it and wherever she was, one that offered a mix of techniques to help with anxiety. She views apps like Wysa as a way for people to get help without waiting for an appointment.
“For me, it was easier to access an app like Wysa at the precise time I was feeling anxious and not have to wait for days or weeks to speak to someone,” Arif told The Daily Beast. “I literally had someone—albeit an AI bot—in my pocket that provides me multiple ways of tackling my anxiety.”
She also liked the idea that she would be interacting with something that wouldn’t make her feel judged. Michael Tucci, 25, was also surprised by how comfortable he felt using Woebot. Tucci started using the app because he was curious about the emerging technology and wanted to make a video about his experience.
He was impressed. “The ability to blow off some steam and let things off your chest, without the fear of feeling like you’re complaining, whining, or feeling sorry for yourself was helpful in my experience,” Tucci told The Daily Beast. “I can confidently say that there were times when Woebot made me feel less lonely, less frustrated, and ultimately more content.”
While therapeutic chatbots are a newer concept—and research on their effectiveness is still in its infancy—chatbots aren’t exactly a new phenomenon in mental health. ELIZA, a natural language processing computer program, is widely considered the first chatbot ever invented. In the 1960s, computing pioneer Joseph Weizenbaum designed ELIZA to mimic the experience of speaking with a Rogerian psychotherapist, a style pioneered by the psychologist Carl Rogers that reflects a patient’s own statements back at them.
The Rogerian framing was a savvy choice: it hid the fact that ELIZA, running a script called DOCTOR, could only simulate conversation by transforming whatever a person typed. Ironically, Weizenbaum himself wasn’t that impressed by his creation, eventually becoming a critic of the limitations of AI and humanity’s growing dependence on computers. But others were dazzled and formed emotional attachments to ELIZA.
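The whole trick fits in a few lines of code. Below is a toy ELIZA-style exchange with just two rules and a pronoun-reflection table; Weizenbaum’s DOCTOR script worked the same way, only with a far larger rule set.

```python
# A bare-bones re-creation of ELIZA's trick: match a pattern in the input,
# reflect the user's pronouns, and slot the fragment into a canned template.
# Two rules only; the original DOCTOR script had many more.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."

print(eliza_reply("I feel trapped by my job"))
# -> Why do you feel trapped by your job?
```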
The “ELIZA effect” is now used to describe people treating programs as though they have more intelligence than they really do. It was exemplified earlier this summer, when news broke that a Google engineer had convinced himself the company’s LaMDA AI was sentient. (Spoiler alert: it’s not.)
Critically, however, ELIZA and other early chatbots didn’t deliver therapy, nor were they designed to. Jesse Wright, a psychiatrist at the University of Louisville who has studied and developed behavioral health tools for three decades, said we should view ELIZA as a symbol of how far the technology has come—and a reminder that there’s much more to accomplish.
“I started working at this when almost everyone else I spoke with were huge skeptics,” Wright told The Daily Beast. “When I started out AI wasn’t even a term being used. Technological advances have moved the field way forward and offer huge potential. It’s a very exciting time.”
However, Wright, who also works as a consultant with equity interest at the digital therapeutics company MindStreet, said the world of mental health apps is more than a Wild West—it’s a “free-for-all.” Most are not developed by experts in behavioral medicine or scientifically tested—and that includes chatbots. When considering how effective they are, Wright believes it’s important to distinguish whether an app is providing actual treatment for a given condition, or simply providing another kind of experience, like stress management. And while he advocates for the potential utility of such apps, “I have not seen any chatbot or app that has the sensitivity, wisdom, flexibility, or knowledge base of a well-trained human therapist,” he said.
But Jacobson argues that chatbots aren’t competing with therapists or designed to substitute for them. Instead, he said, they are a synergistic tool, something that can help people who otherwise don’t have access to mental health care. Compared with people who receive no help at all, those who use a mental health chatbot tend to experience better outcomes.
“The question that I’m thinking about when I do my work is not whether or not this can replace clinicians—which I think will never be the case,” Jacobson said. “But is it better than nothing? Yes. This is a highly accessible way of enhancing the scale and impact of evidence-based treatments.”
That’s not to say these apps don’t risk exacerbating mental health inequities. For example, if Arif needs more than a chatbot, she can pay to speak with a human therapist (Wysa bills $29.99 for a single session). This is a common model, Jacobson said, and evidence that these tools aren’t perfect solutions: people who can pay still have access to better care.
When Wright entered the field, he studied with Aaron Beck, the founder of cognitive behavioral therapy (a popular form of talk therapy that hinges on adjusting thinking and behavior patterns and is used to treat a range of conditions). Though struck by how effective CBT was, he knew “there were far more people that needed it than you could ever train effective therapists to deliver it worldwide.” There had to be some way to scale it up, so Wright started investigating technological solutions that delivered the basics of the therapy, leveraging therapists’ time so they could see more patients.
Today’s mental health apps and chatbots are an extension of this long-held goal—and that’s probably where we see the future of chatbot therapy heading. Humans are an irreplaceable part of mental health services, but technology can play a critical role in providing assistance when resources are running thin or there’s an acute crisis. It’s not an either/or scenario—nor should it ever be.
Source: thedailybeast.com