In the last several years, thousands of applications have flooded the mental health market, promising to “disrupt” conventional therapy. And with the excitement surrounding AI developments like ChatGPT, the claim that chatbots can deliver mental health care is gaining new traction.
The figures show why: millions more Americans sought mental health care as a result of pandemic stress, and the US has long had a shortage of mental health specialists. The same is true in much of the world, including Hong Kong.
The result, compounded by the US Affordable Care Act’s requirement that insurers provide parity between mental and physical health coverage, is a huge imbalance between supply and demand.
That presents entrepreneurs with an untapped market. At the South by Southwest conference in the US state of Texas in March, where health start-ups exhibited their products, there was a near-religious confidence that AI could remake healthcare. Exhibitors offered devices and apps that, they claimed, could diagnose and treat a variety of illnesses, displacing doctors and nurses.
Unfortunately, there is little proof of success in the field of mental health. Independent outcomes studies have shown that few of the many available apps actually help, and the US Food and Drug Administration (FDA) has not reviewed most of them at all.
Many caution users (in small print) that they are “not intended to be medical, behavioural health, or other healthcare services,” even though they are marketed to treat conditions including anxiety, attention-deficit/hyperactivity disorder, and depression, or to predict suicidal tendencies.
In the face of this marketing force, there are strong reasons to be cautious.
Joseph Weizenbaum, a professor at the Massachusetts Institute of Technology who is regarded as one of the inventors of artificial intelligence, warned decades ago that, despite being able to sound like one, AI would never make a decent therapist.
In fact, Weizenbaum developed one of the first AI programmes, Eliza, in the 1960s: a simulated psychotherapist that used keyword and pattern matching, along with early natural language processing, to sound like a therapist.
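The technique can be illustrated in a few lines of code. The sketch below is not Weizenbaum’s original program (which was written in MAD-SLIP); it is a minimal, hypothetical reconstruction of the idea the article describes: match a keyword pattern in the user’s input and reflect it back as a canned therapeutic prompt.

```python
import random
import re

# A minimal, Eliza-style pattern matcher. Each rule pairs a regular
# expression with canned "therapist" responses that echo the user's
# own words back at them. Illustrative sketch only, not the 1966 code.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["How does being {0} make you feel?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]

# Generic prompts used when no pattern matches.
FALLBACKS = ["Please, go on.", "How does that make you feel?"]

def respond(user_input: str) -> str:
    """Return a scripted reply by applying the first matching rule."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            # Reflect the captured phrase back inside a template.
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel anxious about work"))  # e.g. "Why do you feel anxious about work?"
    print(respond("Nothing is going right"))     # no rule matches, so a fallback is used
```

The point, which Weizenbaum himself stressed, is that nothing in such a program understands anything; it only transforms text.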
Eliza’s “success” was celebrated as an AI achievement, but Weizenbaum was horrified. Even though what he had made was “just a party trick,” he said, students interacted with the machine as though Eliza were a real therapist.
He anticipated the development of much more advanced software like ChatGPT. But he said that “the experiences a computer might have in such situations are not human experiences.”
“The computer, for instance, will not feel loneliness in any sense that we define it.”
The same goes for feelings such as worry or pleasure, which are so neurologically intricate that researchers have yet to pinpoint their origins in the brain.
Can transference, the empathetic flow between patient and doctor that is essential to many therapeutic approaches, ever arise between a human and a machine?
According to Bon Ku, director of the Thomas Jefferson University Health Design Lab and a pioneer in medical innovation, “the fundamental tenet of medicine is that it’s a relationship between human and human”, and “AI can’t love.” “AI will never take the place of my human therapist,” he said.
Ku said he would rather see AI used to take over practitioner chores such as record-keeping and data entry, to “free up more time for humans to connect.”
There is evidence that some mental health apps can be harmful, even though some may ultimately be useful. One study found that some users complained about these apps’ “scripted nature and lack of adaptability beyond textbook cases of mild anxiety and depression.”
The mental health parity requirement, a US rule passed in 2008 that bars insurers from covering mental health conditions more restrictively than other medical conditions, may push US insurers towards offering apps and chatbots.
Compared with the difficulty of assembling a panel of human therapists, many of whom refuse to accept insurance because they consider insurers’ payments too low, apps and chatbots would be a cheap and simple option.
The US Department of Labour said in 2022 that it was stepping up efforts to ensure better insurer compliance with the mental health parity rule, possibly in response to the flood of AI programmes that were entering the market.
In late 2022, the FDA announced that it “intends to exercise enforcement discretion” over a variety of mental health apps that it will review as medical devices.
None has been authorised so far. The agency’s breakthrough device designation, which expedites evaluations and studies of promising devices, has been granted to only a handful of products.
Most of these apps provide what therapists call “organised treatment”, in which patients present with particular issues and the app responds with a workbook-style approach.
Woebot, for instance, combines mindfulness and self-care exercises, with responses written by teams of therapists, for postpartum depression.
Wysa, another app that has received the breakthrough designation, provides cognitive behavioural therapy for anxiety, depression, and chronic pain.
However, it will take some time to obtain valid scientific evidence regarding the efficacy of app-based treatments.
There is still very little evidence for the government to base any judgements on, according to Kedar Mate, director of the Institute for Healthcare Improvement in Boston.
Until that research exists, we will not know whether app-based mental health care performs any better than Weizenbaum’s Eliza.