Artificial intelligence (AI)-driven chatbots and therapeutic apps are rapidly transforming the mental health services sector. One of these is Earkick, a mental health chatbot with a vibrant interface featuring a panda in a bandana.
Once downloaded, the app responds to anxious messages and typed check-ins with comforting, sympathetic replies similar to those a therapist might offer. Even so, Earkick co-founder Karin Andrea Stephan resists the term “therapy,” although the AI-powered company employs therapeutic techniques.
According to AP News, the debate around AI-powered mental health services extends beyond Earkick’s novelty to questions of efficacy and safety. As these chatbots grow in popularity among Gen Z adolescents and young adults, so do concerns about whether they amount to therapy or merely self-help.
Their effectiveness is still up for debate, despite their 24/7 availability and their role in reducing the stigma attached to traditional treatment. Because mental health chatbots generally do not claim to diagnose or treat medical conditions, the Food and Drug Administration does not regulate them. Vaile Wright and other psychologists and technology directors worry about this regulatory gap, which leaves the apps without oversight or proof of their efficacy.
Growing Need for Mental Health Chatbots
Some health experts argue that the apps’ non-medical disclaimers do little to resolve the underlying problems of a shortage of mental health professionals and long waits for treatment. The conversation is further complicated by questions about whether chatbots can respond to emergencies or stand in for well-established treatments for serious conditions.
In response to these safety concerns about AI chatbots, psychologists trained at Stanford University created Woebot in 2017 as an alternative to chatbots built on large language models. Woebot relies on structured, carefully written scripts, an approach meant to keep its mental health support safe and effective.
The Guardian reports that growing demand for talking therapy services provided by the National Health Service demonstrates the need for digital alternatives to in-person treatment. In 2022–2023, 1.76 million people were referred for mental health treatment; of those, 1.22 million began receiving therapy in person.
Platforms such as BetterHelp aim to address treatment barriers like hard-to-reach therapists and limited practitioner availability, but questions remain about how these services handle private user information. Because of those privacy concerns, UK officials have begun regulating such applications.
The US Federal Trade Commission fined BetterHelp $7.8 million last year for misleading consumers and sharing sensitive information with third parties despite its privacy assurances. The broader mental health app industry, which spans chatbots, mood tracking, and virtual therapy, is rife with privacy violations.
Independent watchdogs such as the Mozilla Foundation say numerous platforms share or sell personal data by exploiting legal loopholes. The foundation reviewed 32 well-known mental health applications and found that 19 of them did not adequately protect user security and privacy, raising concerns about the monetization of mental health struggles.
Chatbot makers have taken steps to protect users’ privacy, with varying results. ChatGPT users can turn off the indefinite storage of their conversation history using an option on the site; with the feature disabled, the service says interactions are kept for only 30 days and are not used for model training, though this does not prevent breaches.
Bard users can delete their chat activity by default at Bard.Google.com. According to Microsoft representatives, Bing users can view and delete conversations from their search history by visiting the chatbot homepage, although they cannot disable chat history altogether.
How to Use AI Chatbots for Mental Health Safely?
According to The Wall Street Journal, experts urge consumers to be mindful of privacy when using generative AI tools. Dominique Shelton Leipzig, a partner in Mayer Brown’s privacy and cybersecurity practice, says the absence of a privacy notice is a sign of inadequate governance. Excessive collection of personal data is another cause for concern.
Additionally, users should avoid giving sensitive information to unfamiliar AI chatbots, which could be operated by malicious parties. Because chatbot companies’ terms of service typically allow human reviewers to read certain conversations, Irina Raicu, head of the Internet ethics program at the Markkula Center for Applied Ethics at Santa Clara University, advises against sharing health or financial details.
Before using an AI chatbot for mental health, consumers should weigh its benefits against its drawbacks and review the platform’s privacy policy and terms of service. As the digital health sector develops, stakeholders stress that understanding AI technology is essential to using it to support mental health.