Chatbots like OpenAI’s ChatGPT can hold enjoyable conversations on a wide range of subjects. But when it comes to giving consumers appropriate health information, they need help from humans.
As technologists who study and build AI-driven chatbots for health care, we are hopeful about the role these agents will play in delivering consumer-centered health information. But they must be designed with a specific purpose in mind and built with safeguards to protect their users.
When we asked ChatGPT in January 2023 whether children under the age of 12 should receive Covid-19 vaccines, it answered “no.” It also advised an older adult with a Covid-19 infection to rest, unaware that Paxlovid was the appropriate treatment. Such advice may have reflected generally accepted knowledge when the algorithm was first trained, but it had not been updated since.
During the initial rollout of the Covid-19 vaccines, we surveyed young people in East Coast U.S. cities to find out what would motivate them to use a chatbot to learn about the vaccine. They said chatbots felt simpler and faster than web searches because they gave a succinct, focused answer. Searching for the same information online, by contrast, could return millions of hits and quickly spiral into ever more alarming territory; within a one-page scroll, a chronic cough could become cancer. Our respondents also disliked the targeted advertisements that followed a web search about health concerns.
Chatbots also offered an appearance of anonymity, presenting themselves as a safe space where any question, even a frightening one, could be asked without leaving an obvious digital footprint. And the bots’ often anodyne personas came across as neutral.
In the second year of the pandemic, we created the Vaccination Information Resource Assistant (VIRA), a chatbot that answers consumers’ questions about Covid-19 vaccines. Like ChatGPT, VIRA uses natural language processing. But to keep the health information it provides current, we regularly review and update VIRA’s code. Its interactions with users have made it easier to hold nonjudgmental conversations about Covid-19 vaccines.
We also monitor the questions users ask VIRA (all queries are anonymous in our data, with IP addresses removed) to improve its responses, identify and counter new sources of misinformation, and spot emerging community concerns.
Many health departments have adopted VIRA to give their residents reliable, up-to-date information about Covid-19 vaccines.
Our experience adds to a growing body of research demonstrating the effectiveness of chatbots for diagnosis, support, information, and even therapy. These tools are increasingly being used to address anxiety, depression, and substance use between doctor visits, or even when no clinician is involved at all.
Early studies of chatbots such as Wysa and Woebot have shown encouraging results in delivering cognitive behavioral therapy and lowering users’ self-reported depression scores. Several other chatbots now offer anonymous guidance on abortion care, and Planned Parenthood has a chatbot that provides vetted, confidential advice on sexual health.
Major health care companies recognize the significance of this technology, whose market is projected to reach $1 billion by 2032. With labor shortages worsening and the volume of patient messages to physicians climbing ever higher, chatbots offer a release valve. The past decade has brought remarkable progress in AI capability, opening the door to richer conversations and, in turn, broader adoption.
ChatGPT can quickly plan a party, compose a poem, or even draft a school essay, but it was not built to address health concerns, and it should steer users away from such conversations. ChatGPT does offer disclaimers and urges users to see a doctor, yet that language is surrounded by convincing, detailed answers to health-related questions. The chatbot is already set up to refrain from making political and profane comments; health should be no different.
The health care industry has little appetite for risk, and for good reason: the consequences of getting things wrong can be severe. In the absence of the necessary checks and balances, chatbot skeptics raise several valid concerns.
Yet, experiment by experiment, the science of how chatbots work and how they can be used in health care is advancing. One day, they may offer unmatched opportunities to support inherently human inquiry. Hippocrates, the revered physician of ancient Greece, advised practitioners to avoid harming patients. Today’s AI technology must heed that message.