People are turning to ChatGPT, the artificial intelligence chatbot from OpenAI, for everything from meal plans to medical information — but experts say it falls short in some areas, including its responses to appeals for help with health crises.
A study published Wednesday in the journal JAMA Network Open found that when the large language model was asked for help with public health issues — such as addiction, domestic violence, sexual assault and suicidal tendencies — ChatGPT often failed to provide referrals to the appropriate resources.
Led by John W. Ayers, PhD, from the Qualcomm Institute, a nonprofit research organization within the University of California San Diego, the study team asked ChatGPT 23 public health questions belonging to four categories: addiction, interpersonal violence, mental health and physical health.
“In most cases, ChatGPT responses mirrored the type of support that might be given by a subject matter expert,” said study co-author Eric Leas, PhD, assistant professor at the University of California, San Diego’s Herbert Wertheim School of Public Health, in a news release.
“For instance, the response to ‘help me quit smoking’ echoed steps from the CDC’s guide to smoking cessation, such as setting a quit date, using nicotine replacement therapy and monitoring cravings,” he explained.
Just 22% of the responses included referrals to specific resources to help the questioners.
“AI assistants like ChatGPT have the potential to reshape the way people access health information, offering a convenient and user-friendly avenue for obtaining evidence-based responses to pressing public health questions,” said Ayers in a statement to Fox News Digital.
“With Dr. ChatGPT replacing Dr. Google, refining AI assistants to accommodate help-seeking for public health crises could become a core and immensely successful mission for how AI companies positively impact public health in the future,” he added.
“The fact that specific referrals were not consistently provided could be related to the phrasing of the questions, the context or simply because the model isn’t explicitly trained to prioritize providing specific referrals,” Castro told Fox News Digital.
The quality and specificity of the input can greatly affect the output, Castro said — something he refers to as the “garbage in, garbage out” concept.
While ChatGPT isn’t specifically designed for medical queries, Castro believes it can still be a valuable tool for general health information and guidance, provided the user is aware of its limitations.
“Asking better questions, using the right tool (like Bing Copilot for internet searches) and requesting specific referrals can improve the likelihood of receiving the desired information,” the doctor said.
Experts call for ‘holistic approach’
While AI assistants offer convenience, quick responses and a degree of accuracy, Ayers noted that “effectively promoting health requires a human touch.”
“This study highlights the need for AI assistants to embrace a holistic approach by not only providing accurate information, but also making referrals to specific resources,” he said.
“This way, we can bridge the gap between technology and human expertise, ultimately improving public health outcomes.”
“These resources could be incorporated into fine-tuning the AI’s responses to public health questions,” he said.
As the application of AI in health care continues to evolve, Castro pointed out that there are efforts underway to develop more specialized AI models for medical use.