
New study raises concerns about using ChatGPT for medical advice

A recent study raises concerns about the use of OpenAI’s AI chatbot, ChatGPT, for seeking medical advice. The study reveals that the chatbot often provides inaccurate or incomplete responses to medication-related queries, posing potential risks to patients.

Until now, the common advice was not to Google illness symptoms, since the information could be inaccurate. A similar caution is now being raised about ChatGPT, the AI chatbot developed by OpenAI, which has become a popular place for users to seek answers to their queries. According to researchers, the free version of ChatGPT may provide inaccurate or incomplete responses, or no response at all, to medication-related queries, a potential risk to patients who rely on the chatbot for medical guidance.

In a recent study conducted by pharmacists at Long Island University, the free version of ChatGPT came under scrutiny for its responses to drug-related questions. The pharmacists posed 39 medication-related questions to the chatbot and, based on established criteria, deemed only 10 of its responses "satisfactory." The remaining 29 questions drew answers that were incomplete, inaccurate, or failed to address the query directly, meaning ChatGPT fell short on nearly three-fourths of the drug-related questions.

As reported by CNBC, the researchers also asked ChatGPT to supply references so they could verify the accuracy of its responses. The chatbot included references in only eight of its responses, and each of those references cited nonexistent sources.

One notable instance highlighted by the study involved ChatGPT inaccurately stating that there were no reported interactions between Pfizer’s Paxlovid and the blood-pressure-lowering medication verapamil. The reality is that these medications can excessively lower blood pressure when taken together, posing potential risks to patients.

Based on these results, the study stresses the importance of caution for both patients and healthcare professionals who may consider using ChatGPT for drug-related information. Lead author Sara Grossman, an associate professor of pharmacy practice at LIU, recommends verifying any of the chatbot's responses with trusted sources. "Healthcare professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information," said Grossman.

Responding to the study, an OpenAI spokesperson stressed that users are clearly advised against using ChatGPT’s responses as a substitute for professional medical advice. The usage policy explicitly states that the models are not optimized to provide medical information, recognizing the chatbot’s limitations in the healthcare domain.

The free version of ChatGPT, in particular, is limited to training data through September 2021, which can make its advice outdated in the rapidly changing medical field. This underscores the need for patients and healthcare professionals to check ChatGPT's responses against reliable sources to ensure accuracy.

Nevertheless, whether using the free version or the paid version with access to real-time information, users should always exercise caution when seeking information online and consult professionals directly for any medical advice.

India Today
