Patients are increasingly turning to ChatGPT for medical advice, but new research warns it isn’t yet accurate enough to be relied on alone.
The study, published in Cureus, evaluated ChatGPT-4o’s responses to over 40 common questions asked by patients with chronic obstructive pulmonary disease (COPD). These spanned everything from “What is COPD?” and “How is a diagnosis established?” to more complex issues such as genetic risk factors, the role of smoking and whether the condition is reversible.
The AI’s answers were strikingly consistent, with a reproducibility score of 93.2%. But accuracy was another story: only one in five responses was fully accurate, while nearly 80% were judged “partially accurate”. None were outright wrong, but omissions were frequent.
“While ChatGPT shows consistency in responding to COPD-related queries, it cannot yet serve as a standalone patient-education tool,” the authors concluded. They added that “oversimplification and omitting key information are major limiting factors”, especially when it came to treatment advice, which scored the lowest for accuracy.
Key gaps appeared around inhaler technique, oxygen therapy and rehabilitation – areas where incomplete guidance could directly affect adherence. As the authors warn, partial facts can be just as harmful as errors, leading to misunderstandings that impact condition management and health-seeking behaviour.
With COPD prevalence projected to reach 600 million people globally by 2050, the demand for trustworthy patient education will only grow. Pharma has an opportunity to step in, providing clinician-reviewed, guideline-aligned resources that could be integrated with AI platforms – not just for COPD, but for other chronic conditions too.
The bottom line? AI can be helpful in patient education, but it’s not a replacement for expert guidance – yet.