A NEW study has demonstrated that an explainable artificial intelligence (XAI) framework can significantly improve Parkinson’s disease (PD) prediction while offering clinically meaningful insights into how decisions are made.
PD is a progressive neurological disorder characterised by motor symptoms such as tremor and rigidity, alongside non-motor features including cognitive impairment and sleep disturbances.
Early diagnosis remains challenging, particularly in the initial stages when symptoms may be subtle or overlap with other conditions. While machine learning has shown promise in aiding diagnosis, limited interpretability has hindered its clinical adoption.
Explainable AI Enhances Parkinson’s Disease Prediction
In this study, researchers developed a multimodal framework combining machine learning with XAI techniques to improve PD prediction. The model integrated heterogeneous data sources, including neuroimaging, clinical characteristics, and both motor and non-motor symptoms, enabling a more comprehensive assessment of disease risk.
Several machine learning algorithms were evaluated, including support vector machines, random forests, k-nearest neighbours, and decision trees. These models were paired with XAI tools such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and ELI5 to provide both global and individual-level explanations of predictions.
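To illustrate the general idea of pairing a tree-based classifier with a global, model-agnostic explanation of which features drive its predictions, the sketch below uses scikit-learn's permutation importance on synthetic data. This is a simpler stand-in for the SHAP/LIME/ELI5 analyses the study describes, not the authors' actual pipeline; the data and feature count are invented for the example.

```python
# Sketch: train a random forest on synthetic "patient" features and rank
# the features that most influence predictions. Permutation importance is
# used here as a simple, model-agnostic stand-in for SHAP/LIME.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for multimodal features (e.g. imaging + clinical scores)
X, y = make_classification(n_samples=400, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Global explanation: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranking = sorted(enumerate(result.importances_mean), key=lambda t: -t[1])
for idx, score in ranking[:3]:
    print(f"feature_{idx}: mean importance {score:.3f}")
```

In a clinical setting, the ranked features would correspond to named neuroimaging markers or symptom scores, which is what makes the output interpretable to a clinician.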
The findings showed that the enhanced framework outperformed traditional machine learning approaches. Notably, the AdaBoost model achieved the highest performance, with an accuracy of 93%, precision of 90%, recall of 90%, F1-score of 90%, and an area under the curve of 0.95.
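For readers unfamiliar with these metrics, the snippet below shows how accuracy, precision, recall, F1-score, and area under the ROC curve are computed with scikit-learn. The labels and probabilities are toy values chosen for illustration, not the study's data.

```python
# Sketch: computing the headline classification metrics on toy labels.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]     # ground-truth diagnoses
y_pred  = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]     # model's hard predictions
y_score = [0.9, 0.2, 0.8, 0.4, 0.1,
           0.7, 0.3, 0.6, 0.95, 0.05]        # predicted probabilities

print("accuracy :", accuracy_score(y_true, y_pred))   # 0.8
print("precision:", precision_score(y_true, y_pred))  # 0.8
print("recall   :", recall_score(y_true, y_pred))     # 0.8
print("f1       :", f1_score(y_true, y_pred))         # 0.8
print("auc      :", roc_auc_score(y_true, y_score))   # 0.96
```

Note that AUC is computed from the continuous probability scores rather than the hard predictions, which is why it can differ from accuracy.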
This represented a measurable improvement over baseline models, highlighting the added value of integrating explainability into predictive systems.
Bridging Accuracy and Clinical Interpretability
A key strength of the framework was its ability to identify the most influential features contributing to PD prediction. By providing transparent, interpretable outputs, clinicians could better understand which neuroimaging markers or clinical symptoms were driving individual predictions.
This addresses a major limitation of many artificial intelligence models, often described as “black boxes”, where high accuracy is achieved without clarity on decision-making processes. The inclusion of both local and global explanations offers potential for greater clinician trust and improved integration into real-world practice.
Implications for Early Diagnosis and Personalised Care
The study’s findings suggest that XAI could play a pivotal role in advancing early PD prediction and supporting personalised treatment strategies. By combining predictive performance with interpretability, the approach may enable earlier intervention and more tailored clinical decision-making.
Further validation in larger and more diverse populations will be essential before widespread clinical implementation. Nonetheless, the research represents an important step towards making artificial intelligence both accurate and clinically actionable in neurology.
Reference
Mehta V et al. A multimodal explainable artificial intelligence framework for interpretable Parkinson’s disease prediction. Sci Rep. 2026;DOI:10.1038/s41598-026-47769-z.