Explainable Artificial Intelligence (XAI) is becoming increasingly important because it improves the transparency and interpretability of AI-driven decisions in healthcare. In medical fields such as neurology, where AI is increasingly used to diagnose complex diseases, XAI helps clinicians trust and understand the reasoning behind AI predictions. This is particularly crucial for Alzheimer’s disease (AD), a condition that affects millions of people worldwide and places a significant burden on healthcare systems. AI-powered neuroimaging analysis has shown promise for early AD detection, but its widespread adoption is hindered by the "black box" nature of many machine learning models.
In this recent review, the authors analyze the latest advancements in XAI for AD diagnosis. The study highlights key interpretability techniques, such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), Gradient-weighted Class Activation Mapping (Grad-CAM), and Layer-wise Relevance Propagation (LRP), and their role in identifying biomarkers, tracking disease progression, and differentiating between AD stages. The review also addresses challenges common to AI in healthcare, including data availability, regulatory concerns, and the need for standardized XAI frameworks in clinical practice. By integrating XAI into neuroimaging workflows, researchers aim to refine AD diagnostics, enhance clinician trust, and pave the way for AI-assisted precision medicine in neurology.
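To make one of the named techniques concrete, the following is a minimal sketch of how Grad-CAM could be applied to a convolutional classifier for 2D MRI slices. The `TinyADNet` architecture, the 96×96 input size, and the three-class (CN/MCI/AD) labeling are illustrative assumptions, not details taken from the review; the same hook-based pattern applies to any CNN layer whose spatial activations one wants to explain.

```python
# Minimal Grad-CAM sketch for a hypothetical CNN that classifies 2D MRI slices.
# The architecture, input size, and class labels are illustrative assumptions,
# not taken from the review.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyADNet(nn.Module):
    """Toy convolutional classifier standing in for a real neuroimaging model."""
    def __init__(self, n_classes: int = 3):  # e.g. CN / MCI / AD (assumed labels)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        feats = self.features(x)                  # (B, 32, H/4, W/4)
        pooled = F.adaptive_avg_pool2d(feats, 1)  # global average pooling
        return self.head(pooled.flatten(1))

def grad_cam(model: nn.Module, target_layer: nn.Module,
             image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return a Grad-CAM heatmap (H, W) for one image and one target class."""
    activations, gradients = [], []
    fwd = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = target_layer.register_full_backward_hook(
        lambda m, gi, go: gradients.append(go[0]))
    try:
        model.zero_grad()
        logits = model(image.unsqueeze(0))        # add batch dimension
        logits[0, class_idx].backward()           # gradient of the target class score
        acts, grads = activations[0], gradients[0]
        weights = grads.mean(dim=(2, 3), keepdim=True)  # channel-wise importance
        cam = F.relu((weights * acts).sum(dim=1))       # weighted sum, keep positive evidence
        cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                            mode="bilinear", align_corners=False).squeeze()
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    finally:
        fwd.remove()
        bwd.remove()

if __name__ == "__main__":
    model = TinyADNet().eval()
    slice_2d = torch.randn(1, 96, 96)             # stand-in for a preprocessed MRI slice
    heatmap = grad_cam(model, model.features[3], slice_2d, class_idx=2)
    print(heatmap.shape)                          # torch.Size([96, 96])
```

Attribution libraries such as Captum ship packaged implementations of Grad-CAM and related methods, but the hook-based sketch above makes explicit what the resulting heatmap measures: which spatial regions of the scan most increase the score of the chosen class, which is the kind of region-level evidence the review discusses for biomarker identification and stage differentiation.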