Explainable artificial intelligence to increase transparency for revolutionizing healthcare ecosystem and the road ahead

Abstract: The integration of deep learning (DL) into co-clinical applications has generated substantial interest among researchers aiming to enhance clinical decision support systems across disease management, including detection, prediction, diagnosis, treatment, and therapy. However, the inherent opacity of DL methods has raised concerns within the healthcare community, particularly in high-risk or complex medical domains. A significant gap remains in research and understanding when it comes to elucidating and rendering transparent the inner workings of DL models applied to the analysis of medical images. While explainable artificial intelligence (XAI) has gained ground in diverse fields, including healthcare, numerous facets remain unexplored within medical imaging. To better understand the complexities of DL techniques, rapid advancement is urgently needed in the field of eXplainable DL (XDL), or XAI, which would empower healthcare professionals to comprehend, assess, and contribute to decision-making processes before taking any action. This viewpoint article conducts an extensive review of XAI and XDL, shedding light on methods for unveiling the "black-box" nature of DL. Additionally, it explores how techniques originally designed to solve problems in other domains can be adapted to healthcare challenges. The article also delves into how physicians can interpret and co...
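The abstract does not name the specific XAI techniques it surveys, so the following is only an illustrative sketch of one common post-hoc explanation method for imaging models, occlusion sensitivity, applied to a hypothetical toy classifier; the model, data, and function names are assumptions for demonstration, not the article's methods.

```python
# Illustrative sketch only: occlusion sensitivity on a toy classifier.
# All names and data here are hypothetical stand-ins.
import numpy as np

def toy_classifier(image, weights):
    """Toy 'model': a logistic score over the flattened image."""
    z = float(image.ravel() @ weights)
    return 1.0 / (1.0 + np.exp(-z))

def occlusion_map(image, weights, patch=8, stride=8):
    """Slide a zeroed patch across the image and record the drop in the
    predicted score; large drops mark regions the model relies on."""
    base = toy_classifier(image, weights)
    h, w = image.shape
    heat = np.zeros_like(image, dtype=float)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0
            heat[y:y + patch, x:x + patch] = base - toy_classifier(occluded, weights)
    return heat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))            # stand-in for a medical image
    w = rng.normal(size=img.size) * 0.01  # stand-in for learned weights
    heat = occlusion_map(img, w)
    print("predicted score:", round(toy_classifier(img, w), 3))
    print("most influential region (y, x):",
          np.unravel_index(np.argmax(heat), heat.shape))
```

In a clinical-imaging setting the resulting heat map would be overlaid on the scan, giving the physician a visual cue as to which regions drove the model's prediction.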
Source: Network Modeling Analysis in Health Informatics and Bioinformatics - Category: Bioinformatics - Source Type: Research