Explainable A.I. Or Why You Need To Understand Machine Learning In Healthcare

A doctor in China uses a machine learning algorithm to detect signs of pneumonia associated with SARS-CoV-2 infection in lung CT scans. Epidemiologists in Canada use the technology to monitor the spread of disease and help prevent outbreaks. In the U.S., researchers use artificial intelligence to make drug discovery more efficient. Elsewhere around the world, patients turn to their phones for symptom checkers powered by smart algorithms. These examples of medical professionals and patients employing artificial intelligence (A.I.) are already happening, and in the coming years they will become even more commonplace. However, as the technology spreads at an ever faster pace, understanding how it works and explaining its deductions becomes increasingly challenging. If the future of medicine and healthcare relies on collaboration with A.I., we will have to understand the underlying processes of these tools so that we can, in turn, trust their insights. Providing such transparency is the goal of explainable A.I. In this article, we go over the basics of this disruptive technology and highlight why better understanding it matters, whether you are a healthcare practitioner or a patient.

The need for explainable A.I.

Explainability, when it comes to A.I., refers to humans being able to understand the output of an algorithm, in particular a machine learning (ML) one. Often, the latter is co...
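To make the idea of explainability a little more concrete, here is a minimal sketch of one common technique, permutation feature importance: shuffle each input in turn and see how much the model's accuracy suffers, which reveals which inputs the model actually relied on. The dataset is synthetic and the clinical-sounding feature names are purely hypothetical placeholders; this is not the specific method used in any of the studies mentioned above.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "patient" data: 4 numeric features, binary outcome (placeholder only).
X, y = make_classification(n_samples=500, n_features=4, n_informative=2, random_state=0)
feature_names = ["age", "blood_pressure", "biomarker_a", "biomarker_b"]  # hypothetical labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")

Techniques like this do not open up the model's inner workings entirely, but they give clinicians and patients a first, human-readable answer to the question "what did the algorithm base its decision on?"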
Source: The Medical Futurist