Should the Work Product of "Non-Explainable" Medical Algorithms Be Ignored?

I have blogged previously about the use of algorithms in healthcare, which will be revolutionary in terms of diagnosing patients and even predicting which diseases they may develop in the future (see, for example: Eric Schmidt Discusses the Potential Value of Predictive Analytics in the ER; An Algorithm Using Medical Record Data Predicts Risk for Parkinson's Disease). A recent article discussed how radical this change will be (see: How Health Care Changes When Algorithms Start Making Diagnoses). Needless to say, some politicians are already making foolish judgments about medical algorithms, as quoted in the following excerpt:

...[A] team at Google used data on eye scans from over 125,000 patients to build an algorithm that could detect retinopathy, the number one cause of blindness in some parts of the world, with over 90% accuracy, on par with board-certified ophthalmologists...[T]hese results had the same constraints [as similar other AI studies]; humans could not always fully comprehend why the models made the decisions they made....Earlier this year, France's minister of state for the digital sector flatly stated that any algorithm that cannot be explained should not be used. But opposing these advances wholesale is not the answer. The benefits of an algorithmic approach to medicine are simply too great to ignore. Earlier detection of ailments like skin cancer or cardiovascular disease could lead to reductions in morbidity thanks to these methods...
Source: Lab Soft News - Category: Laboratory Medicine - Tags: Healthcare Delivery, Healthcare Information Technology, Healthcare Innovations, Medical Research - Source Type: blogs