Can I trust my patient? Machine Learning support for predicting patient behaviour

Giorgia Pozzi’s feature article1 on the risks of testimonial injustice arising from automated prescription drug monitoring programmes (PDMPs) turns the spotlight on a pressing and well-known clinical problem: the challenge physicians face in predicting patient behaviour so that treatment decisions can be based on this information, despite its fallibility. As one possible way to improve prognostic assessments of patient behaviour, Machine Learning-driven clinical decision support systems (ML-CDSS) are currently being developed and deployed. To make her point, Pozzi discusses ML-CDSSs that are supposed to provide physicians with an accurate estimate of the likelihood that a given patient will misuse narcotics, sedatives or stimulants (e.g., ‘NarxCare’). Regarding cases in which human evaluators and automated systems arrive at diverging assessments, the medico-ethical discussion has so far focused mainly on disagreement between clinicians and Machine Learning (ML) algorithms,2 for example in ‘second opinions’ that have been reconstructed as disagreements between...