AI training technique helps preserve patient privacy

The privacy of patient data in AI models trained on chest x-rays can be formally guaranteed, and, importantly, without significantly reducing model accuracy on large "real-world" datasets, according to a study published March 14 in Communications Medicine.

A team in Germany used an approach called differential privacy when training large-scale AI models and then evaluated its effect on model performance. High accuracy was attainable despite the stringent privacy guarantees, noted lead authors and PhD students Soroosh Tayebi Arasteh of University Hospital RWTH Aachen and Alexander Ziller of the Technical University of Munich.

"Our study shows that – under the challenging realistic circumstances of a real-life clinical dataset – the privacy-preserving training of diagnostic deep-learning models is possible with excellent diagnostic accuracy and fairness," the group wrote.

Most, if not all, currently deployed machine-learning AI models are trained without any formal technique for preserving the privacy of the training data, the researchers noted. Although federated learning has been proposed as an alternative, even that approach has been shown to be vulnerable to malicious attacks that reverse-engineer patient data from the trained model. Formal privacy-preservation methods are therefore required to protect the patients whose data are used to train diagnostic AI models, and in this regard the gold standard is a technique called differential privacy, according to the authors.
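The article does not include code, but differential privacy in deep learning is typically implemented with the DP-SGD algorithm: each example's gradient is clipped to a fixed norm, and Gaussian noise calibrated to that clipping bound is added before the optimizer step. The sketch below is a minimal, illustrative PyTorch version of that idea, not the study's actual training code; the function name, model, loss, and hyperparameter values are all assumptions for illustration.

```python
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    """One illustrative DP-SGD step: clip each per-example gradient,
    then add calibrated Gaussian noise to the summed gradient."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Per-example gradients via microbatching (one example at a time),
    # so each patient's contribution can be clipped individually.
    for x, y in zip(batch_x, batch_y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)  # clipping bounds any one example's influence

    # Gaussian noise with std proportional to the clipping bound; this
    # pairing is what yields the formal (epsilon, delta) privacy guarantee.
    for p, s in zip(params, summed):
        noise = torch.normal(0.0, noise_multiplier * clip_norm, size=s.shape)
        p.grad = (s + noise) / len(batch_x)
    optimizer.step()
```

Each such step bounds how much any single patient's scan can shift the model, and a standard privacy accountant can track the total (epsilon, delta) budget spent over all training steps.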