New AI training strategies stiffen defense against adversarial attacks

Methods for training artificial intelligence (AI) diagnostic algorithms can help prevent these models from producing medical errors caused by adversarial attacks, according to research presented at RSNA 2023 in Chicago. A test of two security techniques developed at the University of Pittsburgh suggested that efforts to make medical machine-learning diagnosis models resilient to attacks could improve patient safety and prevent fraud.

The work presented at RSNA builds on previous research that investigated the feasibility of black-box adversarial attacks, in which features generated by generative adversarial networks (GANs) are inserted into an image as either positive- or negative-appearing adversarial image features.

Adversarial attacks on medical AI image classifiers are problematic because they can fool both the diagnostic model and radiologists themselves. Motivations for such attacks range from unsafe diagnoses to insurance fraud to influencing clinical trial results.

Degan Hao (left) and Shandong Wu, PhD (right), University of Pittsburgh.

For the RSNA presentation, lead author Degan Hao and corresponding senior author Shandong Wu, PhD, highlighted the specific vulnerabilities of medical machine-learning diagnostic models. According to the researchers, simply adding adversarial noise to standard image data produces adversarial data from which a machine-learning diagnostic model will make a wrong diagnosis.

Security strategies

Toward ...
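As general background to the kind of attack and defense under discussion, the sketch below shows, in minimal PyTorch, both sides of the problem: crafting adversarial noise with the fast gradient sign method (FGSM, a common attack used here purely for illustration; the Pitt researchers studied GAN-generated features, whose details are not given in this article) and hardening a classifier through adversarial training, one standard defensive training strategy. The function names, epsilon value, and training setup are illustrative assumptions, not the team's implementation.

```python
# Hypothetical sketch: FGSM-style adversarial noise and adversarial training.
# FGSM stands in for the GAN-based attack described in the article; names and
# hyperparameters are illustrative assumptions, not the Pitt team's code.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.01):
    """Add adversarial noise: a small step in the direction of the loss
    gradient. The perturbed image looks unchanged to a human reader but can
    flip the model's output, which is the vulnerability described above.
    Assumes pixel values are scaled to [0, 1]."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """One hardening step: train on a mix of clean and adversarial images so
    the model learns to classify both correctly. This is one standard defense;
    the article does not specify which strategies the Pitt group evaluated."""
    model.train()
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()  # clear gradients left over from crafting the attack
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels)) / 2
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the perturbation budget (epsilon) is kept small enough that the altered image is visually indistinguishable from the original, which is what makes these attacks difficult to catch by inspection alone.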