Can a Machine Learn from Radiologists’ Visual Search Behaviour and Their Interpretation of Mammograms—a Deep-Learning Study

Abstract: Visual search behaviour and the interpretation of mammograms have been studied to understand errors in breast cancer detection. We aim to ascertain whether machine-learning models can learn about radiologists’ attentional levels and their interpretation of mammograms, and to determine whether such models are practical and feasible for use in training and teaching programmes. Eight radiologists with varying levels of experience in reading mammograms reviewed 120 two-view digital mammography cases (59 cancers). Their search behaviour and decisions were captured using a head-mounted eye-tracking device and software that recorded their decisions. This information was used to build an ensembled machine-learning model based on a top-down hierarchical deep convolutional neural network. Separately, a model to determine the type of missed cancer (search, perception or decision-making error) was also built. Variants of these models using different convolutional networks, with and without transfer learning, were analysed and compared. Our ensembled deep-learning network architecture can be trained to learn about radiologists’ attentional levels and decisions. High accuracy (95%; p value ≅ 0, i.e., significantly better than a naive random model) and high agreement between true and predicted values (kappa = 0.83) can be achieved with such modelling. Transfer learning techniques improve the performance of this model by less than 10%. We also show that spatial convolution n...
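The abstract does not give implementation details, but the core idea of a transfer-learned convolutional network that classifies gaze-informed mammogram inputs can be sketched as follows. This is a minimal illustration, not the authors' architecture: the ResNet-18 backbone, the two-channel image-plus-heatmap input, and the three decision classes are all assumptions, and only a single member of what the paper describes as an ensembled model is shown.

```python
# Minimal sketch (assumptions throughout, not the published implementation):
# a transfer-learning classifier that takes a mammogram region stacked with
# an eye-tracking fixation heatmap and predicts a decision category
# (e.g. search / perception / decision-making error).

import torch
import torch.nn as nn
from torchvision import models


class GazeDecisionNet(nn.Module):
    """ResNet-18 backbone with ImageNet weights, adapted to a 2-channel input:
    channel 0 = mammogram region, channel 1 = gaze-fixation heatmap."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Replace the first convolution to accept 2 channels instead of RGB.
        backbone.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        # Replace the classifier head with one sized to the decision classes.
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)


if __name__ == "__main__":
    model = GazeDecisionNet(num_classes=3)
    # Dummy batch: 4 samples, 2 channels (image + heatmap), 224x224 pixels.
    dummy = torch.randn(4, 2, 224, 224)
    logits = model(dummy)
    print(logits.shape)  # torch.Size([4, 3])
```

An ensemble in the spirit of the paper could average the softmax outputs of several such networks built on different pretrained backbones; comparing these against the same networks trained from scratch would reproduce the transfer-learning comparison the abstract reports.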
Source: Journal of Digital Imaging | Category: Radiology | Source Type: Research