Explain yourself, machine. Producing simple text descriptions for AI interpretability

In a radiology report, we describe a feature, give its location, and then synthesise a conclusion. For example: “There is an irregular mass with microcalcification in the upper outer quadrant of the breast. Findings are consistent with malignancy.” You don’t need to understand the words I used here, but the point is that the features (irregular mass, microcalcification) are consistent with the diagnosis (breast cancer, malignancy). A doctor reading this report already sees internal consistency, and that reassures them that the report isn’t wrong.

A common example of a wrong report could be: “Irregular mass or microcalcification. No evidence of malignancy.” In this case, one of the sentences must be wrong: an irregular mass is a sign of malignancy, so the finding and the conclusion contradict each other. With some experience (and an understanding of grammar), we know that the first sentence is probably wrong; the “No” that should have started the report was accidentally missed by the typist or the voice recognition system.

I spoke a while ago about the importance of “sanity checking” in image analysis, and this is just as true of human-performed analysis as it is of deep learning models. The fact that the diagnosis and the image features that informed it match is a simple sanity check that clinicians can use to confirm the accuracy of the report. Indeed, every month or two I get a call from a clinician asking me to clarify a report where a transcription error slipped by me.
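To make the idea concrete, here is a minimal sketch, not from the original post, of what such a findings-versus-conclusion consistency check might look like. The keyword lists and the check_report function are illustrative assumptions; a real system would need far richer clinical and linguistic logic (negation handling alone is a hard problem).

```python
# Toy sanity check: do the findings and the conclusion of a report point
# in the same direction? Keyword lists here are hypothetical, not clinical rules.

SUSPICIOUS_FINDINGS = {"irregular mass", "microcalcification", "spiculated margin"}
BENIGN_CONCLUSIONS = {"no evidence of malignancy", "benign"}
MALIGNANT_CONCLUSIONS = {"consistent with malignancy", "suspicious for malignancy"}


def check_report(findings: str, conclusion: str) -> str:
    """Flag reports whose findings and conclusion disagree."""
    findings_lc = findings.lower()
    conclusion_lc = conclusion.lower()

    has_suspicious_finding = any(t in findings_lc for t in SUSPICIOUS_FINDINGS)
    says_benign = any(t in conclusion_lc for t in BENIGN_CONCLUSIONS)
    says_malignant = any(t in conclusion_lc for t in MALIGNANT_CONCLUSIONS)

    if has_suspicious_finding and says_benign:
        return "INCONSISTENT: suspicious findings but benign conclusion"
    if not has_suspicious_finding and says_malignant:
        return "INCONSISTENT: malignant conclusion without supporting findings"
    return "consistent"


if __name__ == "__main__":
    print(check_report(
        "Irregular mass with microcalcification in the upper outer quadrant.",
        "Findings are consistent with malignancy.",
    ))  # consistent
    print(check_report(
        "Irregular mass or microcalcification.",
        "No evidence of malignancy.",
    ))  # INCONSISTENT: suspicious findings but benign conclusion
```

The first call mirrors the internally consistent report above; the second reproduces the transcription error, which the check flags, just as an experienced reader would.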
Location, location, location ...

Source: The Health Care Blog. Category: Consumer Health News. Tags: Artificial Intelligence, Health Tech, AI, Luke Oakden-Rayner, machine learning, Radiology. Source Type: blogs.