Synthesizing images of tau pathology from cross-modal neuroimaging using deep learning

In this study, we present a convolutional neural network (CNN) model that imputes tau-PET images from more widely available cross-modality imaging inputs. Participants (n = 1192) with brain T1-weighted MRI (T1w), fluorodeoxyglucose (FDG)-PET, amyloid-PET and tau-PET were included. We found that a CNN model can impute tau-PET images with high accuracy, with the FDG-based model performing best, followed by the amyloid-PET- and T1w-based models. In testing the implications of artificial intelligence-imputed tau-PET, only the FDG-based model significantly improved performance in classifying tau positivity and diagnostic groups compared to the original input data, suggesting that applying the model could enhance the utility of metabolic images. The interpretability experiment revealed that the FDG- and T1w-based models drew on non-local input from physically remote regions of interest to estimate tau-PET, whereas the Pittsburgh compound B-based model did not. This implies that the model learns a biological relationship between FDG-PET, T1w and tau-PET that is distinct from the relationship between amyloid-PET and tau-PET. Our study suggests that extending neuroimaging's use with artificial intelligence to predict protein-specific pathologies has great potential to inform emerging care models.
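At its core, the imputation described above is a voxel-wise image-to-image mapping learned by a CNN. The following is a minimal, purely illustrative sketch of that idea, assuming a naive NumPy 3D convolution, a toy two-layer network, and random stand-in volumes and kernels; it is not the study's actual architecture or trained model.

```python
import numpy as np

def conv3d_same(volume, kernel):
    """Naive 3D cross-correlation with zero padding; output has the input's shape."""
    kd, kh, kw = kernel.shape
    pad = [(k // 2, k // 2) for k in kernel.shape]
    padded = np.pad(volume, pad)
    out = np.zeros(volume.shape)
    D, H, W = volume.shape
    for d in range(D):
        for h in range(H):
            for w in range(W):
                out[d, h, w] = np.sum(padded[d:d + kd, h:h + kh, w:w + kw] * kernel)
    return out

def tiny_imputation_net(input_volume, k1, k2):
    """Hypothetical two-layer CNN: conv + ReLU, then a linear conv output layer
    that maps an input modality (e.g. FDG-PET) to a synthetic tau-PET volume."""
    hidden = np.maximum(conv3d_same(input_volume, k1), 0.0)
    return conv3d_same(hidden, k2)

rng = np.random.default_rng(0)
fdg = rng.random((8, 8, 8))            # stand-in for a small FDG-PET volume
k1 = rng.normal(size=(3, 3, 3)) * 0.1  # toy, untrained kernels
k2 = rng.normal(size=(3, 3, 3)) * 0.1
tau_hat = tiny_imputation_net(fdg, k1, k2)
print(tau_hat.shape)  # (8, 8, 8): same grid as the input modality
```

Because each output voxel here depends only on a small neighborhood, capturing the non-local, physically remote dependencies reported for the FDG- and T1w-based models would in practice require deeper networks with larger receptive fields.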
Source: Brain - Category: Neurology Source Type: research