Deep learning-enabled accurate normalization of reconstruction kernel effects on emphysema quantification in low-dose CT.

This study presents a two-step deep learning architecture that enables accurate normalization of reconstruction kernel effects on emphysema quantification in low-dose CT. Deep learning is used to convert a CT image reconstructed with a sharp kernel into its standard-kernel counterpart, with restoration of truncation artifacts and smoothing-free pixel size normalization. We selected 353 scans reconstructed with both standard and sharp kernels on four different CT scanners from the United States National Lung Screening Trial database. A truncation artifact correction model was constructed by combining histogram extrapolation with a deep learning model trained on paired truncated and non-truncated image sets. We then applied frequency-domain zero-padding to normalize reconstruction field-of-view effects while avoiding image smoothing. The kernel normalization model has a U-Net based architecture trained separately for each CT scanner dataset. Three lung density measurements, the relative lung area below -950 HU (RA950), the lower 15th percentile threshold (perc15), and the mean lung density, were obtained for the standard, sharp, and normalized kernel datasets. The effect of kernel normalization was evaluated with pair-wise differences in these lung density metrics. Kernel normalization reduced the mean pair-wise difference in RA950 between standard and sharp kernel reconstructions from 10.75% to -0.07%, and the difference in perc15 from -31.03 HU to -0.30 HU.
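The smoothing-free pixel size normalization described above can be illustrated with frequency-domain zero-padding: upsampling an image by padding its centered Fourier spectrum with zeros adds no new low-pass filtering, unlike spatial interpolation. Below is a minimal numpy sketch under simplifying assumptions (square single-slice images, even sizes); the function name and structure are illustrative, not taken from the paper.

```python
import numpy as np

def fft_zero_pad_resize(img: np.ndarray, out_size: int) -> np.ndarray:
    """Upsample a square image by zero-padding its centered 2D spectrum.

    The original frequency content is preserved exactly; frequencies
    above the original Nyquist limit are simply zero, so no additional
    smoothing is introduced (this is sinc interpolation).
    """
    n = img.shape[0]
    assert img.shape == (n, n) and out_size >= n and (out_size - n) % 2 == 0
    # Move the DC component to the center so padding is symmetric.
    spec = np.fft.fftshift(np.fft.fft2(img))
    pad = (out_size - n) // 2
    spec_padded = np.pad(spec, pad)
    # Undo the shift and invert; rescale because ifft2 divides by out_size**2
    # while the forward transform summed over only n**2 samples.
    out = np.fft.ifft2(np.fft.ifftshift(spec_padded)).real
    return out * (out_size / n) ** 2
```

A uniform image stays uniform after this resize, which is a quick sanity check that the spectral scaling is right.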
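The three densitometry measures compared above are straightforward to compute from a segmented lung region. A minimal numpy sketch, assuming a Hounsfield-unit image and a boolean lung mask (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def lung_density_metrics(hu: np.ndarray, lung_mask: np.ndarray):
    """Compute RA950, perc15, and mean lung density from lung voxels.

    hu: CT image in Hounsfield units; lung_mask: boolean lung segmentation.
    """
    vox = hu[lung_mask]
    ra950 = 100.0 * np.mean(vox < -950)  # % of lung voxels below -950 HU
    perc15 = np.percentile(vox, 15)      # HU value at the lower 15th percentile
    mld = vox.mean()                     # mean lung density (HU)
    return ra950, perc15, mld
```

Because RA950 thresholds at -950 HU and perc15 reads a point on the density histogram, both are sensitive to the noise texture imposed by the reconstruction kernel, which is why kernel normalization shifts them so strongly.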
Source: Physics in Medicine and Biology (Phys Med Biol) - Category: Physics - Source Type: research