Super-resolution of 2D ultrasound images and videos

Abstract: This paper proposes a novel deep-learning framework for the super-resolution of ultrasound images and videos in terms of spatial resolution and line reconstruction. To this end, we up-sample the acquired low-resolution image through a vision-based interpolation method; then, we train a learning-based model to improve the quality of the up-sampling. We qualitatively and quantitatively test our model on images of different anatomical districts (e.g., cardiac, obstetric) and with different up-sampling factors (i.e., 2X, 4X). Our method improves the median PSNR with respect to SOTA methods by \(1.7\%\) on obstetric 2X raw images, \(6.1\%\) on cardiac 2X raw images, and \(4.4\%\) on abdominal 4X raw images; it also increases the number of pixels with a low prediction error by \(9.0\%\) on obstetric 4X raw images, \(5.2\%\) on cardiac 4X raw images, and \(6.2\%\) on abdominal 4X raw images. The proposed method is then applied to the spatial super-resolution of 2D videos by optimising the sampling of the lines acquired by the probe in terms of the acquisition frequency. Our method specialises the trained networks to predict the high-resolution target through the design of the network architecture and the loss function, taking into account the anatomical district and the up-sampling factor and exploiting a large ultrasound data set. The use of deep learning on large data sets overcomes the limitations of vision-based algorithms that are general and do not encode the chara...
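
As an illustration of the two-stage pipeline described in the abstract (vision-based interpolation followed by a learned refinement), the following is a minimal sketch, not the authors' architecture: the network depth, channel counts, residual design, and L1 training loss are assumptions introduced for the example, and PSNR is computed as the evaluation metric.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RefinementCNN(nn.Module):
    """Small residual CNN that refines an already up-sampled (interpolated) image."""

    def __init__(self, channels: int = 64, depth: int = 6):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Predict a residual correction on top of the interpolated input.
        return x + self.body(x)


def upsample_and_refine(low_res: torch.Tensor, model: nn.Module, scale: int = 2) -> torch.Tensor:
    """Vision-based interpolation (bicubic here) followed by learned refinement."""
    interp = F.interpolate(low_res, scale_factor=scale, mode="bicubic", align_corners=False)
    return model(interp)


def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB."""
    mse = F.mse_loss(pred, target)
    return 10.0 * torch.log10(max_val ** 2 / mse)


if __name__ == "__main__":
    model = RefinementCNN()
    low_res = torch.rand(1, 1, 128, 128)   # dummy single-channel ultrasound frame
    high_res = torch.rand(1, 1, 256, 256)  # dummy 2X ground-truth frame
    pred = upsample_and_refine(low_res, model, scale=2)
    loss = F.l1_loss(pred, high_res)       # illustrative training loss
    print(f"L1: {loss.item():.4f}  PSNR: {psnr(pred.clamp(0, 1), high_res).item():.2f} dB")
```

In the paper the architecture and loss are specialised per anatomical district and per up-sampling factor; the sketch above shows only the generic interpolate-then-refine structure and the PSNR measure used for comparison.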
Source: Medical and Biological Engineering and Computing - Category: Biomedical Engineering - Source Type: research