Vision transformers for breast cancer human epidermal growth factor receptor 2 (HER2) expression staging without immunohistochemical (IHC) staining

This article demonstrates the effectiveness of customized vision transformers for staging HER2 expression in breast cancer using only H&E-stained images. The work introduces a spatial transformer network (STN) that weakly localizes critical image features before a vision transformer-based deep learning architecture processes them, and it employs an ordinal loss function to characterize HER2 expression stages precisely. The proposed algorithm comprises three modules: a localization module that uses spatial transformers for weak feature identification, an attention module that uses vision transformers for global learning, and a loss module that computes an ordinal loss to measure how close a prediction is to the true HER2 expression level. Results, reported with 95% confidence intervals over five-fold cross-validation, show the approach's success in HER2 expression staging: AUC 0.9202±0.01, precision 0.922±0.01, sensitivity 0.876±0.01, and specificity 0.959±0.02. The approach significantly outperforms conventional vision transformer models and state-of-the-art convolutional neural network models (p<0.001), and it surpasses existing methods on an independent test dataset. This work aids HER2 expression staging in breast cancer treatment while circumventing the costly and time-consuming IHC staining procedure, thereby addressing diagnostic disparities in low-resource settings and low-income countries.

PMID: 38096984
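The abstract does not give the exact form of the ordinal loss, but a common formulation for ordinal targets such as HER2 stages (0, 1+, 2+, 3+) is a distance-weighted loss: unlike plain cross-entropy, probability mass placed on stages far from the true stage is penalized more than mass on adjacent stages. A minimal sketch of that idea, assuming an expected-absolute-distance formulation over predicted stage probabilities (an illustration, not the authors' implementation):

```python
def ordinal_loss(probs, true_stage):
    """Distance-weighted ordinal loss over HER2 stages 0..K-1.

    `probs` is a predicted probability distribution over the K stages;
    each unit of probability mass costs its absolute distance (in
    stages) from the true stage, so confusing 0 with 3+ is penalized
    three times as heavily as confusing 0 with 1+.
    """
    return sum(p * abs(k - true_stage) for k, p in enumerate(probs))

# Mass concentrated near the true stage costs little ...
near = ordinal_loss([0.1, 0.8, 0.1, 0.0], true_stage=1)  # 0.2
# ... while the same confidence on a distant stage costs far more.
far = ordinal_loss([0.8, 0.1, 0.1, 0.0], true_stage=3)   # 2.7
```

Plain cross-entropy would score both mistakes identically whenever the probability on the true class is the same; the ordinal formulation encodes that HER2 staging errors between adjacent expression levels are clinically less severe than errors across the full scale.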
Source: Am J Pathol - Category: Pathology - Source Type: research