1. Zhou L, Zhang Y, Zhang J, Qian X, Gong C, Sun K, Ding Z, Wang X, Li Z, Liu Z, Shen D. Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI. IEEE Trans Med Imaging 2025; 44:244-258. [PMID: 39074000] [DOI: 10.1109/tmi.2024.3435450]
Abstract
Automated breast tumor segmentation based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has shown great promise in clinical practice, particularly for identifying the presence of breast disease. However, accurate segmentation of breast tumors is a challenging task that often necessitates complex networks. To strike an optimal trade-off between computational cost and segmentation performance, we propose a hybrid network that combines a convolutional neural network (CNN) with transformer layers. Specifically, the hybrid network consists of an encoder-decoder architecture built by stacking convolution and deconvolution layers. Effective 3D transformer layers are then applied after the encoder subnetworks to capture global dependencies between the bottleneck features. To improve the efficiency of the hybrid network, two parallel encoder subnetworks are designed, one feeding the decoder and one feeding the transformer layers. To further enhance the discriminative capability of the hybrid network, a prototype learning guided prediction module is proposed, in which category-specific prototypical features are computed through online clustering. All learned prototypical features are finally combined with the decoder features for tumor mask prediction. Experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network outperforms state-of-the-art (SOTA) methods while maintaining a balance between segmentation accuracy and computational cost. Moreover, we show that the automatically generated tumor masks can be used to distinguish the HER2-positive subtype from the HER2-negative subtype with accuracy similar to analysis based on manual tumor segmentation. The source code is available at https://github.com/ZhouL-lab/PLHN.
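As a rough illustration of the prototype-guided prediction idea described above (pooling category-specific features into prototypes and scoring voxels against them), the following PyTorch sketch uses masked average pooling in place of the paper's online clustering; all names and shapes are illustrative and this is not the released PLHN code.

```python
import torch
import torch.nn.functional as F

def masked_average_prototype(features, prob, eps=1e-6):
    """Pool decoder features into one prototype per class.

    features: (B, C, D, H, W) decoder feature map
    prob:     (B, K, D, H, W) soft class probabilities (e.g. from a coarse head)
    returns:  (B, K, C) one prototypical feature vector per class
    """
    feats = features.flatten(2)                     # (B, C, N)
    p = prob.flatten(2)                             # (B, K, N)
    proto = torch.einsum('bkn,bcn->bkc', p, feats)
    return proto / (p.sum(-1, keepdim=True) + eps)

def prototype_guided_logits(features, prototypes, tau=0.1):
    """Score each voxel by cosine similarity to every class prototype."""
    f = F.normalize(features.flatten(2), dim=1)         # (B, C, N)
    p = F.normalize(prototypes, dim=-1)                 # (B, K, C)
    logits = torch.einsum('bkc,bcn->bkn', p, f) / tau   # (B, K, N)
    return logits.view(features.shape[0], -1, *features.shape[2:])

# toy usage
feat = torch.randn(1, 32, 8, 64, 64)
coarse = torch.softmax(torch.randn(1, 2, 8, 64, 64), dim=1)
protos = masked_average_prototype(feat, coarse)
mask_logits = prototype_guided_logits(feat, protos)     # (1, 2, 8, 64, 64)
```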
2. Wang H, Wang T, Hao Y, Ding S, Feng J. Breast tumor segmentation via deep correlation analysis of multi-sequence MRI. Med Biol Eng Comput 2024; 62:3801-3814. [PMID: 39031329] [DOI: 10.1007/s11517-024-03166-0]
Abstract
Precise segmentation of breast tumors from MRI is crucial for breast cancer diagnosis, as it allows detailed calculation of tumor characteristics such as shape, size, and edges. Current segmentation methodologies face significant challenges in accurately modeling the complex interrelationships inherent in multi-sequence MRI data. This paper presents a hybrid deep network framework with three interconnected modules, aimed at efficiently integrating and exploiting the spatial-temporal features among multiple MRI sequences for breast tumor segmentation. The first module is a multi-sequence encoder with a densely connected architecture that separates the encoding pathway into multiple streams, one per MRI sequence. To harness the intricate correlations between different sequence features, the second, multi-scale feature embedding module uses a sequence-aware and temporal-aware method to fuse the spatial-temporal features of MRI. Finally, the decoder module upsamples the feature maps, progressively restoring resolution to achieve precise segmentation of breast tumors. In contrast to other popular methods, the proposed method explicitly learns the interrelationships inherent in multi-sequence MRI, and we validate it through extensive experiments. It achieves notable improvements in segmentation performance, with Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and Positive Predictive Value (PPV) scores of 80.57%, 74.08%, and 84.74%, respectively.
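For reference, the three reported metrics have standard definitions on binary masks; a minimal NumPy sketch (ours, not the authors' evaluation code) is:

```python
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-8):
    """Dice similarity coefficient, IoU and positive predictive value
    for binary masks (arrays of 0/1 with identical shape)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dsc = 2 * tp / (2 * tp + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    ppv = tp / (tp + fp + eps)
    return dsc, iou, ppv

pred = np.array([[0, 1, 1], [0, 1, 0]])
gt   = np.array([[0, 1, 0], [0, 1, 1]])
print(segmentation_metrics(pred, gt))  # approx (0.667, 0.5, 0.667)
```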
Affiliation(s)
- Hongyu Wang: School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China; Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China; Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Tonghui Wang: Department of Information Science and Technology, Northwest University, Xi'an, Shaanxi, 7101127, China
- Yanfang Hao: School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China; Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China; Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Songtao Ding: School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China; Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China; Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Jun Feng: Department of Information Science and Technology, Northwest University, Xi'an, Shaanxi, 7101127, China
3. Liu T, Hu Y, Liu Z, Jiang Z, Ling X, Zhu X, Li W. Deep Learning-Based DCE-MRI Automatic Segmentation in Predicting Lesion Nature in BI-RADS Category 4. J Imaging Inform Med 2024. [PMID: 39586911] [DOI: 10.1007/s10278-024-01340-2]
Abstract
This study investigated whether automatic segmentation of DCE-MRI with a deep learning (DL) algorithm offers advantages over manual segmentation in differentiating BI-RADS 4 breast lesions. A total of 197 patients with suspicious breast lesions from two medical centers were enrolled. Patients treated at the First Hospital of Qinhuangdao between January 2018 and April 2024 were included as the training set (n = 138), and patients treated at Lanzhou University Second Hospital were assigned to an external validation set (n = 59). Suspicious lesions were delineated by both DL-based automatic segmentation and manual segmentation, and their consistency was evaluated with the Dice similarity coefficient. Radiomics models were constructed from the DL and manual segmentations to predict the nature of BI-RADS 4 lesions; in parallel, the lesions were assessed by a professional radiologist and a non-professional radiologist. The area under the curve (AUC) and accuracy (ACC) were used to determine which prediction model was more effective. Sixty-four malignant cases (32.5%) and 133 benign cases (67.5%) were included. The DL-based automatic segmentation showed high consistency with manual segmentation, achieving a Dice coefficient of 0.84 ± 0.11. The DL-based radiomics model demonstrated superior predictive performance compared with professional radiologists, with an AUC of 0.85 (95% CI 0.79-0.92). The DL model also reduced working time, improving efficiency by 83.2% compared with manual segmentation, which further supports its feasibility for clinical application. In summary, the DL-based radiomics model built on automatic segmentation outperformed professional radiologists in distinguishing benign from malignant BI-RADS category 4 lesions and may help avoid unnecessary biopsies, providing an effective auxiliary tool for the diagnosis and treatment of breast cancer.
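The general workflow described here, with radiomics features computed from a (manual or DL-generated) segmentation, a classifier fitted on the single-center training set, and AUC/ACC measured on the external set, can be sketched as follows. The random feature matrices and the choice of logistic regression are placeholder assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
# placeholder radiomic feature matrices (rows = lesions, cols = features)
X_train, y_train = rng.normal(size=(138, 50)), rng.integers(0, 2, 138)
X_ext, y_ext = rng.normal(size=(59, 50)), rng.integers(0, 2, 59)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

prob = model.predict_proba(X_ext)[:, 1]
print("AUC:", roc_auc_score(y_ext, prob))
print("ACC:", accuracy_score(y_ext, prob > 0.5))
```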
Affiliation(s)
- Tianyu Liu: School of Graduate, Hebei North University, Zhangjiakou, 075000, Hebei, China; Department of Radiology, The First Hospital of Qinhuangdao, Qinhuangdao, 066000, Hebei, China
- Yurui Hu: School of Graduate, Hebei North University, Zhangjiakou, 075000, Hebei, China; Department of Radiology, The First Hospital of Qinhuangdao, Qinhuangdao, 066000, Hebei, China
- Zehua Liu: School of Computer Science and Engineering, Beihang University, Beijing, 100191, China
- Zeshuo Jiang: North China Electric Power University, Beijing, 102206, China
- Xiao Ling: Department of Radiology, Lanzhou University Second Hospital, Lanzhou, 730030, China
- Xueling Zhu: Department of Ultrasound, Qingzhou People's Hospital, Weifang, 262512, China
- Wenfei Li: Department of Radiology, The First Hospital of Qinhuangdao, Qinhuangdao, 066000, Hebei, China
4. Dai S, Liu X, Wei W, Yin X, Qiao L, Wang J, Zhang Y, Hou Y. A multi-scale, multi-task fusion UNet model for accurate breast tumor segmentation. Comput Methods Programs Biomed 2024; 258:108484. [PMID: 39531807] [DOI: 10.1016/j.cmpb.2024.108484]
Abstract
BACKGROUND AND OBJECTIVE: Breast cancer is the most common cancer among women worldwide and a leading cause of female death. Accurately interpreting these complex tumors, which can be small and morphologically varied, requires considerable expertise and time. Developing a breast tumor segmentation model to assist clinicians therefore holds great practical significance. METHODS: We propose a multi-scale, multi-task model framework named MTF-UNet. Firstly, instead of the common approach of extracting multi-scale features with different convolution kernel sizes, we use the same kernel size with different numbers of stacked convolutions to obtain multi-scale, multi-level features. Additionally, to better integrate features from different levels and scales, we introduce a new multi-branch feature fusion block (ADF). Rather than fusing features with channel and spatial attention, this block learns fusion weights between the branches. Secondly, instead of the conventional approach of using a classification task to assist segmentation, we use an auxiliary task that predicts the number of pixels belonging to tumor and background. RESULTS: We conducted extensive experiments on our proprietary DCE-MRI dataset as well as two public datasets (BUSI and Kvasir-SEG), where our model achieved MIoU scores of 90.4516%, 89.8408%, and 92.8431% on the respective test sets. Furthermore, our ablation study demonstrated the efficacy of each component and the effective integration of our auxiliary prediction branch into other models. CONCLUSION: Comprehensive experiments and comparisons with other algorithms demonstrate the effectiveness, adaptability, and robustness of the proposed method. We believe that MTF-UNet has great potential for further development in medical image segmentation. The relevant code and data can be found at https://github.com/LCUDai/MTF-UNet.git.
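The central design choice, building multi-scale features by stacking different numbers of convolutions with a single kernel size and fusing branches with learned weights, can be sketched as a small PyTorch block. Branch depths, channel sizes, and the scalar fusion weights below are illustrative assumptions, not the released MTF-UNet code.

```python
import torch
import torch.nn as nn

class SameKernelMultiScale(nn.Module):
    """Parallel branches of 1, 2 and 3 stacked 3x3 convolutions.
    Deeper branches see a larger receptive field with the same kernel size."""
    def __init__(self, in_ch, out_ch, depths=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList()
        for d in depths:
            layers, ch = [], in_ch
            for _ in range(d):
                layers += [nn.Conv2d(ch, out_ch, 3, padding=1),
                           nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
                ch = out_ch
            self.branches.append(nn.Sequential(*layers))
        # learned fusion weights over branches (one scalar per branch)
        self.fusion = nn.Parameter(torch.ones(len(depths)))

    def forward(self, x):
        outs = [b(x) for b in self.branches]
        w = torch.softmax(self.fusion, dim=0)
        return sum(wi * oi for wi, oi in zip(w, outs))

x = torch.randn(1, 16, 64, 64)
print(SameKernelMultiScale(16, 32)(x).shape)  # torch.Size([1, 32, 64, 64])
```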
Affiliation(s)
- Shuo Dai: School of Mathematics Science, Liaocheng University, Liaocheng, Shandong, 252000, China
- Xueyan Liu: School of Mathematics Science, Liaocheng University, Liaocheng, Shandong, 252000, China
- Wei Wei: School of Electronics and Information, Xi'an Polytechnic University, Xi'an, Shaanxi, 710600, China
- Xiaoping Yin: Department of Radiology, Affiliated Hospital of Hebei University, No. 212 Yuhua Road, Lianchi District, Baoding, 071000, China
- Lishan Qiao: School of Computer Science and Technology, Shandong Jianzhu University, Jinan, Shandong, 250101, China
- Jianing Wang: Department of Radiology, Affiliated Hospital of Hebei University, No. 212 Yuhua Road, Lianchi District, Baoding, 071000, China
- Yu Zhang: Department of Radiology, Affiliated Hospital of Hebei University, No. 212 Yuhua Road, Lianchi District, Baoding, 071000, China
- Yan Hou: Department of Radiology, Affiliated Hospital of Hebei University, No. 212 Yuhua Road, Lianchi District, Baoding, 071000, China
5. Wang L, Wang L, Kuai Z, Tang L, Ou Y, Wu M, Shi T, Ye C, Zhu Y. Progressive Dual Priori Network for Generalized Breast Tumor Segmentation. IEEE J Biomed Health Inform 2024; 28:5459-5472. [PMID: 38843066] [DOI: 10.1109/jbhi.2024.3410274]
Abstract
To promote the generalization ability of breast tumor segmentation models, and to improve segmentation of breast tumors that are small, low-contrast, or irregularly shaped, we propose a progressive dual priori network (PDPNet) to segment breast tumors from dynamic contrast-enhanced magnetic resonance images (DCE-MRI) acquired at different centers. PDPNet first crops tumor regions with a coarse-segmentation-based localization module; the breast tumor mask is then progressively refined using weak semantic priors and cross-scale correlation priors. To validate the effectiveness of PDPNet, we compared it with several state-of-the-art methods on multi-center datasets. The results showed that, compared with the second-best method, the DSC and HD95 of PDPNet improved by at least 5.13% and 7.58%, respectively, on the multi-center test sets. In addition, ablation studies demonstrated that the proposed localization module decreases the influence of normal tissue and therefore improves the generalization ability of the model, that the weak semantic priors keep the focus on tumor regions to avoid missing small and low-contrast tumors, and that the cross-scale correlation priors promote shape awareness for irregular tumors. Integrating them in a unified framework thus improved multi-center breast tumor segmentation performance.
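The coarse-localize-then-refine strategy (crop around a coarse mask before running the fine segmentation stage) can be illustrated generically; the bounding-box logic and margin below are our assumptions, not PDPNet's actual modules.

```python
import numpy as np

def crop_to_coarse_mask(image, coarse_mask, margin=8):
    """Crop a 3D image to the bounding box of a coarse binary mask,
    padded by a fixed margin, so the fine network sees mostly tumor."""
    idx = np.argwhere(coarse_mask > 0)
    if idx.size == 0:                       # nothing detected: keep full volume
        return image, (slice(None),) * 3
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, image.shape)
    box = tuple(slice(l, h) for l, h in zip(lo, hi))
    return image[box], box

volume = np.random.rand(32, 256, 256)
coarse = np.zeros_like(volume)
coarse[10:14, 100:140, 90:130] = 1
roi, box = crop_to_coarse_mask(volume, coarse)
print(roi.shape)   # (20, 56, 56): the fine model segments this ROI, and its
                   # output is pasted back into `box` of the full volume
```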
6. Ilesanmi AE, Ilesanmi TO, Ajayi BO. Reviewing 3D convolutional neural network approaches for medical image segmentation. Heliyon 2024; 10:e27398. [PMID: 38496891] [PMCID: PMC10944240] [DOI: 10.1016/j.heliyon.2024.e27398]
Abstract
Background: Convolutional neural networks (CNNs) play a pivotal role in aiding clinicians in diagnosis and treatment decisions. With the rapid evolution of imaging technology, three-dimensional (3D) CNNs have become a prominent framework for delineating organs and anomalies in medical images, and their use in medical image segmentation and classification continues to grow. We therefore present a comprehensive review of 3D CNN algorithms for the segmentation of medical image anomalies and organs. Methods: This study systematically reviews recent 3D CNN methodologies. Abstracts and titles were rigorously screened for relevance, and research papers from academic repositories were selected, analyzed, and appraised against specific criteria. For each work, details such as network architecture and reported accuracy were extracted for both anomaly and organ segmentation. Results: The paper analyzes prevailing trends in 3D CNN segmentation, together with key insights, constraints, observations, and avenues for future work. The review shows that the encoder-decoder network predominates in segmentation tasks and provides a coherent methodology for the segmentation of medical images. Conclusion: The findings of this study are applicable to clinical diagnosis and therapeutic interventions. Despite inherent limitations, CNN algorithms achieve commendable accuracy, confirming their potential in medical image segmentation and classification.
Affiliation(s)
- Ademola E. Ilesanmi: University of Pennsylvania, 3710 Hamilton Walk, 6th Floor, Philadelphia, PA, 19104, United States
- Babatunde O. Ajayi: National Astronomical Research Institute of Thailand, Chiang Mai 50180, Thailand
7. Tong G, Jiang H, Yao YD. SDA-UNet: a hepatic vein segmentation network based on the spatial distribution and density awareness of blood vessels. Phys Med Biol 2023; 68:035009. [PMID: 36623320] [DOI: 10.1088/1361-6560/acb199]
Abstract
Objective. Hepatic vein segmentation is a fundamental task for liver diagnosis and surgical navigation planning. Unlike other organs, the liver is the only organ with two sets of venous systems, and the segmentation target distribution in hepatic vein scenes is extremely unbalanced: the hepatic veins occupy only a small area of an abdominal CT slice. Hepatic vein morphology also differs between individuals, which further complicates segmentation. The purpose of this study is to develop an automated hepatic vein segmentation model that can guide clinical diagnosis. Approach. We introduce 3D spatial distribution and density awareness (SDA) of hepatic veins and propose an automatic segmentation network based on 3D U-Net that includes a multi-axial squeeze-and-excitation module (MASE) and a distribution correction module (DCM). The MASE restricts activation to the regions containing hepatic veins, and the DCM improves awareness of the veins' sparse spatial distribution. To obtain global axial information and spatial information at the same time, we also study the effect of different training strategies on hepatic vein segmentation. Our method was evaluated on a public dataset and a private dataset, achieving Dice coefficients of 71.37% and 69.58%, improvements of 3.60% and 3.30% over the other SOTA models, respectively. Distance-based and volume-based metrics also show the superiority of our method. Significance. The proposed method greatly reduces false-positive areas and improves hepatic vein segmentation in CT images, which will assist doctors in making accurate diagnoses and in surgical navigation planning.
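As context for the multi-axial squeeze-and-excitation idea, a plain 3D squeeze-and-excitation block looks like the sketch below; this is a deliberately simplified stand-in, not the paper's MASE or DCM.

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Standard squeeze-and-excitation for 3D feature maps:
    global average pool -> bottleneck MLP -> channel-wise gating."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                    # re-weight channels by learned importance

x = torch.randn(2, 16, 8, 32, 32)
print(SEBlock3D(16)(x).shape)           # torch.Size([2, 16, 8, 32, 32])
```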
Affiliation(s)
- Guoyu Tong: Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Huiyan Jiang: Software College, Northeastern University, Shenyang 110819, People's Republic of China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China
- Yu-Dong Yao: Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, United States of America
8. Meng X, Fan J, Yu H, Mu J, Li Z, Yang A, Liu B, Lv K, Ai D, Lin Y, Song H, Fu T, Xiao D, Ma G, Yang J, Gu Y. Volume-awareness and outlier-suppression co-training for weakly-supervised MRI breast mass segmentation with partial annotations. Knowl Based Syst 2022; 258:109988. [DOI: 10.1016/j.knosys.2022.109988]
9. Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022; 14:5334. [PMID: 36358753] [PMCID: PMC9655692] [DOI: 10.3390/cancers14215334]
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Early detection is therefore a crucial step in controlling and curing breast cancer and can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed at an early stage of the disease, all of whom survived. Although early detection is the most effective approach to treatment, breast cancer screening conducted by radiologists is expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting diagnosis and treatment; these modalities can be divided into subgroups such as mammography, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze the images produced by these methods manually, which increases the risk of incorrect decisions in cancer detection. New automatic methods are therefore required to analyze all kinds of breast screening images and assist radiologists in interpreting them. Recently, artificial intelligence (AI) has been widely utilized to improve the early detection and treatment of different types of cancer, particularly breast cancer, thereby enhancing patients' chances of survival. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities and their strengths and limitations. We then explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report the available datasets for these imaging modalities, which are important for developing AI-based algorithms and training deep learning models. In conclusion, this review aims to provide a comprehensive resource for researchers working in breast cancer imaging analysis.
Affiliation(s)
- Mohammad Madani: Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA; Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Mohammad Mahdi Behzadi: Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA; Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi: Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
10. Du X, Ma K, Song Y. AGMR-Net: Attention-guided multiscale recovery framework for stroke segmentation. Comput Med Imaging Graph 2022; 101:102120. [PMID: 36179432] [DOI: 10.1016/j.compmedimag.2022.102120]
Abstract
Automatic and accurate lesion segmentation is critical to the clinical assessment of lesion status in stroke and to appropriate diagnostic systems. Although existing methods have achieved remarkable results, their further adoption is hindered by: (1) intraclass inconsistency, i.e., large variability between different areas of the lesion; and (2) interclass indistinction, in which normal brain tissue resembles the lesion in appearance. To meet these challenges in stroke segmentation, we propose a novel method, the attention-guided multiscale recovery framework (AGMR-Net). Firstly, a coarse-grained patch attention (CPA) module in the encoder produces a patch-based, coarse-grained attention map in a multistage, explicitly supervised way, yielding a saliency representation of the target's spatial context with a patch-based weighting technique that eliminates the effect of intraclass inconsistency. Secondly, to obtain more detailed boundary partitioning and address interclass indistinction, a newly designed cross-dimensional feature fusion (CFF) module captures global contextual information to guide the selective aggregation of 2D and 3D features, compensating for the limited boundary-learning capability of 2D convolution. Lastly, in the decoding stage, an innovatively designed multiscale deconvolution upsampling (MDU) module enhances the recovery of the target's spatial and boundary information. AGMR-Net is evaluated on the open-source Anatomical Tracings of Lesions After Stroke dataset, achieving the highest Dice similarity coefficient of 0.594, a Hausdorff distance of 27.005 mm, and an average symmetric surface distance of 7.137 mm, which demonstrates that our proposed method outperforms state-of-the-art methods and has great potential for stroke diagnosis.
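The coarse-grained patch attention idea, scoring pooled patches rather than individual pixels and broadcasting the weights back to the full-resolution features, can be sketched as follows. Patch size, channel count, and the single-convolution scorer are illustrative assumptions, not the AGMR-Net implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchAttention2D(nn.Module):
    """Pool features into PxP patches, score each patch, then upsample the
    patch-level attention map and re-weight the original features."""
    def __init__(self, channels, patch=8):
        super().__init__()
        self.patch = patch
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        coarse = F.avg_pool2d(x, self.patch)                  # (B, C, H/P, W/P)
        attn = torch.sigmoid(self.score(coarse))              # (B, 1, H/P, W/P)
        attn = F.interpolate(attn, size=(h, w), mode='nearest')
        # attn can also be supervised with a downsampled lesion mask
        return x * attn, attn

x = torch.randn(1, 32, 64, 64)
y, a = PatchAttention2D(32)(x)
print(y.shape, a.shape)   # torch.Size([1, 32, 64, 64]) torch.Size([1, 1, 64, 64])
```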
Affiliation(s)
- Xiuquan Du: Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, Anhui University, China
- Kunpeng Ma: School of Computer Science and Technology, Anhui University, China
- Yuhui Song: School of Computer Science and Technology, Anhui University, China
11. Zhu J, Geng J, Shan W, Zhang B, Shen H, Dong X, Liu M, Li X, Cheng L. Development and validation of a deep learning model for breast lesion segmentation and characterization in multiparametric MRI. Front Oncol 2022; 12:946580. [PMID: 36033449] [PMCID: PMC9402900] [DOI: 10.3389/fonc.2022.946580]
Abstract
Importance: The use of artificial intelligence for differentiating benign and malignant breast lesions in multiparametric MRI (mpMRI) helps radiologists improve diagnostic performance. Objectives: To develop an automated deep learning model for breast lesion segmentation and characterization and to evaluate the characterization performance of AI models and radiologists. Materials and methods: For lesion segmentation, 2,823 patients were used for the training, validation, and testing of the VNet-based segmentation models, and the average Dice similarity coefficient (DSC) between the manual segmentation by radiologists and the mask generated by VNet was calculated. For lesion characterization, 3,303 female patients with 3,607 pathologically confirmed lesions (2,213 malignant and 1,394 benign) were used for three ResNet-based characterization models (two single-input and one multi-input). Histopathology was used as the diagnostic criterion standard to assess the characterization performance of the AI models and of the BI-RADS categories assigned by the radiologists, in terms of sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve (AUC). An additional 123 patients with 136 lesions (81 malignant and 55 benign) from another institution were available for external testing. Results: Of the 5,811 patients included in the study, the mean age was 46.14 (range 11–89) years. In the segmentation task, a DSC of 0.860 was obtained between the VNet-generated mask and manual segmentation by radiologists. In the characterization task, the AUCs of the multi-input and the two single-input models were 0.927, 0.821, and 0.795, respectively. Compared with the single-input DWI or DCE model, the multi-input DCE and DWI model obtained significant increases in sensitivity, specificity, and accuracy (0.831 vs. 0.772/0.776, 0.874 vs. 0.630/0.709, 0.846 vs. 0.721/0.752). Furthermore, the specificity of the multi-input model was higher than that of the radiologists, whether BI-RADS category 3 or 4 was used as the cutoff point (0.874 vs. 0.404/0.841), and its accuracy was intermediate between the two assessment methods (0.846 vs. 0.773/0.882). For external testing, the performance of the three models remained robust, with AUCs of 0.812, 0.831, and 0.885, respectively. Conclusions: Combining DCE with DWI was superior to using a single sequence for breast lesion characterization. The deep learning computer-aided diagnosis (CADx) model we developed significantly improved specificity and achieved accuracy comparable to that of the radiologists, showing promise for clinical application to provide preliminary diagnoses.
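The multi-input design, one encoder per MRI sequence whose embeddings are concatenated before a classification head, can be sketched with torchvision ResNet backbones. The backbone choice, pooling, and head size here are our assumptions, not the study's released model.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TwoBranchClassifier(nn.Module):
    """One ResNet-18 encoder per MRI sequence; concatenated embeddings
    feed a small classification head (benign vs. malignant)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.dce_branch = resnet18(weights=None)
        self.dwi_branch = resnet18(weights=None)
        feat_dim = self.dce_branch.fc.in_features          # 512
        self.dce_branch.fc = nn.Identity()
        self.dwi_branch.fc = nn.Identity()
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, dce, dwi):
        z = torch.cat([self.dce_branch(dce), self.dwi_branch(dwi)], dim=1)
        return self.head(z)

model = TwoBranchClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)                                        # torch.Size([2, 2])
```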
Affiliation(s)
- Jingjin Zhu: School of Medicine, Nankai University, Tianjin, China; Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
- Jiahui Geng: Department of Neurology, Beijing Tiantan Hospital, Beijing, China
- Wei Shan: Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Boya Zhang: School of Medicine, Nankai University, Tianjin, China; Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
- Huaqing Shen: Department of Neurology, Beijing Tiantan Hospital, Beijing, China
- Xiaohan Dong: Department of Radiology, Chinese People’s Liberation Army General Hospital, Beijing, China
- Mei Liu: Department of Pathology, Chinese People’s Liberation Army General Hospital, Beijing, China
- Xiru Li: Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China (corresponding author)
- Liuquan Cheng: Department of Radiology, Chinese People’s Liberation Army General Hospital, Beijing, China (corresponding author)
12. Segmentation of breast lesion in DCE-MRI by multi-level thresholding using sine cosine algorithm with quasi opposition-based learning. Pattern Anal Appl 2022. [DOI: 10.1007/s10044-022-01099-8]
13. Qin C, Lin J, Zeng J, Zhai Y, Tian L, Peng S, Li F. Joint Dense Residual and Recurrent Attention Network for DCE-MRI Breast Tumor Segmentation. Comput Intell Neurosci 2022; 2022:3470764. [PMID: 35498198] [PMCID: PMC9045980] [DOI: 10.1155/2022/3470764]
Abstract
Breast cancer detection largely relies on imaging characteristics and the ability of clinicians to easily and quickly identify potential lesions. Magnetic resonance imaging (MRI) of the breast has recently shown great promise for enabling the automatic identification of breast tumors. Nevertheless, state-of-the-art MRI-based algorithms utilizing deep learning techniques are still limited in their ability to accurately separate tumor from healthy tissue. Therefore, in the current work, we propose an automatic and accurate two-stage U-Net-based segmentation framework for breast tumor detection using dynamic contrast-enhanced MRI (DCE-MRI). This framework was evaluated using T2-weighted MRI data from 160 breast tumor cases, and its performance was compared with that of the standard U-Net model. In the first stage of the proposed framework, a refined U-Net model automatically delineates a breast region of interest (ROI) from the surrounding healthy tissue; importantly, this automatic segmentation step reduces the impact of background chest tissue on breast tumor identification. In the second stage, an improved U-Net model that combines a dense residual module based on dilated convolution with a recurrent attention module accurately and automatically segments the tumor tissue from healthy tissue within the breast ROI derived in the previous step. Overall, compared to the U-Net model, the proposed technique exhibited increases in the Dice similarity coefficient, Jaccard similarity, positive predictive value, sensitivity, and Hausdorff distance of 3%, 3%, 3%, 2%, and 16.2, respectively. The proposed model may in the future aid in the clinical diagnosis of breast cancer lesions and help guide individualized patient treatment.
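The dense residual module based on dilated convolution can be approximated with a short sketch: several dilated 3x3 convolutions whose inputs are densely concatenated, closed by a residual skip. Dilation rates and channel counts are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DilatedDenseResidual(nn.Module):
    """Dense block of dilated convolutions with a residual skip connection.
    Increasing dilation enlarges the receptive field without downsampling."""
    def __init__(self, channels, growth=16, dilations=(1, 2, 4)):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = channels
        for d in dilations:
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=d, dilation=d),
                nn.BatchNorm2d(growth), nn.ReLU(inplace=True)))
            ch += growth                       # dense connectivity: concat inputs
        self.project = nn.Conv2d(ch, channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.project(torch.cat(feats, dim=1))   # residual connection

x = torch.randn(1, 32, 64, 64)
print(DilatedDenseResidual(32)(x).shape)    # torch.Size([1, 32, 64, 64])
```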
Affiliation(s)
- ChuanBo Qin: Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China
- JingYin Lin: Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China; College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518000, China
- JunYing Zeng: Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China
- YiKui Zhai: Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China
- LianFang Tian: Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China
- ShuTing Peng: Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China
- Fang Li: Jiangmen Maternal and Child Healthcare Hospital, Jiangmen 529020, China
14. Semi-automated and interactive segmentation of contrast-enhancing masses on breast DCE-MRI using spatial fuzzy clustering. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103113]
15. Wang H, Zhang D, Ding S, Gao Z, Feng J. Rib segmentation algorithm for X-ray image based on unpaired sample augmentation and multi-scale network. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06546-x]