1. Wang DD, Lin S, Lyu GR. Advances in the Application of Artificial Intelligence in the Ultrasound Diagnosis of Vulnerable Carotid Atherosclerotic Plaque. Ultrasound Med Biol 2025;51:607-614. PMID: 39828500. DOI: 10.1016/j.ultrasmedbio.2024.12.010.
Abstract
Vulnerable atherosclerotic plaque carries a high risk of death in patients with cardiovascular disease. Ultrasound has long been used for carotid atherosclerosis screening and plaque assessment because of its safety, low cost and non-invasive nature. However, conventional ultrasound techniques have limitations such as subjectivity, operator dependence and low inter-observer agreement, leading to inconsistent and potentially inaccurate diagnoses. In recent years, the integration of artificial intelligence (AI) into ultrasound imaging has emerged as a promising way to address these limitations. By training AI algorithms on large datasets of ultrasound images, the technology can learn to recognize specific characteristics and patterns associated with vulnerable plaques, allowing a more objective and consistent assessment and improving diagnostic accuracy. This article reviews the application of AI in diagnostic ultrasound, with a particular focus on vulnerable carotid plaques, and discusses the limitations and prospects of AI-assisted ultrasound. The review also provides a deeper understanding of the role of AI in diagnostic ultrasound and aims to promote further research in the field.
Affiliation(s)
- Dan-Dan Wang: Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Shu Lin: Centre of Neurological and Metabolic Research, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China; Group of Neuroendocrinology, Garvan Institute of Medical Research, Sydney, Australia
- Guo-Rong Lyu: Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China; Departments of Medical Imaging, Quanzhou Medical College, Quanzhou, China
2. Jiang X, Chen C, Yao J, Wang L, Yang C, Li W, Ou D, Jin Z, Liu Y, Peng C, Wang Y, Xu D. A nomogram for diagnosis of BI-RADS 4 breast nodules based on three-dimensional volume ultrasound. BMC Med Imaging 2025;25:48. PMID: 39953395. PMCID: PMC11829536. DOI: 10.1186/s12880-025-01580-w.
Abstract
OBJECTIVES The malignancy of breast nodules classified as category 4 by the Breast Imaging Reporting and Data System (BI-RADS) varies widely, which poses challenges in clinical diagnosis. This study investigated whether a nomogram prediction model incorporating the automated breast ultrasound system (ABUS) can improve the accuracy of differentiating benign from malignant BI-RADS 4 breast nodules. METHODS Data were collected retrospectively for 257 BI-RADS 4 breast nodules that underwent ABUS examination and had pathology results between January 2019 and August 2022. The nodules were divided into a benign group (188 cases) and a malignant group (69 cases). Ultrasound imaging features were recorded, and logistic regression analysis was used to screen the clinical and ultrasound characteristics. A nomogram prediction model was then established from the selected variables. RESULTS Age, distance between nodule and nipple, calcification and the C-plane convergence sign were independent risk factors for differentiating benign from malignant breast nodules (all P < 0.05). A nomogram model was established based on these variables. The area under the curve (AUC) values for the nomogram model, age, distance between nodule and nipple, calcification and C-plane convergence sign were 0.86, 0.735, 0.645, 0.697 and 0.685, respectively; the AUC of the model was thus significantly higher than that of any single variable. CONCLUSIONS A nomogram based on the clinical and ultrasound imaging features of ABUS can improve the accuracy of diagnosing benign and malignant BI-RADS 4 nodules. It can serve as a relatively accurate predictive tool for sonographers and clinicians and is therefore clinically useful. ADVANCES IN KNOWLEDGE STATEMENT: We retrospectively analyzed the clinical and ultrasound characteristics of ABUS BI-RADS 4 nodules and established a nomogram model to improve the efficiency of ABUS readers in the diagnosis of BI-RADS 4 nodules.
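To make the modelling step concrete, the following is a minimal sketch, on synthetic stand-in data rather than the study's dataset, of how such clinical and ultrasound predictors (hypothetical columns: age, nodule-nipple distance, calcification, C-plane convergence sign) could be fed to a logistic regression and compared against single-variable AUCs with scikit-learn.

```python
# Toy logistic-regression / AUC comparison on synthetic data (not the study's data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 257
# Hypothetical predictors: age (years), nodule-nipple distance (mm),
# calcification (0/1), C-plane convergence sign (0/1).
X = np.column_stack([
    rng.normal(50, 12, n),
    rng.normal(30, 15, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
])
# Synthetic labels loosely tied to the predictors (1 = malignant).
logit = 0.04 * (X[:, 0] - 50) - 0.02 * X[:, 1] + 1.0 * X[:, 2] + 0.8 * X[:, 3] - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
print("multivariable AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
for i, name in enumerate(["age", "distance", "calcification", "convergence"]):
    print(name, "single-variable AUC:", roc_auc_score(y, X[:, i]))
```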
Affiliation(s)
- Xianping Jiang: Department of Ultrasound, Shengzhou People's Hospital (Shengzhou Branch of the First Affiliated Hospital of Zhejiang University School of Medicine, the Shengzhou Hospital of Shaoxing University), Shengzhou, 312400, China
- Chen Chen: Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China; Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China; Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Jincao Yao: Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China; Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China; Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Liping Wang: Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China; Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China; Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Chen Yang: Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China; Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China; Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Wei Li: Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China; Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China; Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Di Ou: Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China; Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China; Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Zhiyan Jin: Postgraduate training base Alliance of Wenzhou Medical University, Hangzhou, 310022, China
- Yuanzhen Liu: Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China; Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China; Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Chanjuan Peng: Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China; Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China; Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Yifan Wang: Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China; Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China; Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Dong Xu: Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China; Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China; Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
3. Liu F, Li G, Wang J. Advanced analytical methods for multi-spectral transmission imaging optimization: enhancing breast tissue heterogeneity detection and tumor screening with hybrid image processing and deep learning. Anal Methods 2024;17:104-123. PMID: 39569814. DOI: 10.1039/d4ay01755b.
Abstract
Light is strongly absorbed and scattered as it passes through biological tissue, which makes it difficult to identify heterogeneities in multi-spectral transmission images. This paper introduces a fusion of techniques encompassing the spatial pyramid matching model (SPM), modulation and demodulation (M_D), and frame accumulation (FA). These techniques not only improve image quality but also increase the precision of heterogeneity classification in multi-spectral transmission images (MTI) by deep learning network models (DLNM). Experiments were first designed to capture MTI of phantoms. The images were then preprocessed with different combinations of SPM, M_D and FA. Finally, multi-spectral fusion pseudo-color images derived from U-Net semantic segmentation were fed into VGG16/19 and ResNet50/101 networks for heterogeneity classification. The different combinations of SPM, M_D and FA significantly enhanced image quality and facilitated the extraction of heterogeneous feature information from the multi-spectral images. Compared with the classification accuracy obtained from the original images in the VGG and ResNet models, all preprocessed images improved heterogeneity classification accuracy. After scatter correction, images processed with 3.5 Hz modulation-demodulation combined with frame accumulation (M_D-FA) attained the highest heterogeneity classification accuracy in the VGG19 and ResNet101 models, reaching 95.47% and 98.47%, respectively. In conclusion, combinations of SPM, M_D and FA not only enhance image quality but also further improve the accuracy of DLNM in heterogeneity classification, which should promote the clinical application of the MTI technique in breast tumor screening.
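As a small illustration of one preprocessing step named above, the snippet below sketches frame accumulation (averaging repeated frames to suppress zero-mean noise) on a synthetic image stack; it is an assumption-based toy, not the paper's pipeline, and the array shapes are placeholders.

```python
# Frame accumulation (FA) demo: averaging repeated frames boosts SNR before
# the images are passed to segmentation/classification networks.
import numpy as np

def frame_accumulate(frames: np.ndarray) -> np.ndarray:
    """frames: (n_frames, H, W) stack of repeated acquisitions -> (H, W) average."""
    return frames.mean(axis=0)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))           # synthetic "tissue" pattern
noisy = clean + rng.normal(0, 0.2, size=(32, 64, 64))     # 32 noisy frames
acc = frame_accumulate(noisy)
print("single-frame RMSE:", np.sqrt(((noisy[0] - clean) ** 2).mean()))
print("accumulated RMSE :", np.sqrt(((acc - clean) ** 2).mean()))
```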
Affiliation(s)
- Fulong Liu: Xuzhou Medical University, School of Medical Information and Engineering, Xuzhou, Jiangsu, 221000, China
- Gang Li: State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin 300072, China
- Junqi Wang: Xinyuan Middle School, Xuzhou, Jiangsu, 221000, China
4. Li H, Zhao J, Jiang Z. Deep learning-based computer-aided detection of ultrasound in breast cancer diagnosis: A systematic review and meta-analysis. Clin Radiol 2024;79:e1403-e1413. PMID: 39217049. DOI: 10.1016/j.crad.2024.08.002.
Abstract
PURPOSE The aim of this meta-analysis was to assess the diagnostic performance of deep learning (DL) combined with ultrasound in breast cancer diagnosis. The included studies were divided into two subgroups, a B-mode ultrasound subgroup and a multimodal ultrasound subgroup, to compare the performance of DL algorithms using B-mode ultrasound alone versus multimodal ultrasound. METHODS We conducted a comprehensive search for relevant studies published from January 01, 2017 to July 31, 2023 in the MEDLINE and EMBASE databases. The quality of the included studies was evaluated using the QUADAS-2 tool and the radiomics quality score (RQS). Meta-analysis was performed in R. Inter-study heterogeneity was assessed with I² values and Q-test P-values, and sources of heterogeneity were analyzed with a random-effects model based on the test results. Summary receiver operating characteristic (SROC) curves were used to synthesize results across trials, and pooled sensitivity, specificity and AUC were calculated to quantify prediction accuracy. Subgroup and sensitivity analyses were conducted to identify potential sources of heterogeneity, and publication bias was assessed with funnel plots. (PROSPERO identifier: CRD42024545758). RESULTS The 20 included studies comprised 14,955 cases in total, with 4197 cases used for model testing; of these, 1582 were breast cancer patients and 2615 were benign or other breast lesions. The pooled sensitivity, specificity and AUC across all studies were 0.93, 0.90 and 0.732, respectively. In the subgroup analysis, the multimodal subgroup showed superior performance, with pooled sensitivity, specificity and AUC of 0.93, 0.88 and 0.787, whereas the B-mode subgroup reached 0.92, 0.91 and 0.642, respectively. CONCLUSIONS The integration of DL with ultrasound achieves high accuracy in the adjunctive diagnosis of breast cancer, and the fusion of DL with multimodal breast ultrasound shows better diagnostic efficacy than B-mode ultrasound alone.
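The pooling step could be sketched as below; this is a generic random-effects (DerSimonian-Laird) pooling of per-study sensitivity on the logit scale with made-up counts, offered only as an illustration of the idea, not the authors' R workflow or their SROC model.

```python
# Random-effects pooling of per-study sensitivity (logit scale, DerSimonian-Laird).
import numpy as np

def pool_logit(events, totals):
    """events: true positives per study; totals: diseased cases per study."""
    events = np.asarray(events, float)
    totals = np.asarray(totals, float)
    p = (events + 0.5) / (totals + 1.0)                      # continuity-corrected proportion
    y = np.log(p / (1 - p))                                  # logit transform
    v = 1 / (events + 0.5) + 1 / (totals - events + 0.5)     # approximate variance of logit
    w = 1 / v                                                # fixed-effect weights
    y_fixed = (w * y).sum() / w.sum()
    Q = (w * (y - y_fixed) ** 2).sum()                       # Cochran's Q
    tau2 = max(0.0, (Q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_re = 1 / (v + tau2)                                    # random-effects weights
    y_re = (w_re * y).sum() / w_re.sum()
    return 1 / (1 + np.exp(-y_re))                           # back-transform to a proportion

# Hypothetical per-study counts (TP, diseased n) just to exercise the function.
print("pooled sensitivity:", pool_logit([45, 80, 120], [50, 90, 130]))
```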
Affiliation(s)
- H Li: Department of Ultrasound, Changzheng Hospital, Naval Medical University (Second Medical University), No.415, Fengyang Rd, Shanghai, China
- J Zhao: Department of Ultrasound, Changzheng Hospital, Naval Medical University (Second Medical University), No.415, Fengyang Rd, Shanghai, China; Department of Ultrasound, Shanghai Fourth People's Hospital, School of Medicine, Tongji University, No.1279, Sanmen Rd, Shanghai, China
- Z Jiang: School of Health Science and Engineering, University of Shanghai for Science and Technology, No.516, Jungong Rd, Shanghai, China
5. Huang Z, Zhang X, Ju Y, Zhang G, Chang W, Song H, Gao Y. Explainable breast cancer molecular expression prediction using multi-task deep-learning based on 3D whole breast ultrasound. Insights Imaging 2024;15:227. PMID: 39320560. PMCID: PMC11424596. DOI: 10.1186/s13244-024-01810-9.
Abstract
OBJECTIVES To noninvasively estimate three breast cancer biomarkers, estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2), and to enhance performance and interpretability with multi-task deep learning. METHODS The study included 388 breast cancer patients who underwent 3D whole breast ultrasound system (3DWBUS) examinations at Xijing Hospital between October 2020 and September 2021. Two predictive models were developed: a single-task model that predicts biomarker expression, and a multi-task model that combines tumor segmentation with biomarker prediction to enhance interpretability. Performance evaluation included individual and overall prediction metrics, and DeLong's test was used for performance comparison. The models' attention regions were visualized with Grad-CAM++. RESULTS Patients were randomly split into a training set (n = 240, 62%), a validation set (n = 60, 15%) and a test set (n = 88, 23%). In the individual evaluation of ER, PR and HER2 expression prediction on the test set, the single-task and multi-task models achieved AUCs of 0.809 and 0.735 for ER, 0.688 and 0.767 for PR, and 0.626 and 0.697 for HER2, respectively. In the overall evaluation, the multi-task model performed better on the test set, with a macro AUC of 0.733 versus 0.708 for the single-task model. Grad-CAM++ showed that the multi-task model focused more strongly on diseased tissue areas, improving the interpretability of how the model works. CONCLUSION Both models demonstrated strong performance, with the multi-task model excelling in accuracy and offering improved interpretability on noninvasive 3DWBUS images using Grad-CAM++. CRITICAL RELEVANCE STATEMENT The multi-task deep learning model effectively predicts breast cancer biomarkers, offering direct biomarker identification and improved clinical interpretability, potentially boosting the efficiency of targeted drug screening. KEY POINTS Tumoral biomarkers are paramount for determining breast cancer treatment. The multi-task model can improve prediction performance and interpretability in clinical practice. The 3D whole breast ultrasound system-based deep learning models excelled in predicting breast cancer biomarkers.
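A schematic PyTorch sketch of the multi-task idea, a shared encoder feeding both a segmentation head and a biomarker-classification head, is given below; the architecture, layer sizes and dummy 2D inputs are assumptions for illustration, not the authors' network.

```python
# Multi-task toy model: shared encoder -> (segmentation mask logits, ER/PR/HER2 logits).
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, n_biomarkers: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Segmentation head: upsample back to input resolution, 1-channel mask logits.
        self.seg_head = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 1),
        )
        # Classification head: global pooling + linear layer, one logit per biomarker.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_biomarkers),
        )

    def forward(self, x):
        f = self.encoder(x)
        return self.seg_head(f), self.cls_head(f)

model = MultiTaskNet()
x = torch.randn(2, 1, 64, 64)                    # dummy 2D slices standing in for 3DWBUS data
mask_logits, biomarker_logits = model(x)
# Joint loss: segmentation term (BCE on mask) + classification term (BCE on biomarkers).
loss = nn.BCEWithLogitsLoss()(mask_logits, torch.zeros_like(mask_logits)) + \
       nn.BCEWithLogitsLoss()(biomarker_logits, torch.zeros_like(biomarker_logits))
print(mask_logits.shape, biomarker_logits.shape, float(loss))
```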
Affiliation(s)
- Zengan Huang: School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, Guangdong, 518055, China
- Xin Zhang: School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, Guangdong, 518055, China
- Yan Ju: Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, No. 127 Changle West Road, Xi'an, 710032, China
- Ge Zhang: Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, No. 127 Changle West Road, Xi'an, 710032, China
- Wanying Chang: Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, No. 127 Changle West Road, Xi'an, 710032, China
- Hongping Song: Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, No. 127 Changle West Road, Xi'an, 710032, China
- Yi Gao: School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, Guangdong, 518055, China
6. Wang M, Liu W, Gu X, Cui F, Ding J, Zhu Y, Bian J, Liu W, Chen Y, Zhou J. Few-shot learning to identify atypical endometrial hyperplasia and endometrial cancer based on transvaginal ultrasonic images. Heliyon 2024;10:e36426. PMID: 39253160. PMCID: PMC11381780. DOI: 10.1016/j.heliyon.2024.e36426.
Abstract
Objective It is challenging to accurately distinguish atypical endometrial hyperplasia (AEH) from endometrial cancer (EC) on routine transvaginal ultrasound (TVU). This study aimed to use few-shot learning (FSL) to identify non-atypical endometrial hyperplasia (NAEH), AEH and EC from limited TVU images. Methods TVU images of pathologically confirmed NAEH, AEH and EC patients (n = 33 per class) were split into a support set (SS, n = 3 per class) and a query set (QS, n = 30 per class). A dually pretrained ResNet50 V2, pretrained first on ImageNet and then on additionally collected TVU images, was used to extract 1 × 64 feature vectors from the TVU images in the SS and QS. Euclidean distances were then calculated between each TVU image in the QS and the nine TVU images in the SS, and the k-nearest neighbor (KNN) algorithm was used to classify the TVU images in the QS. Results The overall accuracy and macro precision of the proposed FSL model on the QS were 0.878 and 0.882, respectively, superior to automated machine learning models, a conventionally trained ResNet50 V2 model, a junior sonographer and a senior sonographer. For identifying EC, the proposed FSL model achieved the highest precision of 0.964, the highest recall of 0.900 and the highest F1-score of 0.931. Conclusions The proposed FSL model, which combines a dually pretrained ResNet50 V2 feature extractor with a KNN classifier, performed well in identifying NAEH, AEH and EC patients from limited TVU images, showing potential for computer-aided disease diagnosis.
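The few-shot classification step described above (pre-extracted feature vectors, Euclidean distance, KNN) can be sketched as follows, with random stand-in vectors taking the place of the ResNet50 V2 embeddings; the class separation and sample counts are illustrative assumptions.

```python
# KNN over a tiny support set: 3 support vectors per class, 30 query vectors per class.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
classes = ["NAEH", "AEH", "EC"]
# Pretend each image has already been reduced to a 64-dim feature vector by a
# pretrained CNN backbone; here the vectors are random with class-specific means.
support_X = np.vstack([rng.normal(loc=i, scale=1.0, size=(3, 64)) for i in range(3)])
support_y = np.repeat(classes, 3)                        # 3 support images per class
query_X = np.vstack([rng.normal(loc=i, scale=1.0, size=(30, 64)) for i in range(3)])
query_y = np.repeat(classes, 30)                         # 30 query images per class

knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
knn.fit(support_X, support_y)
print("query accuracy:", (knn.predict(query_X) == query_y).mean())
```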
Affiliation(s)
- Mingyue Wang: Department of Obstetrics and Gynecology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Wen Liu: Department of Gastroenterology, Changzhou Hospital of Traditional Chinese Medicine, China
- Xinxian Gu: Department of Ultrasound, The Fourth Affiliated Hospital of Soochow University, Suzhou, China; Jiangsu Province Engineering Research Center of Precision Diagnostics and Therapeutics Development, Soochow University, Suzhou, China
- Feng Cui: Department of Ultrasound, The Hospital of Traditional Chinese Medicine, Suzhou, China
- Jin Ding: Department of Obstetrics and Gynecology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Yindi Zhu: Department of Obstetrics and Gynecology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Jinyan Bian: Department of Obstetrics and Gynecology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Wen Liu: Department of Obstetrics and Gynecology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Youguo Chen: Department of Obstetrics and Gynecology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Jinhua Zhou: Department of Obstetrics and Gynecology, The First Affiliated Hospital of Soochow University, Suzhou, China
7. Lanjewar MG, Panchbhai KG, Patle LB. Fusion of transfer learning models with LSTM for detection of breast cancer using ultrasound images. Comput Biol Med 2024;169:107914. PMID: 38190766. DOI: 10.1016/j.compbiomed.2023.107914.
Abstract
Breast cancer (BC) is one of the leading causes of death among women worldwide, so timely identification is critical for successful therapy and good survival rates. Transfer learning (TL) approaches have recently shown promise in aiding early recognition of BC. In this work, three TL models, MobileNetV2, ResNet50 and VGG16, were combined with an LSTM to extract features from ultrasound images (USIs). The Synthetic Minority Over-sampling Technique combined with Tomek links (SMOTETomek) was employed to balance the extracted features. The proposed method with VGG16 achieved an F1 score of 99.0%, a Matthews correlation coefficient (MCC) and Kappa coefficient of 98.9%, and an area under the curve (AUC) of 1.0. K-fold cross-validation yielded an average F1 score of 96%. The Gradient-weighted Class Activation Mapping (Grad-CAM) method was applied for visualization, and Local Interpretable Model-agnostic Explanations (LIME) for interpretability. Confidence intervals (CIs) were calculated with the normal approximation interval (NAI) and bootstrapping methods: the NAI gave a lower CI (LCI), upper CI (UCI) and mean CI (MCI) of 96.50%, 99.75% and 98.13%, respectively, while the bootstrap method gave a 95% LCI of 93.81%, a UCI of 96.00% and a bootstrap mean of 94.90%. The performance of six state-of-the-art (SOTA) TL models, Xception, NASNetMobile, InceptionResNetV2, MobileNetV2, ResNet50 and VGG16, was also compared with the proposed method.
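A short sketch of the class-balancing step is shown below: SMOTETomek from the imbalanced-learn package applied to synthetic feature vectors before fitting a simple classifier. The feature dimension, class counts and downstream classifier are placeholders, not the paper's fused TL-LSTM features.

```python
# SMOTETomek balancing of (stand-in) extracted features before classification.
import numpy as np
from imblearn.combine import SMOTETomek          # requires the imbalanced-learn package
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Imbalanced stand-in for deep features: 400 benign vs 100 malignant samples.
X = np.vstack([rng.normal(0, 1, (400, 128)), rng.normal(0.8, 1, (100, 128))])
y = np.array([0] * 400 + [1] * 100)

X_bal, y_bal = SMOTETomek(random_state=0).fit_resample(X, y)   # oversample + clean Tomek links
clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
print("class counts after balancing:", np.bincount(y_bal))
print("F1 on the original data:", f1_score(y, clf.predict(X)))
```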
Affiliation(s)
- Madhusudan G Lanjewar: School of Physical and Applied Sciences, Goa University, Taleigao Plateau, Goa, 403206, India
- Lalchand B Patle: PG Department of Electronics, MGSM's DDSGP College Chopda, KBCNMU, Jalgaon, Maharashtra, 425107, India
8. Xie L, Liu Z, Pei C, Liu X, Cui YY, He NA, Hu L. Convolutional neural network based on automatic segmentation of peritumoral shear-wave elastography images for predicting breast cancer. Front Oncol 2023;13:1099650. PMID: 36865812. PMCID: PMC9970986. DOI: 10.3389/fonc.2023.1099650.
Abstract
Objective Our aim was to develop dual-modal CNN models that combine conventional ultrasound (US) images with shear-wave elastography (SWE) of the peritumoral region to improve the prediction of breast cancer. Methods We retrospectively collected US images and SWE data of 1271 ACR BI-RADS 4 breast lesions from 1116 female patients (mean age ± standard deviation, 45.40 ± 9.65 years). The lesions were divided into three subgroups based on maximum diameter (MD): ≤15 mm; >15 mm and ≤25 mm; and >25 mm. We recorded lesion stiffness (SWV1) and the 5-point average stiffness of the peritumoral tissue (SWV5). CNN models were built on segmentations of different widths of peritumoral tissue (0.5 mm, 1.0 mm, 1.5 mm, 2.0 mm) together with the internal SWE image of the lesions. All single-parameter CNN models, dual-modal CNN models and quantitative SWE parameters were assessed by receiver operating characteristic (ROC) curve analysis in the training cohort (971 lesions) and the validation cohort (300 lesions). Results The US + 1.0 mm SWE model achieved the highest area under the ROC curve (AUC) in the subgroup of lesions with MD ≤15 mm in both the training (0.94) and validation (0.91) cohorts. In the subgroups with MD between 15 and 25 mm and above 25 mm, the US + 2.0 mm SWE model achieved the highest AUCs in both the training cohort (0.96 and 0.95, respectively) and the validation cohort (0.93 and 0.91, respectively). Conclusion Dual-modal CNN models that combine US and peritumoral-region SWE images allow accurate prediction of breast cancer.
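The dual-modal design can be sketched as a two-branch network, one branch per modality, with pooled features concatenated before a malignancy head. The PyTorch toy below is an assumed illustration of that idea, not the authors' CNN; branch depth, patch size and the peritumoral margin are placeholders.

```python
# Two-branch toy: one CNN branch for the B-mode US patch, one for the SWE patch
# (which would include a peritumoral margin), fused for benign/malignant prediction.
import torch
import torch.nn as nn

def branch():
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class DualModalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.us_branch = branch()     # conventional ultrasound patch
        self.swe_branch = branch()    # SWE patch including a peritumoral margin
        self.classifier = nn.Linear(32, 1)

    def forward(self, us, swe):
        feats = torch.cat([self.us_branch(us), self.swe_branch(swe)], dim=1)
        return self.classifier(feats)           # malignancy logit

model = DualModalNet()
us = torch.randn(4, 1, 96, 96)                  # dummy US patches
swe = torch.randn(4, 1, 96, 96)                 # dummy SWE patches (e.g., 1.0 mm margin)
print(model(us, swe).shape)                     # torch.Size([4, 1])
```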
Affiliation(s)
- Li Xie: Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Zhen Liu: Department of Computing, Hebin Intelligent Robots Co., LTD., Hefei, China
- Chong Pei: Department of Respiratory and Critical Care Medicine, The First People's Hospital of Hefei City, The Third Affiliated Hospital of Anhui Medical University, Hefei, China
- Xiao Liu: Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Ya-yun Cui: Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Nian-an He (corresponding author): Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Lei Hu (corresponding author): Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China