1.
Jiang Y, Ebrahimpour L, Després P, Manem VS. A benchmark of deep learning approaches to predict lung cancer risk using national lung screening trial cohort. Sci Rep 2025; 15:1736. PMID: 39799226; PMCID: PMC11724919; DOI: 10.1038/s41598-024-84193-7.
Abstract
Deep learning (DL) methods have demonstrated remarkable effectiveness in assisting with lung cancer risk prediction tasks using computed tomography (CT) scans. However, the lack of comprehensive comparison and validation of state-of-the-art (SOTA) models in practical settings limits their clinical application. This study aims to review and analyze current SOTA deep learning models for lung cancer risk prediction (malignant-benign classification). To evaluate the models' general performance, we selected 253 out of 467 patients from a subset of the National Lung Screening Trial (NLST) who had CT scans without contrast, which are the most commonly used, and divided them into training and test cohorts. The CT scans were preprocessed into 2D-image and 3D-volume formats according to their nodule annotations. We evaluated ten 3D and eleven 2D SOTA deep learning models, pretrained on large-scale general-purpose datasets (Kinetics and ImageNet) and radiological datasets (3DSeg-8, nnUnet and RadImageNet), for their lung cancer risk prediction performance. Our results showed that 3D-based deep learning models generally perform better than 2D models. On the test cohort, the best-performing 3D model achieved an AUROC of 0.86, while the best 2D model reached 0.79. The lowest AUROCs for the 3D and 2D models were 0.70 and 0.62, respectively. Furthermore, pretraining on large-scale radiological image datasets did not show the expected performance advantage over pretraining on general-purpose datasets. Both 2D and 3D deep learning models can handle lung cancer risk prediction tasks effectively, although 3D models generally outperform their 2D counterparts. Our findings highlight the importance of carefully selecting pretrained datasets and model architectures for lung cancer risk prediction. Overall, these results have important implications for the development and clinical integration of DL-based tools in lung cancer screening.
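The AUROC figures quoted in this entry (0.62 to 0.86) have a simple probabilistic reading: the chance that a randomly chosen malignant case is scored above a randomly chosen benign one. A minimal stdlib-only sketch with toy labels and scores (illustrative values, not NLST data):

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen negative,
    with ties counted as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy malignant (1) / benign (0) labels and two hypothetical models' scores
y = [1, 1, 1, 0, 0, 0]
scores_a = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]  # ranks every positive above every negative
scores_b = [0.9, 0.4, 0.6, 0.8, 0.3, 0.1]  # one positive out-ranked by a negative

print(auroc(y, scores_a))  # 1.0
print(auroc(y, scores_b))  # ≈ 0.778
```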
Affiliation(s)
- Yifan Jiang
- Centre de recherche du CHU de Québec-Université Laval, Quebec City, Canada
- Département de biologie moléculaire, de biochimie médicale et de pathologie, Université Laval, Quebec City, Canada
- Institute Intelligence and Data, Université Laval, Quebec City, Canada
- Leyla Ebrahimpour
- Centre de recherche du CHU de Québec-Université Laval, Quebec City, Canada
- Département de biologie moléculaire, de biochimie médicale et de pathologie, Université Laval, Quebec City, Canada
- Département de physique, de génie physique et d'optique, Université Laval, Quebec City, Canada
- Centre de recherche de l'Institut universitaire de cardiologie et de pneumologie de Québec, Quebec City, Canada
- Institute Intelligence and Data, Université Laval, Quebec City, Canada
- Philippe Després
- Département de physique, de génie physique et d'optique, Université Laval, Quebec City, Canada
- Centre de recherche de l'Institut universitaire de cardiologie et de pneumologie de Québec, Quebec City, Canada
- Big Data Research Center, Université Laval, Quebec City, Canada
- Institute Intelligence and Data, Université Laval, Quebec City, Canada
- Venkata Sk Manem
- Centre de recherche du CHU de Québec-Université Laval, Quebec City, Canada
- Département de biologie moléculaire, de biochimie médicale et de pathologie, Université Laval, Quebec City, Canada
- Cancer Research Center, Université Laval, Quebec City, Canada
- Big Data Research Center, Université Laval, Quebec City, Canada
- Institute Intelligence and Data, Université Laval, Quebec City, Canada
2.
Barekatrezaei S, Kozegar E, Salamati M, Soryani M. Mass detection in automated three dimensional breast ultrasound using cascaded convolutional neural networks. Phys Med 2024; 124:103433. PMID: 39002423; DOI: 10.1016/j.ejmp.2024.103433.
Abstract
PURPOSE Early detection of breast cancer has a significant effect on reducing its mortality rate. For this purpose, automated three-dimensional breast ultrasound (3-D ABUS) has recently been used alongside mammography. The 3-D volume produced by this imaging system comprises many slices, and the radiologist must review all of them to find a mass, a time-consuming task with a high probability of mistakes. Therefore, many computer-aided detection (CADe) systems have been developed to assist radiologists in this task. In this paper, we propose a novel CADe system for mass detection in 3-D ABUS images. METHODS The proposed system includes two cascaded convolutional neural networks. The goal of the first network is to achieve the highest possible sensitivity, and the second network's goal is to reduce false positives while maintaining high sensitivity. In both networks, an improved version of the 3-D U-Net architecture is utilized, in which two types of modified Inception modules are used in the encoder section. In the second network, new attention units are also added to the skip connections, which receive the results of the first network as saliency maps. RESULTS The system was evaluated on a dataset containing 60 3-D ABUS volumes from 43 patients with 55 masses. A sensitivity of 91.48% with a mean of 8.85 false positives per patient was achieved. CONCLUSIONS The suggested mass detection system is fully automatic, without any user interaction. The results indicate that the CADe system outperforms competing techniques in both sensitivity and mean FPs per patient.
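The two-stage cascade described above reduces to a simple filtering pattern: a permissive first stage so almost no true mass is missed, then a stricter second stage to cut false positives. A toy sketch with stand-in scorer functions (the paper's stages are 3-D U-Net networks, not reproduced here):

```python
def cascade(candidates, stage1_score, stage2_score, t1=0.1, t2=0.6):
    """Two-stage cascade: stage 1 keeps anything faintly suspicious
    (low threshold, high sensitivity); stage 2 re-scores the survivors
    with a stricter threshold to remove false positives."""
    stage1_hits = [c for c in candidates if stage1_score(c) >= t1]
    return [c for c in stage1_hits if stage2_score(c) >= t2]

# Toy candidate regions as (id, suspicion) pairs; both stages here just
# read the suspicion value, standing in for two trained networks.
cands = [("a", 0.95), ("b", 0.4), ("c", 0.05)]
kept = cascade(cands, stage1_score=lambda c: c[1], stage2_score=lambda c: c[1])
print([c[0] for c in kept])  # ['a'] — 'b' passed stage 1 but was filtered by stage 2
```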
Affiliation(s)
- Sepideh Barekatrezaei
- School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran.
- Ehsan Kozegar
- Department of Computer Engineering and Engineering Sciences, Faculty of Technology and Engineering, University of Guilan, Rudsar-Vajargah, Guilan, Iran.
- Masoumeh Salamati
- Department of Reproductive Imaging, Reproductive Biomedicine Research Center, Royan Institute for Reproductive Biomedicine, ACECR, Tehran, Iran.
- Mohsen Soryani
- School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran.
3.
Oh K, Lee SE, Kim EK. 3-D breast nodule detection on automated breast ultrasound using faster region-based convolutional neural networks and U-Net. Sci Rep 2023; 13:22625. PMID: 38114666; PMCID: PMC10730541; DOI: 10.1038/s41598-023-49794-8.
Abstract
Mammography is currently the most commonly used modality for breast cancer screening. However, its sensitivity is relatively low in women with dense breasts. Dense breast tissues show a relatively high rate of interval cancers and are at high risk for developing breast cancer. As a supplemental screening tool to standard mammography, ultrasonography is a widely adopted imaging modality, especially for dense breasts. Lately, automated breast ultrasound imaging has gained attention due to its advantages over hand-held ultrasound imaging. However, reading automated breast ultrasound requires considerable time and effort because of the large volume of data. Hence, developing a computer-aided nodule detection system for automated breast ultrasound is practically invaluable and impactful. This study proposes a three-dimensional breast nodule detection system based on a simple two-dimensional deep-learning model exploiting automated breast ultrasound. Additionally, we provide several postprocessing steps to reduce false positives. In our experiments using the in-house automated breast ultrasound datasets, a sensitivity of [Formula: see text] with 8.6 false positives is achieved on unseen test data at best.
Affiliation(s)
- Kangrok Oh
- Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, 03722, Republic of Korea
- Si Eun Lee
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin, Gyeonggi-do, 16995, Republic of Korea
- Eun-Kyung Kim
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin, Gyeonggi-do, 16995, Republic of Korea
4.
Zafar A, Tanveer J, Ali MU, Lee SW. BU-DLNet: Breast Ultrasonography-Based Cancer Detection Using Deep-Learning Network Selection and Feature Optimization. Bioengineering (Basel) 2023; 10:825. PMID: 37508852; PMCID: PMC10376009; DOI: 10.3390/bioengineering10070825.
Abstract
Early detection of breast lesions and distinguishing between malignant and benign lesions are critical for breast cancer (BC) prognosis. Breast ultrasonography (BU) is an important radiological imaging modality for the diagnosis of BC. This study proposes a BU image-based framework for the diagnosis of BC in women. Various pre-trained networks are used to extract the deep features of the BU images. Ten wrapper-based optimization algorithms, including the marine predator algorithm, generalized normal distribution optimization, slime mold algorithm, equilibrium optimizer (EO), manta-ray foraging optimization, atom search optimization, Harris hawks optimization, Henry gas solubility optimization, path finder algorithm, and poor and rich optimization, were employed to compute the optimal subset of deep features using a support vector machine classifier. Furthermore, a network selection algorithm was employed to determine the best pre-trained network. An online BU dataset was used to test the proposed framework. After comprehensive testing and analysis, it was found that the EO algorithm produced the highest classification rate for each pre-trained model. It produced the highest classification accuracy of 96.79%, and it was trained using only a deep feature vector with a size of 562 in the ResNet-50 model. Similarly, the Inception-ResNet-v2 had the second highest classification accuracy of 96.15% using the EO algorithm. Moreover, the results of the proposed framework are compared with those in the literature.
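Wrapper-based selection, as used above, puts a classifier inside the search loop: each candidate feature subset is scored by actually training and evaluating a model on it. A stdlib-only sketch using greedy forward selection, with a leave-one-out nearest-centroid rule standing in for the paper's SVM (data and function names are illustrative, not the authors' implementation):

```python
# Wrapper-style greedy forward selection: each candidate subset is scored
# by an actual classifier evaluation, here a leave-one-out nearest-centroid
# rule. The paper wraps an SVM instead; toy values throughout.

def nearest_centroid_accuracy(X, y, feats):
    """Leave-one-out accuracy of a nearest-centroid classifier
    restricted to the feature indices in `feats`."""
    correct = 0
    for i in range(len(X)):
        centroids = {}
        for c in set(y):
            rows = [X[j] for j in range(len(X)) if j != i and y[j] == c]
            centroids[c] = [sum(r[f] for r in rows) / len(rows) for f in feats]
        xi = [X[i][f] for f in feats]
        pred = min(centroids, key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(xi, centroids[c])))
        correct += pred == y[i]
    return correct / len(X)

def forward_select(X, y, n_feats, k):
    """Greedily add the feature that most improves the wrapped classifier."""
    chosen = []
    for _ in range(k):
        best = max((f for f in range(n_feats) if f not in chosen),
                   key=lambda f: nearest_centroid_accuracy(X, y, chosen + [f]))
        chosen.append(best)
    return chosen

# Feature 0 separates the two classes; feature 1 is noise
X = [[0.1, 5.0], [0.2, 1.0], [0.9, 5.1], [1.0, 0.9]]
y = [0, 0, 1, 1]
print(forward_select(X, y, 2, 1))  # [0]
```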
Affiliation(s)
- Amad Zafar
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Jawad Tanveer
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Republic of Korea
- Muhammad Umair Ali
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Seung Won Lee
- Department of Precision Medicine, School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea
5.
Sujatha R, Chatterjee JM, Angelopoulou A, Kapetanios E, Srinivasu PN, Hemanth DJ. A transfer learning-based system for grading breast invasive ductal carcinoma. IET Image Processing 2023; 17:1979-1990. DOI: 10.1049/ipr2.12660.
Abstract
Breast carcinoma is a malignancy that begins in the breast. Breast cancer cells generally form a tumour that can often be seen on an x-ray or felt as a lump. Despite advances in screening, treatment, and surveillance that have improved patient survival rates, breast carcinoma is the most frequently diagnosed malignancy and the second leading cause of cancer mortality among women. Invasive ductal carcinoma is the most widespread breast cancer, accounting for about 80% of all diagnosed cases. Numerous studies have shown that artificial intelligence has tremendous capabilities, which is why it is used in various sectors, especially in the healthcare domain. Mammography is typically the first imaging method used for diagnosis, and finding cancer in a dense breast is challenging. Advances in deep learning applied to these findings can help with earlier detection and treatment. In the present work, the authors apply deep learning concepts to grading breast invasive ductal carcinoma using transfer learning. Five transfer learning approaches are used, namely VGG16, VGG19, InceptionResNetV2, DenseNet121, and DenseNet201, trained for 50 epochs on the Google Colab platform, which provides a single 12 GB NVIDIA Tesla K80 graphical processing unit (GPU) that can be used for up to 12 h continuously. The dataset used for this work can be openly accessed from http://databiox.com. The accuracies obtained are as follows: VGG16, 92.5%; VGG19, 89.77%; InceptionResNetV2, 84.46%; DenseNet121, 92.64%; DenseNet201, 85.22%. From these results, it is clear that DenseNet121 gives the maximum accuracy for cancer grading, whereas InceptionResNetV2 has the minimum.
Affiliation(s)
- Epaminondas Kapetanios
- School of Physics, Engineering and Computer Science, University of Hertfordshire, Hertfordshire, UK
6.
Ma JJ, Meng S, Dang SJ, Wang JZ, Yuan Q, Yang Q, Song CX. Evaluation of a new method of calculating breast tumor volume based on automated breast ultrasound. Front Oncol 2022; 12:895575. PMID: 36176389; PMCID: PMC9513394; DOI: 10.3389/fonc.2022.895575.
Abstract
Objective To evaluate the effectiveness and advantages of a new method for calculating breast tumor volume based on an automated breast ultrasound system (ABUS). Methods A total of 42 patients (18–70 years old) with breast lesions were selected for this study. The Invenia ABUS 2.0 (General Electric Company, USA) was used, with a probe frequency of 6–15 MHz. Adobe Photoshop CS6 software was used to calculate the pixel ratio of each ABUS image and to draw an outline of the tumor cross-section. The resulting area (in pixels) was multiplied by the pixel ratio to yield the area of the tumor cross-section. The Wilcoxon signed rank test and Bland-Altman plot were used to compare mean differences and mean values, respectively, between the two methods. Results There was no significant difference between the tumor volumes calculated by the pixel method and the traditional method (P>0.05). Repeated measurements of the same tumor volume were more consistent with the pixel method. Conclusion The new pixel method is feasible for measuring breast tumor volume and has good validity and measurement stability.
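The pixel method described above is simple arithmetic: the outlined region's pixel count times the physical size of one pixel. A short sketch, assuming the "pixel ratio" is the pixel side length in millimetres and that volume is estimated by summing slice areas times the inter-slice spacing (the abstract only specifies the per-slice area step, so the volume helper is an assumed extension):

```python
def cross_section_area_mm2(pixel_count, mm_per_pixel):
    """Area of an outlined cross-section: pixel count times the
    physical area of one pixel (mm_per_pixel is the pixel side length)."""
    return pixel_count * mm_per_pixel ** 2

def tumor_volume_mm3(slice_pixel_counts, mm_per_pixel, slice_interval_mm):
    """Summed slice areas times the inter-slice spacing (an assumed
    extension of the paper's per-slice area calculation)."""
    total_area = sum(cross_section_area_mm2(n, mm_per_pixel)
                     for n in slice_pixel_counts)
    return total_area * slice_interval_mm

# A tumour outlined on 3 consecutive slices, 0.5 mm pixels, 0.5 mm apart
print(cross_section_area_mm2(1500, 0.5))               # 375.0 (mm^2)
print(tumor_volume_mm3([1200, 1500, 1100], 0.5, 0.5))  # 475.0 (mm^3)
```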
Affiliation(s)
- Jing-Jing Ma
- Department of Internal Medicine, Xi’an Fifth Hospital, Xi’an, China
- Shan Meng
- Department of Hematology, The Second Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China
- Sha-Jie Dang
- Department of Anesthesia, Shaanxi Provincial Cancer Hospital, Affiliated to Xi’an Jiaotong University, Xi’an, China
- Jia-Zhong Wang
- Department of General Surgery, The Second Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China
- Quan Yuan
- Department of Ultrasound, Shaanxi Provincial Cancer Hospital, Affiliated to Xi’an Jiaotong University, Xi’an, China
- Qi Yang
- Department of Surgery, Shaanxi Provincial Cancer Hospital, Affiliated to Xi’an Jiaotong University, Xi’an, China
- Can-Xu Song
- Department of Ultrasound, Shaanxi Provincial Cancer Hospital, Affiliated to Xi’an Jiaotong University, Xi’an, China
- Correspondence: Can-Xu Song
7.
Wang Y, Yao Y. Breast lesion detection using an anchor-free network from ultrasound images with segmentation-based enhancement. Sci Rep 2022; 12:14720. PMID: 36042216; PMCID: PMC9428142; DOI: 10.1038/s41598-022-18747-y.
Abstract
The survival rate of breast cancer patients is closely related to the pathological stage of the cancer: the earlier the stage, the higher the survival rate. Breast ultrasound is a commonly used breast cancer screening and diagnosis method, with simple operation, no ionizing radiation, and real-time imaging. However, ultrasound also has the disadvantages of high noise, strong artifacts, and low contrast between tissue structures, which affect the effective screening of breast cancer. Therefore, we propose a deep learning based breast ultrasound detection system to assist doctors in the diagnosis of breast cancer. The system implements the automatic localization of breast cancer lesions and the diagnosis of benign and malignant lesions. The method consists of two steps: 1. contrast enhancement of breast ultrasound images using segmentation-based enhancement methods; 2. detection and classification of breast lesions with an anchor-free network. Our proposed method achieves a mean average precision (mAP) of 0.902 on the datasets used in our experiment. In detecting benign and malignant tumors, precision is 0.917 and 0.888, and recall is 0.980 and 0.963, respectively. Our proposed method outperforms other image enhancement methods and an anchor-based detection method. We propose a breast ultrasound image detection system for breast cancer detection. The system can locate and diagnose benign and malignant breast lesions. The test results on single and mixed datasets show that the proposed method has good performance.
Affiliation(s)
- Yu Wang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, USA
8.
Mask Branch Network: Weakly Supervised Branch Network with a Template Mask for Classifying Masses in 3D Automated Breast Ultrasound. Applied Sciences (Basel) 2022. DOI: 10.3390/app12136332.
Abstract
Automated breast ultrasound (ABUS) is being rapidly adopted for screening and diagnosing breast cancer. Breast masses, including cancers shown in ABUS scans, often appear as irregular hypoechoic areas that are hard to distinguish from background shadings. We propose a novel branch network architecture incorporating segmentation information of masses in the training process. The branch network is integrated into the neural network, providing a spatial attention effect. It boosts the performance of existing classifiers, helping them learn meaningful features around the target breast mass. For the segmentation information, we leverage existing radiology reports without additional labeling effort. The reports, generated during the medical image reading process, include the characteristics of breast masses, such as shape and orientation, from which a template mask can be created in a rule-based manner. Experimental results show that the proposed branch network with a template mask significantly improves the performance of existing classifiers. We also provide a qualitative interpretation of the proposed method by visualizing the attention effect on target objects.
9.
Wang Q, Chen H, Luo G, Li B, Shang H, Shao H, Sun S, Wang Z, Wang K, Cheng W. Performance of novel deep learning network with the incorporation of the automatic segmentation network for diagnosis of breast cancer in automated breast ultrasound. Eur Radiol 2022; 32:7163-7172. PMID: 35488916; DOI: 10.1007/s00330-022-08836-x.
Abstract
OBJECTIVE To develop a novel deep learning network (DLN) incorporating an automatic segmentation network (ASN) for morphological analysis, and to determine its performance in diagnosing breast cancer in automated breast ultrasound (ABUS). METHODS A total of 769 breast tumors were enrolled in this study and randomly divided into a training set and a test set at 600 vs. 169. The novel DLNs (ResNet34 v2, ResNet50 v2, ResNet101 v2) added a new ASN to the traditional ResNet networks and extracted morphological information of breast tumors. The accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), area under the receiver operating characteristic (ROC) curve (AUC), and average precision (AP) were calculated. The diagnostic performance of the novel DLNs was compared with that of two radiologists with different levels of experience. RESULTS The ResNet34 v2 model had higher specificity (76.81%) and PPV (82.22%) than the other two, the ResNet50 v2 model had higher accuracy (78.11%) and NPV (72.86%), and the ResNet101 v2 model had higher sensitivity (85.00%). According to the AUCs and APs, the novel ResNet101 v2 model produced the best result (AUC 0.85 and AP 0.90) compared with the remaining five DLNs. Compared with the novice radiologist, the novel DLNs performed better; the F1 score was increased from 0.77 to 0.78, 0.81, and 0.82 by the three novel DLNs. However, their diagnostic performance was worse than that of the experienced radiologist. CONCLUSIONS The novel DLNs performed better than traditional DLNs and may help novice radiologists improve their diagnostic performance for breast cancer in ABUS. KEY POINTS • A novel automatic segmentation network to extract morphological information was successfully developed and implemented with ResNet deep learning networks.
• The novel deep learning networks in our research performed better than the traditional deep learning networks in the diagnosis of breast cancer using ABUS images. • The novel deep learning networks in our research may be useful for novice radiologists to improve diagnostic performance.
Affiliation(s)
- Qiucheng Wang
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- He Chen
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Gongning Luo
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Bo Li
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Haitao Shang
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Hua Shao
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Shanshan Sun
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Zhongshuai Wang
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Kuanquan Wang
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Wen Cheng
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
10.
Luo X, Xu M, Tang G, Wang Y, Wang N, Ni D, Li X, Li AH. The lesion detection efficacy of deep learning on automatic breast ultrasound and factors affecting its efficacy: a pilot study. Br J Radiol 2022; 95:20210438. PMID: 34860574; PMCID: PMC8822545; DOI: 10.1259/bjr.20210438.
Abstract
OBJECTIVES The aim of this study was to investigate the detection efficacy of deep learning (DL) for automatic breast ultrasound (ABUS) and factors affecting its efficacy. METHODS Females who underwent ABUS and handheld ultrasound from May 2016 to June 2017 (N = 397) were enrolled and divided into training (n = 163 patients with breast cancer and 33 with benign lesions), test (n = 57) and control (n = 144) groups. A convolutional neural network was optimized to detect lesions in ABUS. The sensitivity and false positives (FPs) were evaluated and compared for different breast tissue compositions, lesion sizes, morphologies and echo patterns. RESULTS In the training set, with 688 lesion regions (LRs), the network achieved sensitivities of 93.8%, 97.2% and 100%, based on volume, lesion and patient, respectively, with 1.9 FPs per volume. In the test group with 247 LRs, the sensitivities were 92.7%, 94.5% and 96.5%, respectively, with 2.4 FPs per volume. The control group, with 900 volumes, showed 0.24 FPs per volume. The sensitivity was 98% for lesions > 1 cm3, but 87% for those ≤1 cm3 (p < 0.05). Similar sensitivities and FPs were observed for different breast tissue compositions (homogeneous, 97.5%, 2.1; heterogeneous, 93.6%, 2.1), lesion morphologies (mass, 96.3%, 2.1; non-mass, 95.8%, 2.0) and echo patterns (homogeneous, 96.1%, 2.1; heterogeneous 96.8%, 2.1). CONCLUSIONS DL had high detection sensitivity with a low FP but was affected by lesion size. ADVANCES IN KNOWLEDGE DL is technically feasible for the automatic detection of lesions in ABUS.
Affiliation(s)
- Yi Wang
- National Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China, and also with the Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen, China
- Na Wang
- National Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China, and also with the Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen, China
- Dong Ni
- National Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China, and also with the Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen, China
11.
Agarwal R, Yap MH, Hasan MK, Zwiggelaar R, Martí R. Deep Learning in Mammography Breast Cancer Detection. Artif Intell Med 2022. DOI: 10.1007/978-3-030-64573-1_157.
12.
Eroğlu Y, Yildirim M, Çinar A. Convolutional Neural Networks based classification of breast ultrasonography images by hybrid method with respect to benign, malignant, and normal using mRMR. Comput Biol Med 2021; 133:104407. PMID: 33901712; DOI: 10.1016/j.compbiomed.2021.104407.
Abstract
Early diagnosis of breast lesions and differentiation of malignant lesions from benign ones are important for the prognosis of breast cancer. In the diagnosis of this disease, ultrasound is an extremely important radiological imaging method because it enables biopsy as well as lesion characterization. Since ultrasonographic diagnosis depends on the expert, the knowledge level and experience of the user are very important. In addition, computer-aided systems can contribute substantially, as they can reduce the workload of radiologists and reinforce their knowledge and experience, particularly given the dense patient population in hospital conditions. In this paper, a hybrid CNN-based system is developed for classifying breast lesions as benign, malignant, or normal. Alexnet, MobilenetV2, and Resnet50 models are used as the base for the hybrid structure. The features extracted by these models are concatenated, increasing the number of features used. The most valuable of these features are then selected by the mRMR (Minimum Redundancy Maximum Relevance) feature selection method and classified with machine learning classifiers such as SVM and KNN. The highest accuracy, 95.6%, is obtained with the SVM classifier.
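The mRMR step named above greedily picks features that are relevant to the label but not redundant with features already chosen. A stdlib-only sketch on discrete toy features, with both relevance and redundancy measured by mutual information (an illustrative re-implementation, not the authors' code):

```python
from collections import Counter
from math import log

def mutual_info(a, b):
    """Mutual information (in nats) between two discrete sequences."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * log((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

def mrmr(features, labels, k):
    """Greedy mRMR: at each step pick the feature maximising relevance
    MI(f; labels) minus mean redundancy MI(f; g) over features already chosen."""
    chosen = []
    while len(chosen) < k:
        def score(i):
            relevance = mutual_info(features[i], labels)
            redundancy = (sum(mutual_info(features[i], features[j])
                              for j in chosen) / len(chosen)) if chosen else 0.0
            return relevance - redundancy
        best = max((i for i in range(len(features)) if i not in chosen), key=score)
        chosen.append(best)
    return chosen

# Toy data: feature 1 duplicates feature 0; feature 2 is weaker but
# independent of feature 0, so mRMR skips the duplicate.
labels = [0, 0, 0, 1, 1, 1]
feats = [[0, 0, 1, 1, 1, 1],   # relevant
         [0, 0, 1, 1, 1, 1],   # exact duplicate of feature 0
         [0, 1, 0, 1, 0, 1]]   # weakly relevant, independent of feature 0
print(mrmr(feats, labels, 2))  # [0, 2]
```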
Affiliation(s)
- Yeşim Eroğlu
- Department of Radiology, Firat University School of Medicine, Elazig, Turkey
- Ahmet Çinar
- Computer Engineering Department, Firat University, Elazig, Turkey
13.
Kim J, Kim HJ, Kim C, Kim WH. Artificial intelligence in breast ultrasonography. Ultrasonography 2020; 40:183-190. PMID: 33430577; PMCID: PMC7994743; DOI: 10.14366/usg.20117.
Abstract
Although breast ultrasonography is the mainstay modality for differentiating between benign and malignant breast masses, it has intrinsic problems with false positives and substantial interobserver variability. Artificial intelligence (AI), particularly with deep learning models, is expected to improve workflow efficiency and serve as a second opinion. AI is highly useful for performing three main clinical tasks in breast ultrasonography: detection (localization/segmentation), differential diagnosis (classification), and prognostication (prediction). This article provides a current overview of AI applications in breast ultrasonography, with a discussion of methodological considerations in the development of AI models and an up-to-date literature review of potential clinical applications.
Affiliation(s)
- Jaeil Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Korea
- Hye Jung Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Korea
- Chanho Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Korea
- Won Hwa Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Korea
14
Li Y, Wu W, Chen H, Cheng L, Wang S. 3D tumor detection in automated breast ultrasound using deep convolutional neural network. Med Phys 2020; 47:5669-5680. [PMID: 32970838 DOI: 10.1002/mp.14477] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Received: 03/02/2020] [Revised: 08/06/2020] [Accepted: 08/28/2020] [Indexed: 01/17/2023] Open
Affiliation(s)
- Yanfeng Li
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Wen Wu
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Houjin Chen
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Lin Cheng
- Center for Breast, People’s Hospital of Peking University, Beijing, China
- Shu Wang
- Center for Breast, People’s Hospital of Peking University, Beijing, China
15
Zhang X, Lin X, Zhang Z, Dong L, Sun X, Sun D, Yuan K. Artificial Intelligence Medical Ultrasound Equipment: Application of Breast Lesions Detection. Ultrasonic Imaging 2020; 42:191-202. [PMID: 32546066 DOI: 10.1177/0161734620928453] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Indexed: 06/11/2023]
Abstract
Breast cancer ranks first among cancers affecting women's health. Our work aims to bring intelligence to medical ultrasound equipment with limited computational capability, used here for the assisted detection of breast lesions. We embed a computationally demanding deep learning algorithm into such equipment using two techniques. (1) Lightweight neural network: given the limited computational capability of ultrasound equipment, a lightweight neural network is designed that greatly reduces the amount of calculation, and knowledge distillation is used to train this low-precision network with the help of a high-precision network. (2) Asynchronous calculation: ultrasound images are processed in groups of four frames; the first frame of each group is used as the network input, and its result is fused with each of the fourth to seventh frames. The proposed lightweight network requires about 30 GFLOPs per frame, roughly 1/6 of the computation of the large high-precision network. After being trained from scratch with knowledge distillation, the detection performance of the lightweight network (sensitivity = 89.25%, specificity = 96.33%, average precision [AP] = 0.85) is close to that of the high-precision network (sensitivity = 98.3%, specificity = 88.33%, AP = 0.91). With asynchronous calculation, we achieve real-time automatic detection at 24 fps (frames per second) on the ultrasound equipment. Our work thus proposes a method for bringing intelligence to low-computation-power ultrasonic equipment and successfully achieves real-time assisted detection of breast lesions.
The significance of the study is as follows: (1) the proposed method is of practical value in assisting doctors to detect breast lesions; (2) it provides practical and theoretical support for the development and engineering of intelligent equipment based on artificial intelligence algorithms.
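The abstract above does not spell out its distillation loss; a minimal NumPy sketch of the standard Hinton-style formulation — assumed here, the paper may use a different variant, and the temperature T and weighting alpha are illustrative — looks like this:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft part: cross-entropy between the temperature-softened teacher
    # and student distributions, scaled by T^2 so gradient magnitudes
    # stay comparable across temperatures.
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    soft = -(p_teacher * log_p_student).sum(axis=-1).mean() * T * T
    # Hard part: ordinary cross-entropy against the ground-truth labels.
    log_p = np.log(softmax(student_logits) + 1e-12)
    hard = -log_p[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1 - alpha) * hard

teacher = np.array([[6.0, 0.0, 0.0], [0.0, 6.0, 0.0]])
labels = np.array([0, 1])
loss_match = distillation_loss(teacher.copy(), teacher, labels)  # student mimics teacher
loss_far = distillation_loss(-teacher, teacher, labels)          # student contradicts teacher
```

A student whose logits match the teacher's incurs a much smaller loss than one that contradicts it, which is the signal that lets the lightweight network inherit the high-precision network's behavior.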
Affiliation(s)
- Xuesheng Zhang
- Graduate School at Shenzhen, Tsinghua University, Shenzhen, China
- Xiaona Lin
- Department of Ultrasound, Shenzhen Hospital of Peking University, Shenzhen, China
- Zihao Zhang
- Graduate School at Shenzhen, Tsinghua University, Shenzhen, China
- Licong Dong
- Department of Ultrasound, Shenzhen Hospital of Peking University, Shenzhen, China
- Xinlong Sun
- Graduate School at Shenzhen, Tsinghua University, Shenzhen, China
- Desheng Sun
- Department of Ultrasound, Shenzhen Hospital of Peking University, Shenzhen, China
- Kehong Yuan
- Graduate School at Shenzhen, Tsinghua University, Shenzhen, China