51. Shareef B, Vakanski A, Freer PE, Xian M. ESTAN: Enhanced Small Tumor-Aware Network for Breast Ultrasound Image Segmentation. Healthcare (Basel) 2022; 10:2262. [PMID: 36421586] [PMCID: PMC9690845] [DOI: 10.3390/healthcare10112262]
Abstract
Breast tumor segmentation is a critical task in computer-aided diagnosis (CAD) systems for breast cancer detection because accurate tumor size, shape, and location are important for further tumor quantification and classification. However, segmenting small tumors in ultrasound images is challenging due to speckle noise, varying tumor shapes and sizes among patients, and the existence of tumor-like image regions. Recently, deep learning-based approaches have achieved great success in biomedical image analysis, but current state-of-the-art approaches perform poorly when segmenting small breast tumors. In this paper, we propose a novel deep neural network architecture, the Enhanced Small Tumor-Aware Network (ESTAN), to accurately and robustly segment breast tumors. ESTAN introduces two encoders to extract and fuse image context information at different scales, and utilizes row-column-wise kernels to adapt to the breast anatomy. We compare ESTAN and nine state-of-the-art approaches using seven quantitative metrics on three public breast ultrasound datasets: BUSIS, Dataset B, and BUSI. The results demonstrate that the proposed approach achieves the best overall performance and outperforms all other approaches on small tumor segmentation. Specifically, the Dice similarity coefficient (DSC) of ESTAN on the three datasets is 0.92, 0.82, and 0.78, respectively, and the DSC of ESTAN on the small tumors of the three datasets is 0.89, 0.80, and 0.81, respectively.
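For reference, the Dice similarity coefficient (DSC) quoted in these results has a direct computation; the following is a minimal NumPy sketch (an illustration, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: a 4x4 prediction that overlaps 3 of 4 tumor pixels.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # 0.857
```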
Affiliation(s)
- Bryar Shareef
- Department of Computer Science, University of Idaho, Idaho Falls, ID 83402, USA
- Aleksandar Vakanski
- Department of Industrial Technology, University of Idaho, Idaho Falls, ID 83402, USA
- Phoebe E. Freer
- Department of Radiology and Imaging Sciences, University of Utah School of Medicine, Salt Lake City, UT 84132, USA
- Min Xian
- Department of Computer Science, University of Idaho, Idaho Falls, ID 83402, USA
52. Memon SA, Javed Q, Kim WG, Mahmood Z, Khan U, Shahzad M. A Machine-Learning-Based Robust Classification Method for PV Panel Faults. Sensors (Basel) 2022; 22:8515. [PMID: 36366213] [PMCID: PMC9655523] [DOI: 10.3390/s22218515]
Abstract
Renewable energy resources have gained considerable attention in recent years due to their efficiency and economic benefits, and their share of total energy use continues to grow. Photovoltaic (PV) and wind energy generation are the least expensive new energy sources in most countries, and renewable energy technologies contribute significantly to climate mitigation. Apart from these advantages, renewable energy sources, particularly solar energy, have drawbacks, such as a restricted energy supply, dependence on weather conditions, and susceptibility to several kinds of faults that cause high power losses. Local PV plants are usually small, so faults and defects are easy to trace; in a grid-connected PV system with many PV cells, however, locating a fault is difficult. In view of these facts, this paper presents an intelligent model to detect faults in PV panels. The proposed model utilizes a Convolutional Neural Network (CNN) trained on historical data. The dataset was preprocessed before being fed to the CNN and contains different parameters, such as current, voltage, temperature, and irradiance, for five different classes. The simulation results showed that the proposed CNN model achieved a training accuracy of 97.64% and a testing accuracy of 95.20%, which are much better than previous research on this dataset.
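The classifier described above maps PV telemetry (current, voltage, temperature, irradiance) to five fault classes. A minimal PyTorch sketch of such a 1-D CNN follows; the layer sizes and window length are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Hypothetical 1-D CNN over windows of PV telemetry.
class PVFaultCNN(nn.Module):
    def __init__(self, n_channels: int = 4, n_classes: int = 5, window: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (window // 4), n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 4, window) -- current, voltage, temperature, irradiance
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = PVFaultCNN()
logits = model(torch.randn(8, 4, 64))  # batch of 8 telemetry windows
print(logits.shape)                    # torch.Size([8, 5])
```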
Affiliation(s)
- Sufyan Ali Memon
- Department of Defense Systems Engineering, Sejong University, Seoul 05006, Korea
- Qaiser Javed
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Wan-Gu Kim
- Department of Defense Systems Engineering, Sejong University, Seoul 05006, Korea
- Zahid Mahmood
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Uzair Khan
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Mohsin Shahzad
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
53. Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022; 14:5334. [PMID: 36358753] [PMCID: PMC9655692] [DOI: 10.3390/cancers14215334]
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Thus, early detection is a crucial step to control and cure breast cancer, and it can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed at an early stage of cancer, all of whom survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze images produced by these methods manually, which increases the risk of wrong decisions in cancer detection. Thus, new automatic methods for analyzing all kinds of breast screening images are required to assist radiologists in interpreting them. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing patients' chances of survival. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report the available datasets for the breast cancer imaging modalities, which are important in developing AI-based algorithms and training deep learning models. In conclusion, this review aims to provide a comprehensive resource for researchers working in breast cancer imaging analysis.
Affiliation(s)
- Mohammad Madani
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Mohammad Mahdi Behzadi
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
54. Jones MA, Islam W, Faiz R, Chen X, Zheng B. Applying artificial intelligence technology to assist with breast cancer diagnosis and prognosis prediction. Front Oncol 2022; 12:980793. [PMID: 36119479] [PMCID: PMC9471147] [DOI: 10.3389/fonc.2022.980793]
Abstract
Breast cancer remains the most frequently diagnosed cancer in women. Advances in medical imaging modalities and technologies have greatly aided the early detection of breast cancer and the decline of patient mortality rates. However, reading and interpreting breast images remains difficult due to the high heterogeneity of breast tumors and fibro-glandular tissue, which results in lower cancer detection sensitivity and specificity and large inter-reader variability. To help overcome these clinical challenges, researchers have made great efforts to develop computer-aided detection and/or diagnosis (CAD) schemes for breast images to provide radiologists with decision-making support tools. Recent rapid advances in high-throughput data analysis methods and artificial intelligence (AI) technologies, particularly radiomics and deep learning techniques, have led to an exponential increase in the development of new AI-based models of breast images covering a broad range of application topics. In this review paper, we focus on recent advances in understanding the association between radiomics features and the tumor microenvironment, and on progress in developing new AI-based quantitative image feature analysis models in three realms of breast cancer: predicting breast cancer risk, the likelihood of tumor malignancy, and tumor response to treatment. The outlook and three major challenges of applying new AI-based models of breast images to clinical practice are also discussed. We conclude that although the development of new AI-based models of breast images has achieved significant progress and promising results, several obstacles to applying these models to clinical practice remain; therefore, more research effort is needed in future studies.
Affiliation(s)
- Meredith A. Jones
- School of Biomedical Engineering, University of Oklahoma, Norman, OK, United States
- Warid Islam
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, United States
- Rozwat Faiz
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, United States
- Xuxin Chen
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, United States
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, United States
55.
Abstract
OBJECTIVES This meta-analysis aimed to evaluate the value of the ultrasonic S-Detect mode for the evaluation of thyroid nodules. METHODS We searched PubMed, the Cochrane Library, and Chinese biomedical databases from inception to August 31, 2021. The meta-analysis was conducted using STATA version 14.0 and Meta-Disc version 1.4 software. We calculated summary statistics for sensitivity (Sen) and specificity (Spe), the summary receiver operating characteristic (SROC) curve, and the area under the curve (AUC), and compared the AUC between the ultrasonic S-Detect mode and the thyroid imaging report and data system (TI-RADS) for the diagnosis of thyroid nodules. As a systematic review summarizing the results of previous studies, this study did not require informed consent from patients or approval by an ethics review committee. RESULTS Fifteen studies that met all inclusion criteria were included in this meta-analysis. A total of 924 malignant and 1228 benign thyroid nodules were assessed; all nodules were histologically confirmed after examination. The pooled Sen and Spe of TI-RADS were 0.89 (95% confidence interval [CI] = 0.85-0.91) and 0.85 (95% CI = 0.78-0.90), respectively; the pooled Sen and Spe of S-Detect were 0.88 (95% CI = 0.85-0.90) and 0.73 (95% CI = 0.63-0.81), respectively. The areas under the SROC curves of TI-RADS and S-Detect were 0.9370 (standard error [SE] = 0.0110) and 0.9128 (SE = 0.0147), respectively, between which there was no significant difference (Z = 1.318; SE = 0.0184; P = .1875). We found no evidence of publication bias (t = 0.36, P = .72). CONCLUSIONS Our meta-analysis indicates that the ultrasonic S-Detect mode may have high diagnostic accuracy and certain clinical application value, especially for young doctors.
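The AUC comparison reported above follows the usual normal approximation Z = (AUC1 - AUC2) / SE. A short sketch reproduces the statistic from the published values; the slight difference from the reported Z = 1.318 is consistent with rounding of the published AUCs:

```python
from scipy.stats import norm

# Published values: SROC AUCs and the standard error of their difference.
auc_tirads, auc_sdetect, se_diff = 0.9370, 0.9128, 0.0184

z = (auc_tirads - auc_sdetect) / se_diff
p = 2 * norm.sf(abs(z))  # two-sided p-value under the normal approximation
print(f"Z = {z:.3f}, p = {p:.4f}")  # Z close to 1.318, p close to 0.19
```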
Affiliation(s)
- Jinyi Bian
- Ultrasound Department, The First Affiliated Hospital of Dalian Medical University, Dalian, China
- Ruyue Wang
- Dalian Medical University, Dalian, China
- Mingxin Lin
- Ultrasound Department, The First Affiliated Hospital of Dalian Medical University, Dalian, China
- *Correspondence: Mingxin Lin, Ultrasound Department, The First Affiliated Hospital of Dalian Medical University, No. 222 Zhongshan Road, Xigang District, Dalian City, Liaoning Province 116011, China (e-mail: )
56. Interpretable Lightweight Ensemble Classification of Normal versus Leukemic Cells. Computers 2022. [DOI: 10.3390/computers11080125]
Abstract
The lymphocyte classification problem is usually solved by deep learning approaches based on convolutional neural networks with multiple layers. However, these techniques require specific hardware and long training times. This work proposes a lightweight image classification system capable of discriminating between healthy and cancerous lymphocytes of leukemia patients using image processing and feature-based machine learning techniques that require less training time and can run on a standard CPU. The features comprise statistical, morphological, textural, frequency, and contour features extracted from each image and are used to train a set of lightweight algorithms that classify the lymphocytes as malignant or healthy. After training, these classifiers were combined into an ensemble classifier to improve the results. The proposed method has a lower computational cost than most deep learning approaches in terms of training time and network size. Our results contribute to leukemia classification systems, showing that high performance can be achieved by classifiers trained with a rich set of features. This study extends previous work by combining simple classifiers into a single ensemble solution. With principal component analysis, the number of features used can be reduced while maintaining high accuracy.
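A compact sketch of the kind of pipeline described above, combining PCA with a soft-voting ensemble of lightweight classifiers; the particular estimators and component count are illustrative assumptions, and synthetic data stands in for the extracted lymphocyte features:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Stand-in feature matrix; in the paper the columns would be the
# statistical/morphological/textural/frequency/contour features.
X, y = make_classification(n_samples=300, n_features=40, random_state=0)

ensemble = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),                # reduce features, keep accuracy
    VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("knn", KNeighborsClassifier()),
                    ("tree", DecisionTreeClassifier(max_depth=5))],
        voting="soft",                   # combine class probabilities
    ),
)
ensemble.fit(X, y)
print(ensemble.score(X, y))
```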
57. Wang W, Jiang R, Cui N, Li Q, Yuan F, Xiao Z. Semi-supervised vision transformer with adaptive token sampling for breast cancer classification. Front Pharmacol 2022; 13:929755. [PMID: 35935827] [PMCID: PMC9353650] [DOI: 10.3389/fphar.2022.929755]
Abstract
Various imaging techniques combined with machine learning (ML) models have been used to build computer-aided diagnosis (CAD) systems for breast cancer (BC) detection and classification. The rise of deep learning models in recent years, represented by convolutional neural network (CNN) models, has pushed the accuracy of ML-based CAD systems to a level comparable to human experts. Existing studies have explored the use of a wide spectrum of CNN models for BC detection, and supervised learning has been the mainstream. In this study, we propose a semi-supervised learning framework based on the Vision Transformer (ViT). The ViT has been validated to outperform CNN models on numerous classification benchmarks, but its application in BC detection has been rare. The proposed method offers a custom semi-supervised learning procedure that unifies supervised and consistency training to enhance the robustness of the model. In addition, the method uses an adaptive token sampling technique that strategically samples the most significant tokens from the input image, leading to an effective performance gain. We validate our method on two datasets with ultrasound and histopathology images. Results demonstrate that our method consistently outperforms the CNN baselines on both learning tasks. The code repository of the project is available at https://github.com/FeiYee/Breast-area-TWO.
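The consistency-training idea described above can be sketched as follows: predictions on a weakly augmented view of an unlabeled image serve as targets for a strongly augmented view. This minimal PyTorch sketch omits the ViT backbone and adaptive token sampling; `model` and the augmented batches are placeholders:

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_lab, y_lab, x_weak, x_strong, lam=1.0):
    """Supervised cross-entropy plus a consistency term on unlabeled data."""
    sup = F.cross_entropy(model(x_lab), y_lab)           # supervised term
    with torch.no_grad():
        target = F.softmax(model(x_weak), dim=1)         # pseudo-target
    cons = F.kl_div(F.log_softmax(model(x_strong), dim=1),
                    target, reduction="batchmean")       # consistency term
    return sup + lam * cons

# Toy usage with a linear stand-in model and random image batches.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 2))
loss = semi_supervised_loss(model,
                            torch.randn(4, 3, 32, 32), torch.tensor([0, 1, 0, 1]),
                            torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32))
loss.backward()
```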
Affiliation(s)
- Wei Wang
- Department of Breast Surgery, Hubei Provincial Clinical Research Center for Breast Cancer, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Ran Jiang
- Department of Thyroid and Breast Surgery, Maternal and Child Health Hospital of Hubei Province, Wuhan, Hubei, China
- Ning Cui
- Department of Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Qian Li
- Department of Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Feng Yuan
- Department of Breast Surgery, Hubei Provincial Clinical Research Center for Breast Cancer, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Zhifeng Xiao
- School of Engineering, Penn State Erie, The Behrend College, Erie, PA, United States
58. Ding W, Wang J, Zhou W, Zhou S, Chang C, Shi J. Joint Localization and Classification of Breast Cancer in B-Mode Ultrasound Imaging via Collaborative Learning with Elastography. IEEE J Biomed Health Inform 2022; 26:4474-4485. [PMID: 35763467] [DOI: 10.1109/jbhi.2022.3186933]
Abstract
Convolutional neural networks (CNNs) have been successfully applied to computer-aided ultrasound diagnosis of breast cancer, and several CNN-based methods have been proposed. However, most of them treat tumor localization and classification as two separate steps rather than performing them simultaneously, and they suffer from the limited diagnostic information in B-mode ultrasound (BUS) images. In this study, we develop a novel network, ResNet-GAP, that incorporates both localization and classification into a unified procedure. To enhance the performance of ResNet-GAP, we leverage stiffness information from the elastography ultrasound (EUS) modality by collaborative learning in the training stage. Specifically, a dual-channel ResNet-GAP is developed, one channel for BUS and the other for EUS. In each channel, multiple class activation maps (CAMs) are generated using a series of convolutional kernels of different sizes. The multi-scale consistency of the CAMs in both channels is further considered in network optimization. Experiments on 264 patients show that the newly developed ResNet-GAP achieves an accuracy of 88.6%, a sensitivity of 95.3%, a specificity of 84.6%, and an AUC of 93.6% on the classification task, and a 1.0NLF of 87.9% on the localization task, which is better than some state-of-the-art approaches.
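A class activation map from a GAP-based classifier is computed by weighting the final feature maps with the classifier weights of the predicted class. The sketch below shows a single-channel, single-kernel-size version with a stock ResNet-18; the paper's dual-channel, multi-scale design is not reproduced:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feature_maps = {}
model.layer4.register_forward_hook(
    lambda m, i, o: feature_maps.update(last=o))  # capture (B, 512, 7, 7)

x = torch.randn(1, 3, 224, 224)      # placeholder B-mode image tensor
logits = model(x)
cls = logits.argmax(dim=1).item()

w = model.fc.weight[cls]             # GAP classifier weights, shape (512,)
cam = torch.einsum("c,chw->hw", w, feature_maps["last"][0])
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-7)  # normalize to [0, 1]
print(cam.shape)                     # 7x7 map; upsample to localize the tumor
```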
59. Song SH, Han JH, Kim KS, Cho YA, Youn HJ, Kim YI, Kweon J. Deep-learning segmentation of ultrasound images for automated calculation of the hydronephrosis area to renal parenchyma ratio. Investig Clin Urol 2022; 63:455-463. [PMID: 35670007] [PMCID: PMC9262488] [DOI: 10.4111/icu.20220085]
Abstract
Purpose We investigated the feasibility of measuring the hydronephrosis area to renal parenchyma (HARP) ratio from ultrasound images using a deep-learning network. Materials and Methods The coronal renal ultrasound images of 195 pediatric and adolescent patients who underwent pyeloplasty to repair ureteropelvic junction obstruction were retrospectively reviewed. After excluding cases without a representative longitudinal renal image, a dataset of 168 images was used for deep-learning segmentation. Ten novel networks, such as combinations of DeepLabV3+ and UNet++, were assessed for their ability to calculate hydronephrosis and kidney areas, and an ensemble method was applied for further improvement. Four-fold cross-validation was conducted by dividing the image set into four subsets, and the segmentation performance of the deep-learning networks was evaluated using sensitivity, specificity, and the Dice similarity coefficient by comparison with manually traced areas. Results All 10 networks and the ensemble methods showed good visual correlation with the manually traced kidney and hydronephrosis areas. The Dice similarity coefficient of the 10-model ensemble was 0.9108 on average, and the best 5-model ensemble had an average Dice similarity coefficient of 0.9113. We included patients with severe hydronephrosis who underwent renal ultrasonography at a single institution; thus, external validation of our algorithm in a heterogeneous ultrasonography examination setup with a diverse set of instruments is recommended. Conclusions Deep-learning-based calculation of the HARP ratio is feasible and showed high accuracy for grading the severity of hydronephrosis using ultrasonography. This algorithm can help physicians make more accurate and reproducible diagnoses of hydronephrosis using ultrasonography.
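The HARP ratio itself is a simple post-processing step on the two predicted masks. A minimal sketch follows, under the assumption that the parenchyma area is the kidney mask minus the hydronephrosis mask (our reading of the ratio, not the authors' code):

```python
import numpy as np

def harp_ratio(hydro_mask: np.ndarray, kidney_mask: np.ndarray) -> float:
    """Hydronephrosis area to renal parenchyma (HARP) ratio from binary masks.

    Assumes parenchyma = kidney region minus the hydronephrotic region.
    """
    hydro = hydro_mask.astype(bool)
    kidney = kidney_mask.astype(bool)
    parenchyma = np.logical_and(kidney, ~hydro).sum()
    return hydro.sum() / max(parenchyma, 1)

# Toy 100-pixel kidney with a 30-pixel hydronephrosis area.
kidney = np.ones((10, 10), dtype=bool)
hydro = np.zeros((10, 10), dtype=bool)
hydro[:3, :] = True
print(harp_ratio(hydro, kidney))  # 30 / 70, roughly 0.43
```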
Affiliation(s)
- Sang Hoon Song
- Department of Urology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Jae Hyeon Han
- Department of Urology, Korea University Ansan Hospital, Korea University College of Medicine, Seoul, Korea
- Kun Suk Kim
- Department of Urology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Young Ah Cho
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Hye Jung Youn
- Department of Convergence Medicine, Asan Medical Center, Seoul, Korea
- Young In Kim
- Department of Medical Science, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, Seoul, Korea
- Jihoon Kweon
- Department of Convergence Medicine, Asan Medical Center, Seoul, Korea
60. A gated convolutional neural network for classification of breast lesions in ultrasound images. Soft Comput 2022. [DOI: 10.1007/s00500-022-07024-9]
61. Ukwuoma CC, Urama GC, Qin Z, Bin Heyat MB, Mohammed Khan H, Akhtar F, Masadeh MS, Ibegbulam CS, Delali FL, AlShorman O. Boosting Breast Cancer Classification from Microscopic Images Using Attention Mechanism. 2022 International Conference on Decision Aid Sciences and Applications (DASA) 2022. [DOI: 10.1109/dasa54658.2022.9765013]
Affiliation(s)
- Chiagoziem C. Ukwuoma
- University of Electronic Science and Technology of China, School of Information and Software Engineering, Chengdu, Sichuan, China
- Gilbert C. Urama
- University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu, Sichuan, China
- Zhiguang Qin
- University of Electronic Science and Technology of China, School of Information and Software Engineering, Chengdu, Sichuan, China
- Md Belal Bin Heyat
- Sichuan University, West China Hospital, Department of Orthopedics Surgery, Chengdu, Sichuan, China
- Haider Mohammed Khan
- University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu, Sichuan, China
- Faijan Akhtar
- University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu, Sichuan, China
- Mahmoud S. Masadeh
- Yarmouk University, Hijjawi Faculty for Engineering, Computer Engineering Department, Irbid, Jordan
- Chukwuemeka S. Ibegbulam
- Federal University of Technology, Department of Polymer and Textile Engineering, Owerri, Imo State, Nigeria
- Fiasam Linda Delali
- University of Electronic Science and Technology of China, School of Information and Software Engineering, Chengdu, Sichuan, China
- Omar AlShorman
- Najran University, Faculty of Engineering and AlShrouk Trading Company, Najran, KSA
62. Wang H, Hu Y, Lu Y, Zhou J, Guo Y. The uncertainty of boundary can improve the classification accuracy of BI-RADS 4A ultrasound image. Med Phys 2022; 49:3314-3324. [PMID: 35261034] [DOI: 10.1002/mp.15590]
Abstract
PURPOSE The Breast Imaging-Reporting and Data System (BI-RADS) for ultrasound imaging provides a widely used reporting schema for breast imaging. Previous studies have shown that in ultrasound imaging, 90% of BI-RADS 4A tumors are benign lesions after biopsy, so unnecessary biopsy procedures could be avoided by accurate classification of BI-RADS 4A tumors. However, the classification task is challenging and has not been fully investigated by existing studies: for BI-RADS 4A tumors, intra-class appearance is highly variable while inter-class characteristics are broadly similar. Discriminative features need to be found to improve the classification accuracy of BI-RADS 4A tumors. METHODS In this study, we designed the network around the clinical features of BI-RADS 4A tumors to improve its discrimination ability. Boundary information is embedded into the input of the network using uncertainty. A fine-grained data augmentation method is used to find discriminative features in tumor information embedded with boundary information. Two mathematical methods, voting-based and variance-based, are used to define the uncertainty of the boundary, and the differences between these two definitions are compared in a classification network. RESULTS The dataset used to evaluate our method had 1155 2D gray-scale images, each representing a unique BI-RADS 4A tumor. Among them, 248 tumors were proven malignant by biopsy, and the remaining 907 were benign. A weakly supervised data augmentation network (WS-DAN) was used as the backbone classification network, which showed competitive performance in finding discriminative features. Using the auxiliary input of uncertain boundaries defined by the voting method, the area under the curve (AUC) value of our method was 0.8347 (sensitivity = 0.7774, specificity = 0.7459). The AUC value with variance-based uncertainty was 0.7789. The voting-based uncertainty was higher than the baseline (AUC = 0.803), which uses only the original image as input. Compared with classic classification networks, our method showed a significant improvement (p < 0.01). CONCLUSIONS Using the uncertain boundaries defined by the voting method as auxiliary information, we obtained better performance in the classification of BI-RADS 4A ultrasound images, while variance-based uncertain boundaries did not improve classification performance. Additionally, the fine-grained network helped find discriminative features compared with commonly used classification networks.
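One plausible reading of the voting-based boundary uncertainty is a per-pixel vote across several candidate boundary masks, with uncertainty peaking where the vote splits evenly. The sketch below illustrates that reading only; the paper's exact mathematical definition may differ:

```python
import numpy as np

def voting_uncertainty(masks: np.ndarray) -> np.ndarray:
    """masks: (n_voters, H, W) binary masks -> (H, W) uncertainty in [0, 1].

    Uncertainty is 1 at a 50/50 split among voters and 0 at full consensus.
    """
    vote = masks.mean(axis=0)               # fraction of voters marking "tumor"
    return 1.0 - 2.0 * np.abs(vote - 0.5)

# Five random candidate masks standing in for boundary annotations.
rng = np.random.default_rng(0)
masks = (rng.random((5, 64, 64)) > 0.5).astype(float)
u = voting_uncertainty(masks)
print(u.min(), u.max())
```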
Affiliation(s)
- Huayu Wang
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, 510275, P.R. China
- Yixin Hu
- Department of Ultrasound, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, and Collaborative Innovation Center for Cancer Medicine, No. 651 Dongfeng Road East, Guangzhou, 510060, P.R. China
- Yao Lu
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, 510275, P.R. China
- Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou, 510275, P.R. China
- Jianhua Zhou
- Department of Ultrasound, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, and Collaborative Innovation Center for Cancer Medicine, No. 651 Dongfeng Road East, Guangzhou, 510060, P.R. China
- Yongze Guo
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, 510275, P.R. China
63. Zhao C, Xiao M, Ma L, Ye X, Deng J, Cui L, Guo F, Wu M, Luo B, Chen Q, Chen W, Guo J, Li Q, Zhang Q, Li J, Jiang Y, Zhu Q. Enhancing Performance of Breast Ultrasound in Opportunistic Screening Women by a Deep Learning-Based System: A Multicenter Prospective Study. Front Oncol 2022; 12:804632. [PMID: 35223484] [PMCID: PMC8867611] [DOI: 10.3389/fonc.2022.804632]
Abstract
PURPOSE To validate the feasibility of S-Detect, an ultrasound computer-aided diagnosis (CAD) system using deep learning, in enhancing the diagnostic performance of breast ultrasound (US) for patients with opportunistic screening-detected breast lesions. METHODS Nine medical centers throughout China participated in this prospective study. Asymptomatic patients with US-detected breast masses were enrolled and subsequently received conventional US, S-Detect, and strain elastography. The final pathological results served as the gold standard for classifying breast masses. The diagnostic performances of the three methods and of the combination of S-Detect and elastography were evaluated and compared, including sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). We also compared the diagnostic performance of S-Detect across study sites. RESULTS A total of 757 patients were enrolled, including 460 benign and 297 malignant cases. S-Detect exhibited significantly higher AUC and specificity than conventional US (AUC, S-Detect 0.83 [0.80-0.85] vs. US 0.74 [0.70-0.77], p < 0.0001; specificity, S-Detect 74.35% [70.10%-78.28%] vs. US 54.13% [51.42%-60.29%], p < 0.0001), with no decrease in sensitivity. Compared with S-Detect alone, the AUC value was significantly enhanced after combining elastography and S-Detect (0.87 [0.84-0.90]), without compromising specificity (73.93% [68.60%-78.78%]). Significant differences in S-Detect's performance were also observed across study sites (AUC of S-Detect in Groups 1-4: 0.89 [0.84-0.93], 0.84 [0.77-0.89], 0.85 [0.76-0.92], 0.75 [0.69-0.80]; p [1 vs. 4] < 0.0001, p [2 vs. 4] = 0.0165, p [3 vs. 4] = 0.0157). CONCLUSIONS Compared with conventional US, S-Detect presented higher overall accuracy and specificity, and combining S-Detect with strain elastography enhanced performance further. The performance of S-Detect also varied among centers.
Affiliation(s)
- Chenyang Zhao
- Department of Ultrasound, Chinese Academy of Medical Sciences and Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Mengsu Xiao
- Department of Ultrasound, Chinese Academy of Medical Sciences and Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Li Ma
- Department of Ultrasound, Chinese Academy of Medical Sciences and Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xinhua Ye
- Department of Ultrasound, First Affiliated Hospital, Nanjing Medical University, Nanjing, China
- Jing Deng
- Department of Ultrasound, First Affiliated Hospital, Nanjing Medical University, Nanjing, China
- Ligang Cui
- Department of Ultrasound, Peking University Third Hospital, Beijing, China
- Fajin Guo
- Department of Ultrasound, Beijing Hospital, Beijing, China
- Min Wu
- Department of Ultrasound, Nanjing Drum Tower Hospital, Nanjing, China
- Baoming Luo
- Department of Ultrasound, Sun Yat-sen Memorial Hospital, Guangzhou, China
- Qin Chen
- Department of Ultrasound, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, China
- Wu Chen
- Department of Ultrasound, First Hospital of Shanxi Medical University, Taiyuan, China
- Jun Guo
- Department of Ultrasound, Aero Space Central Hospital, Beijing, China
- Qian Li
- Department of Ultrasound, Henan Provincial Cancer Hospital, Zhengzhou, China
- Qing Zhang
- Department of Ultrasound, Chinese Academy of Medical Sciences and Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianchu Li
- Department of Ultrasound, Chinese Academy of Medical Sciences and Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yuxin Jiang
- Department of Ultrasound, Chinese Academy of Medical Sciences and Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Qingli Zhu
- Department of Ultrasound, Chinese Academy of Medical Sciences and Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
64. Liu H, Cui G, Luo Y, Guo Y, Zhao L, Wang Y, Subasi A, Dogan S, Tuncer T. Artificial Intelligence-Based Breast Cancer Diagnosis Using Ultrasound Images and Grid-Based Deep Feature Generator. Int J Gen Med 2022; 15:2271-2282. [PMID: 35256855] [PMCID: PMC8898057] [DOI: 10.2147/ijgm.s347491]
Abstract
Purpose Breast cancer is a prominent cancer type with high mortality, and early detection could improve clinical outcomes. Ultrasonography is a digital imaging technique used to differentiate benign and malignant tumors, and several artificial intelligence techniques have been suggested in the literature for breast cancer detection using breast ultrasonography (BUS). In particular, deep learning methods have been applied to biomedical images to achieve high classification performance. Patients and Methods This work presents a new deep feature generation technique for breast cancer detection using BUS images. Sixteen widely known pre-trained CNN models are used as feature generators in this framework. In the feature generation phase, the input image is divided into rows and columns, and the deep feature generators (pre-trained models) are applied to each row and column; the method is therefore called a grid-based deep feature generator. The proposed generator calculates the error value of each deep feature generator and selects the best three feature vectors as the final feature vector. In the feature selection phase, iterative neighborhood component analysis (INCA) chooses 980 features as the optimal number of features. Finally, these features are classified using a deep neural network (DNN). Results The developed grid-based deep feature generation image classification model reached 97.18% classification accuracy on the ultrasonic images for three classes, namely malignant, benign, and normal. Conclusion The findings indicate that the proposed grid deep feature generator with INCA-based feature selection successfully classified breast ultrasonic images.
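A sketch of the grid idea: the image is cut into row and column strips and each strip is passed through a pre-trained CNN, with the pooled outputs concatenated. One ResNet-18 backbone stands in for the paper's 16 models, and the error-based model selection and INCA step are omitted:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

backbone = resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()        # expose the pooled 512-d features
backbone.eval()

def grid_features(img: torch.Tensor, n: int = 4) -> torch.Tensor:
    """img: (3, H, W) -> concatenated features of n row and n column strips."""
    _, h, w = img.shape
    strips = [img[:, i * h // n:(i + 1) * h // n, :] for i in range(n)]   # rows
    strips += [img[:, :, j * w // n:(j + 1) * w // n] for j in range(n)]  # columns
    feats = []
    with torch.no_grad():
        for s in strips:
            s = F.interpolate(s.unsqueeze(0), size=(224, 224), mode="bilinear")
            feats.append(backbone(s).squeeze(0))
    return torch.cat(feats)              # (2 * n * 512,)

print(grid_features(torch.rand(3, 256, 256)).shape)  # torch.Size([4096])
```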
Affiliation(s)
- Haixia Liu
- Department of Ultrasound, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Guozhong Cui
- Department of Surgical Oncology, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yi Luo
- Medical Statistics Room, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yajie Guo
- Department of Ultrasound, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Lianli Zhao
- Department of Internal Medicine Teaching and Research Group, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yueheng Wang
- Department of Ultrasound, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei Province, 050000, People's Republic of China
- Abdulhamit Subasi
- Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, 20520, Finland
- Department of Computer Science, College of Engineering, Effat University, Jeddah, 21478, Saudi Arabia
- Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, 23119, Turkey
- Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, 23119, Turkey
65. Song D, Zhang Z, Li W, Yuan L, Zhang W. Judgment of benign and early malignant colorectal tumors from ultrasound images with deep multi-view fusion. Comput Methods Programs Biomed 2022; 215:106634. [PMID: 35081497] [DOI: 10.1016/j.cmpb.2022.106634]
Abstract
BACKGROUND AND OBJECTIVE Colorectal cancer (CRC) is currently one of the main cancers worldwide, with a high incidence in the elderly. In the diagnosis of CRC, endorectal ultrasound plays an important role in judging benign and early malignant tumors. However, early-stage malignant tumors are not easy to identify visually, and experts usually seek help from multi-view images, which increases the workload and carries a certain probability of misdiagnosis. In recent years, with the widespread use of deep learning methods in the analysis of medical images, it has become necessary to design an effective computer-aided diagnosis (CAD) system for CRC based on multi-view endorectal ultrasound images. METHOD In this study, we propose a CAD system for judging benign and early malignant colorectal tumors, and we constructed the first multi-view ultrasound image dataset of CRC to validate our algorithm. Our system is an end-to-end model based on a deep neural network (DNN) that includes a feature extraction module based on dense blocks, a multi-view fusion module, and a Multi-Layer Perceptron-based classifier. A center loss, used for the first time in CAD tasks, optimizes our model. RESULT On the constructed dataset, the proposed system surpasses expert diagnosis in accuracy, sensitivity, specificity, and F1-score. Compared with popular deep classification networks and other CAD methods, the algorithm reaches the best performance. Comparative experiments using different feature extraction methods, different view fusion strategies, and different classifiers verify the effectiveness of each part of the algorithm. CONCLUSION We propose a CAD system for judging benign and early malignant colorectal tumors based on a DNN, which combines information from ultrasound images of different views. On the first CRC multi-view ultrasound image dataset, which we constructed, our method outperforms expert diagnosis and all other compared methods, and the effectiveness of each part of the system has been verified. Our system has application value for the early diagnosis of CRC in future medical practice.
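The center loss mentioned above keeps a learnable feature center per class and penalizes the distance of each embedding to its own class center; it is typically added to the cross-entropy loss with a small weight. A minimal PyTorch sketch, with illustrative dimensions:

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Pull each embedding toward the learnable center of its own class."""

    def __init__(self, n_classes: int = 2, feat_dim: int = 128):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_classes, feat_dim))

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Squared distance of each embedding to its class center, averaged.
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

center_loss = CenterLoss()
feats = torch.randn(8, 128, requires_grad=True)  # fused multi-view embeddings
labels = torch.randint(0, 2, (8,))               # benign / early malignant
loss = center_loss(feats, labels)                # add to cross-entropy, weighted
loss.backward()
```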
Affiliation(s)
- Dan Song
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Zheqi Zhang
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Wenhui Li
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Lijun Yuan
- Department of Colorectal Surgery, Tianjin Union Medical Center, Tianjin 300121, China; Tianjin Institute of Coloproctology, Tianjin 300121, China
- Wenshu Zhang
- EUREKA Robotics Centre, School of Technologies, Cardiff Metropolitan University, Cardiff, Wales, United Kingdom
66. Mannepalli DP, Namdeo V. A CAD system design based on hybrid multiscale convolutional mantaray network for pneumonia diagnosis. Multimed Tools Appl 2022; 81:12857-12881. [PMID: 35221779] [PMCID: PMC8863100] [DOI: 10.1007/s11042-022-12547-2]
Abstract
Pneumonia is a disease that people may encounter at any period of their lives. Recently, researchers and developers around the world have been focusing on deep learning and image processing strategies to speed up pneumonia diagnosis, as those strategies are capable of processing numerous X-ray and computed tomography (CT) images. Clinicians need considerable time and experience to make a diagnosis; hence, a precise, fast, and inexpensive tool to detect pneumonia is necessary. This research therefore focuses on classifying pneumonia chest X-ray images by proposing an efficient stacked approach to improve image quality together with a hybrid multiscale convolutional mantaray feature extraction network model with high accuracy. The input dataset is restructured using a hybrid fuzzy coloring and stacking approach. The deep feature extraction stage then processes the stacked dataset with a hybrid multiscale feature extraction unit to extract multiple features. The feature and network sizes are reduced by a self-attention module (SAM)-based convolutional neural network (CNN). In addition, the error of the proposed network model is reduced with the aid of adaptive mantaray foraging optimization (AMRFO). Finally, support vector regression (SVR) is used to classify the presence of pneumonia. The proposed module has been compared with existing techniques to prove the overall efficiency of the system. A large collection of chest X-ray images from a Kaggle dataset was used to validate the proposed work. The experimental results reveal an accuracy of 97%, a precision of 95%, and an F-score of 96%.
Affiliation(s)
- Durga Prasad Mannepalli
- Research Scholar, Department of Computer Science & Engineering, Sarvepalli Radhakrishnan University, Bhopal, Madhya Pradesh, India
- Varsha Namdeo
- Department of Computer Science & Engineering, Sarvepalli Radhakrishnan University, Bhopal, Madhya Pradesh, India
67. Zhao Y, Hu B, Wang Y, Yin X, Jiang Y, Zhu X. Identification of gastric cancer with convolutional neural networks: a systematic review. Multimed Tools Appl 2022; 81:11717-11736. [PMID: 35221775] [PMCID: PMC8856868] [DOI: 10.1007/s11042-022-12258-8]
Abstract
The identification of diseases is increasingly inseparable from artificial intelligence, and as an important branch of artificial intelligence, convolutional neural networks play an important role in the identification of gastric cancer. We conducted a systematic review to summarize the current applications of convolutional neural networks in gastric cancer identification. Original articles published in the Embase, Cochrane Library, PubMed, and Web of Science databases were systematically retrieved according to relevant keywords, and data were extracted from the published papers. A total of 27 articles on the identification of gastric cancer using medical images were retrieved: 19 applied convolutional neural networks to endoscopic images and 8 to pathological images. Sixteen studies explored performance in gastric cancer detection, 7 in gastric cancer classification, 2 in gastric cancer segmentation, and 2 in delineating gastric cancer margins. The convolutional neural network structures involved included AlexNet, ResNet, VGG, Inception, DenseNet, and DeepLab, among others. Reported accuracies ranged from 77.3% to 98.7%. Systems based on convolutional neural networks have shown good performance in the identification of gastric cancer, and artificial intelligence is expected to provide more accurate information and efficient judgments for doctors diagnosing diseases in clinical work.
Affiliation(s)
- Yuxue Zhao
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073, China
- Bo Hu
- Department of Thoracic Surgery, Qingdao Municipal Hospital, Qingdao, China
- Ying Wang
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073, China
- Xiaomeng Yin
- Pediatrics Intensive Care Unit, Qingdao Municipal Hospital, Qingdao, China
- Yuanyuan Jiang
- International Medical Services, Qilu Hospital of Shandong University, Jinan, China
- Xiuli Zhu
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073, China
68. Explainable Ensemble Machine Learning for Breast Cancer Diagnosis Based on Ultrasound Image Texture Features. Forecasting 2022. [DOI: 10.3390/forecast4010015]
Abstract
Image classification is widely used to build predictive models for breast cancer diagnosis. Most existing approaches overwhelmingly rely on deep convolutional networks to build such diagnosis pipelines. These model architectures, although remarkable in performance, are black-box systems that provide minimal insight into the inner logic behind their predictions. This is a major drawback as the explainability of prediction is vital for applications such as cancer diagnosis. In this paper, we address this issue by proposing an explainable machine learning pipeline for breast cancer diagnosis based on ultrasound images. We extract first- and second-order texture features of the ultrasound images and use them to build a probabilistic ensemble of decision tree classifiers. Each decision tree learns to classify the input ultrasound image by learning a set of robust decision thresholds for texture features of the image. The decision path of the model predictions can then be interpreted by decomposing the learned decision trees. Our results show that our proposed framework achieves high predictive performance while being explainable.
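A sketch of such a pipeline: gray-level co-occurrence matrix (GLCM) texture features per image feeding a tree ensemble whose learned thresholds are directly inspectable. The feature set and hyperparameters here are illustrative assumptions, with random images standing in for the ultrasound data:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(img: np.ndarray) -> np.ndarray:
    """Second-order texture features from a single gray-scale image."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # stand-ins
X = np.stack([glcm_features(im) for im in images])
y = rng.integers(0, 2, size=40)                   # benign / malignant labels

forest = RandomForestClassifier(n_estimators=50, max_depth=4, random_state=0)
forest.fit(X, y)
# Each tree's thresholds on texture features are directly readable:
tree = forest.estimators_[0].tree_
print(tree.feature[:5], tree.threshold[:5])
```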
69. Hybrid deep learning and genetic algorithms approach (HMB-DLGAHA) for the early ultrasound diagnoses of breast cancer. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06851-5]
70. Jabeen K, Khan MA, Alhaisoni M, Tariq U, Zhang YD, Hamza A, Mickus A, Damaševičius R. Breast Cancer Classification from Ultrasound Images Using Probability-Based Optimal Deep Learning Feature Fusion. Sensors (Basel) 2022; 22:807. [PMID: 35161552] [PMCID: PMC8840464] [DOI: 10.3390/s22030807]
Abstract
After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models; (ii) a pre-trained DarkNet-53 model is considered and the output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evaluation (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them.
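The "probability-based serial" fusion of the two selected feature sets can be sketched as concatenation followed by a probabilistic screening of columns. The keep-rule below is an illustrative assumption; the paper's exact criterion is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
f_rde = rng.normal(size=(200, 300))   # features chosen by RDE (stand-in)
f_rgw = rng.normal(size=(200, 250))   # features chosen by RGW (stand-in)

# Serial fusion: place the two selected vectors end to end per sample.
serial = np.concatenate([f_rde, f_rgw], axis=1)       # (200, 550)

# Probability-style screening: keep columns whose mean absolute activation
# falls in the upper half of the distribution (assumed rule, for illustration).
score = np.abs(serial).mean(axis=0)
keep = score >= np.median(score)
fused = serial[:, keep]
print(serial.shape, fused.shape)      # (200, 550) -> roughly (200, 275)
```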
Affiliation(s)
- Kiran Jabeen
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharaj 11942, Saudi Arabia
- Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester LE1 7RH, UK
- Ameer Hamza
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Artūras Mickus
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania
71. Amin J, Sharif M, Fernandes SL, Wang SH, Saba T, Khan AR. Breast microscopic cancer segmentation and classification using unique 4-qubit-quantum model. Microsc Res Tech 2022; 85:1926-1936. [PMID: 35043505] [DOI: 10.1002/jemt.24054]
Abstract
The visual inspection of histopathological samples is the benchmark for detecting breast cancer, but it is a strenuous and complicated process that takes a long time in pathology practice. Deep learning models have shown excellent outcomes in clinical diagnosis and image processing and have advanced various fields, including drug development, frequency simulation, and optimization techniques. However, the resemblance among histopathologic images of breast cancer and the presence of both healthy and diseased tissue in different areas make detecting and classifying tumors on whole-slide images difficult. In breast cancer, a correct diagnosis is needed for complete care within a limited amount of time, and effective detection can relieve the pathologist's workload and mitigate diagnostic subjectivity. Therefore, this research investigates an improved semantic segmentation model based on pre-trained Xception and DeepLabv3+. The model has been trained on input images with ground-truth masks, with tuned parameters that significantly improve the segmentation of ultrasound breast images into the respective classes, that is, benign and malignant. The segmentation model delivered an accuracy of greater than 99%, demonstrating its effectiveness. The segmented images and histopathological breast images are then passed to a 4-qubit quantum circuit with a six-layer architecture to detect breast malignancy. The proposed framework achieved remarkable performance compared with currently published methodologies. HIGHLIGHTS: This research proposes a hybrid semantic model using pre-trained Xception and DeepLabv3+ for classifying breast microscopic cancer into benign and malignant classes with 95% accuracy, and 99% accuracy for the detection of breast malignancy.
Affiliation(s)
- Javaria Amin
- Department of Computer Science, University of Wah, Quaid Avenue, Wah Cantt 4740, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Steven Lawrence Fernandes
- Department of Computer Science, Design and Journalism, Creighton University, Omaha, Nebraska, 68178, USA
- Shui-Hua Wang
- School of Mathematics and Actuarial Science, University of Leicester, Leicester, UK
- Tanzila Saba
- Artificial Intelligence & Data Lab (AIDA), CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Amjad Rehman Khan
- Artificial Intelligence & Data Lab (AIDA), CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
72. RiIG Modeled WCP Image-Based CNN Architecture and Feature-Based Approach in Breast Tumor Classification from B-Mode Ultrasound. Appl Sci (Basel) 2021. [DOI: 10.3390/app112412138]
Abstract
This study presents two new approaches based on Weighted Contourlet Parametric (WCP) images for the classification of breast tumors from B-mode ultrasound images. The Rician Inverse Gaussian (RiIG) distribution is considered for modeling the statistics of ultrasound images in the Contourlet transform domain. The WCP images are obtained by weighting the RiIG modeled Contourlet sub-band coefficient images. In the feature-based approach, various geometrical, statistical, and texture features are shown to have low ANOVA p-value, thus indicating a good capacity for class discrimination. Using three publicly available datasets (Mendeley, UDIAT, and BUSI), it is shown that the classical feature-based approach can yield more than 97% accuracy across the datasets for breast tumor classification using WCP images while the custom-made convolutional neural network (CNN) can deliver more than 98% accuracy, sensitivity, specificity, NPV, and PPV values utilizing the same WCP images. Both methods provide superior classification performance, better than those of several existing techniques on the same datasets.
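The ANOVA screening mentioned above ranks features by how well they separate the classes; a low p-value indicates a discriminative feature. A short SciPy sketch with stand-in feature matrices:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(50, 10))     # 50 benign samples, 10 features
malignant = rng.normal(0.8, 1.0, size=(50, 10))  # shifted mean -> discriminative

# One-way ANOVA per feature column across the two classes.
p_values = np.array([f_oneway(benign[:, j], malignant[:, j]).pvalue
                     for j in range(benign.shape[1])])
ranked = np.argsort(p_values)
print("most discriminative features:", ranked[:3], p_values[ranked[:3]])
```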
73
Meraj T, Alosaimi W, Alouffi B, Rauf HT, Kumar SA, Damaševičius R, Alyami H. A quantization assisted U-Net study with ICA and deep features fusion for breast cancer identification using ultrasonic data. PeerJ Comput Sci 2021; 7:e805. [PMID: 35036531 PMCID: PMC8725669 DOI: 10.7717/peerj-cs.805] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Accepted: 11/12/2021] [Indexed: 06/14/2023]
Abstract
Breast cancer is one of the leading causes of death in women worldwide, and the rapid increase in breast cancer has brought about more accessible diagnosis resources. The ultrasonic breast imaging modality is relatively cost-effective and valuable for diagnosis. Lesion isolation in ultrasonic images is a challenging task owing to noise and the intensity similarity between lesions and surrounding tissue. Accurate detection of breast lesions in ultrasonic images can reduce death rates. In this research, a quantization-assisted U-Net approach for the segmentation of breast lesions is proposed. It contains two steps for segmentation: (1) U-Net and (2) quantization. Quantization assists the U-Net-based segmentation in isolating exact lesion areas from sonography images. The Independent Component Analysis (ICA) method is then applied to the isolated lesions to extract features, which are fused with deep automatic features. Public ultrasonic-modality-based datasets such as the Breast Ultrasound Images Dataset (BUSI) and the Open Access Database of Raw Ultrasonic Signals (OASBUD) are used for evaluation and comparison. The same features were extracted from the OASBUD data; however, classification was performed after feature regularization using the lasso method. The obtained results allow us to propose a computer-aided diagnosis (CAD) system for breast cancer identification using ultrasonic modalities.
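A rough sketch of the ICA-plus-deep-feature fusion with lasso regularization, using scikit-learn; the feature dimensions, alpha, and the top-k selection are assumptions, not the paper's exact pipeline.

```python
# Sketch of ICA features fused with deep features, then lasso-regularized
# selection before classification (shapes and alpha are assumptions).
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
lesion_pixels = rng.random((200, 1024)) + labels[:, None] * 0.2  # lesion crops
deep_feats = rng.random((200, 256))                              # CNN features

ica_feats = FastICA(n_components=32, random_state=0).fit_transform(lesion_pixels)
fused = np.hstack([ica_feats, deep_feats])       # feature fusion

# Keep the 64 features with the largest lasso coefficients.
selector = SelectFromModel(Lasso(alpha=0.01), max_features=64, threshold=-np.inf)
clf = make_pipeline(selector, LogisticRegression(max_iter=1000))
clf.fit(fused, labels)
print("train accuracy:", clf.score(fused, labels))
```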
Affiliation(s)
- Talha Meraj: Department of Computer Science, COMSATS University Islamabad-Wah Campus, Wah Cantt, Pakistan
- Wael Alosaimi: Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Bader Alouffi: Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Hafiz Tayyab Rauf: Department of Computer Science, Faculty of Engineering & Informatics, University of Bradford, Bradford, United Kingdom
- Swarn Avinash Kumar: Department of Information Technology, Indian Institute of Information Technology, Uttar Pradesh, Jhalwa, Prayagraj, India
- Hashem Alyami: Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
74
Saba T, Abunadi I, Sadad T, Khan AR, Bahaj SA. Optimizing the transfer-learning with pretrained deep convolutional neural networks for first stage breast tumor diagnosis using breast ultrasound visual images. Microsc Res Tech 2021; 85:1444-1453. [PMID: 34908213 DOI: 10.1002/jemt.24008] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 09/09/2021] [Accepted: 10/26/2021] [Indexed: 11/10/2022]
Abstract
Females account for approximately 50% of the total population worldwide, and many of them develop breast cancer. Computer-aided diagnosis frameworks could reduce the number of needless biopsies and the workload of radiologists. This research aims to detect benign and malignant tumors automatically using breast ultrasound (BUS) images. Accordingly, two pretrained deep convolutional neural network (CNN) models, AlexNet and DenseNet201, were employed for transfer learning with BUS images. A total of 697 BUS images containing benign and malignant tumors were preprocessed and classified using the transfer learning-based CNN models. The benign/malignant classification task achieved an accuracy of 92.8% using the DenseNet201 model. The results were compared with the state of the art on a benchmark dataset, and the proposed model was found to outperform them in accuracy for first-stage breast tumor diagnosis. Finally, the proposed model could help radiologists diagnose benign and malignant tumors swiftly by screening suspected patients.
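The transfer-learning setup can be sketched in PyTorch as below; the frozen backbone, two-class head, and hyperparameters are assumptions (torchvision >= 0.13 weights API assumed), not the authors' exact training recipe.

```python
# Sketch of DenseNet201 transfer learning for benign/malignant BUS images
# (frozen backbone, new two-class head, and learning rate are assumptions).
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                      # freeze pretrained features
model.classifier = nn.Linear(model.classifier.in_features, 2)  # new head

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)                  # dummy batch of BUS crops
loss = criterion(model(x), torch.randint(0, 2, (8,)))
loss.backward()
optimizer.step()
```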
Affiliation(s)
- Tanzila Saba: Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Ibrahim Abunadi: Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Tariq Sadad: Department of Computer Science and Software Engineering, International Islamic University, Islamabad, 44000, Pakistan
- Amjad Rehman Khan: Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Saeed Ali Bahaj: MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj, 11942, Saudi Arabia
75
Chen G, Dai Y, Zhang J, Yin X, Cui L. MBANet: Multi-branch aware network for kidney ultrasound images segmentation. Comput Biol Med 2021; 141:105140. [PMID: 34922172 DOI: 10.1016/j.compbiomed.2021.105140] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2021] [Revised: 12/11/2021] [Accepted: 12/11/2021] [Indexed: 12/18/2022]
Abstract
Due to the influence of kidney morphology, heterogeneous structure, and image quality, segmenting the kidney in ultrasound images is challenging. To alleviate this challenge, we propose a novel deep neural network architecture, namely the Multi-branch Aware Network (MBANet), to segment the kidney accurately and robustly. MBANet mainly consists of a multi-scale feature pyramid (MSFP), multi-branch encoders (MBE), and a master decoder. The MSFP design gives the network access to class details at different scales. The information exchange between the MBE branches reduces the loss of feature information and improves the segmentation accuracy of the network. In addition, we design a multi-scale fusion block (MFBlock) in the MBE to further extract and fuse more refined multi-scale image information. To further improve the robustness of MBANet, a step-by-step training mechanism is also designed. We validated the proposed approach and compared it with several state-of-the-art approaches on the same kidney ultrasound datasets using six quantitative metrics. The results of our method on the six indicators of pixel accuracy (PA), intersection over union (IoU), precision, recall, specificity, and F1-score (F1) are 98.83%, 92.38%, 97.10%, 95.03%, 99.46%, and 0.9601, respectively. Compared with the competing methods, the average values of the six indicators improve by about 2%. The evaluation and segmentation results demonstrate that the proposed approach achieves the best overall performance on kidney ultrasound image segmentation.
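The six reported metrics all derive from the same pixelwise confusion counts; a small sketch (with synthetic masks) of how they are computed:

```python
# Sketch of the six reported metrics computed from binary masks
# (PA, IoU, precision, recall, specificity, F1); arrays are synthetic.
import numpy as np

def seg_metrics(pred, gt):
    tp = np.sum((pred == 1) & (gt == 1))
    tn = np.sum((pred == 0) & (gt == 0))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    pa = (tp + tn) / (tp + tn + fp + fn)          # pixel accuracy
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return pa, iou, precision, recall, specificity, f1

rng = np.random.default_rng(0)
gt = rng.integers(0, 2, (128, 128))
pred = gt.copy()
pred[:8] = 1 - pred[:8]                           # perturb to simulate errors
print(seg_metrics(pred, gt))
```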
Affiliation(s)
- Gongping Chen: The Institute of Robotics and Automatic Information System, Tianjin Key Laboratory of Intelligent Robotics, College of Artificial Intelligence, Nankai University, Tianjin, 300350, China
- Yu Dai: The Institute of Robotics and Automatic Information System, Tianjin Key Laboratory of Intelligent Robotics, College of Artificial Intelligence, Nankai University, Tianjin, 300350, China
- Jianxun Zhang: The Institute of Robotics and Automatic Information System, Tianjin Key Laboratory of Intelligent Robotics, College of Artificial Intelligence, Nankai University, Tianjin, 300350, China
- Xiaotao Yin: Department of Urology, Civil Aviation General Hospital, Beijing, 100123, China
- Liang Cui: Department of Urology, Fourth Medical Center of Chinese PLA General Hospital, Beijing, 10048, China
76
Ilesanmi AE, Chaumrattanakul U, Makhanov SS. Methods for the segmentation and classification of breast ultrasound images: a review. J Ultrasound 2021; 24:367-382. [PMID: 33428123 PMCID: PMC8572242 DOI: 10.1007/s40477-020-00557-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Accepted: 12/21/2020] [Indexed: 02/07/2023] Open
Abstract
PURPOSE Breast ultrasound (BUS) is one of the imaging modalities for the diagnosis and treatment of breast cancer. However, the segmentation and classification of BUS images is a challenging task. In recent years, several methods for segmenting and classifying BUS images have been studied. These methods use BUS datasets for evaluation. In addition, semantic segmentation algorithms have gained prominence for segmenting medical images. METHODS In this paper, we examined different methods for segmenting and classifying BUS images. Popular datasets used to evaluate BUS images and semantic segmentation algorithms were examined. Several segmentation and classification papers were selected for analysis and review. Both conventional and semantic methods for BUS segmentation were reviewed. RESULTS Commonly used methods for BUS segmentation are summarized in a graphical representation, and other conventional segmentation methods are also discussed. CONCLUSIONS We presented a review of segmentation and classification methods for tumours detected in BUS images, covering both older and recent studies.
Affiliation(s)
- Ademola E. Ilesanmi: School of ICT, Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, 12000, Thailand
- Stanislav S. Makhanov: School of ICT, Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, 12000, Thailand
77
Cui W, Peng Y, Yuan G, Cao W, Cao Y, Lu Z, Ni X, Yan Z, Zheng J. FMRNet: A fused network of multiple tumoral regions for breast tumor classification with ultrasound images. Med Phys 2021; 49:144-157. [PMID: 34766623 DOI: 10.1002/mp.15341] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Revised: 10/21/2021] [Accepted: 10/22/2021] [Indexed: 12/16/2022] Open
Abstract
PURPOSE Recent studies have illustrated that the peritumoral regions of medical images have value for clinical diagnosis. However, the existing approaches using peritumoral regions mainly focus on the diagnostic capability of a single region and ignore the advantages of effectively fusing the intratumoral and peritumoral regions. In addition, these methods need accurate segmentation masks in the testing stage, which are tedious and inconvenient in clinical applications. To address these issues, we construct a deep convolutional neural network that can adaptively fuse the information of multiple tumoral regions (FMRNet) for breast tumor classification using ultrasound (US) images without segmentation masks in the testing stage. METHODS To fully exploit the potential relationships among regions, we design a fused network and two independent modules to extract and fuse features of multiple regions simultaneously. First, we introduce two enhanced combined-tumoral (EC) region modules, aiming to enhance the combined-tumoral features gradually. Then, we further design a three-branch module for extracting and fusing the features of the intratumoral, peritumoral, and combined-tumoral regions, denoted as the intratumoral, peritumoral, and combined-tumoral module. In particular, we design a novel fusion module by introducing a channel attention module to adaptively fuse the features of the three regions. The model is evaluated on two public datasets, UDIAT and BUSI, with breast tumor ultrasound images. Two independent groups of experiments are performed on the two respective datasets using the fivefold stratified cross-validation strategy. Finally, we conduct ablation experiments on the two datasets, in which BUSI is used as the training set and UDIAT is used as the testing set. RESULTS We conduct detailed ablation experiments on the proposed two modules and comparative experiments with other existing representative methods. The experimental results show that the proposed method yields state-of-the-art performance on both datasets. In particular, on the UDIAT dataset, the proposed FMRNet achieves a high accuracy of 0.945 and a specificity of 0.945. Moreover, the precision (PRE = 0.909) even improves dramatically, by 21.6%, on the BUSI dataset compared with the best-performing existing method. CONCLUSION The proposed FMRNet shows good performance in breast tumor classification with US images, and proves its capability of exploiting and fusing the information of multiple tumoral regions. Furthermore, FMRNet has potential value in classifying other types of cancers using multiple tumoral regions of other kinds of medical images.
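The channel-attention fusion of the three region branches can be sketched with an SE-style gate in PyTorch; the squeeze-and-excitation form, channel counts, and reduction ratio below are assumptions, not the exact FMRNet module.

```python
# Sketch of channel-attention fusion of intratumoral, peritumoral, and
# combined-tumoral feature maps (SE-style gating is an assumption).
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(3 * channels, 3 * channels // 4), nn.ReLU(),
            nn.Linear(3 * channels // 4, 3 * channels), nn.Sigmoid())

    def forward(self, intra, peri, combined):
        x = torch.cat([intra, peri, combined], dim=1)   # (B, 3C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))                 # squeeze: global pool
        return x * w.unsqueeze(-1).unsqueeze(-1)        # excite: reweight channels

fuse = ChannelAttentionFusion(64)
f = lambda: torch.randn(2, 64, 16, 16)
print(fuse(f(), f(), f()).shape)                        # torch.Size([2, 192, 16, 16])
```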
Affiliation(s)
- Wenju Cui: Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Yunsong Peng: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China; Division of Life Sciences and Medicine, School of Biomedical Engineering (Suzhou), University of Science and Technology of China, Hefei, China
- Gang Yuan: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China; Division of Life Sciences and Medicine, School of Biomedical Engineering (Suzhou), University of Science and Technology of China, Hefei, China
- Weiwei Cao: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China; Division of Life Sciences and Medicine, School of Biomedical Engineering (Suzhou), University of Science and Technology of China, Hefei, China
- Yuzhu Cao: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China; Division of Life Sciences and Medicine, School of Biomedical Engineering (Suzhou), University of Science and Technology of China, Hefei, China
- Zhengda Lu: Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; Center for Medical Physics, Nanjing Medical University, Changzhou, China
- Xinye Ni: Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; Center for Medical Physics, Nanjing Medical University, Changzhou, China
- Zhuangzhi Yan: Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Jian Zheng: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China; Division of Life Sciences and Medicine, School of Biomedical Engineering (Suzhou), University of Science and Technology of China, Hefei, China
78
Kim H, Park J, Lee H, Im G, Lee J, Lee KB, Lee HJ. Classification for Breast Ultrasound Using Convolutional Neural Network with Multiple Time-Domain Feature Maps. APPLIED SCIENCES 2021; 11:10216. [DOI: 10.3390/app112110216] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2025]
Abstract
Ultrasound (US) imaging is widely utilized as a diagnostic screening method, and deep learning has recently drawn attention for the analysis of US images for the pathological status of tissues. While low image quality and poor reproducibility are common obstacles in US analysis, the small size of datasets is a further limitation for deep learning due to the resulting lack of generalization. In this work, a convolutional neural network (CNN) using multiple feature maps, such as entropy and phase images, as well as a B-mode image, was proposed to classify breast US images. Although B-mode images contain both anatomical and textural information, traditional CNNs experience difficulties in abstracting features automatically, especially with small datasets. For the proposed CNN framework, two distinct feature maps were obtained from a B-mode image and utilized as new inputs for training the CNN. These feature maps can also be derived from the evaluation data and applied to the CNN separately for the final classification decision. The experimental results with 780 breast US images in three categories of benign, malignant, and normal showed that the proposed CNN framework using multiple feature maps exhibited better performance than the traditional B-mode-only CNN for most deep network models.
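Entropy and phase maps of this kind can be derived from a single B-mode image as below; the entropy window size, the per-column Hilbert transform, and the stand-in image are assumptions, not the paper's exact preprocessing.

```python
# Sketch of building entropy and phase feature maps from a B-mode image
# to stack as extra CNN input channels (window size is an assumption).
import numpy as np
from scipy.signal import hilbert
from skimage import data
from skimage.filters.rank import entropy
from skimage.morphology import disk

bmode = data.camera()                            # uint8 stand-in for a B-mode image
ent_map = entropy(bmode, disk(5))                # local Shannon entropy map
analytic = hilbert(bmode.astype(float), axis=0)  # per-column analytic signal
phase_map = np.angle(analytic)                   # instantaneous phase map

def norm(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-8)

stacked = np.stack([norm(bmode.astype(float)), norm(ent_map), norm(phase_map)])
print(stacked.shape)                             # (3, H, W) multi-channel input
```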
Affiliation(s)
- Hyungsuk Kim: Department of Electrical Engineering, Kwangwoon University, Seoul 01897, Korea
- Juyoung Park: Department of Electrical Engineering, Kwangwoon University, Seoul 01897, Korea
- Hakjoon Lee: Department of Electrical Engineering, Kwangwoon University, Seoul 01897, Korea
- Geuntae Im: Department of Electrical Engineering, Kwangwoon University, Seoul 01897, Korea
- Jongsoo Lee: Department of Electrical Engineering, Kwangwoon University, Seoul 01897, Korea
- Ki-Baek Lee: Department of Electrical Engineering, Kwangwoon University, Seoul 01897, Korea
- Heung Jae Lee: Department of Electrical Engineering, Kwangwoon University, Seoul 01897, Korea
79
Zhang G, Zhao K, Hong Y, Qiu X, Zhang K, Wei B. SHA-MTL: soft and hard attention multi-task learning for automated breast cancer ultrasound image segmentation and classification. Int J Comput Assist Radiol Surg 2021; 16:1719-1725. [PMID: 34254225 DOI: 10.1007/s11548-021-02445-7] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Accepted: 06/28/2021] [Indexed: 02/01/2023]
Abstract
Purpose: The automatic analysis of ultrasound images facilitates the diagnosis of breast cancer effectively and objectively. However, due to the characteristics of ultrasound images, automatic analysis remains a challenging task. We suppose that an algorithm will extract lesion regions and distinguish categories more easily if it is guided to focus on the lesion regions. Method: We propose a multi-task learning (SHA-MTL) model based on soft and hard attention mechanisms for simultaneous breast ultrasound (BUS) image segmentation and binary classification. The SHA-MTL model consists of a dense CNN encoder and an upsampling decoder, which are connected by attention-gated (AG) units with a soft attention mechanism. Cross-validation experiments are performed on BUS datasets with category and mask labels, and multiple comprehensive analyses are performed on the two tasks. Results: We assess the SHA-MTL model on a public BUS image dataset. For the segmentation task, the sensitivity and DICE of the SHA-MTL model on the lesion regions increased by 2.27% and 1.19%, respectively, compared with the single-task model. The classification accuracy and F1 score increased by 2.45% and 3.82%, respectively. Conclusion: The results validate the effectiveness of our model and indicate that the SHA-MTL model requires less a priori knowledge to achieve better results compared with other recent models. We can therefore conclude that paying more attention to the lesion region of BUS images is conducive to the discrimination of lesion types.
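A minimal sketch of an additive attention gate of the kind used to connect encoder and decoder in attention U-Net variants; the channel sizes and the assumption of matching spatial resolution are illustrative, not the exact SHA-MTL design.

```python
# Sketch of an additive attention gate (AG) linking encoder and decoder
# features (channel sizes and equal spatial size are assumptions).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, enc_ch, dec_ch, inter_ch):
        super().__init__()
        self.w_enc = nn.Conv2d(enc_ch, inter_ch, 1)
        self.w_dec = nn.Conv2d(dec_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.ReLU(), nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, enc_feat, dec_feat):
        # Soft attention: gate encoder features by decoder context.
        a = self.psi(self.w_enc(enc_feat) + self.w_dec(dec_feat))
        return enc_feat * a                      # suppress non-lesion regions

ag = AttentionGate(64, 64, 32)
enc, dec = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(ag(enc, dec).shape)                        # torch.Size([1, 64, 32, 32])
```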
Affiliation(s)
- Guisheng Zhang: College of Intelligence and Information Technology, Shandong University of Traditional Chinese Medicine, Jinan, 250355, China; Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China
- Kehui Zhao: The Second Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, 250000, China
- Yanfei Hong: Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China
- Xiaoyu Qiu: The Library, Shandong University of Traditional Chinese Medicine, Jinan, 250355, China
- Kuixing Zhang: College of Intelligence and Information Technology, Shandong University of Traditional Chinese Medicine, Jinan, 250355, China; Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China
- Benzheng Wei: Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China; Qingdao Academy of Chinese Medical Sciences, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China
80
Irfan R, Almazroi AA, Rauf HT, Damaševičius R, Nasr EA, Abdelgawad AE. Dilated Semantic Segmentation for Breast Ultrasonic Lesion Detection Using Parallel Feature Fusion. Diagnostics (Basel) 2021; 11:1212. [PMID: 34359295 PMCID: PMC8304124 DOI: 10.3390/diagnostics11071212] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Revised: 04/16/2021] [Accepted: 04/27/2021] [Indexed: 12/15/2022] Open
Abstract
Breast cancer is becoming more dangerous by the day. The death rate in developing countries is rapidly increasing. As a result, early detection of breast cancer is critical, leading to a lower death rate. Several researchers have worked on breast cancer segmentation and classification using various imaging modalities. The ultrasonic imaging modality is one of the most cost-effective imaging techniques, with a higher sensitivity for diagnosis. The proposed study segments ultrasonic breast lesion images using a Dilated Semantic Segmentation Network (Di-CNN) combined with a morphological erosion operation. For feature extraction, we used the deep neural network DenseNet201 with transfer learning. We also propose a 24-layer CNN that uses transfer-learning-based feature extraction to further validate and enrich the extracted features with target intensity information. To classify the nodules, the feature vectors obtained from DenseNet201 and the 24-layer CNN were fused using parallel fusion. The proposed methods were evaluated using 10-fold cross-validation on various vector combinations. The accuracy of the CNN-activated feature vectors and the DenseNet201-activated feature vectors combined with the Support Vector Machine (SVM) classifier was 90.11 percent and 98.45 percent, respectively. With 98.9 percent accuracy, the fused version of the feature vector with SVM outperformed the other algorithms. When compared with recent algorithms, the proposed algorithm achieves a better breast cancer diagnosis rate.
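Parallel fusion of two deep feature vectors followed by an SVM can be sketched with scikit-learn; the feature dimensions and RBF kernel below are assumptions, not the paper's exact settings.

```python
# Sketch of parallel fusion of two deep feature vectors followed by an
# SVM classifier with 10-fold CV (feature dimensions are assumptions).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
densenet_feats = rng.random((300, 1920))   # DenseNet201 pooled features (assumed)
custom_cnn_feats = rng.random((300, 512))  # 24-layer CNN features (assumed)
labels = rng.integers(0, 2, 300)

fused = np.hstack([densenet_feats, custom_cnn_feats])   # parallel fusion
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(svm, fused, labels, cv=10)     # 10-fold CV
print("mean CV accuracy:", scores.mean())
```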
Affiliation(s)
- Rizwana Irfan: Department of Information Technology, College of Computing and Information Technology at Khulais, University of Jeddah, Jeddah 21959, Saudi Arabia
- Abdulwahab Ali Almazroi: Department of Information Technology, College of Computing and Information Technology at Khulais, University of Jeddah, Jeddah 21959, Saudi Arabia
- Hafiz Tayyab Rauf: Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
- Robertas Damaševičius: Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
- Emad Abouel Nasr: Industrial Engineering Department, College of Engineering, King Saud University, Riyadh 11421, Saudi Arabia
- Abdelatty E. Abdelgawad: Industrial Engineering Department, College of Engineering, King Saud University, Riyadh 11421, Saudi Arabia
81
Qian X, Pei J, Zheng H, Xie X, Yan L, Zhang H, Han C, Gao X, Zhang H, Zheng W, Sun Q, Lu L, Shung KK. Prospective assessment of breast cancer risk from multimodal multiview ultrasound images via clinically applicable deep learning. Nat Biomed Eng 2021; 5:522-532. [PMID: 33875840 DOI: 10.1038/s41551-021-00711-2] [Citation(s) in RCA: 108] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2019] [Accepted: 03/08/2021] [Indexed: 02/02/2023]
Abstract
The clinical application of breast ultrasound for the assessment of cancer risk and of deep learning for the classification of breast-ultrasound images has been hindered by inter-grader variability and high false positive rates and by deep-learning models that do not follow Breast Imaging Reporting and Data System (BI-RADS) standards, lack explainability features and have not been tested prospectively. Here, we show that an explainable deep-learning system trained on 10,815 multimodal breast-ultrasound images of 721 biopsy-confirmed lesions from 634 patients across two hospitals and prospectively tested on 912 additional images of 152 lesions from 141 patients predicts BI-RADS scores for breast cancer as accurately as experienced radiologists, with areas under the receiver operating curve of 0.922 (95% confidence interval (CI) = 0.868-0.959) for bimodal images and 0.955 (95% CI = 0.909-0.982) for multimodal images. Multimodal multiview breast-ultrasound images augmented with heatmaps for malignancy risk predicted via deep learning may facilitate the adoption of ultrasound imaging in screening mammography workflows.
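The AUC-with-95%-CI summaries reported for the prospective test set are typically obtained by bootstrap resampling; a sketch on synthetic data (the resample count and score model are assumptions):

```python
# Sketch of an AUC with a bootstrap 95% confidence interval
# (data are synthetic; 2000 resamples is an assumed choice).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 152)                       # e.g., 152 prospective lesions
scores = np.clip(y * 0.3 + rng.normal(0.5, 0.2, 152), 0, 1)

aucs = []
for _ in range(2000):                             # bootstrap resampling
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) < 2:
        continue                                  # need both classes present
    aucs.append(roc_auc_score(y[idx], scores[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC={roc_auc_score(y, scores):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```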
Affiliation(s)
- Xuejun Qian: Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA; Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Jing Pei: Department of Breast Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China; Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Hui Zheng: Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xinxin Xie: Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lin Yan: School of Computer Science and Technology, Xidian University, Xi'an, China
- Hao Zhang: Department of Neurosurgery, University Hospital Heidelberg, Heidelberg, Germany
- Chunguang Han: Department of Breast Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China; Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xiang Gao: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Hanqi Zhang: Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Weiwei Zheng: Department of Ultrasound, Xuancheng People's Hospital, Xuancheng, China
- Qiang Sun: Department of Breast Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China; Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lu Lu: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- K Kirk Shung: Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
82
Multi-Features-Based Automated Breast Tumor Diagnosis Using Ultrasound Image and Support Vector Machine. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:9980326. [PMID: 34113378 PMCID: PMC8154287 DOI: 10.1155/2021/9980326] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Accepted: 05/07/2021] [Indexed: 12/11/2022]
Abstract
Breast ultrasound examination is a routine, fast, and safe method for the clinical diagnosis of breast tumors. In this paper, a classification method based on multi-features and support vector machines is proposed for breast tumor diagnosis. The multi-features are composed of characteristic features and deep learning features of breast tumor images. Initially, an improved level set algorithm was used to segment the lesion in breast ultrasound images, which enabled accurate calculation of characteristic features such as orientation, edge indistinctness, characteristics of the posterior shadowing region, and shape complexity. Simultaneously, we used transfer learning to construct a pretrained model as a feature extractor for the deep learning features of breast ultrasound images. Finally, the multi-features were fused and fed to a support vector machine for the classification of breast ultrasound images. The proposed model, when tested on unknown samples, provided a classification accuracy of 92.5% for cancerous and noncancerous tumors.
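Handcrafted shape descriptors of the kind listed above can be computed from a segmented mask with scikit-image; the toy mask and the isoperimetric complexity formula are assumptions for illustration.

```python
# Sketch of extracting handcrafted shape descriptors from a segmented
# lesion mask (the complexity formula is an assumed definition).
import numpy as np
from skimage.measure import label, regionprops

mask = np.zeros((128, 128), dtype=int)
mask[40:90, 30:100] = 1                      # stand-in for a level-set result

props = regionprops(label(mask))[0]
orientation = props.orientation              # lesion axis angle (radians)
complexity = props.perimeter ** 2 / (4 * np.pi * props.area)  # 1.0 = circle
print(f"orientation={orientation:.2f} rad, shape complexity={complexity:.2f}")
```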
83
Eroğlu Y, Yildirim M, Çinar A. Convolutional Neural Networks based classification of breast ultrasonography images by hybrid method with respect to benign, malignant, and normal using mRMR. Comput Biol Med 2021; 133:104407. [PMID: 33901712 DOI: 10.1016/j.compbiomed.2021.104407] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Revised: 04/08/2021] [Accepted: 04/13/2021] [Indexed: 12/25/2022]
Abstract
Early diagnosis of breast lesions and the differentiation of malignant lesions from benign ones are important for the prognosis of breast cancer. In the diagnosis of this disease, ultrasound is an extremely important radiological imaging method because it enables biopsy as well as lesion characterization. Since ultrasonographic diagnosis depends on the expert, the knowledge level and experience of the user are very important. In addition, computer-aided systems can contribute considerably, as they can reduce radiologists' workload and reinforce their knowledge and experience, particularly given the dense patient populations in hospital conditions. In this paper, a hybrid CNN-based system is developed for diagnosing breast cancer lesions as benign, malignant, or normal. AlexNet, MobileNetV2, and ResNet50 models are used as the base of the hybrid structure. The features obtained from these models are concatenated, increasing the number of features available. The most valuable of these features are then selected by the mRMR (Minimum Redundancy Maximum Relevance) feature selection method and classified with machine learning classifiers such as SVM and KNN. The highest accuracy, 95.6%, is obtained with the SVM classifier.
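A greedy mRMR-style selection over concatenated CNN features can be sketched as follows; the mutual-information relevance term, correlation-based redundancy term, and feature counts are assumptions, not the paper's exact implementation.

```python
# Sketch of greedy mRMR-style selection over concatenated CNN features
# (relevance via mutual information, redundancy via correlation; both
# the scoring and the feature counts are assumptions).
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.random((200, 60))                 # concatenated CNN features (assumed)
y = rng.integers(0, 2, 200)

relevance = mutual_info_classif(X, y, random_state=0)
selected = [int(np.argmax(relevance))]
remaining = set(range(X.shape[1])) - set(selected)

while len(selected) < 10 and remaining:
    def score(j):
        redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                              for s in selected])
        return relevance[j] - redundancy  # max relevance, min redundancy
    best = max(remaining, key=score)
    selected.append(best)
    remaining.discard(best)

print("selected feature indices:", selected)
```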
Affiliation(s)
- Yeşim Eroğlu: Department of Radiology, Firat University School of Medicine, Elazig, Turkey
- Ahmet Çinar: Computer Engineering Department, Firat University, Elazig, Turkey
84
18F-FDG-PET/CT Whole-Body Imaging Lung Tumor Diagnostic Model: An Ensemble E-ResNet-NRC with Divided Sample Space. BIOMED RESEARCH INTERNATIONAL 2021; 2021:8865237. [PMID: 33869635 PMCID: PMC8032520 DOI: 10.1155/2021/8865237] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/13/2020] [Revised: 11/03/2020] [Accepted: 02/25/2021] [Indexed: 11/17/2022]
Abstract
Against the background of 18F-FDG-PET/CT multimodal whole-body imaging for lung tumor diagnosis, and to address the problems of network degradation and high-dimensional features during convolutional neural network (CNN) training, an E-ResNet-NRC (ensemble ResNet nonnegative representation classifier) model built on dividing the sample space is proposed in this paper. The model includes the following steps: (1) Parameters of a pretrained ResNet model are initialized using transfer learning. (2) Samples are divided into three different sample spaces (CT, PET, and PET/CT) based on the differences in the multimodal PET/CT medical images, and the ROI of the lesion is extracted. (3) The ResNet neural network is used to extract ROI features and obtain feature vectors. (4) The individual classifier ResNet-NRC is constructed with nonnegative representation classification (NRC) at the fully connected layer. (5) The ensemble classifier E-ResNet-NRC is constructed using the relative majority voting method. Finally, two network models, AlexNet and ResNet-50, and three classification algorithms, the nearest neighbor classification algorithm (NNC), softmax, and the nonnegative representation classification algorithm (NRC), are combined and compared with the E-ResNet-NRC model. The experimental results show that the overall classification performance of the ensemble E-ResNet-NRC model is better than that of the individual ResNet-NRC, with higher specificity and sensitivity and better robustness and generalization ability.
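The relative majority (plurality) voting step can be illustrated in a few lines; the per-modality predictions below are synthetic placeholders.

```python
# Sketch of ensembling per-modality classifiers (CT, PET, PET/CT) by
# relative majority (plurality) voting; predictions are synthetic.
import numpy as np
from scipy.stats import mode

preds_ct = np.array([0, 1, 1, 0, 1])
preds_pet = np.array([0, 1, 0, 0, 1])
preds_petct = np.array([1, 1, 1, 0, 0])

stacked = np.vstack([preds_ct, preds_pet, preds_petct])
ensemble = mode(stacked, axis=0, keepdims=False).mode  # plurality vote per case
print(ensemble)                                        # [0 1 1 0 1]
```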
85
Zhang X, Li H, Wang C, Cheng W, Zhu Y, Li D, Jing H, Li S, Hou J, Li J, Li Y, Zhao Y, Mo H, Pang D. Evaluating the Accuracy of Breast Cancer and Molecular Subtype Diagnosis by Ultrasound Image Deep Learning Model. Front Oncol 2021; 11:623506. [PMID: 33747937 PMCID: PMC7973262 DOI: 10.3389/fonc.2021.623506] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Accepted: 02/15/2021] [Indexed: 12/24/2022] Open
Abstract
Background: Breast ultrasound is the first choice for breast tumor diagnosis in China, but the Breast Imaging Reporting and Data System (BI-RADS) categorization routinely used in the clinic often leads to unnecessary biopsy, and radiologists are unable to predict molecular subtypes, which carry important pathological information that can guide clinical treatment. Materials and Methods: This retrospective study collected breast ultrasound images from two hospitals and formed training, test, and external test sets after strict selection, which included 2,822, 707, and 210 ultrasound images, respectively. An optimized deep learning model (DLM) was constructed with the training set, and the performance was verified in both the test set and the external test set. Diagnostic results were compared with the BI-RADS categorization determined by radiologists. We divided breast cancer into different molecular subtypes according to hormone receptor (HR) and human epidermal growth factor receptor 2 (HER2) expression. The ability to predict molecular subtypes using the DLM was confirmed in the test set. Results: In the test set, with pathological results as the gold standard, the accuracy, sensitivity, and specificity were 85.6, 98.7, and 63.1%, respectively, according to the BI-RADS categorization. The same set achieved an accuracy, sensitivity, and specificity of 89.7, 91.3, and 86.9%, respectively, when using the DLM. For the test set, the area under the curve (AUC) was 0.96. For the external test set, the AUC was 0.90. The diagnostic accuracy was 92.86% with the DLM in BI-RADS 4a patients. Approximately 70.76% of the cases were judged as benign tumors, theoretically reducing unnecessary biopsy by 67.86%; however, the false negative rate was 10.4%. The DLM also showed good prediction of the molecular subtypes of breast cancer, with AUCs of 0.864, 0.811, and 0.837 for the triple-negative subtype, HER2 (+) subtype, and HR (+) subtype predictions, respectively. Conclusion: This study showed that the DLM was highly accurate in recognizing breast tumors from ultrasound images. Thus, the DLM can greatly reduce the incidence of unnecessary biopsy, especially for patients with BI-RADS 4a. In addition, the predictive ability of this model for molecular subtypes was satisfactory, which has clear clinical application value.
Affiliation(s)
- Xianyu Zhang: Department of Breast Surgery, Harbin Medical University Cancer Hospital, Harbin, China
- Hui Li: Department of Breast Surgery, Harbin Medical University Cancer Hospital, Harbin, China
- Chaoyun Wang: Harbin Engineering University Automation College, Harbin, China
- Wen Cheng: Department of Ultrasound, Harbin Medical University Cancer Hospital, Harbin, China
- Yuntao Zhu: Harbin Engineering University Automation College, Harbin, China
- Dapeng Li: Department of Epidemiology, Harbin Medical University, Harbin, China
- Hui Jing: Department of Ultrasound, Harbin Medical University Cancer Hospital, Harbin, China
- Shu Li: Prenatal Diagnosis Center, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Jiahui Hou: Department of Ultrasound, Harbin Medical University Cancer Hospital, Harbin, China
- Jiaying Li: Department of Breast Surgery, Harbin Medical University Cancer Hospital, Harbin, China
- Yingpu Li: Department of Breast Surgery, Harbin Medical University Cancer Hospital, Harbin, China
- Yashuang Zhao: Department of Epidemiology, Harbin Medical University, Harbin, China
- Hongwei Mo: Harbin Engineering University Automation College, Harbin, China
- Da Pang: Department of Breast Surgery, Harbin Medical University Cancer Hospital, Harbin, China
86
Xiang H, Huang YS, Lee CH, Chang Chien TY, Lee CK, Liu L, Li A, Lin X, Chang RF. 3-D Res-CapsNet convolutional neural network on automated breast ultrasound tumor diagnosis. Eur J Radiol 2021; 138:109608. [PMID: 33711572 DOI: 10.1016/j.ejrad.2021.109608] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2020] [Revised: 02/06/2021] [Accepted: 02/19/2021] [Indexed: 12/24/2022]
Abstract
PURPOSE We propose a 3-D tumor computer-aided diagnosis (CADx) system with a U-net and a residual-capsule neural network (Res-CapsNet) for ABUS images, providing a reference for early tumor diagnosis, especially of non-mass lesions. METHODS A total of 396 patients with 444 tumors (226 malignant and 218 benign) were retrospectively enrolled from Sun Yat-sen University Cancer Center. In our CADx, preprocessing was performed first to crop and resize the tumor volumes of interest (VOIs). Then, a 3-D U-net and postprocessing were applied to the VOIs to obtain tumor masks. Finally, a 3-D Res-CapsNet classification model was executed with the VOIs and the corresponding masks to diagnose the tumors. The diagnostic performance, including accuracy, sensitivity, specificity, and area under the curve (AUC), was then compared with other classification models and among three readers with different years of experience in ABUS review. RESULTS For all tumors, the accuracy, sensitivity, specificity, and AUC of the proposed CADx were 84.9 %, 87.2 %, 82.6 %, and 0.9122, respectively, outperforming the other models and the junior reader. Next, the tumors were subdivided into mass and non-mass tumors to validate the system performance. For mass tumors, our CADx achieved an accuracy, sensitivity, specificity, and AUC of 85.2 %, 88.2 %, 82.3 %, and 0.9147, respectively, which was higher than that of the other models and the junior reader. For non-mass tumors, our CADx achieved an accuracy, sensitivity, specificity, and AUC of 81.6 %, 78.3 %, 86.7 %, and 0.8654, respectively, outperforming the two readers. CONCLUSION The proposed CADx with 3-D U-net and 3-D Res-CapsNet models has the potential to reduce misdiagnosis, especially for non-mass lesions.
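The VOI preprocessing step (crop, then resize to a fixed shape) can be sketched with scipy; the volume dimensions, crop extents, and target shape are assumptions.

```python
# Sketch of the VOI preprocessing step: crop a tumor volume of interest
# from an ABUS scan and resize it to a fixed shape (sizes are assumptions).
import numpy as np
from scipy.ndimage import zoom

volume = np.random.rand(128, 256, 256)        # stand-in ABUS volume (D, H, W)
center, half = (64, 120, 130), (24, 48, 48)   # tumor center and half-extent

sl = tuple(slice(c - h, c + h) for c, h in zip(center, half))
voi = volume[sl]                              # crop the VOI
target = (32, 64, 64)
voi_resized = zoom(voi, [t / s for t, s in zip(target, voi.shape)], order=1)
print(voi_resized.shape)                      # (32, 64, 64)
```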
Affiliation(s)
- Huiling Xiang: Department of Ultrasound, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Yao-Sian Huang: Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Chu-Hsuan Lee: Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Ting-Yin Chang Chien: Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Lixian Liu: Department of Ultrasound, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Anhua Li: Department of Ultrasound, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Xi Lin: Department of Ultrasound, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Ruey-Feng Chang: Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
87
Deep-Learning-Based Computer-Aided Systems for Breast Cancer Imaging: A Critical Review. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10228298] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
This paper provides a critical review of the literature on deep learning applications in breast tumor diagnosis using ultrasound and mammography images. It also summarizes recent advances in computer-aided diagnosis/detection (CAD) systems, which make use of new deep learning methods to automatically recognize breast images and improve the accuracy of diagnoses made by radiologists. This review is based upon published literature in the past decade (January 2010–January 2020), where we obtained around 250 research articles, and after an eligibility process, 59 articles were presented in more detail. The main findings in the classification process revealed that new DL-CAD methods are useful and effective screening tools for breast cancer, thus reducing the need for manual feature extraction. The breast tumor research community can utilize this survey as a basis for their current and future studies.
88
Xie J, Song X, Zhang W, Dong Q, Wang Y, Li F, Wan C. A novel approach with dual-sampling convolutional neural network for ultrasound image classification of breast tumors. Phys Med Biol 2020; 65. [PMID: 33120380 DOI: 10.1088/1361-6560/abc5c7] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Accepted: 10/29/2020] [Indexed: 12/19/2022]
Abstract
Breast cancer is one of the leading causes of female cancer deaths. Early diagnosis with prophylactic treatment may improve patients' prognosis. So far, ultrasound (US) imaging is a popular method in breast cancer diagnosis, but its accuracy is bounded by traditional handcrafted feature methods and operator expertise. A novel method named Dual-Sampling Convolutional Neural Networks (DSCNN) is proposed in this paper for the differential diagnosis of breast tumors based on US images. Combining traditional convolutional and residual networks, DSCNN prevents gradient disappearance and degradation. The prediction accuracy is increased by the parallel dual-sampling structure, which can effectively extract potential features from US images. Compared with other advanced deep learning methods and traditional handcrafted feature methods, DSCNN reached the best performance with an accuracy of 91.67% and an AUC of 0.939. The robustness of the proposed method was also verified on a public dataset. Moreover, DSCNN was compared with the evaluations of three radiologists using US BI-RADS lexicon categories for overall breast tumor assessment. The results demonstrate that the prediction sensitivity, specificity, and accuracy of the DSCNN were higher than those of the radiologist with 10 years of experience, suggesting that the DSCNN has the potential to help doctors make judgements in the clinic.
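The parallel dual-sampling idea (a plain convolutional path alongside a residual path, fused at the output) can be sketched as below; this is an illustrative block, not the authors' exact DSCNN architecture.

```python
# Sketch of a dual-sampling block: two parallel branches (plain conv and
# residual) whose outputs are fused (illustrative, not the exact DSCNN).
import torch
import torch.nn as nn

class DualSamplingBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.plain = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                   nn.BatchNorm2d(ch), nn.ReLU())
        self.residual = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                      nn.BatchNorm2d(ch))

    def forward(self, x):
        a = self.plain(x)                       # conventional sampling path
        b = torch.relu(self.residual(x) + x)    # residual path (skip connection)
        return a + b                            # parallel fusion

block = DualSamplingBlock(32)
print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```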
Affiliation(s)
- Jiang Xie: School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Xiangshuai Song: School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
- Wu Zhang: Shanghai Institute of Applied Mathematics and Mechanics, Shanghai University, Shanghai, China
- Qi Dong: Department of Ultrasound, Shanghai Jiao Tong University School of Medicine Affiliated Renji Hospital, Shanghai, China
- Yan Wang: Department of Ultrasound, Shanghai Jiao Tong University School of Medicine Affiliated Renji Hospital, Shanghai, China
- Fenghua Li: Department of Ultrasound, Shanghai Jiao Tong University School of Medicine Affiliated Renji Hospital, Shanghai, China
- Caifeng Wan: Department of Ultrasound, Shanghai Jiao Tong University School of Medicine Affiliated Renji Hospital, Shanghai, 200127, China
89
Masud M, Eldin Rashed AE, Hossain MS. Convolutional neural network-based models for diagnosis of breast cancer. Neural Comput Appl 2020; 34:11383-11394. [PMID: 33052172 PMCID: PMC7545025 DOI: 10.1007/s00521-020-05394-5] [Citation(s) in RCA: 43] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Accepted: 09/24/2020] [Indexed: 12/12/2022]
Abstract
Breast cancer is the most prevalent cancer in the world, affecting millions of women each year, and it causes the largest number of cancer deaths among women. During the last few years, researchers have proposed different convolutional neural network models to facilitate the diagnostic process for breast cancer. Convolutional neural networks are showing promising results for classifying cancers using image datasets. There is still a lack of standard models that can claim to be the best, because of the unavailability of large datasets for training and validation. Hence, researchers are now focusing on leveraging the transfer learning approach, using pre-trained models as feature extractors that were trained on millions of different images. With this motivation, this paper considers eight different fine-tuned pre-trained models to observe how they classify breast cancers when applied to ultrasound images. We also propose a shallow custom convolutional neural network that outperforms the pre-trained models with respect to different performance metrics. The proposed model shows 100% accuracy and achieves a 1.0 AUC score, whereas the best pre-trained model shows 92% accuracy and a 0.972 AUC score. To avoid bias, the model is trained using the fivefold cross-validation technique. Moreover, the model is faster to train than the pre-trained models and requires a small number of trainable parameters. The Grad-CAM heat map visualization technique also shows how well the proposed model extracts important features to classify breast cancers.
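Grad-CAM heat maps of the kind mentioned above can be extracted with forward and backward hooks; the toy model below is a stand-in, not the paper's custom network.

```python
# Sketch of Grad-CAM heat-map extraction for a small CNN classifier
# (the model is a toy stand-in, not the paper's shallow custom CNN).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

feats, grads = {}, {}
target_layer = model[3]                           # last conv layer
target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)
logits = model(x)
logits[0, logits.argmax()].backward()             # backprop the top class score

w = grads["a"].mean(dim=(2, 3), keepdim=True)     # channel importance weights
cam = F.relu((w * feats["a"]).sum(dim=1))         # weighted feature-map sum
cam = F.interpolate(cam.unsqueeze(0), size=x.shape[-2:],
                    mode="bilinear", align_corners=False)
print(cam.shape)                                  # (1, 1, 224, 224) heat map
```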
Affiliation(s)
- Mehedi Masud: College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Amr E Eldin Rashed: College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- M Shamim Hossain: Chair of Pervasive and Mobile Computing, King Saud University, Riyadh, 11543, Saudi Arabia; Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, 11543, Saudi Arabia
90
MRFF-YOLO: A Multi-Receptive Fields Fusion Network for Remote Sensing Target Detection. REMOTE SENSING 2020. [DOI: 10.3390/rs12193118] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
High-altitude remote sensing target detection suffers from low precision and a low detection rate. In order to enhance the performance of detecting remote sensing targets, a new YOLO (You Only Look Once)-V3-based algorithm is proposed. In our improved YOLO-V3, we introduce the concept of multi-receptive fields to enhance feature extraction, so the proposed model is termed Multi-Receptive Fields Fusion YOLO (MRFF-YOLO). In addition, to address the flaws of YOLO-V3 in detecting small targets, we increase the detection layers from three to four. Moreover, to avoid gradient fading, the structure of an improved DenseNet is chosen in the detection layers. We compared our approach (MRFF-YOLO) with YOLO-V3 and other state-of-the-art target detection algorithms on a Remote Sensing Object Detection (RSOD) dataset and a dataset of Object Detection in Aerial Images (UCS-AOD). With a series of improvements, the mAP (mean average precision) of MRFF-YOLO increased from 77.10% to 88.33% on the RSOD dataset and from 75.67% to 90.76% on the UCS-AOD dataset. The missed-detection rates are also greatly reduced, especially for small targets. The experimental results show that our approach achieves better performance than traditional YOLO-V3 and other state-of-the-art models for remote sensing target detection.
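One common way to realize multiple receptive fields is parallel convolutions with different dilation rates, concatenated and fused; the sketch below is an illustrative variant, not the exact MRFF-YOLO module.

```python
# Sketch of a multi-receptive-field fusion block: parallel dilated
# convolutions, concatenated and fused (dilation rates are assumptions).
import torch
import torch.nn as nn

class MRFBlock(nn.Module):
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, 3, padding=d, dilation=d)
            for d in (1, 2, 4)])                    # three receptive fields
        self.fuse = nn.Conv2d(3 * branch_ch, in_ch, 1)

    def forward(self, x):
        y = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(torch.relu(y))

block = MRFBlock(64, 32)
print(block(torch.randn(1, 64, 52, 52)).shape)      # torch.Size([1, 64, 52, 52])
```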
91
Village-Level Homestead and Building Floor Area Estimates Based on UAV Imagery and U-Net Algorithm. ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION 2020. [DOI: 10.3390/ijgi9060403] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
China’s rural population has declined markedly with the acceleration of urbanization and industrialization, but the area under rural homesteads has continued to expand. Proper rural land use and management require large-scale, efficient, and low-cost rural residential surveys; however, such surveys are time-consuming and difficult to accomplish. Unmanned aerial vehicle (UAV) technology coupled with a deep learning architecture and 3D modelling can provide a potential alternative to traditional surveys for gathering rural homestead information. In this study, a method to estimate the village-level homestead area, a 3D-based building height model (BHM), and the number of building floors based on UAV imagery and the U-net algorithm was developed, and the respective estimation accuracies were found to be 0.92, 0.99, and 0.89. This method is rapid and inexpensive compared to the traditional time-consuming and costly household surveys, and, thus, it is of great significance to the ongoing use and management of rural homestead information, especially with regards to the confirmation of homestead property rights in China. Further, the proposed combination of UAV imagery and U-net technology may have a broader application in rural household surveys, as it can provide more information for decision-makers to grasp the current state of the rural socio-economic environment.