1. Alhajlah M. A hybrid features fusion-based framework for classification of breast micronodules using ultrasonography. BMC Med Imaging 2024; 24:253. [PMID: 39304839] [DOI: 10.1186/s12880-024-01425-y]
Abstract
BACKGROUND Breast cancer is one of the leading causes of cancer death in women worldwide. According to estimates by the National Breast Cancer Foundation, over 42,000 women in the United States are expected to die from the disease in 2024. OBJECTIVE The prognosis of breast cancer depends on the early detection of breast micronodules and the ability to distinguish benign from malignant lesions. Ultrasonography is a crucial radiological imaging technique for diagnosing the illness because it allows for biopsy guidance and lesion characterization. Because ultrasonographic diagnosis relies on the practitioner's expertise, the examiner's experience and knowledge are vital. Computer-aided technologies can therefore contribute significantly, potentially reducing the workload of radiologists and complementing their expertise, especially in hospital settings with large patient volumes. METHOD This work describes the development of a hybrid CNN system for diagnosing benign and malignant breast cancer lesions. The framework is built on the InceptionV3 and MobileNetV2 models: features are extracted from each model and concatenated, yielding a larger feature set, and various classifiers are then applied for the classification task. RESULTS The model achieved its best results with the softmax classifier, reaching an accuracy of over 95%. CONCLUSION Computer-aided diagnosis greatly assists radiologists and reduces their workload, and this research can serve as a foundation for other researchers building clinical solutions.
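The paper does not publish code, but the fuse-then-classify idea it describes can be sketched in plain Python. The feature values and classifier weights below are invented for illustration; in the actual framework the two vectors would come from InceptionV3 and MobileNetV2 backbones, and the softmax head would be trained rather than hand-set.

```python
import math

def fuse_features(feats_a, feats_b):
    """Concatenate two backbone feature vectors into one fused vector."""
    return list(feats_a) + list(feats_b)

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy fused feature vector and a 2-class linear head (all values made up).
fused = fuse_features([0.2, 1.1, -0.4], [0.7, 0.3])
weights = [[0.5, -0.2, 0.1, 0.4, -0.3],   # benign row
           [-0.1, 0.3, 0.2, -0.5, 0.6]]   # malignant row
logits = [sum(w * x for w, x in zip(row, fused)) for row in weights]
probs = softmax(logits)                   # class probabilities, summing to 1
```

The concatenation step is what enlarges the feature set; everything downstream treats the fused vector as an ordinary input.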
Affiliation(s)
- Mousa Alhajlah
- College of Applied Computer Science, King Saud University, Riyadh, 11543, Saudi Arabia.
2. Shi S, An X, Li Y. Ultrasound Radiomics-Based Logistic Regression Model to Differentiate Between Benign and Malignant Breast Nodules. J Ultrasound Med 2023; 42:869-879. [PMID: 36149670] [DOI: 10.1002/jum.16078]
Abstract
OBJECTIVES To explore the potential value of ultrasound radiomics in differentiating between benign and malignant breast nodules by extracting the radiomic features of two-dimensional (2D) grayscale ultrasound images and establishing a logistic regression model. METHODS The clinical and ultrasound data of 1000 female patients (500 pathologically benign patients, 500 pathologically malignant patients) who underwent breast ultrasound examinations at our hospital were retrospectively analyzed. The cases were randomly divided into training and validation sets at a ratio of 7:3. Once the region of interest (ROI) of the lesion was manually contoured, Spearman's rank correlation, least absolute shrinkage and selection operator (LASSO) regression, and the Boruta algorithm were adopted to determine optimal features and establish a logistic regression classification model. The performance of the model was assessed using the area under the receiver operating characteristic curve (AUC), and calibration and decision curves (DCA). RESULTS Eight ultrasound radiomic features were selected to establish the model. The AUC values of the model were 0.979 and 0.977 in the training and validation sets, respectively (P = .0029), indicating good discriminative ability in both datasets. Additionally, the calibration and DCA suggested that the model's calibration efficiency and clinical application value were both superior. CONCLUSIONS The proposed logistic regression model based on 2D grayscale ultrasound images could facilitate differential diagnosis of benign and malignant breast nodules. The model, which was constructed using ultrasound radiomic features identified in this study, demonstrated good diagnostic performance and could be useful in helping clinicians formulate individualized treatment plans for patients.
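The radiomics pipeline above screens features before modelling. As a plain-Python illustration of one of those steps, the sketch below implements Spearman's rank correlation and uses it as a redundancy filter; the 0.9 threshold and the toy feature columns are invented, and the paper's LASSO and Boruta stages are not reproduced here.

```python
def ranks(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def redundancy_filter(features, threshold=0.9):
    """Greedily keep a feature only if |rho| with every kept feature is below threshold."""
    kept = []
    for name, col in features:
        if all(abs(spearman(col, kc)) < threshold for _, kc in kept):
            kept.append((name, col))
    return [name for name, _ in kept]
```

Because Spearman works on ranks, any monotone transformation of a feature leaves its correlation unchanged, which is why it suits skewed radiomic features.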
Affiliation(s)
- Shanshan Shi
- Ultrasound Department, The First Affiliated Hospital of Jinzhou Medical University, Jinzhou, China
- Xin An
- Ultrasound Department, The First Affiliated Hospital of Jinzhou Medical University, Jinzhou, China
- Yuhong Li
- Ultrasound Department, The First Affiliated Hospital of Jinzhou Medical University, Jinzhou, China
3. Kuo CFJ, Chen HY, Barman J, Ko KH, Hsu HH. Complete, Fully Automatic Detection and Classification of Benign and Malignant Breast Tumors Based on CT Images Using Artificial Intelligent and Image Processing. J Clin Med 2023; 12:1582. [PMID: 36836118] [PMCID: PMC9960342] [DOI: 10.3390/jcm12041582]
Abstract
Breast cancer is the most common type of cancer in women, and early detection is important to significantly reduce its mortality rate. This study introduces a detection and diagnosis system that automatically detects and classifies breast tumors in CT scan images. First, the contours of the chest wall are extracted from chest computed tomography images, and two-dimensional and three-dimensional image features, together with the active contours without edges and geodesic active contours methods, are used to detect, localize, and delineate the tumor. Then, the computer-assisted diagnostic system extracts features, quantifying and classifying benign and malignant breast tumors using a greedy algorithm and a support vector machine. The study used 174 breast tumors for training and testing and evaluated performance with 10-fold cross-validation. The accuracy, sensitivity, specificity, and positive and negative predictive values of the system were 99.43%, 98.82%, 100%, 100%, and 98.89%, respectively. This system supports the rapid extraction and classification of breast tumors as either benign or malignant, helping physicians to improve clinical diagnosis.
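The evaluation protocol here (10-fold cross-validation scored with accuracy, sensitivity, specificity, PPV and NPV) is standard, and its bookkeeping is easy to make concrete. The sketch below is a minimal plain-Python version on made-up labels; the fold assignment is a simple round-robin, not the stratified splitting a real study might use.

```python
def kfold_indices(n, k):
    """Deterministic round-robin k-fold split of n sample indices."""
    folds = [[] for _ in range(k)]
    for i in range(n):
        folds[i % k].append(i)
    return folds

def metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, PPV and NPV (1 = malignant, 0 = benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    sen = tp / (tp + fn)          # sensitivity / recall
    spe = tn / (tn + fp)          # specificity
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return acc, sen, spe, ppv, npv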
Affiliation(s)
- Chung-Feng Jeffrey Kuo
- Department of Materials Science and Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
- Hsuan-Yu Chen
- Department of Materials Science and Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
- Jagadish Barman
- Department of Materials Science and Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
- Kai-Hsiung Ko
- Department of Radiology, Tri-Service General Hospital, National Defense Medical Center, Taipei 114, Taiwan
- Hsian-He Hsu
- Department of Radiology, Tri-Service General Hospital, National Defense Medical Center, Taipei 114, Taiwan
4. Sarkar S, Mali K. Firefly-SVM predictive model for breast cancer subgroup classification with clinicopathological parameters. Digit Health 2023; 9:20552076231207203. [PMID: 37860702] [PMCID: PMC10583530] [DOI: 10.1177/20552076231207203]
Abstract
Background Breast cancer is a highly prevalent, destructive disease among women, characterised by varied tumour biology, molecular subgroups and diverse clinicopathological specifications. The potential of machine learning to transform complex medical data into meaningful knowledge has led to its application in breast cancer detection and prognostic evaluation. Objective Data-driven diagnostic models for assisting clinicians in diagnostic decision making have attracted increasing interest in breast cancer identification and analysis. This motivated us to develop a data-driven model for more accurate breast cancer subtype classification. Method In this article, we propose a firefly-support vector machine (firefly-SVM) breast cancer predictive model that uses clinicopathological and demographic data gathered from various tertiary care cancer hospitals and oncological centres to distinguish between patients with triple-negative breast cancer (TNBC) and non-triple-negative breast cancer (non-TNBC). Results The firefly-SVM predictive model was compared with the traditional grid search-support vector machine (Grid-SVM), particle swarm optimisation-support vector machine (PSO-SVM) and genetic algorithm-support vector machine (GA-SVM) hybrid models through hyperparameter tuning. The findings show that the recommended firefly-SVM classification model outperformed the other models in prediction accuracy (93.4%, 86.6%, 69.6%) for automated SVM parameter selection. The effectiveness of the prediction model was also evaluated using well-known metrics such as the F1-score, mean square error, area under the ROC curve, logarithmic loss and precision-recall curve. Conclusion The firefly-SVM predictive model may serve as an alternative tool for breast cancer subgroup classification, helping clinicians manage patients with appropriate treatment and improving diagnostic outcomes.
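The SVM itself is not reproduced here, but the firefly half of the hybrid can be sketched independently: dimmer fireflies are attracted to brighter (lower-loss) ones, with an attractiveness that decays with distance and a small random walk. All parameter values below are illustrative, and a toy quadratic stands in for the SVM cross-validation error over two hyperparameters such as (C, gamma).

```python
import math
import random

def firefly_minimise(obj, bounds, n_fireflies=15, n_iter=80,
                     beta0=1.0, gamma=0.01, alpha=0.2, seed=0):
    """Minimal firefly algorithm: dimmer fireflies move toward brighter (lower-loss) ones."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_fireflies)]
    light = [obj(x) for x in pop]
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] < light[i]:                   # j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
                    pop[i] = [min(max(a + beta * (b - a) + alpha * (rng.random() - 0.5), lo), hi)
                              for a, b, (lo, hi) in zip(pop[i], pop[j], bounds)]
                    light[i] = obj(pop[i])
        alpha *= 0.97                                     # cool the random walk
    best = min(range(n_fireflies), key=lambda i: light[i])
    return pop[best], light[best]

# Stand-in for SVM cross-validation error as a function of two hyperparameters.
toy_loss = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
best, loss = firefly_minimise(toy_loss, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

In the paper the objective would be the cross-validated SVM error, so each brightness evaluation is one model-fitting run.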
Affiliation(s)
- Suvobrata Sarkar
- Department of Computer Science and Engineering, Dr. B.C. Roy Engineering College, Durgapur, West Bengal, India
- Kalyani Mali
- Department of Computer Science and Engineering, University of Kalyani, Kalyani, West Bengal, India
5. Ahila A, Poongodi M, Bourouis S, Band SS, Mosavi A, Agrawal S, Hamdi M. Meta-Heuristic Algorithm-Tuned Neural Network for Breast Cancer Diagnosis Using Ultrasound Images. Front Oncol 2022; 12:834028. [PMID: 35769710] [PMCID: PMC9234296] [DOI: 10.3389/fonc.2022.834028]
Abstract
Breast cancer is the most menacing cancer among all types of cancer in women around the globe. Early diagnosis widens the treatment options, which in turn decreases the death rate and increases the chance of survival. However, it is a challenging task to differentiate abnormal breast tissues from normal tissues because of their structure and unclear boundaries. Therefore, early and accurate diagnosis and classification of breast lesions into malignant or benign is an active domain of research. Over the past decade, numerous artificial neural network (ANN)-based techniques have been adopted to diagnose and classify breast cancer, owing to their ability to learn key features from complex data via a training process. However, these schemes have limitations such as slow convergence and long training times. To address these issues, this paper employs a meta-heuristic algorithm for tuning the parameters of the neural network. The main novelty of this work is a computer-aided diagnosis scheme for detecting abnormalities in breast ultrasound images that integrates a wavelet neural network (WNN) and the grey wolf optimization (GWO) algorithm. Breast ultrasound (US) images are preprocessed with a sigmoid filter followed by interference-based despeckling and then anisotropic diffusion. An automatic segmentation algorithm extracts the region of interest, and morphological and texture features are subsequently computed. Finally, the GWO-tuned WNN performs the classification task. The classification performance of the proposed scheme was validated on 346 ultrasound images, and its efficiency was evaluated by computing the confusion matrix and the receiver operating characteristic (ROC) curve. Numerical analysis revealed that the proposed method yields higher classification accuracy than prevailing methods, demonstrating its potential for effective breast tumor detection and classification. The proposed GWO-WNN method (98%) gives better accuracy than methods such as SOM-SVM (87.5%), LOFA-SVM (93.62%), MBA-RF (96.85%), and BAS-BPNN (96.3%).
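As with other metaheuristics, the grey wolf optimization step can be sketched separately from the wavelet neural network it tunes. The sketch below follows the standard GWO update, in which every wolf moves toward positions guided by the three current best wolves (alpha, beta, delta); the toy quadratic is a stand-in for the WNN training loss, and all parameter values are illustrative.

```python
import random

def gwo_minimise(obj, bounds, n_wolves=12, n_iter=80, seed=1):
    """Minimal grey wolf optimiser: pack guided by the alpha, beta and delta wolves."""
    rng = random.Random(seed)
    dim = len(bounds)
    wolves = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_wolves)]
    best = min(wolves, key=obj)
    for t in range(n_iter):
        wolves.sort(key=obj)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1.0 - t / n_iter)          # encircling coefficient decays from 2 to 0
        new_wolves = []
        for x in wolves:
            pos = []
            for d in range(dim):
                guided = []
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2.0 * a * r1 - a
                    C = 2.0 * r2
                    guided.append(leader[d] - A * abs(C * leader[d] - x[d]))
                lo, hi = bounds[d]
                pos.append(min(max(sum(guided) / 3.0, lo), hi))
            new_wolves.append(pos)
        wolves = new_wolves
        cand = min(wolves, key=obj)           # keep the best position ever seen
        if obj(cand) < obj(best):
            best = cand
    return best, obj(best)

# Stand-in for the WNN training/validation loss over two tunable parameters.
toy_loss = lambda x: x[0] ** 2 + (x[1] - 3.0) ** 2
best, loss = gwo_minimise(toy_loss, bounds=[(-10.0, 10.0), (-10.0, 10.0)])
```

The decay of `a` shifts the pack from exploration (|A| can exceed 1) to exploitation around the leaders.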
Affiliation(s)
- Ahila A
- Department of Electronics and Communication Engineering, Sethu Institute of Technology, Kariapatti, India
- Poongodi M
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Sami Bourouis
- Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Shahab S. Band
- Future Technology Research Center, College of Future, National Yunlin University of Science and Technology, Douliou, Taiwan
- Amir Mosavi
- John von Neumann Faculty of Informatics, Obuda University, Budapest, Hungary
- Mounir Hamdi
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
6. Yao R, Zhang Y, Wu K, Li Z, He M, Fengyue B. Quantitative assessment for characterization of breast lesion tissues using adaptively decomposed ultrasound RF images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103559]
7. A gated convolutional neural network for classification of breast lesions in ultrasound images. Soft Comput 2022. [DOI: 10.1007/s00500-022-07024-9]
9. Zhang G, Ren Y, Xi X, Li D, Guo J, Li X, Tian C, Xu Z. LRSCnet: Local Reference Semantic Code learning for breast tumor classification in ultrasound images. Biomed Eng Online 2021; 20:127. [PMID: 34920726] [PMCID: PMC8684265] [DOI: 10.1186/s12938-021-00968-3]
Abstract
Purpose This study proposes a novel Local Reference Semantic Code (LRSC) network for automatic breast ultrasound image classification with few labeled data. Methods In the proposed network, a local structure extractor is first developed to learn the local reference, which describes common local characteristics of tumors. A two-stage hierarchical encoder then encodes the local structures of the lesion into a high-level semantic code. Based on the learned semantic code, a self-matching layer is proposed for the final classification. Results In the experiments, the proposed method outperformed traditional classification methods, with AUC (area under the curve), accuracy, sensitivity, specificity, positive predictive value and negative predictive value of 0.9540, 0.9776, 0.9629, 0.93, 0.9774 and 0.9090, respectively. The proposed method also improved matching speed. Conclusions The LRSC network is proposed for breast ultrasound image classification with few labeled data. A two-stage hierarchical encoder is introduced to learn a high-level semantic code that carries more effective high-level classification information and is simpler, leading to better generalization ability.
Affiliation(s)
- Guang Zhang
- School of Software, Shandong University, Jinan, China; Health Management, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, China
- Yanwei Ren
- School of Software, Shandong University, Jinan, China
- Xiaoming Xi
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
- Delin Li
- Health Management, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, China
- Jie Guo
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
- Xiaofeng Li
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
- Cuihuan Tian
- School of Medicine, Shandong University, Jinan, China; Health Management Center, Qilu Hospital of Shandong University, Jinan, China
- Zunyi Xu
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
10. Bianconi F, Fernández A, Smeraldi F, Pascoletti G. Colour and Texture Descriptors for Visual Recognition: A Historical Overview. J Imaging 2021; 7:245. [PMID: 34821876] [PMCID: PMC8622414] [DOI: 10.3390/jimaging7110245]
Abstract
Colour and texture are two perceptual stimuli that determine, to a great extent, the appearance of objects, materials and scenes. The ability to process texture and colour is a fundamental skill in humans as well as in animals; therefore, reproducing such capacity in artificial (‘intelligent’) systems has attracted considerable research attention since the early 70s. Whereas the main approach to the problem was essentially theory-driven (‘hand-crafted’) up to not long ago, in recent years the focus has moved towards data-driven solutions (deep learning). In this overview we retrace the key ideas and methods that have accompanied the evolution of colour and texture analysis over the last five decades, from the ‘early years’ to convolutional networks. Specifically, we review geometric, differential, statistical and rank-based approaches. Advantages and disadvantages of traditional methods vs. deep learning are also critically discussed, including a perspective on which traditional methods have already been subsumed by deep learning or would be feasible to integrate in a data-driven approach.
Affiliation(s)
- Francesco Bianconi
- Department of Engineering, Università degli Studi di Perugia, Via Goffredo Duranti 93, 06135 Perugia, Italy
- Antonio Fernández
- School of Industrial Engineering, Universidade de Vigo, Rúa Maxwell s/n, 36310 Vigo, Spain
- Fabrizio Smeraldi
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London E1 4NS, UK
- Giulia Pascoletti
- Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
11. Guo R, Passi K, Jain CK. Tuberculosis Diagnostics and Localization in Chest X-Rays via Deep Learning Models. Front Artif Intell 2021; 3:583427. [PMID: 33733221] [PMCID: PMC7861240] [DOI: 10.3389/frai.2020.583427]
Abstract
For decades, tuberculosis (TB), a potentially serious infectious lung disease, has continued to be a leading cause of death worldwide. Proven to be conveniently efficient and cost-effective, chest X-ray (CXR) has become the preliminary medical imaging tool for detecting TB. Arguably, the quality of TB diagnosis will improve vastly with automated detection and localization of suspected TB-manifesting areas in CXRs. The current line of research aims to develop an efficient computer-aided detection system that will support doctors and radiologists in making well-informed TB diagnoses from patients' CXRs. Here, an integrated process is proposed to improve TB diagnostics via convolutional neural networks (CNNs) and localization in CXRs via deep-learning models. Three key steps in the TB diagnostics process include (a) modifying CNN model structures, (b) model fine-tuning via the artificial bee colony algorithm, and (c) the implementation of a linear average-based ensemble method. Comparisons of the overall performance are made across all three steps among the experimented deep CNN models on two publicly available CXR datasets, namely, the Shenzhen Hospital CXR dataset and the National Institutes of Health CXR dataset. Validated performance includes detecting CXR abnormalities and differentiating among seven TB-related manifestations (consolidation, effusion, fibrosis, infiltration, mass, nodule, and pleural thickening). Importantly, class activation mapping is employed to provide a visual interpretation of the diagnostic result by localizing the detected lung abnormality on the CXR. Compared to the state of the art, the resulting approach shows outstanding performance in both lung abnormality detection and the diagnosis and localization of specific TB-related manifestations in CXRs.
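Of the three steps, the linear average-based ensemble is the simplest to illustrate: per-class probabilities from several CNNs are averaged and the ensemble predicts the argmax class. The probability values below are made up; in the paper each row would come from one of the fine-tuned deep CNN models.

```python
def linear_average_ensemble(prob_lists):
    """Average per-class probabilities across models; return the averages and the argmax class."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(model[c] for model in prob_lists) / n_models for c in range(n_classes)]
    return avg, max(range(n_classes), key=lambda c: avg[c])

# Three hypothetical CNNs scoring one CXR for (normal, abnormal).
avg, label = linear_average_ensemble([[0.9, 0.1], [0.4, 0.6], [0.8, 0.2]])
```

Averaging probabilities rather than hard votes lets a confident model outweigh two uncertain ones.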
Affiliation(s)
- Ruihua Guo
- Department of Mathematics and Computer Science, Laurentian University, Greater Sudbury, ON, Canada
- Kalpdrum Passi
- Department of Mathematics and Computer Science, Laurentian University, Greater Sudbury, ON, Canada
- Chakresh Kumar Jain
- Department of Biotechnology, Jaypee Institute of Information Technology, Noida, India
12. Shia WC, Lin LS, Chen DR. Classification of malignant tumours in breast ultrasound using unsupervised machine learning approaches. Sci Rep 2021; 11:1418. [PMID: 33446841] [PMCID: PMC7809485] [DOI: 10.1038/s41598-021-81008-x]
Abstract
Traditional computer-aided diagnosis (CAD) processes include feature extraction, selection, and classification. Effective feature extraction in CAD is important for improving classification performance. We introduce a machine-learning method and an analysis procedure for benign and malignant breast tumour classification in ultrasound (US) images that needs no a priori tumour region-selection processing, thereby decreasing clinical diagnosis effort while maintaining high classification performance. Our dataset comprised 677 US images (benign: 312, malignant: 365). From the two-dimensional US images, the histogram of oriented gradients (HOG) descriptor pyramid was extracted and used to obtain feature vectors. The correlation-based feature selection method was used to evaluate and select significant feature sets for further classification. Sequential minimal optimisation, combined with local weight learning, was utilised for classification and performance enhancement. Classification of the image dataset showed an 81.64% sensitivity and 87.76% specificity for malignant images (area under the curve = 0.847). The positive and negative predictive values were 84.1% and 85.8%, respectively. Here, a new workflow utilising machine learning to recognise malignant US images was proposed. Comparison of physician diagnoses with the automatic classifications made using machine learning yielded similar outcomes, indicating the potential applicability of machine learning in clinical diagnosis.
Affiliation(s)
- Wei-Chung Shia
- Molecular Medicine Laboratory, Department of Research, Changhua Christian Hospital, Changhua, Taiwan
- Li-Sheng Lin
- Department of Breast Surgery, The Affiliated Hospital (Group) of Putian University, Putian, Fujian, China
- Dar-Ren Chen
- Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua, Taiwan
13. Shia WC, Chen DR. Classification of malignant tumors in breast ultrasound using a pretrained deep residual network model and support vector machine. Comput Med Imaging Graph 2020; 87:101829. [PMID: 33302247] [DOI: 10.1016/j.compmedimag.2020.101829]
Abstract
In this study, a transfer learning method was utilized to recognize and classify benign and malignant breast tumors in two-dimensional breast ultrasound (US) images, to decrease the effort expended by physicians and improve the quality of clinical diagnosis. A pretrained deep residual network was utilized for image feature extraction from its convolutional layers, whereas a linear support vector machine (SVM) with a sequential minimal optimization solver was used to classify the extracted features. We used an image dataset of 2099 unlabeled two-dimensional breast US images, collected from 543 patients (benign: 302, malignant: 241). The classification performance yielded a sensitivity of 94.34% and a specificity of 93.22% for malignant images (area under the curve = 0.938). The positive and negative predictive values were 92.6% and 94.8%, respectively. A comparison between the diagnoses made by physicians and the automated classification by the trained classifier showed that the latter had significantly better outcomes. This indicates the potential applicability of the proposed approach, which incorporates both a pretrained deep learning network and a well-trained classifier, to improve the quality and efficacy of clinical diagnosis.
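The deep feature extractor cannot be reproduced here, but the downstream linear SVM can be sketched. The paper uses a sequential minimal optimization solver; the sketch below substitutes a simpler Pegasos-style subgradient method on the regularised hinge loss, trained on a toy 2-D dataset that stands in for the extracted residual-network features. All hyperparameters are illustrative.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Pegasos-style subgradient descent on the regularised hinge loss; labels in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        idx = list(range(len(X)))
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)                      # decaying learning rate
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:                             # hinge active: step toward the sample
                w = [wj - eta * (lam * wj - y[i] * xj) for wj, xj in zip(w, X[i])]
                b += eta * y[i]
            else:                                      # only the regulariser contributes
                w = [wj * (1.0 - eta * lam) for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy, linearly separable stand-in for the extracted deep features.
X = [[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
```

In the real pipeline `X` would hold the high-dimensional convolutional-layer activations, one row per US image.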
Affiliation(s)
- Wei-Chung Shia
- Molecular Medicine Laboratory, Department of Research, Changhua Christian Hospital, 8F., No. 235, XuGuang Road, Changhua, Taiwan
- Dar-Ren Chen
- Comprehensive Breast Cancer Center, Changhua Christian Hospital, No. 135, NanXiao Street, Changhua, Taiwan
14. Identification of Breast Malignancy by Marker-Controlled Watershed Transformation and Hybrid Feature Set for Healthcare. Appl Sci (Basel) 2020. [DOI: 10.3390/app10061900]
Abstract
Breast cancer is a highly prevalent disease in females that may lead to mortality in severe cases. Mortality can be reduced if breast cancer is diagnosed at an early stage. The focus of this study is to detect breast malignancy through computer-aided diagnosis (CADx). In the first phase of this work, the Hilbert transform is employed to reconstruct B-mode images from the raw data, followed by marker-controlled watershed transformation to segment the lesion. Methods based only on texture analysis are quite sensitive to speckle noise and other artifacts, so a hybrid feature set is developed from shape-based and texture features extracted from the breast lesion. A decision tree, k-nearest neighbor (KNN), and an ensemble decision tree model via random under-sampling with boosting (RUSBoost) are utilized to separate cancerous lesions from benign ones. The proposed technique is tested on OASBUD (Open Access Series of Breast Ultrasonic Data) and breast ultrasound (BUS) images collected at Baheya Hospital, Egypt (BHE). The OASBUD dataset contains raw ultrasound data obtained from 100 patients, comprising 52 malignant and 48 benign lesions. The dataset collected at BHE contains 210 malignant and 437 benign images. Using the ensemble method, the proposed system achieved a promising accuracy of 97% with a confidence interval (CI) of 91.48% to 99.38% for OASBUD and an accuracy of 96.6% with a CI of 94.90% to 97.86% for the BHE dataset.
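The accuracies above are reported with confidence intervals. The authors do not state which interval they used; one common choice for a binomial proportion such as accuracy is the Wilson score interval, sketched below. For 97 correct out of a nominal 100 cases it gives roughly 91.5% to 99%, in the same ballpark as the interval quoted for OASBUD.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z = 1.96 for ~95% coverage)."""
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

lo, hi = wilson_ci(97, 100)   # e.g., accuracy 97% on 100 cases
```

Unlike the normal-approximation interval, the Wilson interval stays inside [0, 1] even for proportions near 100%, which matters at the accuracy levels reported here.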
15. A Novel Computer-Aided-Diagnosis System for Breast Ultrasound Images Based on BI-RADS Categories. Appl Sci (Basel) 2020. [DOI: 10.3390/app10051830]
Abstract
Breast ultrasound is not only one of the major modalities for breast tissue imaging but also an important method in breast tumor screening: it is non-radiative, non-invasive, harmless, simple, and low cost. The American College of Radiology (ACR) proposed the Breast Imaging Reporting and Data System (BI-RADS) to grade breast lesion severity far more finely than traditional diagnosis, according to five criteria describing mass composition: shape, orientation, margin, echo pattern, and posterior features. However, problems such as intensity differences and differing resolutions in image acquisition among ultrasound imaging modalities mean that clinicians cannot always accurately identify the BI-RADS category or disease severity. To this end, this article used three different brands of ultrasound scanner to acquire the breast images for our experimental samples. The breast lesion was detected on the original image using preprocessing, image segmentation, and related steps. The severity of the breast tumor was then evaluated from the features of the lesion via our proposed classifiers according to the BI-RADS standard, rather than the traditional binary benign/malignant assessment. In this work, we focused on BI-RADS categories 2-5 after the segmentation stage, in line with clinical practice. Moreover, several features related to lesion severity in the selected BI-RADS categories were fed into three machine learning classifiers, a Support Vector Machine (SVM), Random Forest (RF), and Convolutional Neural Network (CNN), combined with feature selection, to develop a multi-class assessment of breast tumor severity based on BI-RADS. Experimental results show that the proposed CAD system based on BI-RADS achieved identification accuracies of 80.00%, 77.78%, and 85.42% with the SVM, RF, and CNN, respectively. We also validated the performance and adaptability of the classification using different ultrasound scanners. The results further indicate that the CNN achieved F-scores above 75% across the BI-RADS categories tested, indicating good adaptability.
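The F-score evaluation across BI-RADS categories can be made concrete with a small macro-averaged F1 routine. The labels below are invented, and per-category (macro) averaging is an assumption on my part, since the article does not specify how the per-class F-scores were aggregated.

```python
def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: compute F1 one-vs-rest per class, then average."""
    scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)
```

Macro averaging weights every BI-RADS category equally, which is useful when the rarer categories (such as 5) must not be swamped by the common ones.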
16. Gómez-Flores W, Hernández-López J. Assessment of the invariance and discriminant power of morphological features under geometric transformations for breast tumor classification. Comput Methods Programs Biomed 2020; 185:105173. [PMID: 31710986] [DOI: 10.1016/j.cmpb.2019.105173]
Abstract
BACKGROUND AND OBJECTIVES Computer-aided diagnosis (CAD) systems are intended to assist specialists in the interpretation of images, aiming to support clinical conduct. In breast tumor classification, CAD systems involve a feature extraction stage in which morphological features are used to describe the tumor shape. Such features are expected to satisfy at least two conditions: (1) be discriminant, to distinguish between benign and malignant tumors, and (2) be invariant to geometric transformations. Herein, 39 morphological features were evaluated in terms of invariance and discriminant power for breast tumor classification. METHODS Morphological features were divided into region-based features, describing the irregularity of the tumor shape, and boundary-based features, measuring the anfractuosity of the tumor margin. Two datasets were considered in the experiments: 2054 breast ultrasound images and 892 mammograms. From both datasets, synthetic data augmentation was performed to obtain distinct combinations of rotation and scaling of breast tumors, from which morphological features were calculated. Linear discriminant analysis was used to classify breast tumors into benign and malignant classes. The area under the ROC curve (AUC) quantified the discriminant power of every morphological feature, whereas the relative difference (RD) between AUC values measured the invariance to geometric transformations. For adequate performance, AUC and RD should tend toward unity and zero, respectively. RESULTS For both datasets, convexity was the most discriminant feature, reaching AUC > 0.81 with RD < 1×10⁻², while the most invariant feature was roundness, which attained RD < 1×10⁻³ with AUC < 0.72. Additionally, for each dataset, the most discriminant and invariant features were combined to perform tumor classification.
For mammography, this achieved an accuracy (ACC) of 0.76, sensitivity (SEN) of 0.76, and specificity (SPE) of 0.84, whereas for breast ultrasound the results were ACC = 0.88, SEN = 0.81, and SPE = 0.91. CONCLUSIONS In general, region-based features are more discriminant and invariant than boundary-based features. Moreover, an invariant feature is not necessarily a discriminant feature; hence, a balance between invariance and discriminant power should be sought for breast tumor classification.
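The two figures of merit in this entry, the AUC quantifying discriminant power and the relative difference (RD) between AUC values quantifying invariance, can be sketched with a rank-based AUC estimate (the Mann-Whitney statistic). The feature scores below are synthetic, and RD is computed here as |AUC_orig − AUC_transformed| / AUC_orig, one plausible reading of the abstract's definition:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive outscores a random negative
    (ties count one half)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def relative_difference(auc_original, auc_transformed):
    """RD between AUCs; tends toward zero when a feature is invariant
    to the geometric transformation."""
    return abs(auc_original - auc_transformed) / auc_original

# Synthetic feature values for malignant (positive) and benign (negative)
# tumors, before and after a simulated rotation (all numbers made up)
auc_orig = auc([0.9, 0.8, 0.55], [0.4, 0.6, 0.3])
auc_rot = auc([0.85, 0.5, 0.55], [0.4, 0.6, 0.3])
print(auc_orig, auc_rot, relative_difference(auc_orig, auc_rot))
```

A discriminant and invariant feature keeps AUC near unity while RD stays near zero across rotations and scalings, which is exactly the trade-off the study measures.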
Affiliation(s)
- Wilfrido Gómez-Flores
- Center for Research and Advanced Studies of the National Polytechnic Institute, Ciudad Victoria, Tamaulipas ZIP 87138, Mexico
- Juanita Hernández-López
- Center for Research and Advanced Studies of the National Polytechnic Institute, Ciudad Victoria, Tamaulipas ZIP 87138, Mexico
|
17
|
Qian X, Zhang B, Liu S, Wang Y, Chen X, Liu J, Yang Y, Chen X, Wei Y, Xiao Q, Ma J, Shung KK, Zhou Q, Liu L, Chen Z. A combined ultrasonic B-mode and color Doppler system for the classification of breast masses using neural network. Eur Radiol 2020; 30:3023-3033. [PMID: 32006174 DOI: 10.1007/s00330-019-06610-0] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2019] [Revised: 11/19/2019] [Accepted: 12/06/2019] [Indexed: 12/11/2022]
Abstract
OBJECTIVES To develop a dual-modal neural network model to characterize ultrasound (US) images of breast masses. MATERIALS AND METHODS A combined US B-mode and color Doppler neural network model was developed to classify US images of the breast. Three datasets of breast masses, originally detected and interpreted by 20 experienced radiologists according to the Breast Imaging-Reporting and Data System (BI-RADS) lexicon, were used: (1) a training set of 103,212 masses from 45,433 + 12,519 patients; (2) a held-out validation set of 2748 masses from 1197 + 395 patients; and (3) a test set of 605 masses from 337 + 78 patients. The neural network was first trained on the training set. The trained model was then tested on the held-out validation set to evaluate agreement on BI-RADS category between the model and the radiologists. In addition, the model and a reader study of 10 radiologists were applied to the test set with biopsy-proven results. To evaluate the performance of the model in benign or malignant classification, receiver operating characteristic curves, sensitivities, and specificities were compared. RESULTS The trained dual-modal model showed favorable agreement with the assessment performed by the radiologists (κ = 0.73; 95% confidence interval, 0.71-0.75) in classifying breast masses into four BI-RADS categories in the validation set. For the binary categorization of benign or malignant breast masses in the test set, the dual-modal model achieved an area under the ROC curve (AUC) of 0.982, while the readers scored an AUC of 0.948 in terms of the ROC convex hull. CONCLUSION The dual-modal model can be used to assess breast masses at a level comparable to that of an experienced radiologist. KEY POINTS • A neural network model based on ultrasonic imaging can classify breast masses into different Breast Imaging-Reporting and Data System categories according to the probability of malignancy.
• A combined ultrasonic B-mode and color Doppler neural network model achieved a high level of agreement with the readings of an experienced radiologist and has the potential to automate the routine characterization of breast masses.
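The agreement statistic reported above (κ) is Cohen's kappa, chance-corrected agreement between two raters. A minimal pure-Python version, applied to made-up BI-RADS assignments rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_observed - p_expected) / (1 - p_expected),
    where p_expected is the agreement expected by chance from each
    rater's marginal category frequencies."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_exp = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Toy BI-RADS categories assigned by a model and by radiologists (made up)
model = [3, 3, 4, 4, 5, 2, 3, 4]
readers = [3, 3, 4, 5, 5, 2, 3, 3]
print(round(cohens_kappa(model, readers), 3))
```

Kappa of 1 means perfect agreement and 0 means chance-level agreement, which is why it is preferred over raw percent agreement for multi-category tasks like BI-RADS.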
Affiliation(s)
- Xuejun Qian
- Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA; Department of Biomedical Engineering and NIH Resource Center for Medical Ultrasonic Transducer Technology, University of Southern California, Los Angeles, CA, 90089, USA
- Bo Zhang
- Ultrasound Imaging Department, Xiangya Hospital of Central South University, Changsha, 410083, Hunan, China
- Shaoqiang Liu
- School of Information Science and Engineering, Central South University, Changsha, 410083, Hunan, China
- Yueai Wang
- Ultrasound Imaging Department, First Affiliated Hospital of Hunan University of Traditional Chinese Medicine, Changsha, 410007, Hunan, China
- Xiaoqiong Chen
- Ultrasound Imaging Department, First Affiliated Hospital of Hunan University of Traditional Chinese Medicine, Changsha, 410007, Hunan, China
- Jingyuan Liu
- Blood Testing Center, First Affiliated Hospital of Hunan University of Traditional Chinese Medicine, Changsha, 410007, Hunan, China
- Yuzheng Yang
- The Middle School Attached to Hunan Normal University, Changsha, 410006, Hunan, China
- Xiang Chen
- Xiangya Hospital of Central South University, Aluminium Science & Technology Building, Changsha, 410083, Hunan, China
- Yi Wei
- Arvato Systems Co, Ltd, Xuhui, Shanghai, 20072, China
- Qisen Xiao
- School of Telecommunications Engineering, Xidian University, Xi'an, 710126, Shaanxi, China
- Jie Ma
- Department of Materials Science, University of Southern California, Los Angeles, CA, 90089, USA
- K Kirk Shung
- Department of Biomedical Engineering and NIH Resource Center for Medical Ultrasonic Transducer Technology, University of Southern California, Los Angeles, CA, 90089, USA
- Qifa Zhou
- Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA; Department of Biomedical Engineering and NIH Resource Center for Medical Ultrasonic Transducer Technology, University of Southern California, Los Angeles, CA, 90089, USA
- Lifang Liu
- Department of Breast Surgery, First Affiliated Hospital of Hunan University of Traditional Chinese Medicine, Changsha, 410007, Hunan, China
- Zeyu Chen
- Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA; Xiangya Hospital of Central South University, Aluminium Science & Technology Building, Changsha, 410083, Hunan, China
|
18
|
Lu Y, Shi XQ, Zhao X, Song D, Li J. Value of Computer Software for Assisting Sonographers in the Diagnosis of Thyroid Imaging Reporting and Data System Grade 3 and 4 Thyroid Space-Occupying Lesions. JOURNAL OF ULTRASOUND IN MEDICINE : OFFICIAL JOURNAL OF THE AMERICAN INSTITUTE OF ULTRASOUND IN MEDICINE 2019; 38:3291-3300. [PMID: 31237716 DOI: 10.1002/jum.15065] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Revised: 05/10/2019] [Accepted: 05/12/2019] [Indexed: 06/09/2023]
Abstract
OBJECTIVES To analyze the ability of thyroid ultrasound computer-aided diagnosis (CAD) detection software (AmCAD-UT; AmCAD BioMed Corporation, Taipei, Taiwan) to assist sonographers in diagnosing Thyroid Imaging Reporting and Data System grade 3 and 4 space-occupying lesions, and to provide evidence for ultrasound doctors (UDs) to use the diagnostic recommendations of the AmCAD system to inform clinical decisions. METHODS In group 1, a retrospective study was performed on 234 cases of thyroid lesions confirmed by surgical pathology. The sensitivities, specificities, and accuracies of the diagnoses determined by the same UD independently of the software (UD), by the UD after consulting the CAD software (UD + CAD), and by the software alone (CAD) were compared. In group 2, a prospective study was performed on 220 individuals with thyroid space-occupying lesions who were recommended by physicians at our hospital to undergo needle biopsy to confirm the diagnosis. Ultrasound images were imported into AmCAD, and recommendations for needle biopsy or periodic follow-up were obtained. The consistency and coincidence rates of AmCAD's diagnostic recommendations with the pathologic results of needle biopsy were then computed. RESULTS In group 1, the CAD and UD + CAD diagnoses achieved significantly higher sensitivities and accuracies than independent diagnosis by the UD (P < .05). In group 2, the software showed an overall intraclass correlation (κ = 0.786) and a diagnosis coincidence rate of 93.6% with needle biopsy results. CONCLUSIONS AmCAD-UT Detection improved the ability of UDs to diagnose Thyroid Imaging Reporting and Data System grade 3 and 4 space-occupying lesions. The diagnostic recommendations of AmCAD are relatively consistent with needle biopsy results and can reduce the rate of unnecessary diagnostic needle biopsies.
Affiliation(s)
- Yuanyuan Lu
- Department of Ultrasound, Sixth Medical Center of PLA General Hospital, Beijing, China
- Xian Quan Shi
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Xiaohui Zhao
- Department of Ultrasound, Second Medical Center of PLA General Hospital, Beijing, China
- Danfei Song
- Department of Ultrasound, Second Medical Center of PLA General Hospital, Beijing, China
- Junlai Li
- Department of Ultrasound, Second Medical Center of PLA General Hospital, Beijing, China
|
19
|
Tao C, Chen K, Han L, Peng Y, Li C, Hua Z, Lin J. New one-step model of breast tumor locating based on deep learning. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2019; 27:839-856. [PMID: 31306148 DOI: 10.3233/xst-190548] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Affiliation(s)
- Chao Tao
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
- Ke Chen
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
- Lin Han
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
- Yulan Peng
- Department of Ultrasound, West China Hospital of Sichuan University, Chengdu, China
- Cheng Li
- China-Japan Friendship Hospital, Beijing, China
- Zhan Hua
- China-Japan Friendship Hospital, Beijing, China
- Jiangli Lin
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
|
20
|
Love SM, Berg WA, Podilchuk C, López Aldrete AL, Gaxiola Mascareño AP, Pathicherikollamparambil K, Sankarasubramanian A, Eshraghi L, Mammone R. Palpable Breast Lump Triage by Minimally Trained Operators in Mexico Using Computer-Assisted Diagnosis and Low-Cost Ultrasound. J Glob Oncol 2019; 4:1-9. [PMID: 30156946 PMCID: PMC6223536 DOI: 10.1200/jgo.17.00222] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
Abstract
Purpose In low- to middle-income countries (LMICs), most breast cancers present as palpable lumps; however, most palpable lumps are benign. We have developed artificial intelligence–based computer-assisted diagnosis (CADx) for an existing low-cost portable ultrasound system to triage which lumps need further evaluation and which are clearly benign. This pilot study was conducted to demonstrate that this approach can be successfully used by minimally trained health care workers in an LMIC country. Patients and Methods We recruited and trained three nonradiologist health care workers to participate in an institutional review board–approved, Health Insurance Portability and Accountability Act–compliant pilot study in Jalisco, Mexico, to determine whether they could use portable ultrasound (GE Vscan Dual Probe) to acquire images of palpable breast lumps of adequate quality for accurate computer analysis. Images from 32 women with 32 breast masses were then analyzed with a triage-CADx system, generating an output of benign or suspicious (biopsy recommended). Triage-CADx outputs were compared with radiologist readings. Results The nonradiologists were able to acquire adequate images. Triage by the CADx software was as accurate as assessment by specialist radiologists, with two (100%) of two cancers considered suspicious and 30 (100%) of 30 benign lesions classified as benign. Conclusion A portable ultrasound system with CADx software can be successfully used by first-level health care workers to triage palpable breast lumps. These results open up the possibility of implementing practical, cost-effective triage of palpable breast lumps, ensuring that scarce resources can be dedicated to suspicious lesions requiring further workup.
Affiliation(s)
- Susan M Love, Wendie A Berg, Christine Podilchuk, Ana Lilia López Aldrete, Aarón Patricio Gaxiola Mascareño, Krishnamohan Pathicherikollamparambil, Ananth Sankarasubramanian, Leah Eshraghi, Richard Mammone
- Susan M. Love and Leah Eshraghi, Dr Susan Love Research Foundation, Encino, CA; Wendie A. Berg, Magee-Womens Hospital, University of Pittsburgh School of Medicine, Pittsburgh, PA; Christine Podilchuk, Krishnamohan Pathicherikollamparambil, Ananth Sankarasubramanian, and Richard Mammone, AI Strategy, Warren, NJ; Richard Mammone, Rutgers University, New Brunswick, NJ; and Ana Lilia López Aldrete and Aarón Patricio Gaxiola Mascareño, Instituto de Seguridad y Servicios Sociales de los Trabajadores del Estado Hospital Regional Valentin Gomez Farias, Jalisco, Mexico
|
21
|
Moon WK, Chen HH, Shin SU, Han W, Chang RF. Evaluation of TP53/PIK3CA mutations using texture and morphology analysis on breast MRI. Magn Reson Imaging 2019; 63:60-69. [PMID: 31425802 DOI: 10.1016/j.mri.2019.08.026] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2018] [Revised: 06/27/2019] [Accepted: 08/15/2019] [Indexed: 12/25/2022]
Abstract
PURPOSE Somatic mutations in the TP53 and PIK3CA genes, the two most frequent genetic alterations in breast cancer, are associated with prognosis and therapeutic response. This study predicted the presence of TP53 and PIK3CA mutations in breast cancer by using texture and morphology analyses on breast MRI. MATERIALS AND METHODS Two datasets were used: 107 breast cancers (dataset A) from The Cancer Imaging Archive (TCIA), comprising 40 cancers with and 67 without a TP53 mutation, and 35 with and 72 without a PIK3CA mutation; and 122 breast cancers (dataset B) from Seoul National University Hospital, comprising 54 cancers with a TP53 mutation and 68 without. First, the tumor area was segmented by a region-growing method. Subsequently, gray-level co-occurrence matrix (GLCM) texture features were extracted after a ranklet transform, and a set of features including compactness, margin, and an ellipsoid fitting model was used to describe the morphological characteristics of the tumors. Lastly, logistic regression was used to identify the presence of TP53 and PIK3CA mutations. Classification performance was evaluated by accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). To account for the trade-off between sensitivity and specificity, overall performance was evaluated using receiver operating characteristic (ROC) curve analysis. RESULTS The GLCM texture features based on the ranklet transform were more capable of recognizing TP53 and PIK3CA mutations than the morphological features, with the difference reaching statistical significance for the TP53 mutation. The area under the ROC curve (AUC) for TP53 mutation detection reached 0.78 on dataset A and 0.81 on dataset B. For the PIK3CA mutation, the AUC of the ranklet texture features was 0.70.
CONCLUSION Texture analysis of the segmented tumor on breast MRI based on the ranklet transform shows potential for recognizing the presence of TP53 and PIK3CA mutations.
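A gray-level co-occurrence matrix of the kind referenced above can be illustrated in a few lines. This sketch handles a single pixel offset on a toy quantized image and computes only the contrast feature; it is not the paper's ranklet-based pipeline:

```python
def glcm(image, dr, dc, levels):
    """Gray-level co-occurrence matrix for one pixel offset (dr, dc),
    normalized to joint probabilities over valid pixel pairs."""
    counts = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[image[r][c]][image[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in counts]

def contrast(p):
    """GLCM contrast: sum over (i, j) of (i - j)^2 * p(i, j);
    large when neighboring pixels differ sharply."""
    n = len(p)
    return sum((i - j) ** 2 * p[i][j] for i in range(n) for j in range(n))

# Toy 4x4 image quantized to 4 gray levels (hypothetical data)
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
p = glcm(img, 0, 1, 4)  # horizontal neighbor offset
print(contrast(p))
```

In practice several offsets and many GLCM statistics (energy, correlation, homogeneity, and so on) are pooled into the texture feature vector.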
Affiliation(s)
- Woo Kyung Moon
- Department of Radiology, Seoul National University Hospital, South Korea
- Hong-Hao Chen
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Sung Ui Shin
- Department of Radiology, Seoul National University Hospital Healthcare System Gangnam Center, South Korea
- Wonshik Han
- Department of Surgery and Cancer Research Institute, Seoul National University College of Medicine, South Korea
- Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; MOST Joint Research Center for AI Technology and All Vista Healthcare, Taipei, Taiwan
|
22
|
Gómez-Flores W, Rodríguez-Cristerna A, de Albuquerque Pereira WC. Texture Analysis Based on Auto-Mutual Information for Classifying Breast Lesions with Ultrasound. ULTRASOUND IN MEDICINE & BIOLOGY 2019; 45:2213-2225. [PMID: 31097332 DOI: 10.1016/j.ultrasmedbio.2019.03.018] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/18/2018] [Revised: 03/22/2019] [Accepted: 03/26/2019] [Indexed: 06/09/2023]
Abstract
Described here is a novel texture extraction method based on auto-mutual information (AMI) for classifying breast lesions. The objective is to extract discriminating information found in the non-linear relationship of textures in breast ultrasound (BUS) images. The AMI method performs three basic tasks: (i) it transforms the input image using the ranklet transform to handle intensity variations of BUS images acquired with distinct ultrasound scanners; (ii) it extracts the AMI-based texture features in the horizontal and vertical directions from each ranklet image; and (iii) it classifies the breast lesions into benign and malignant classes, in which a support-vector machine is used as the underlying classifier. The image data set is composed of 2050 BUS images consisting of 1347 benign and 703 malignant tumors. Additionally, nine commonly used texture extraction methods proposed in the literature for BUS analysis are compared with the AMI method. The bootstrap method, which considers 1000 bootstrap samples, is used to evaluate classification performance. The experimental results indicate that the proposed approach outperforms its counterparts in terms of area under the receiver operating characteristic curve, sensitivity, specificity and Matthews correlation coefficient, with values of 0.82, 0.80, 0.85 and 0.63, respectively. These results suggest that the AMI method is suitable for breast lesion classification systems.
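The core idea of auto-mutual information, the mutual information between pixel intensities and their neighbors at a fixed spatial lag, can be sketched with simple histogram estimates. The image below is a toy quantized array, not a ranklet-transformed BUS image as in the paper:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) = sum over (x, y) of p(x,y) * log2(p(x,y) / (p(x) p(y))),
    estimated from a list of co-occurring value pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        # c/n divided by (px/n)*(py/n) simplifies to c*n / (px*py)
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

def ami_horizontal(image, lag):
    """Auto-mutual information between each pixel and the pixel
    `lag` columns to its right."""
    pairs = [(row[c], row[c + lag])
             for row in image for c in range(len(row) - lag)]
    return mutual_information(pairs)

# Toy quantized image (hypothetical): stripes create dependence at lag 1
img = [[0, 0, 1, 1, 0, 0, 1, 1],
       [0, 0, 1, 1, 0, 0, 1, 1]]
print(ami_horizontal(img, 1))
```

Unlike GLCM statistics, which summarize the joint histogram with fixed formulas, the mutual information directly measures any (including non-linear) statistical dependence between a pixel and its lagged neighbor, which is the property the AMI method exploits.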
Affiliation(s)
- Wilfrido Gómez-Flores
- Center for Research and Advanced Studies of the National Polytechnic Institute, 87138 Ciudad Victoria, Tamaulipas, Mexico
- Arturo Rodríguez-Cristerna
- Center for Research and Advanced Studies of the National Polytechnic Institute, 87138 Ciudad Victoria, Tamaulipas, Mexico
|
23
|
Choi JS, Han BK, Ko ES, Bae JM, Ko EY, Song SH, Kwon MR, Shin JH, Hahn SY. Effect of a Deep Learning Framework-Based Computer-Aided Diagnosis System on the Diagnostic Performance of Radiologists in Differentiating between Malignant and Benign Masses on Breast Ultrasonography. Korean J Radiol 2019; 20:749-758. [PMID: 30993926 PMCID: PMC6470083 DOI: 10.3348/kjr.2018.0530] [Citation(s) in RCA: 82] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2018] [Accepted: 01/16/2019] [Indexed: 12/18/2022] Open
Abstract
OBJECTIVE To investigate whether a computer-aided diagnosis (CAD) system based on a deep learning framework (deep learning-based CAD) improves the diagnostic performance of radiologists in differentiating between malignant and benign masses on breast ultrasound (US). MATERIALS AND METHODS B-mode US images were prospectively obtained for 253 breast masses (173 benign, 80 malignant) in 226 consecutive patients. Breast mass US findings were retrospectively analyzed by deep learning-based CAD and four radiologists. In predicting malignancy, the CAD results were dichotomized (possibly benign vs. possibly malignant). The radiologists independently assessed Breast Imaging Reporting and Data System final assessments for two datasets (US images alone or with CAD). For each dataset, the radiologists' final assessments were classified as positive (category 4a or higher) and negative (category 3 or lower). The diagnostic performances of the radiologists for the two datasets (US alone vs. US with CAD) were compared. RESULTS When the CAD results were added to the US images, the radiologists showed significant improvement in specificity (range of all radiologists for US alone vs. US with CAD: 72.8-92.5% vs. 82.1-93.1%; p < 0.001), accuracy (77.9-88.9% vs. 86.2-90.9%; p = 0.038), and positive predictive value (PPV) (60.2-83.3% vs. 70.4-85.2%; p = 0.001). However, there were no significant changes in sensitivity (81.3-88.8% vs. 86.3-95.0%; p = 0.120) and negative predictive value (91.4-93.5% vs. 92.9-97.3%; p = 0.259). CONCLUSION Deep learning-based CAD could improve radiologists' diagnostic performance by increasing their specificity, accuracy, and PPV in differentiating between malignant and benign masses on breast US.
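The diagnostic metrics compared in this study (sensitivity, specificity, accuracy, PPV, NPV) all follow from the 2×2 confusion table. A minimal sketch with hypothetical counts, loosely shaped like the study's 80 malignant and 173 benign masses but not its actual results:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-test metrics from confusion-table counts."""
    return {
        "sensitivity": tp / (tp + fn),  # malignant cases called positive
        "specificity": tn / (tn + fp),  # benign cases called negative
        "ppv": tp / (tp + fp),          # positive calls truly malignant
        "npv": tn / (tn + fn),          # negative calls truly benign
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts: 80 malignant, 173 benign (not the study's data)
print(diagnostic_metrics(tp=70, fp=20, fn=10, tn=153))
```

The pattern reported above, where CAD raises specificity and PPV while sensitivity stays flat, corresponds to shrinking fp while tp is essentially unchanged.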
Affiliation(s)
- Ji Soo Choi, Boo Kyung Han, Eun Sook Ko, Jung Min Bae, Eun Young Ko, So Hee Song, Mi Ri Kwon, Jung Hee Shin, Soo Yeon Hahn
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
|
24
|
Kriti, Virmani J, Agarwal R. Effect of despeckle filtering on classification of breast tumors using ultrasound images. Biocybern Biomed Eng 2019. [DOI: 10.1016/j.bbe.2019.02.004] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
25
|
Ciritsis A, Rossi C, Eberhard M, Marcon M, Becker AS, Boss A. Automatic classification of ultrasound breast lesions using a deep convolutional neural network mimicking human decision-making. Eur Radiol 2019; 29:5458-5468. [PMID: 30927100 DOI: 10.1007/s00330-019-06118-7] [Citation(s) in RCA: 77] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Revised: 02/06/2019] [Accepted: 02/15/2019] [Indexed: 12/20/2022]
Abstract
OBJECTIVES To evaluate a deep convolutional neural network (dCNN) for detection, highlighting, and classification of ultrasound (US) breast lesions mimicking human decision-making according to the Breast Imaging Reporting and Data System (BI-RADS). METHODS AND MATERIALS One thousand nineteen breast ultrasound images from 582 patients (age 56.3 ± 11.5 years) were linked to the corresponding radiological report. Lesions were categorized into the following classes: no tissue, normal breast tissue, BI-RADS 2 (cysts, lymph nodes), BI-RADS 3 (non-cystic mass), and BI-RADS 4-5 (suspicious). To test the accuracy of the dCNN, one internal dataset (101 images) and one external test dataset (43 images) were evaluated by the dCNN and two independent readers. Radiological reports, histopathological results, and follow-up examinations served as reference. The performances of the dCNN and the humans were quantified in terms of classification accuracies and receiver operating characteristic (ROC) curves. RESULTS In the internal test dataset, the classification accuracy of the dCNN differentiating BI-RADS 2 from BI-RADS 3-5 lesions was 87.1% (external 93.0%) compared with that of human readers with 79.2 ± 1.9% (external 95.3 ± 2.3%). For the classification of BI-RADS 2-3 versus BI-RADS 4-5, the dCNN reached a classification accuracy of 93.1% (external 95.3%), whereas the classification accuracy of humans yielded 91.6 ± 5.4% (external 94.1 ± 1.2%). The AUC on the internal dataset was 83.8 (external 96.7) for the dCNN and 84.6 ± 2.3 (external 90.9 ± 2.9) for the humans. CONCLUSION dCNNs may be used to mimic human decision-making in the evaluation of single US images of breast lesions according to the BI-RADS catalog. The technique reaches high accuracies and may serve for standardization of highly observer-dependent US assessment. KEY POINTS • Deep convolutional neural networks could be used to classify US breast lesions.
• The implemented dCNN with its sliding window approach reaches high accuracies in the classification of US breast lesions. • Deep convolutional neural networks may serve for standardization in US BI-RADS classification.
Affiliation(s)
- Alexander Ciritsis
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Cristina Rossi
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Matthias Eberhard
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Magda Marcon
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Anton S Becker
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Andreas Boss
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091, Zurich, Switzerland

26
Shin SY, Lee S, Yun ID, Kim SM, Lee KM. Joint Weakly and Semi-Supervised Deep Learning for Localization and Classification of Masses in Breast Ultrasound Images. IEEE Trans Med Imaging 2019; 38:762-774. [PMID: 30273145] [DOI: 10.1109/tmi.2018.2872031] [Citations in RCA: 60]
Abstract
We propose a framework for localization and classification of masses in breast ultrasound images. We have found experimentally that training convolutional neural network-based mass detectors with large, weakly annotated datasets presents a non-trivial problem, while overfitting may occur with detectors trained on small, strongly annotated datasets. To overcome these problems, we use a weakly annotated dataset together with a smaller, strongly annotated dataset in a hybrid manner. We propose a systematic weakly and semi-supervised training scenario with appropriate training loss selection. Experimental results show that the proposed method can successfully localize and classify masses with less annotation effort. Results trained with only 10 strongly annotated images along with weakly annotated images were comparable to results trained with 800 strongly annotated images (95% confidence interval of the difference: -3% to 5%) in terms of the correct localization (CorLoc) measure, the ratio of images whose intersection over union with ground truth exceeds 0.5. With the same number of strongly annotated images, incorporating additional weakly annotated images gives a 4.5 percentage point increase in CorLoc, from 80% to 84.50% (with 95% CIs 76%-83.75% and 81%-88%). The effects of different algorithmic details and varying amounts of data are presented through ablative analysis.
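The CorLoc measure described above can be made concrete with a short sketch: an image counts as correctly localized when the predicted box overlaps ground truth with IoU above 0.5. This is an illustration, not the authors' code; the (x1, y1, x2, y2) box format and function names are assumptions:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def corloc(pred_boxes, gt_boxes, threshold=0.5):
    """Ratio of images whose predicted box overlaps ground truth with IoU above the threshold."""
    hits = sum(1 for p, g in zip(pred_boxes, gt_boxes) if iou(p, g) > threshold)
    return hits / len(pred_boxes)
```

A perfect prediction on one of two images yields a CorLoc of 0.5 under this definition.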
27
Qi X, Zhang L, Chen Y, Pi Y, Chen Y, Lv Q, Yi Z. Automated diagnosis of breast ultrasonography images using deep neural networks. Med Image Anal 2018; 52:185-198. [PMID: 30594771] [DOI: 10.1016/j.media.2018.12.006] [Citations in RCA: 106]
Abstract
Ultrasonography images of breast masses aid in the detection and diagnosis of breast cancer. Manually analyzing ultrasonography images is time-consuming, exhausting, and subjective; automated analysis of such images is therefore desirable. In this study, we develop an automated breast cancer diagnosis model for ultrasonography images. Traditional methods of automated ultrasonography image analysis employ hand-crafted features to classify images and lack robustness to variations in the shape, size, and texture of breast lesions, leading to low sensitivity in clinical applications. To overcome these shortcomings, we propose a method to diagnose breast ultrasonography images using deep convolutional neural networks with multi-scale kernels and skip connections. Our method consists of two components: the first determines whether there are malignant tumors in the image, and the second recognizes solid nodules. To let the two networks work collaboratively, a region enhancement mechanism based on class activation maps is proposed. The mechanism helps to improve classification accuracy and sensitivity for both networks. A cross-training algorithm is introduced to train the networks. We construct a large annotated dataset containing a total of 8145 breast ultrasonography images to train and evaluate the models. All of the annotations are confirmed by pathological records. The proposed method is compared with two state-of-the-art approaches and outperforms both by a large margin. Experimental results show that our approach achieves performance comparable to human sonographers and can be applied in clinical scenarios.
Affiliation(s)
- Xiaofeng Qi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, PR China
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, PR China
- Yao Chen
- Department of Galactophore Surgery, West China Hospital, Sichuan University, Chengdu, 610041, PR China
- Yong Pi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, PR China
- Yi Chen
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, PR China
- Qing Lv
- Department of Galactophore Surgery, West China Hospital, Sichuan University, Chengdu, 610041, PR China
- Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, PR China

28
Bharti P, Mittal D, Ananthasivan R. Preliminary Study of Chronic Liver Classification on Ultrasound Images Using an Ensemble Model. Ultrasonic Imaging 2018; 40:357-379. [PMID: 30015593] [DOI: 10.1177/0161734618787447] [Citations in RCA: 18]
Abstract
Chronic liver diseases are the fifth leading cause of fatality in developing countries. Their early diagnosis is extremely important for timely treatment and saving lives. To examine abnormalities of the liver, ultrasound imaging is the most frequently used modality. However, the visual differentiation between chronic liver disease and cirrhosis, and the presence of hepatocellular carcinomas (HCC) evolved over a cirrhotic liver, is difficult, as they appear almost identical in ultrasound images. In this paper, to deal with this difficult visualization problem, a method has been developed for classifying four liver stages, that is, normal, chronic, cirrhosis, and HCC evolved over cirrhosis. The method is formulated with a selected set of "handcrafted" texture features obtained after hierarchical feature fusion. These multiresolution and higher-order features, which are able to characterize the echotexture and roughness of the liver surface, are extracted using ranklet, gray-level difference matrix, and gray-level co-occurrence matrix methods. Thereafter, these features are applied to the proposed ensemble classifier, which is designed with a voting algorithm in conjunction with three classifiers, namely, k-nearest neighbor (k-NN), support vector machine (SVM), and rotation forest. Experiments are conducted to evaluate (a) the effectiveness of the "handcrafted" texture features, (b) the performance of the proposed ensemble model, (c) the effectiveness of the proposed ensemble strategy, (d) the performance of different classifiers, and (e) the performance of the proposed ensemble model based on Convolutional Neural Network (CNN) features to differentiate the four liver stages. These experiments are carried out on a database of 754 segmented regions of interest formed from clinically acquired ultrasound images. The results show that a classification accuracy of 96.6% is obtained with the proposed classifier model.
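The voting step of the ensemble described above reduces to a majority count over the member classifiers' predictions. A minimal sketch, with the k-NN, SVM, and rotation forest members stood in for by arbitrary callables (an illustration, not the paper's implementation):

```python
from collections import Counter

def majority_vote(classifiers, sample):
    """Predict the class chosen by most ensemble members (k-NN, SVM, and
    rotation forest in the paper). With three members voting over two
    classes, a strict majority always exists."""
    votes = [clf(sample) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]
```

With an even member count or more than two classes, a tie-breaking rule (e.g. weighting members by validation accuracy) would be needed.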
Affiliation(s)
- Puja Bharti
- Thapar Institute of Engineering & Technology, Patiala, India
- Deepti Mittal
- Thapar Institute of Engineering & Technology, Patiala, India

29
Quantitative vessel tortuosity: A potential CT imaging biomarker for distinguishing lung granulomas from adenocarcinomas. Sci Rep 2018; 8:15290. [PMID: 30327507] [PMCID: PMC6191462] [DOI: 10.1038/s41598-018-33473-0] [Citations in RCA: 19]
Abstract
Adenocarcinomas and active granulomas can both have a spiculated appearance on computed tomography (CT) and both are often fluorodeoxyglucose (FDG) avid on positron emission tomography (PET) scans, making them difficult to distinguish. Consequently, patients with benign granulomas are often subjected to invasive surgical biopsies or resections. In this study, quantitative vessel tortuosity (QVT), a novel CT imaging biomarker to distinguish between benign granulomas and adenocarcinomas on routine non-contrast lung CT scans, is introduced. Our study comprised CT scans of 290 patients from two different institutions, one cohort for training (N = 145) and the other (N = 145) for independent validation. In conjunction with a machine learning classifier, the top informative and stable QVT features yielded an area under the receiver operating characteristic curve (ROC AUC) of 0.85 in the independent validation set. On the same cohort, the corresponding AUCs for two human experts, a radiologist and a pulmonologist, were 0.61 and 0.60, respectively. QVT features also outperformed well-known shape and textural radiomic features, which had a maximum AUC of 0.73 (p-value = 0.002), as well as features learned using a convolutional neural network (AUC = 0.76, p-value = 0.028). Our results suggest that QVT features could potentially serve as a non-invasive imaging biomarker to distinguish granulomas from adenocarcinomas on non-contrast CT scans.
30
Lee SE, Han K, Kwak JY, Lee E, Kim EK. Radiomics of US texture features in differential diagnosis between triple-negative breast cancer and fibroadenoma. Sci Rep 2018; 8:13546. [PMID: 30202040] [PMCID: PMC6131410] [DOI: 10.1038/s41598-018-31906-4] [Citations in RCA: 75]
Abstract
Triple-negative breast cancer (TNBC) is sometimes mistaken for fibroadenoma due to its tendency to show benign morphology on breast ultrasound (US) despite its aggressive nature. This study aims to develop a radiomics score based on US texture analysis for the differential diagnosis between TNBC and fibroadenoma, and to evaluate its diagnostic performance against pathologic results. We retrospectively included 715 pathology-proven fibroadenomas and 186 pathology-proven TNBCs examined on three different US machines. We developed the radiomics score using penalized logistic regression with least absolute shrinkage and selection operator (LASSO) analysis over 730 extracted features consisting of 14 intensity-based features, 132 textural features, and 584 wavelet-based features. The constructed radiomics score showed a significant difference between fibroadenoma and TNBC for all three US machines (p < 0.001). Although the radiomics score showed dependency on the type of US machine, we developed a more elaborate radiomics score for a subgroup in which US examinations were performed with the iU22. This subsequent radiomics score also showed good diagnostic performance, even for BI-RADS category 3 or 4a lesions (AUC 0.782), which radiologists had presumed to be probably benign or of low suspicion of malignancy. The score is expected to assist radiologists' diagnoses and reduce the number of invasive biopsies, although US standardization must be addressed before clinical application.
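The LASSO step above is what reduces the 730 candidate features to a compact radiomics score: the L1 penalty shrinks uninformative coefficients exactly to zero. A minimal coordinate-descent sketch of plain LASSO regression on synthetic data (illustrative only; the paper used penalized *logistic* regression on radiomic features, and all names here are assumptions):

```python
def soft_threshold(rho, alpha):
    """LASSO soft-thresholding operator: shrinks toward zero, clips small values to zero."""
    if rho > alpha:
        return rho - alpha
    if rho < -alpha:
        return rho + alpha
    return 0.0

def lasso_fit(X, y, alpha, n_iter=50):
    """Coordinate descent for (1/2n)*||y - Xw||^2 + alpha*||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the residual that excludes feature j
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j))
                for i in range(n)
            ) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, alpha) / z if z else 0.0
    return w
```

On data where only the first feature predicts the target, the second coefficient is driven exactly to zero — the feature-selection behavior the radiomics score relies on.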
Affiliation(s)
- Si Eun Lee
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science and Center for Clinical Image Data Science, Yonsei University College of Medicine, Seoul, Korea
- Kyunghwa Han
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science and Center for Clinical Image Data Science, Yonsei University College of Medicine, Seoul, Korea
- Jin Young Kwak
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science and Center for Clinical Image Data Science, Yonsei University College of Medicine, Seoul, Korea
- Eunjung Lee
- Department of Computational Science and Engineering, Yonsei University, Seoul, Korea
- Eun-Kyung Kim
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science and Center for Clinical Image Data Science, Yonsei University College of Medicine, Seoul, Korea

31
Moon WK, Chen IL, Yi A, Bae MS, Shin SU, Chang RF. Computer-aided prediction model for axillary lymph node metastasis in breast cancer using tumor morphological and textural features on ultrasound. Comput Methods Programs Biomed 2018; 162:129-137. [PMID: 29903479] [DOI: 10.1016/j.cmpb.2018.05.011] [Citations in RCA: 8]
Abstract
BACKGROUND AND OBJECTIVES Axillary lymph node (ALN) status is a key indicator in assessing and determining the treatment strategy for patients with newly diagnosed breast cancer. Previous studies suggest that sonographic features of a primary tumor have the potential to predict ALN status in the preoperative staging of breast cancer. In this study, a computer-aided prediction (CAP) model, as well as the tumor features relevant to ALN metastasis in breast cancers, was developed using breast ultrasound (US) images. METHODS A total of 249 malignant tumors were acquired from 247 female patients (ages 20-84 years; mean 55 ± 11 years) to test the differences between the non-metastatic (130) and metastatic (119) groups based on various features. After semi-automatic tumor segmentation, 69 quantitative features were extracted. The features included the morphology and texture of tumors inside an ROI of the breast US image. Using backward feature selection and linear logistic regression, the prediction model was constructed to estimate the likelihood of ALN metastasis for each collected sample. RESULTS In the experiments, the texture features showed higher performance for predicting ALN metastasis than the morphology features (Az, 0.730 vs 0.667). The difference, however, was not statistically significant (p > 0.05). Combining the textural and morphological features, the accuracy, sensitivity, specificity, and Az value reached 75.1% (187/249), 79.0% (94/119), 71.5% (93/130), and 0.757, respectively. CONCLUSIONS The proposed CAP model, which combines textural and morphological features of the primary tumor, may be a useful method to determine ALN status in patients with breast cancer.
Affiliation(s)
- Woo Kyung Moon
- Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
- I-Ling Chen
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Ann Yi
- Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
- Min Sun Bae
- Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
- Sung Ui Shin
- Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
- Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan

32
Nemat H, Fehri H, Ahmadinejad N, Frangi AF, Gooya A. Classification of breast lesions in ultrasonography using sparse logistic regression and morphology-based texture features. Med Phys 2018; 45:4112-4124. [PMID: 29974971] [DOI: 10.1002/mp.13082] [Citations in RCA: 14]
Abstract
PURPOSE This work proposes a new reliable computer-aided diagnostic (CAD) system for the diagnosis of breast cancer from breast ultrasound (BUS) images. The system can be useful in reducing the number of biopsies and pathological tests, which are invasive, costly, and often unnecessary. METHODS The proposed CAD system classifies breast tumors into benign and malignant classes using morphological and textural features extracted from BUS images. The images are first preprocessed to enhance the edges and filter the speckles. The tumor is then segmented semiautomatically using the watershed method. From the tumor contour, a set of 855 features, including 21 shape-based, 810 contour-based, and 24 textural features, is extracted from each tumor. Then, a Bayesian automatic relevance determination (ARD) mechanism is used to compute the discrimination power of different features and for dimensionality reduction. Finally, a logistic regression classifier computes the posterior probabilities of malignant vs benign tumors using the reduced set of features. RESULTS A dataset of 104 BUS images of breast tumors, including 72 benign and 32 malignant tumors, was used for evaluation with eightfold cross-validation. The algorithm outperformed six state-of-the-art methods for BUS image classification by large margins, achieving 97.12% accuracy, 93.75% sensitivity, and 98.61% specificity. CONCLUSIONS Using ARD, the proposed CAD system selects five new features for breast tumor classification and outperforms the state of the art, making it a reliable and complementary tool to help clinicians diagnose breast cancer.
Affiliation(s)
- Hoda Nemat
- Department of Electronic and Electrical Engineering, Center for Computational Imaging Simulation Technologies in Biomedicine (CISTIB), University of Sheffield, Sheffield, S1 3JD, UK
- Hamid Fehri
- Department of Electronic and Electrical Engineering, Center for Computational Imaging Simulation Technologies in Biomedicine (CISTIB), University of Sheffield, Sheffield, S1 3JD, UK
- Nasrin Ahmadinejad
- Department of Radiology, Faculty of Medicine, Tehran University of Medical Sciences, Tehran, 1416753955, Iran
- Alejandro F Frangi
- Department of Electronic and Electrical Engineering, Center for Computational Imaging Simulation Technologies in Biomedicine (CISTIB), University of Sheffield, Sheffield, S1 3JD, UK
- Ali Gooya
- Department of Electronic and Electrical Engineering, Center for Computational Imaging Simulation Technologies in Biomedicine (CISTIB), University of Sheffield, Sheffield, S1 3JD, UK

33
Rodríguez-Cristerna A, Gómez-Flores W, de Albuquerque Pereira WC. A computer-aided diagnosis system for breast ultrasound based on weighted BI-RADS classes. Comput Methods Programs Biomed 2018; 153:33-40. [PMID: 29157459] [DOI: 10.1016/j.cmpb.2017.10.004] [Citations in RCA: 9]
Abstract
BACKGROUND AND OBJECTIVE Conventional computer-aided diagnosis (CAD) systems for breast ultrasound (BUS) are trained to classify pathological classes, that is, benign and malignant. From a clinical perspective, however, this kind of classification does not fully agree with radiologists' diagnoses. Usually, tumors are assessed using a BI-RADS (Breast Imaging-Reporting and Data System) category and, accordingly, a recommendation is issued: annual study for category 2 (benign), six-month follow-up study for category 3 (probably benign), and biopsy for categories 4 and 5 (suspicious of malignancy). Hence, in this paper, a CAD system based on BI-RADS categories weighted by pathological information is presented. The goal is to increase classification performance by reducing the class imbalance common to pathological classes, as well as to provide outcomes quite similar to radiologists' recommendations. METHODS The BUS dataset comprises 781 benign lesions and 347 malignant tumors proven by biopsy. Moreover, every lesion is associated with one BI-RADS category in the set {2, 3, 4, 5}. The dataset is thus split into three weighted classes: benign, BI-RADS 2 in benign lesions; probably benign, BI-RADS 3 and 4 in benign lesions; and malignant, BI-RADS 4 and 5 in malignant lesions. Thereafter, a random forest (RF) classifier, denoted RFw, is trained to predict the weighted BI-RADS classes. In addition, for comparison purposes, an RF classifier is trained to predict pathological classes, denoted RFp. RESULTS The ability of the classifiers to predict the pathological classes is measured by the area under the ROC curve (AUC), sensitivity (SEN), and specificity (SPE). The RFw classifier obtained AUC = 0.872, SEN = 0.826, and SPE = 0.919, whereas the RFp classifier reached AUC = 0.868, SEN = 0.808, and SPE = 0.929. According to a one-way analysis of variance test, the RFw classifier statistically outperforms (p < 0.001) the RFp classifier in terms of AUC and SEN. Moreover, the classification performance of RFw in predicting the weighted BI-RADS classes, measured by the Matthews correlation coefficient, was 0.614. CONCLUSIONS Dividing the classification problem into three classes reduces the imbalance between benign and malignant classes; thus, sensitivity is increased without degrading specificity. Therefore, the CAD system based on weighted BI-RADS classes improves on the classification performance of conventional CAD systems. Additionally, the proposed approach has the advantage of providing a multiclass outcome related to radiologists' recommendations.
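The class-weighting scheme described above is a deterministic mapping from (BI-RADS category, biopsy result) pairs to three training classes. A sketch of that mapping as stated in the abstract (the function name, label strings, and error handling are mine):

```python
def weighted_birads_class(bi_rads, pathology):
    """Map a lesion to the weighted classes used to train RFw:
    benign          <- BI-RADS 2, biopsy-benign
    probably_benign <- BI-RADS 3 or 4, biopsy-benign
    malignant       <- BI-RADS 4 or 5, biopsy-malignant
    """
    if pathology == "benign" and bi_rads == 2:
        return "benign"
    if pathology == "benign" and bi_rads in (3, 4):
        return "probably_benign"
    if pathology == "malignant" and bi_rads in (4, 5):
        return "malignant"
    raise ValueError(f"combination not covered by the abstract: BI-RADS {bi_rads}, {pathology}")
```

Note that BI-RADS 4 lesions land in different classes depending on biopsy outcome, which is how the scheme folds pathological information into the BI-RADS labels.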
Affiliation(s)
- Arturo Rodríguez-Cristerna
- Center for Research and Advanced Studies of the National Polytechnic Institute, ZIP 87130, Ciudad Victoria, Tamaulipas, Mexico
- Wilfrido Gómez-Flores
- Center for Research and Advanced Studies of the National Polytechnic Institute, ZIP 87130, Ciudad Victoria, Tamaulipas, Mexico

34
Xi X, Xu H, Shi H, Zhang C, Ding HY, Zhang G, Tang Y, Yin Y. Robust texture analysis of multi-modal images using Local Structure Preserving Ranklet and multi-task learning for breast tumor diagnosis. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.06.082] [Citations in RCA: 2]
35
Han S, Kang HK, Jeong JY, Park MH, Kim W, Bang WC, Seong YK. A deep learning framework for supporting the classification of breast lesions in ultrasound images. Phys Med Biol 2017; 62:7714-7728. [PMID: 28753132] [DOI: 10.1088/1361-6560/aa82ec] [Citations in RCA: 188]
Abstract
In this research, we exploited a deep learning framework to differentiate the distinctive types of lesions and nodules in breast images acquired with ultrasound. A biopsy-proven benchmarking dataset was built from 5151 patient cases containing a total of 7408 ultrasound breast images with semi-automatically segmented lesions associated with masses. The dataset comprised 4254 benign and 3154 malignant lesions. The developed method includes histogram equalization, image cropping, and margin augmentation. The GoogLeNet convolutional neural network was trained on the database to differentiate benign and malignant tumors. Networks were trained on the data both with and without augmentation; both showed an area under the curve of over 0.9. The networks showed an accuracy of about 0.9 (90%), a sensitivity of 0.86, and a specificity of 0.96. Although target regions of interest (ROIs) were selected by radiologists, meaning that radiologists still have to point out the location of the ROI, the classification of malignant lesions showed promising results. If used by radiologists in clinical situations, this method can classify malignant lesions in a short time and support radiologists' diagnoses in discriminating malignant lesions. Therefore, the proposed method can work in tandem with human radiologists to improve performance, which is a fundamental purpose of computer-aided diagnosis.
Affiliation(s)
- Seokmin Han
- Korea National University of Transportation, Uiwang-si, Kyunggi-do, Republic of Korea

36
Moon WK, Lee YW, Huang YS, Lee SH, Bae MS, Yi A, Huang CS, Chang RF. Computer-aided prediction of axillary lymph node status in breast cancer using tumor surrounding tissue features in ultrasound images. Comput Methods Programs Biomed 2017; 146:143-150. [PMID: 28688484] [DOI: 10.1016/j.cmpb.2017.06.001] [Citations in RCA: 17]
Abstract
BACKGROUND AND OBJECTIVE The presence or absence of axillary lymph node (ALN) metastasis is the most important prognostic factor for patients with early-stage breast cancer. In this study, a computer-aided prediction (CAP) system using tumor surrounding tissue features in ultrasound (US) images was proposed to determine ALN status in breast cancer. METHODS The US imaging database used in this study contained 114 cases of invasive breast cancer, 49 of which had ALN metastasis. After tumor region segmentation by the level set method, an image matting method was used to extract the abnormal tissue surrounding the tumor from the acquired images. Then, 21 features comprising 2 intensity, 3 morphological, and 16 textural features were extracted from the surrounding tissue and processed by a logistic regression model. Finally, the prediction model was trained and tested on the selected features. RESULTS In the experiments, the textural feature set extracted from the surrounding tissue showed higher performance than the intensity and morphology feature sets (Az, 0.7756 vs 0.7071 and 0.6431). Using the combined feature set, the accuracy, sensitivity, specificity, and area index Az under the receiver operating characteristic (ROC) curve for the CAP system were 81.58% (93/114), 81.63% (40/49), 81.54% (53/65), and 0.8269, respectively. CONCLUSIONS These results indicate that the proposed CAP system can be helpful in determining ALN status in patients with breast cancer.
Affiliation(s)
- Woo Kyung Moon
- Department of Radiology, Seoul National University Hospital and Seoul National University College of Medicine, Seoul 110-744, Korea
- Yan-Wei Lee
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Yao-Sian Huang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Su Hyun Lee
- Department of Radiology, Seoul National University Hospital and Seoul National University College of Medicine, Seoul 110-744, Korea
- Min Sun Bae
- Department of Radiology, Seoul National University Hospital and Seoul National University College of Medicine, Seoul 110-744, Korea
- Ann Yi
- Department of Radiology, Seoul National University Hospital and Seoul National University College of Medicine, Seoul 110-744, Korea; Seoul National University Hospital Healthcare System Gangnam Center, Seoul 135-984, Korea
- Chiun-Sheng Huang
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; Graduate Institute of Network and Multimedia, National Taiwan University, Taipei, Taiwan

37
Yu Q, Jiang T, Zhou A, Zhang L, Zhang C, Xu P. Computer-aided diagnosis of malignant or benign thyroid nodes based on ultrasound images. Eur Arch Otorhinolaryngol 2017; 274:2891-2897. [PMID: 28389809] [DOI: 10.1007/s00405-017-4562-3] [Citations in RCA: 30]
Abstract
The objective of this study was to evaluate the diagnostic value of combining artificial neural network (ANN)- and support vector machine (SVM)-based CAD systems in differentiating malignant from benign thyroid nodes on gray-scale ultrasound images. Two morphological and 65 texture features extracted from regions of interest in 610 2D-ultrasound thyroid node images from 543 patients (207 malignant, 403 benign) were used to develop the ANN and SVM models. Tenfold cross-validation evaluated their performance; the best models showed an accuracy of 99% for ANN and 100% for SVM. On 50 thyroid node ultrasound images from 45 prospectively enrolled patients, the ANN model showed sensitivity, specificity, positive and negative predictive values, Youden index, and accuracy of 88.24, 90.91, 83.33, 93.75, 79.14, and 90.00%, respectively; the SVM model 76.47, 90.91, 81.25, 88.24, 67.38, and 86.00%, respectively; and the combination 100.00, 87.88, 80.95, 100.00, 87.88, and 92.00%, respectively. Both ANN and SVM had high value in classifying thyroid nodes. In combination, sensitivity increased but specificity decreased. This combination might provide a second opinion for radiologists dealing with thyroid node ultrasound images that are difficult to diagnose.
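The reported pattern for the combined model (sensitivity rising to 100% while specificity drops) is characteristic of an "either-positive" fusion rule. The paper does not spell out its fusion logic, so the rule below is an assumption used only to illustrate why that trade-off arises:

```python
def fuse_or(ann_malignant, svm_malignant):
    """Call a nodule malignant if either model does. This can only add
    positive calls, so sensitivity never falls and specificity never rises."""
    return ann_malignant or svm_malignant

def sens_spec(preds, labels):
    """Sensitivity and specificity for boolean predictions/labels (True = malignant)."""
    tp = sum(p and l for p, l in zip(preds, labels))
    tn = sum(not p and not l for p, l in zip(preds, labels))
    pos = sum(labels)
    return tp / pos, tn / (len(labels) - pos)
```

Any case either model flags becomes a positive call, so every true positive of either model is retained while some true negatives are lost.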
Affiliation(s)
- Qin Yu: Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, 330006, China
- Tao Jiang: Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, 330006, China
- Aiyun Zhou: Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, 330006, China
- Lili Zhang: Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, 330006, China
- Cheng Zhang: Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, 330006, China
- Pan Xu: Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, 330006, China
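The ANN+SVM combination above raised sensitivity at the cost of specificity, which is exactly what an OR-rule fusion of two binary classifiers predicts. A minimal sketch of that decision fusion, using toy predictions rather than the paper's models (the metric helper mirrors the reported indices):

```python
def confusion_metrics(y_true, y_pred):
    """Sensitivity, specificity and Youden index from binary labels (1 = malignant)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, sens + spec - 1  # Youden index

def or_fusion(pred_a, pred_b):
    """Call a node malignant if either classifier does: sensitivity can only
    rise, and specificity can only fall, relative to each single classifier."""
    return [int(a or b) for a, b in zip(pred_a, pred_b)]

# Toy example: the "ANN" misses one cancer, the "SVM" misses a different one.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
ann    = [1, 1, 1, 0, 0, 0, 0, 1]
svm    = [1, 1, 0, 1, 0, 0, 1, 0]
fused  = or_fusion(ann, svm)
print(confusion_metrics(y_true, fused))  # (1.0, 0.5, 0.5): all cancers caught, more false alarms
```

Each single classifier here has sensitivity 0.75; fusing them recovers all cancers while specificity drops, matching the pattern reported in the abstract.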
|
38
|
Moon WK, Chen IL, Chang JM, Shin SU, Lo CM, Chang RF. The adaptive computer-aided diagnosis system based on tumor sizes for the classification of breast tumors detected at screening ultrasound. ULTRASONICS 2017; 76:70-77. [PMID: 28086107 DOI: 10.1016/j.ultras.2016.12.017] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/17/2016] [Revised: 12/06/2016] [Accepted: 12/26/2016] [Indexed: 06/06/2023]
Abstract
Screening ultrasound (US) is increasingly used as a supplement to mammography in women with dense breasts, and more than 80% of cancers detected by US alone are 1 cm or smaller. An adaptive computer-aided diagnosis (CAD) system based on tumor size was proposed to classify breast tumors detected on screening US images using quantitative morphological and textural features. In the present study, a database containing 156 tumors (78 benign and 78 malignant) was separated into two subsets of different tumor sizes (<1 cm and ≥1 cm) to explore the improvement in the performance of the CAD system. After adaptation, the accuracies, sensitivities, specificities, and Az values of the CAD for the entire database increased from 73.1% (114/156), 73.1% (57/78), 73.1% (57/78), and 0.790 to 81.4% (127/156), 83.3% (65/78), 79.5% (62/78), and 0.852, respectively. In the data subset of tumors 1 cm or larger, the performance improved from 66.2% (51/77), 68.3% (28/41), 63.9% (23/36), and 0.703 to 81.8% (63/77), 85.4% (35/41), 77.8% (28/36), and 0.855, respectively. The proposed CAD system can be helpful for classifying breast tumors detected at screening US.
Affiliation(s)
- Woo Kyung Moon: Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
- I-Ling Chen: Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Jung Min Chang: Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
- Sung Ui Shin: Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
- Chung-Ming Lo: Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- Ruey-Feng Chang: Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
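The size-adaptive idea above amounts to routing each tumor to a classifier tuned for its size stratum instead of using one global model. A sketch under that assumption, with hypothetical threshold rules standing in for the paper's morphological/textural models (`SIZE_CUTOFF_CM`, `irregularity`, and `texture_score` are all illustrative names, not from the paper):

```python
SIZE_CUTOFF_CM = 1.0  # the paper splits its database at 1 cm

def small_tumor_model(features):
    # Hypothetical decision rule tuned for sub-centimeter tumors.
    return int(features["irregularity"] > 0.6)

def large_tumor_model(features):
    # Hypothetical decision rule tuned for tumors 1 cm or larger.
    return int(features["irregularity"] > 0.4 and features["texture_score"] > 0.5)

def adaptive_classify(tumor):
    """Dispatch to the size-specific model (1 = malignant, 0 = benign)."""
    model = small_tumor_model if tumor["size_cm"] < SIZE_CUTOFF_CM else large_tumor_model
    return model(tumor)

case = {"size_cm": 1.4, "irregularity": 0.55, "texture_score": 0.7}
print(adaptive_classify(case))  # routed to large_tumor_model -> 1
```

The same feature vector can receive a different label depending on which stratum it falls in, which is what lets each sub-model use thresholds appropriate for its size range.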
|
39
|
Riaz F, Hassan A, Nisar R, Dinis-Ribeiro M, Coimbra MT. Content-Adaptive Region-Based Color Texture Descriptors for Medical Images. IEEE J Biomed Health Inform 2017; 21:162-171. [DOI: 10.1109/jbhi.2015.2492464] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
40
|
A Fusion-Based Approach for Breast Ultrasound Image Classification Using Multiple-ROI Texture and Morphological Analyses. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2016; 2016:6740956. [PMID: 28127383 PMCID: PMC5227307 DOI: 10.1155/2016/6740956] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/05/2016] [Revised: 10/31/2016] [Accepted: 11/15/2016] [Indexed: 11/18/2022]
Abstract
Ultrasound imaging is commonly used for breast cancer diagnosis, but accurate interpretation of breast ultrasound (BUS) images is often challenging and operator-dependent. Computer-aided diagnosis (CAD) systems can be employed to provide the radiologists with a second opinion to improve the diagnosis accuracy. In this study, a new CAD system is developed to enable accurate BUS image classification. In particular, an improved texture analysis is introduced, in which the tumor is divided into a set of nonoverlapping regions of interest (ROIs). Each ROI is analyzed using gray-level cooccurrence matrix features and a support vector machine classifier to estimate its tumor class indicator. The tumor class indicators of all ROIs are combined using a voting mechanism to estimate the tumor class. In addition, morphological analysis is employed to classify the tumor. A probabilistic approach is used to fuse the classification results of the multiple-ROI texture analysis and morphological analysis. The proposed approach is applied to classify 110 BUS images that include 64 benign and 46 malignant tumors. The accuracy, specificity, and sensitivity obtained using the proposed approach are 98.2%, 98.4%, and 97.8%, respectively. These results demonstrate that the proposed approach can effectively be used to differentiate benign and malignant tumors.
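The multiple-ROI scheme above pairs a per-ROI texture feature with a voting step. A compact sketch of both pieces: a horizontal-offset gray-level co-occurrence matrix with its contrast feature, and a majority vote over per-ROI class indicators (the malignancy rule itself is a stand-in, not the paper's trained SVM):

```python
import numpy as np

def glcm(roi, levels=8):
    """Gray-level co-occurrence matrix for the horizontal (0, 1) offset,
    normalized to sum to 1. `roi` holds integers in [0, levels)."""
    m = np.zeros((levels, levels))
    for a, b in zip(roi[:, :-1].ravel(), roi[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def glcm_contrast(m):
    """Classic GLCM contrast: sum of (i - j)^2 weighted by co-occurrence."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())

def vote(roi_labels):
    """Majority vote over per-ROI class indicators (1 = malignant)."""
    return int(sum(roi_labels) * 2 > len(roi_labels))

# A checkerboard row pattern: every horizontal neighbor pair changes level,
# so the contrast feature is maximal for 2 gray levels.
checker = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0]])
print(glcm_contrast(glcm(checker, levels=2)))  # 1.0
print(vote([1, 1, 0]))                         # 1
```

In the actual system each ROI's feature vector would be scored by a classifier; here the point is only that ROI-level indicators are fused by voting rather than pooling pixels into one global texture estimate.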
|
41
|
Moon WK, Huang YS, Lo CM, Huang CS, Bae MS, Kim WH, Chen JH, Chang RF. Computer-aided diagnosis for distinguishing between triple-negative breast cancer and fibroadenomas based on ultrasound texture features. Med Phys 2016; 42:3024-35. [PMID: 26127055 DOI: 10.1118/1.4921123] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
Abstract
PURPOSE Triple-negative breast cancer (TNBC), an aggressive subtype, is frequently misclassified as fibroadenoma due to benign morphologic features on breast ultrasound (US). This study aims to develop a computer-aided diagnosis (CAD) system based on texture features for distinguishing between TNBC and benign fibroadenomas in US images. METHODS US images of 169 pathology-proven tumors (mean size, 1.65 cm; range, 0.7-3.0 cm), comprising 84 benign fibroadenomas and 85 TNBC tumors, are used in this study. After a tumor is segmented out using the level-set method, morphological, conventional texture, and multiresolution gray-scale invariant texture feature sets are computed using a best-fitting ellipse, gray-level co-occurrence matrices, and the ranklet transform, respectively. A linear support vector machine with a leave-one-out cross-validation scheme is used as the classifier, and diagnostic performance is assessed with receiver operating characteristic curve analysis. RESULTS The Az values of the morphology, conventional texture, and multiresolution gray-scale invariant texture feature sets are 0.8470 [95% confidence intervals (CIs), 0.7826-0.8973], 0.8542 (95% CI, 0.7911-0.9030), and 0.9695 (95% CI, 0.9376-0.9865), respectively. The Az of the CAD system based on the combined feature sets is 0.9702 (95% CI, 0.9334-0.9882). CONCLUSIONS The CAD system based on texture features extracted via the ranklet transform may be useful for improving the ability to discriminate between TNBC and benign fibroadenomas.
Affiliation(s)
- Woo Kyung Moon: Department of Radiology, Seoul National University Hospital and Seoul National University College of Medicine, Seoul 110-744, South Korea
- Yao-Sian Huang: Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan, Republic of China
- Chung-Ming Lo: Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan, Republic of China
- Chiun-Sheng Huang: Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 10041, Taiwan, Republic of China; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei 10617, Taiwan, Republic of China
- Min Sun Bae: Department of Radiology, Seoul National University Hospital and Seoul National University College of Medicine, Seoul 110-744, South Korea
- Won Hwa Kim: Department of Radiology, Seoul National University Hospital and Seoul National University College of Medicine, Seoul 110-744, South Korea
- Jeon-Hor Chen: Center for Functional Onco-Imaging and Department of Radiological Science, University of California, Irvine, California 92868; Department of Radiology, E-Da Hospital and I-Shou University, Kaohsiung 82445, Taiwan, Republic of China
- Ruey-Feng Chang: Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan, Republic of China; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei 10617, Taiwan, Republic of China
|
42
|
Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans. Sci Rep 2016; 6:24454. [PMID: 27079888 PMCID: PMC4832199 DOI: 10.1038/srep24454] [Citation(s) in RCA: 306] [Impact Index Per Article: 34.0] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2015] [Accepted: 03/30/2016] [Indexed: 01/02/2023] Open
Abstract
This paper performs a comprehensive study of deep-learning-based computer-aided diagnosis (CADx) for the differential diagnosis of benign and malignant nodules/lesions, avoiding the potential errors caused by inaccurate image processing results (e.g., boundary segmentation) as well as the classification bias resulting from a less robust feature set, both of which affect most conventional CADx algorithms. Specifically, the stacked denoising auto-encoder (SDAE) is exploited in two CADx applications: the differentiation of breast ultrasound lesions and of lung CT nodules. The SDAE architecture is well equipped with an automatic feature exploration mechanism and noise tolerance, and hence may be well suited to the intrinsically noisy nature of medical image data from various imaging modalities. To show the superiority of SDAE-based CADx over the conventional scheme, two recent conventional CADx algorithms are implemented for comparison. Ten repetitions of 10-fold cross-validation are conducted to illustrate the efficacy of the SDAE-based CADx algorithm. The experimental results show a significant performance boost by the SDAE-based CADx algorithm over the two conventional methods, suggesting that deep learning techniques can potentially change the design paradigm of CADx systems without the need for explicit design and selection of problem-oriented features.
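A stacked denoising auto-encoder is built from layers like the one sketched below: corrupt the input, then train the layer to reconstruct the clean signal, so the learned features tolerate noise. This single tied-weight layer trained on random stand-in "patches" is illustrative only; the architecture sizes, corruption rate, and learning rate are arbitrary choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(42)

n_in, n_hidden = 16, 8
W = rng.normal(0, 0.1, (n_in, n_hidden))
b_h = np.zeros(n_hidden)
b_o = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x_noisy):
    h = sigmoid(x_noisy @ W + b_h)
    return h, sigmoid(h @ W.T + b_o)  # tied decoder weights

X = rng.random((200, n_in))  # stand-in data "patches" in [0, 1)
lr = 0.5
losses = []
for epoch in range(50):
    noise_mask = rng.random(X.shape) > 0.3  # masking corruption: drop ~30% of inputs
    Xn = X * noise_mask
    h, Xr = forward(Xn)
    err = Xr - X                            # reconstruct the *clean* input
    losses.append(float((err ** 2).mean()))
    # Backprop for the tied-weight sigmoid layer under squared-error loss.
    d_o = err * Xr * (1 - Xr)
    d_h = (d_o @ W) * h * (1 - h)
    W -= lr * (h.T @ d_o).T / len(X) + lr * (Xn.T @ d_h) / len(X)
    b_o -= lr * d_o.mean(axis=0)
    b_h -= lr * d_h.mean(axis=0)

print(losses[0], losses[-1])  # reconstruction loss should drop as the layer trains
```

In an SDAE, several such layers are trained greedily, each on the hidden codes of the previous one, before a supervised classifier is attached on top.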
|
43
|
Gangeh MJ, Tadayyon H, Sannachi L, Sadeghi-Naini A, Tran WT, Czarnota GJ. Computer Aided Theragnosis Using Quantitative Ultrasound Spectroscopy and Maximum Mean Discrepancy in Locally Advanced Breast Cancer. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:778-790. [PMID: 26529750 DOI: 10.1109/tmi.2015.2495246] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
A noninvasive computer-aided-theragnosis (CAT) system was developed for early assessment of therapeutic cancer response in patients with locally advanced breast cancer (LABC) treated with neoadjuvant chemotherapy. The proposed CAT system was based on multi-parametric quantitative ultrasound (QUS) spectroscopic methods in conjunction with advanced machine learning techniques. Specifically, a kernel-based metric named maximum mean discrepancy (MMD), a technique for learning from imbalanced data based on random undersampling, and supervised learning were investigated with response-monitoring data from LABC patients. The CAT system was tested on 56 patients using statistical significance tests and leave-one-subject-out classification techniques. Textural features using state-of-the-art local binary patterns (LBP) and gray-scale intensity features were extracted from the spectral parametric maps in the proposed CAT system. The system indicated significant differences in changes between the responding and non-responding patient populations, as well as high accuracy, sensitivity, and specificity in discriminating between the two patient groups early after the start of treatment, i.e., on weeks 1 and 4 of several months of treatment. The proposed CAT system achieved accuracies of 85%, 87%, and 90% on weeks 1, 4, and 8, respectively. The sensitivity and specificity of the developed CAT system at the same time points were 85%, 95%, and 90%, and 85%, 85%, and 91%, respectively. The proposed CAT system thus establishes a noninvasive framework for monitoring cancer treatment response in tumors using clinical ultrasound imaging in conjunction with machine learning techniques. Such a framework can potentially facilitate early detection of refractory responses during a course of therapy, enabling a possible switch to more efficacious treatments.
|
44
|
Sudarshan VK, Mookiah MRK, Acharya UR, Chandran V, Molinari F, Fujita H, Ng KH. Application of wavelet techniques for cancer diagnosis using ultrasound images: A Review. Comput Biol Med 2015; 69:97-111. [PMID: 26761591 DOI: 10.1016/j.compbiomed.2015.12.006] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2015] [Revised: 11/12/2015] [Accepted: 12/11/2015] [Indexed: 02/01/2023]
Abstract
Ultrasound is an important and low-cost imaging modality used to study the internal organs of the human body and blood flow through blood vessels. It uses high-frequency sound waves to acquire images of internal organs and is used to screen normal, benign, and malignant tissues of various organs. Healthy and malignant tissues generate different ultrasound echoes, so ultrasound provides useful information about potential tumor tissues that can be analyzed for diagnostic purposes before therapeutic procedures. Ultrasound images are affected by speckle noise due to an air gap between the transducer probe and the body. The challenge is to design and develop robust image preprocessing, segmentation, and feature extraction algorithms to locate the tumor region and to extract subtle information from the isolated tumor region for diagnosis. This information can be revealed using a scale-space technique such as the Discrete Wavelet Transform (DWT), which decomposes an image into images at different scales using low-pass and high-pass filters. These filters help identify detail and sudden changes in intensity in the image, and these changes are reflected in the wavelet coefficients. Various texture, statistical, and image-based features can be extracted from these coefficients. The extracted features are subjected to statistical analysis to identify significant features for discriminating normal and malignant ultrasound images using supervised classifiers. This paper presents a review of wavelet techniques used for preprocessing, segmentation, and feature extraction of breast, thyroid, ovarian, and prostate cancer using ultrasound images.
Affiliation(s)
- Vidya K Sudarshan: Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, 599489, Singapore
- U Rajendra Acharya: Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, 599489, Singapore; Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603, Malaysia; Department of Biomedical Engineering, School of Science and Technology, SIM University, 599491, Singapore
- Vinod Chandran: School of Electrical Engineering and Computer Science, Queensland University of Technology, Brisbane QLD 4000, Australia
- Filippo Molinari: Department of Electronics and Telecommunications, Politecnico di Torino, 10129 Torino, Italy
- Hamido Fujita: Faculty of Software and Information Science, Iwate Prefectural University (IPU), Iwate 020-0693, Japan
- Kwan Hoong Ng: Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, 50603, Malaysia
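The DWT decomposition the review describes can be illustrated with one level of a 2-D Haar transform (the unnormalized mean/half-difference variant), with subband energies as the kind of scalar texture feature handed to a classifier. A sketch of the idea, not any specific paper's pipeline:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform (unnormalized filters:
    pairwise mean as low-pass, pairwise half-difference as high-pass).
    Returns the approximation subband and three detail subbands.
    `img` must have even height and width."""
    a = img.astype(float)
    # Filter along rows first...
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # ...then along columns of each intermediate subband.
    LL = (lo[0::2] + lo[1::2]) / 2
    LH = (lo[0::2] - lo[1::2]) / 2
    HL = (hi[0::2] + hi[1::2]) / 2
    HH = (hi[0::2] - hi[1::2]) / 2
    return LL, LH, HL, HH

def subband_energies(img):
    """Mean squared coefficient per detail subband -- a simple scalar
    texture feature a classifier could consume."""
    _, LH, HL, HH = haar_dwt2(img)
    return {k: float((s ** 2).mean()) for k, s in (("LH", LH), ("HL", HL), ("HH", HH))}

flat = np.full((8, 8), 5.0)
print(subband_energies(flat))  # a constant image has zero energy in every detail subband
```

Intensity changes land in different detail subbands depending on their orientation, which is why these energies discriminate textures: vertical stripes, for instance, put all their energy into the along-row high-pass subband.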
|
45
|
Lo CM, Moon WK, Huang CS, Chen JH, Yang MC, Chang RF. Intensity-Invariant Texture Analysis for Classification of BI-RADS Category 3 Breast Masses. ULTRASOUND IN MEDICINE & BIOLOGY 2015; 41:2039-2048. [PMID: 25843514 DOI: 10.1016/j.ultrasmedbio.2015.03.003] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2014] [Revised: 02/22/2015] [Accepted: 03/01/2015] [Indexed: 06/04/2023]
Abstract
Radiologists may incorrectly classify malignant masses as Breast Imaging Reporting and Data System (BI-RADS) category 3 (probably benign). A computer-aided diagnosis (CAD) system was developed in this study as a second viewer to help avoid misclassification of carcinomas. Sixty-nine biopsy-proven BI-RADS category 3 masses, including 21 malignant and 48 benign masses, were used to evaluate the CAD system. To improve the texture features, gray-scale variations between images were reduced by transforming pixels into intensity-invariant ranklet coefficients. The textures of the tumor and speckle pixels were extracted from the transformed ranklet images to provide more robust features than in conventional CAD systems. As a result, tumor texture and speckle texture with ranklet transformation achieved significantly better areas under the receiver operating characteristic curve (Az) than without ranklet transformation (Az = 0.83 vs. 0.58 and Az = 0.80 vs. 0.56, p < 0.05). The improved CAD system can serve as a second reader to confirm the classification of BI-RADS category 3 masses.
Affiliation(s)
- Chung-Ming Lo: Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Woo Kyung Moon: Department of Radiology, Seoul National University Hospital, Seoul, Korea
- Chiun-Sheng Huang: Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Jeon-Hor Chen: Center for Functional Onco-Imaging and Department of Radiologic Science, University of California at Irvine, Irvine, California, USA; Department of Radiology, E-Da Hospital and I-Shou University, Kaohsiung, Taiwan
- Min-Chun Yang: Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Ruey-Feng Chang: Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
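The appeal of ranklet coefficients, per the abstract, is intensity invariance. The simpler rank transform below demonstrates the underlying mechanism: any strictly monotonic gray-scale distortion (gain, gamma, etc.) leaves pixel ranks, and therefore rank-based features, unchanged. This is an illustrative sketch of the principle, not the ranklet transform itself:

```python
import numpy as np

def rank_transform(img):
    """Replace each pixel by its rank within the image (0 = darkest).
    Rank-based features computed from this map are invariant to any
    strictly increasing gray-scale mapping."""
    flat = img.ravel()
    ranks = np.empty_like(flat, dtype=int)
    ranks[np.argsort(flat, kind="stable")] = np.arange(flat.size)
    return ranks.reshape(img.shape)

rng = np.random.default_rng(1)
patch = rng.random((6, 6))
gamma_corrected = patch ** 0.5  # monotonic intensity distortion
same = np.array_equal(rank_transform(patch), rank_transform(gamma_corrected))
print(same)  # True: ranks survive the gamma correction
```

Ranklets apply this rank idea within oriented Haar-like supports, so they inherit the same invariance while remaining sensitive to local structure.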
|
46
|
Cai L, Wang X, Wang Y, Guo Y, Yu J, Wang Y. Robust phase-based texture descriptor for classification of breast ultrasound images. Biomed Eng Online 2015; 14:26. [PMID: 25889570 PMCID: PMC4376500 DOI: 10.1186/s12938-015-0022-8] [Citation(s) in RCA: 53] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2014] [Accepted: 03/05/2015] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND Classification of breast ultrasound (BUS) images is an important step in computer-aided diagnosis (CAD) systems for breast cancer. In this paper, a novel phase-based texture descriptor is proposed for efficient and robust classifiers to discriminate benign and malignant tumors in BUS images. METHOD The proposed descriptor, namely the phase congruency-based binary pattern (PCBP), is an oriented local texture descriptor that combines the phase congruency (PC) approach with the local binary pattern (LBP). A support vector machine (SVM) is then applied for tumor classification. To verify the efficiency of the proposed PCBP texture descriptor, we compare the PCBP with three other state-of-the-art texture descriptors in experiments on a BUS image database of 138 cases. Receiver operating characteristic (ROC) analysis is first performed, and seven criteria are utilized to evaluate the classification performance of the different texture descriptors. Then, to verify the robustness of the PCBP against illumination variations, we train the SVM classifier on texture features obtained from the original BUS images and apply this classifier to texture features extracted from BUS images under different illumination conditions (i.e., contrast-improved, gamma-corrected, and histogram-equalized). The area under the ROC curve (AUC) is used as the figure of merit to evaluate classification performance. RESULTS AND CONCLUSIONS The proposed PCBP texture descriptor achieves the highest AUC (0.894) with the least variation, regardless of the gray-scale variations. The experimental results show that classification of BUS images with the proposed PCBP texture descriptor is efficient and robust, and may be potentially useful for breast ultrasound CAD.
Affiliation(s)
- Lingyun Cai: Department of Electronic Engineering, Fudan University, Shanghai, 200433, China
- Xin Wang: Department of Electronic Engineering, Fudan University, Shanghai, 200433, China
- Yuanyuan Wang: Department of Electronic Engineering, Fudan University, Shanghai, 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai, 200433, China
- Yi Guo: Department of Electronic Engineering, Fudan University, Shanghai, 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai, 200433, China
- Jinhua Yu: Department of Electronic Engineering, Fudan University, Shanghai, 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai, 200433, China
- Yi Wang: Department of Ultrasound, Huashan Hospital, Fudan University, Shanghai, 200040, China
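The PCBP descriptor builds on the local binary pattern. A minimal 8-neighbour LBP over raw intensities shows the base operation; the paper applies the same binary-pattern idea to phase congruency maps instead, which is not reproduced here:

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour local binary pattern: threshold each pixel's 3x3
    neighbourhood at the centre value and pack the 8 comparison bits into
    a single code per interior pixel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbour >= centre).astype(np.uint8) << bit
    return out

img = np.array([[9, 9, 9],
                [0, 5, 0],
                [0, 0, 0]])
print(lbp8(img))  # [[7]]: only the three top neighbours are >= the centre value 5
```

Histograms of these codes over an image region form the texture feature vector; PCBP computes analogous codes on phase congruency responses, which is what makes it robust to illumination changes.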
|
47
|
Uniyal N, Eskandari H, Abolmaesumi P, Sojoudi S, Gordon P, Warren L, Rohling RN, Salcudean SE, Moradi M. Ultrasound RF time series for classification of breast lesions. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:652-661. [PMID: 25350925 DOI: 10.1109/tmi.2014.2365030] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
This work reports the use of ultrasound radio frequency (RF) time series analysis as a method for ultrasound-based classification of malignant breast lesions. The RF time series method is versatile and requires only a few seconds of raw ultrasound data, with no need for additional instrumentation. Using RF time series features and a machine learning framework, we have generated malignancy maps, from the estimated cancer likelihood, for decision support in biopsy recommendation. These maps depict the likelihood of malignancy for regions of size 1 mm(2) within the suspicious lesions. We report an area under the receiver operating characteristic curve of 0.86 (95% confidence interval [CI]: 0.84-0.90) using support vector machines and 0.81 (95% CI: 0.78-0.85) using the random forests classification algorithm, on 22 subjects with leave-one-subject-out cross-validation. Changing the classification method yielded consistent results, which indicates the robustness of this tissue-typing method. The findings of this report suggest that ultrasound RF time series, along with the developed machine learning framework, can help differentiate malignant from benign breast lesions, subsequently reducing the number of unnecessary biopsies after mammography screening.
|
48
|
Li Y, Jiao L, Shang R, Stolkin R. Dynamic-context cooperative quantum-behaved particle swarm optimization based on multilevel thresholding applied to medical image segmentation. Inf Sci (N Y) 2015. [DOI: 10.1016/j.ins.2014.10.005] [Citation(s) in RCA: 94] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
49
|
Chang SC, Lee YW, Lai YC, Tiu CM, Wang HK, Chiou HJ, Hsu YW, Chou YH, Chang RF. Automatic slice selection and diagnosis of breast strain elastography. Med Phys 2014; 41:102902. [DOI: 10.1118/1.4894717] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
|