1
Harchegani HB, Moghaddasi H. Designing a Hybrid Method of Artificial Neural Network and Particle Swarm Optimization to Diagnosis Polyps from Colorectal CT Images. Int J Prev Med 2024; 15:4. [PMID: 38487703] [PMCID: PMC10935572] [DOI: 10.4103/ijpvm.ijpvm_373_22]
Abstract
Background Since colorectal cancer is one of the world's most important cancers and often leads to death, computer-aided diagnosis (CAD) systems are a promising route to early detection with fewer side effects than conventional colonoscopy. The aim of this research was therefore to design a CAD system for processing colorectal computed tomography (CT) images using a combination of an artificial neural network (ANN) and a particle swarm optimizer. Methods First, the research data set was created from colorectal CT images of patients at Loghman-e Hakim Hospital in Tehran and Al-Zahra Hospital in Isfahan who underwent colorectal CT imaging followed by conventional colonoscopy within at most one month. The model implementation steps were then performed: electronic cleansing of images, segmentation, labeling of samples, feature extraction, and training and optimization of the ANN with a particle swarm optimizer. A binomial statistical test and a confusion-matrix calculation were used to evaluate the model. Results The accuracy, sensitivity, and specificity of the model were 0.9354, 0.9298, and 0.9889, respectively, with a McNemar test P value of 0.000. The binomial tests comparing the diagnosis rates of the model and the radiologists at Loghman-e Hakim and Al-Zahra Hospitals gave P values of 0.044 and 0.021, respectively. Conclusions The statistical tests and research variables show the efficiency of the CTC-CAD system, built on the hybrid of an ANN and particle swarm optimization, relative to radiologists' opinions in diagnosing colorectal polyps from CTC images.
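For illustration, the confusion-matrix metrics this abstract reports (accuracy, sensitivity, specificity) can be computed as follows; the counts below are hypothetical and are not taken from the study:

```python
def confusion_metrics(tp, fp, tn, fn):
    """Compute accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate: polyps correctly flagged
    specificity = tn / (tn + fp)   # true-negative rate: polyp-free cases correctly passed
    return accuracy, sensitivity, specificity

# Hypothetical counts, for illustration only
acc, sens, spec = confusion_metrics(tp=40, fp=2, tn=80, fn=5)
```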
Affiliation(s)
- Hossein Beigi Harchegani
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Hamid Moghaddasi
- Professor of Health Information Management and Medical Informatics, Faculty of Paramedical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran

2
Cao B, Zhang KC, Wei B, Chen L. Status quo and future prospects of artificial neural network from the perspective of gastroenterologists. World J Gastroenterol 2021; 27:2681-2709. [PMID: 34135549] [PMCID: PMC8173384] [DOI: 10.3748/wjg.v27.i21.2681]
Abstract
Artificial neural networks (ANNs) are one of the primary types of artificial intelligence and have been rapidly developed and applied in many fields. In recent years, there has been a sharp increase in research concerning ANNs in gastrointestinal (GI) diseases. This state-of-the-art technique exhibits excellent performance in diagnosis, prognostic prediction, and treatment. Competitions between ANNs and GI experts suggest that efficiency and accuracy can be reconciled by virtue of technical advancements. However, the shortcomings of ANNs are not negligible and may induce alterations in many aspects of medical practice. In this review, we introduce basic knowledge about ANNs and summarize their current achievements in GI diseases from the perspective of gastroenterologists. Existing limitations and future directions are also proposed to optimize the clinical potential of ANNs. In consideration of barriers to interdisciplinary knowledge, sophisticated concepts are discussed using plain words and metaphors to make this review more easily understood by medical practitioners and the general public.
Affiliation(s)
- Bo Cao
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Ke-Cheng Zhang
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Bo Wei
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Lin Chen
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China

3
Dutt S, Sivaraman A, Savoy F, Rajalakshmi R. Insights into the growing popularity of artificial intelligence in ophthalmology. Indian J Ophthalmol 2020; 68:1339-1346. [PMID: 32587159] [PMCID: PMC7574057] [DOI: 10.4103/ijo.ijo_1754_19]
Abstract
Artificial intelligence (AI) in healthcare is the use of computer algorithms to analyze complex medical data, detect associations, and provide diagnostic support outputs. AI and deep learning (DL) find obvious applications in fields like ophthalmology, where huge amounts of image-based data need to be analyzed and the outcomes of image recognition are reasonably well defined. AI and DL have found important roles in ophthalmology in the early screening and detection of conditions such as diabetic retinopathy (DR), age-related macular degeneration (ARMD), retinopathy of prematurity (ROP), glaucoma, and other ocular disorders; they have made successful inroads in early screening and diagnosis and appear promising, with the advantages of high screening accuracy, consistency, and scalability. AI algorithms nevertheless need equally skilled manpower: trained optometrists/ophthalmologists (annotators) must provide accurate ground truth for the training images. The basis of diagnoses made by AI algorithms is mechanical, and some human intervention is necessary for further interpretation. This review was conducted after tracing the history of AI in ophthalmology across multiple research databases and aims to summarise the journey of AI in ophthalmology so far, with close attention to the most crucial studies conducted. The article further aims to highlight the potential impact of AI in ophthalmology, its pitfalls, and how to use it optimally for the maximum benefit of ophthalmologists, healthcare systems, and patients alike.
Affiliation(s)
- Sreetama Dutt
- Department of Research & Development, Remidio Innovative Solutions, Bengaluru, Karnataka, India
- Anand Sivaraman
- Department of Research & Development, Remidio Innovative Solutions, Bengaluru, Karnataka, India
- Florian Savoy
- Department of Artificial Intelligence, Medios Technologies, Singapore
- Ramachandran Rajalakshmi
- Department of Ophthalmology, Dr. Mohan's Diabetes Specialities Centre Madras Diabetes Research Foundation, Chennai, Tamil Nadu, India

4
Hegde N, Shishir M, Shashank S, Dayananda P, Latte MV. A Survey on Machine Learning and Deep Learning-based Computer-Aided Methods for Detection of Polyps in CT Colonography. Curr Med Imaging 2021; 17:3-15. [PMID: 32294045] [DOI: 10.2174/2213335607999200415141427]
Abstract
BACKGROUND Colon cancer generally begins as a neoplastic growth of tissue, called a polyp, originating from the inner lining of the colon wall. Most colon polyps are considered harmless, but over time they can evolve into colon cancer, which, when diagnosed in later stages, is often fatal. Hence, time is of the essence in the early detection of polyps and the prevention of colon cancer. METHODS To aid this endeavor, many computer-aided methods have been developed that use a wide array of techniques to detect, localize, and segment polyps from CT Colonography images. In this paper, a comprehensive survey of state-of-the-art methods is presented, categorized broadly by the classification techniques employed: Machine Learning and Deep Learning. CONCLUSION The performance of each surveyed approach is analyzed against existing methods, along with how these approaches can be used to achieve timely and accurate detection of colon polyps.
Affiliation(s)
- Niharika Hegde
- JSS Academy of Technical Education, Bangalore-560060, Karnataka, India
- M Shishir
- JSS Academy of Technical Education, Bangalore-560060, Karnataka, India
- S Shashank
- JSS Academy of Technical Education, Bangalore-560060, Karnataka, India
- P Dayananda
- JSS Academy of Technical Education, Bangalore-560060, Karnataka, India

5
A Comparative Study of Modern Machine Learning Approaches for Focal Lesion Detection and Classification in Medical Images: BoVW, CNN and MTANN. Intelligent Systems Reference Library 2018. [DOI: 10.1007/978-3-319-68843-5_2]
6
Suzuki K. Overview of deep learning in medical imaging. Radiol Phys Technol 2017; 10:257-273. [PMID: 28689314] [DOI: 10.1007/s12194-017-0406-5]
Abstract
The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what changed in machine learning before and after the introduction of deep learning, (2) what the source of the power of deep learning is, (3) two major deep-learning models: the massive-training artificial neural network (MTANN) and the convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly, without object segmentation or feature extraction; this is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML), including deep learning, has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had higher performance, and required fewer training cases than CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.
Affiliation(s)
- Kenji Suzuki
- Medical Imaging Research Center and Department of Electrical and Computer Engineering, Illinois Institute of Technology, 3440 South Dearborn Street, Chicago, IL, 60616, USA; World Research Hub Initiative (WRHI), Tokyo Institute of Technology, Tokyo, Japan

7
A CAD of fully automated colonic polyp detection for contrasted and non-contrasted CT scans. Int J Comput Assist Radiol Surg 2017; 12:627-644. [PMID: 28101760] [DOI: 10.1007/s11548-017-1521-9]
Abstract
PURPOSE Computer-aided detection (CAD) systems are developed to help radiologists detect colonic polyps in CT scans. CAD systems make it possible to reduce detection time and increase detection accuracy. In this paper, we aimed to develop a fully integrated CAD system for automated detection of polyps that yields a high polyp detection rate with a reasonable number of false positives. METHODS The proposed CAD system is a multistage implementation whose main components are automatic colon segmentation, candidate detection, feature extraction, and classification. The first element of the algorithm includes a discrete segmentation for both air and fluid regions. Colon-air regions were determined by adaptive thresholding, and a volume/length measure was used to detect air regions. To extract the colon-fluid regions, a rule-based connectivity test was used to detect the regions belonging to the colon. Potential polyp candidates were detected with a 3D Laplacian of Gaussian (LoG) filter. Geometrical features were used to reduce false-positive detections, and a 2D projection image was generated to extract discriminative features as the inputs of an artificial neural network classifier. RESULTS Our CAD system performs at 100% sensitivity for polyps larger than 9 mm, 95.83% sensitivity for polyps 6-10 mm, and 85.71% sensitivity for polyps smaller than 6 mm, with 5.3 false positives per dataset. Clinically relevant polyps (≥6 mm) were identified with 96.67% sensitivity at 1.12 FP/dataset. CONCLUSIONS To the best of our knowledge, the polyp candidate detection system that determines candidates with LoG filters is one of the main contributions. We also propose a new 2D projection image calculation scheme to determine distinctive features. We believe our CAD system is highly effective in assisting radiologists interpreting CT colonography.
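The Laplacian-of-Gaussian candidate detection step can be illustrated in one dimension. This toy sketch (synthetic signal, hand-rolled convolution) is an assumption-laden simplification, not the paper's 3D implementation: a LoG kernel responds most strongly, in absolute value, at the center of a blob whose scale matches the kernel's sigma.

```python
import math

def log_kernel(sigma, radius):
    """Discrete 1-D Laplacian-of-Gaussian kernel."""
    return [(x * x - sigma * sigma) / sigma ** 4 * math.exp(-x * x / (2.0 * sigma * sigma))
            for x in range(-radius, radius + 1)]

def convolve(signal, kernel):
    """Same-size convolution with zero padding (kernel is symmetric,
    so correlation and convolution coincide)."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(signal):
                s += signal[idx] * k
        out.append(s)
    return out

# Synthetic "polyp": a Gaussian bump centred at index 50
signal = [math.exp(-(i - 50) ** 2 / (2.0 * 4.0 ** 2)) for i in range(101)]
response = convolve(signal, log_kernel(sigma=4.0, radius=12))
# Strongest absolute LoG response marks the blob centre, i.e. the candidate
candidate = max(range(len(response)), key=lambda i: abs(response[i]))
```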
8
Tajbakhsh N, Gurudu SR, Liang J. Automated Polyp Detection in Colonoscopy Videos Using Shape and Context Information. IEEE Trans Med Imaging 2016; 35:630-644. [PMID: 26462083] [DOI: 10.1109/tmi.2015.2487997]
Abstract
This paper presents the culmination of our research in designing a system for computer-aided detection (CAD) of polyps in colonoscopy videos. Our system is based on a hybrid context-shape approach, which utilizes context information to remove non-polyp structures and shape information to reliably localize polyps. Specifically, given a colonoscopy image, we first obtain a crude edge map. Second, we remove non-polyp edges from the edge map using our unique feature extraction and edge classification scheme. Third, we localize polyp candidates with probabilistic confidence scores in the refined edge maps using our novel voting scheme. The suggested CAD system has been tested using two public polyp databases, CVC-ColonDB, containing 300 colonoscopy images with a total of 300 polyp instances from 15 unique polyps, and ASU-Mayo database, which is our collection of colonoscopy videos containing 19,400 frames and a total of 5,200 polyp instances from 10 unique polyps. We have evaluated our system using free-response receiver operating characteristic (FROC) analysis. At 0.1 false positives per frame, our system achieves a sensitivity of 88.0% for CVC-ColonDB and a sensitivity of 48% for the ASU-Mayo database. In addition, we have evaluated our system using a new detection latency analysis where latency is defined as the time from the first appearance of a polyp in the colonoscopy video to the time of its first detection by our system. At 0.05 false positives per frame, our system yields a polyp detection latency of 0.3 seconds.
9
Epstein ML, Obara PR, Chen Y, Liu J, Zarshenas A, Makkinejad N, Dachman AH, Suzuki K. Quantitative radiology: automated measurement of polyp volume in computed tomography colonography using Hessian matrix-based shape extraction and volume growing. Quant Imaging Med Surg 2015; 5:673-684. [PMID: 26682137] [DOI: 10.3978/j.issn.2223-4292.2015.10.06]
Abstract
BACKGROUND Current measurement of the single longest dimension of a polyp is subjective and varies among radiologists. Our purpose was to develop a computerized measurement of polyp volume in computed tomography colonography (CTC). METHODS We developed a 3D automated scheme for measuring polyp volume at CTC. Our scheme consisted of segmentation of the colon wall to confine polyp segmentation to the wall, extraction of a highly polyp-like seed region based on the Hessian matrix, a 3D volume-growing technique under the minimum-surface-expansion criterion for segmentation of polyps, and sub-voxel refinement and surface smoothing for obtaining a smooth polyp surface. Our database consisted of 30 polyp views (15 polyps) in CTC scans from 13 patients. Each patient was scanned in the supine and prone positions. Polyp sizes measured in optical colonoscopy (OC) ranged from 6-18 mm with a mean of 10 mm. A radiologist outlined each polyp in each slice and calculated its volume by summing the per-slice volumes. The measurement study was repeated 3 times, at least 1 week apart, to minimize memory-effect bias, and the mean volume of the three studies was used as the "gold standard". RESULTS Our measurement scheme yielded a mean polyp volume of 0.38 cc (range, 0.15-1.24 cc), whereas the mean "gold standard" manual volume was 0.40 cc (range, 0.15-1.08 cc). The "gold-standard" manual and computer volumetry reached excellent agreement (intra-class correlation coefficient = 0.80), with no statistically significant difference [P (F≤f) = 0.42]. CONCLUSIONS We developed an automated scheme for measuring polyp volume at CTC based on Hessian matrix-based shape extraction and volume growing. Polyp volumes obtained by our automated scheme agreed excellently with "gold standard" manual volumes. Our fully automated scheme can efficiently provide accurate polyp volumes for radiologists and would thus help them improve the accuracy and efficiency of polyp volume measurement in CTC.
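The volume-growing idea can be sketched in 2D with a simple intensity-threshold region grower. This is a hedged simplification: the paper's minimum-surface-expansion criterion, Hessian-based seeding, and sub-voxel refinement are all omitted, and the image and tolerance below are invented for illustration.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a 4-connected region from `seed`, accepting pixels whose
    intensity is within `tol` of the seed intensity."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Toy image: a bright "polyp" patch (values ~50) against darker tissue (~10)
image = [
    [10, 10, 50, 55],
    [10, 12, 52, 54],
    [10, 11, 50, 53],
]
polyp = region_grow(image, seed=(1, 2), tol=5)  # grows over the bright patch only
```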
Affiliation(s)
- Mark L Epstein, Piotr R Obara, Yisong Chen, Junchi Liu, Amin Zarshenas, Nazanin Makkinejad, Abraham H Dachman, Kenji Suzuki
- 1 Department of Radiology, The University of Chicago, Chicago, IL, USA; 2 Department of Radiology, University of New Mexico, Albuquerque, NM, USA; 3 Department of Radiology, Loyola University Medical Center, Maywood, IL, USA; 4 Medical Imaging Research Center & Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL, USA; 5 School of Electronics Engineering and Computer Science, Beijing University, Beijing 100871, China

10
Alizadeh Sani Z, Shalbaf A, Behnam H, Shalbaf R. Automatic computation of left ventricular volume changes over a cardiac cycle from echocardiography images by nonlinear dimensionality reduction. J Digit Imaging 2015; 28:91-98. [PMID: 25059548] [DOI: 10.1007/s10278-014-9722-z]
Abstract
The curve of left ventricular (LV) volume changes throughout the cardiac cycle is a fundamental parameter for clinical evaluation of various cardiovascular diseases. Currently, this evaluation is often performed manually, which is tedious and time consuming and suffers from significant interobserver and intraobserver variability. This paper introduces a new automatic method, based on nonlinear dimensionality reduction (NLDR), for extracting the curve of LV volume changes over a cardiac cycle from two-dimensional (2-D) echocardiography images. Isometric feature mapping (Isomap) is one of the most popular NLDR algorithms. In this study, a modified version of the Isomap algorithm, in which the image-to-image distance metric is computed using nonrigid registration, is applied to 2-D echocardiography images of one cardiac cycle. Using this approach, the nonlinear information of these images is embedded in a 2-D manifold, and each image is characterized by a symbol on the constructed manifold. This new representation visualizes the relationship between the images based on LV volume changes and allows the curve of LV volume changes to be extracted automatically. In contrast to traditional segmentation algorithms, our method needs no LV myocardial segmentation and tracking, which is particularly difficult in echocardiography images. Moreover, a large training data set covering various diseases is not required. The results obtained by our method are quantitatively compared with those obtained manually by a highly experienced echocardiographer on ten healthy volunteers and six patients, which demonstrates the usefulness of the presented method.
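The core of Isomap, approximating geodesic (along-manifold) distances by shortest paths through a nearest-neighbour graph, can be sketched as follows. This is a generic illustration: the paper's nonrigid-registration image metric is replaced by plain Euclidean distance, and the points are invented.

```python
import heapq
import math

def geodesic_distances(points, n_neighbors):
    """Isomap-style geodesic distances: connect each point to its
    n_neighbors nearest points, then run Dijkstra from every source."""
    n = len(points)

    def d(i, j):
        return math.dist(points[i], points[j])

    # k-nearest-neighbour graph, symmetrised
    graph = {i: set() for i in range(n)}
    for i in range(n):
        nearest = sorted((j for j in range(n) if j != i), key=lambda j: d(i, j))[:n_neighbors]
        for j in nearest:
            graph[i].add(j)
            graph[j].add(i)

    # all-pairs shortest paths via Dijkstra
    dist = [[math.inf] * n for _ in range(n)]
    for src in range(n):
        dist[src][src] = 0.0
        heap = [(0.0, src)]
        while heap:
            cur, u = heapq.heappop(heap)
            if cur > dist[src][u]:
                continue
            for v in graph[u]:
                alt = cur + d(u, v)
                if alt < dist[src][v]:
                    dist[src][v] = alt
                    heapq.heappush(heap, (alt, v))
    return dist

# Points along a bent curve: the geodesic 0 -> 3 follows the curve (length 3),
# longer than the straight-line Euclidean distance sqrt(5)
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0)]
dist = geodesic_distances(pts, n_neighbors=1)
```

Isomap then feeds this geodesic distance matrix to classical multidimensional scaling to obtain the low-dimensional embedding.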
Affiliation(s)
- Zahra Alizadeh Sani
- Rajaie Cardiovascular Medical & Research Center, Iran University of Medical Science, Tehran, Iran

11
Wang S, Li D, Petrick N, Sahiner B, Linguraru MG, Summers RM. Optimizing area under the ROC curve using semi-supervised learning. Pattern Recognition 2015; 48:276-287. [PMID: 25395692] [PMCID: PMC4226543] [DOI: 10.1016/j.patcog.2014.07.025]
Abstract
Receiver operating characteristic (ROC) analysis is a standard methodology to evaluate the performance of a binary classification system. The area under the ROC curve (AUC) is a performance metric that summarizes how well a classifier separates two classes. Traditional AUC optimization techniques are supervised learning methods that utilize only labeled data (i.e., the true class is known for all data) to train the classifiers. In this work, inspired by semi-supervised and transductive learning, we propose two new AUC optimization algorithms hereby referred to as semi-supervised learning receiver operating characteristic (SSLROC) algorithms, which utilize unlabeled test samples in classifier training to maximize AUC. Unlabeled samples are incorporated into the AUC optimization process, and their ranking relationships to labeled positive and negative training samples are considered as optimization constraints. The introduced test samples will cause the learned decision boundary in a multidimensional feature space to adapt not only to the distribution of labeled training data, but also to the distribution of unlabeled test data. We formulate the semi-supervised AUC optimization problem as a semi-definite programming problem based on the margin maximization theory. The proposed methods SSLROC1 (1-norm) and SSLROC2 (2-norm) were evaluated using 34 (determined by power analysis) randomly selected datasets from the University of California, Irvine machine learning repository. Wilcoxon signed rank tests showed that the proposed methods achieved significant improvement compared with state-of-the-art methods. The proposed methods were also applied to a CT colonography dataset for colonic polyp classification and showed promising results.
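AUC has an equivalent ranking formulation (the Mann-Whitney U statistic divided by the number of positive-negative pairs), which is why ranking relationships between samples arise naturally as constraints in AUC optimization. A minimal illustration of that formulation:

```python
def auc(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs the classifier
    ranks correctly; ties count half (Mann-Whitney U / (n_pos * n_neg))."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

A perfect ranker scores 1.0, a random one about 0.5; raising the AUC is exactly raising the share of correctly ordered pairs, which is what the paper's ranking constraints on unlabeled samples target.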
Affiliation(s)
- Shijun Wang
- Imaging Biomarkers and Computer-Aided Diagnosis Lab, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, United States
- Diana Li
- Imaging Biomarkers and Computer-Aided Diagnosis Lab, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, United States
- Nicholas Petrick
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993, United States
- Berkman Sahiner
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993, United States
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System, Washington, DC 20010, United States
- School of Medicine and Health Sciences, George Washington University, Washington, DC 20037, United States
- Ronald M. Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Lab, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, United States

12
Three-dimensional SVM with latent variable: application for detection of lung lesions in CT images. J Med Syst 2014; 39:171. [PMID: 25472729] [DOI: 10.1007/s10916-014-0171-5]
Abstract
The study aims to improve the performance of current computer-aided schemes for the detection of lung lesions, especially those with low contrast in gray density or irregular shape. The relative position between a suspected lesion and the whole lung is, for the first time, added as a latent feature to enrich current three-dimensional (3D) features such as shape and texture. A 3D matrix patterns-based Support Vector Machine (SVM) with the latent variable, referred to as L-SVM3Dmatrix, was constructed accordingly. A CT image database containing 750 abnormal cases with 1050 lesions was used to train and evaluate several similar computer-aided detection (CAD) schemes: traditional features-based SVM (SVMfeature), 3D matrix patterns-based SVM (SVM3Dmatrix), and L-SVM3Dmatrix. Classifier performance was evaluated by computing the area under the ROC curve (AUC) using 5-fold cross-validation. The L-SVM3Dmatrix sensitivity was 93.0% with a false-positive (FP) rate of 1.23%, the SVM3Dmatrix sensitivity was 88.4% with a 1.49% FP rate, and the SVMfeature sensitivity was 87.2% with a 1.78% FP rate. L-SVM3Dmatrix outperformed the other lung CAD schemes, especially on the difficult lesions.
13
Lu L, Devarakota P, Vikal S, Wu D, Zheng Y, Wolf M. Computer Aided Diagnosis Using Multilevel Image Features on Large-Scale Evaluation. Medical Computer Vision. Large Data in Medical Imaging 2014. [DOI: 10.1007/978-3-319-05530-5_16]
14
Pixel-based Machine Learning in Computer-Aided Diagnosis of Lung and Colon Cancer. Intelligent Systems Reference Library 2014. [DOI: 10.1007/978-3-642-40017-9_5]
15
Shalbaf A, Behnam H, Alizade-Sani Z, Shojaifard M. Automatic assessment of regional and global wall motion abnormalities in echocardiography images by nonlinear dimensionality reduction. Med Phys 2013; 40:052904. [PMID: 23635297] [DOI: 10.1118/1.4799840]
Abstract
PURPOSE Identification and assessment of left ventricular (LV) global and regional wall motion (RWM) abnormalities are essential for clinical evaluation of various cardiovascular diseases. Currently, this evaluation is performed visually, which is highly dependent on the training and experience of echocardiographers and thus is prone to considerable interobserver and intraobserver variability. This paper presents a new automatic method, based on nonlinear dimensionality reduction (NLDR), for global wall motion evaluation and also for detection and classification of RWM abnormalities of the LV wall on a three-point scale as follows: (1) normokinesia, (2) hypokinesia, and (3) akinesia. METHODS Isometric feature mapping (Isomap) is one of the most popular NLDR algorithms. In this paper, a modified version of the Isomap algorithm, in which the image-to-image distance metric is computed using nonrigid registration, is applied to two-dimensional (2D) echocardiography images of one cardiac cycle. By this approach, nonlinear information in these images is embedded in a 2D manifold, and each image is characterized by a point on the constructed manifold. This new representation visualizes the relationship between these images based on LV volume changes. Then, new global and regional quantitative indices derived from the resultant manifold are proposed for global wall motion estimation and for classification of RWM of the LV wall on a three-point scale. Results obtained by our method are quantitatively compared with those obtained visually by two experienced echocardiographers as the reference (gold standard) on 10 healthy volunteers and 14 patients. RESULTS Linear regression analysis between the proposed global quantitative index and the global wall motion score index, and also with LV ejection fraction obtained by the reference experienced echocardiographers, resulted in correlation coefficients of 0.85 and 0.90, respectively.
Comparison between the proposed automatic RWM scoring and the reference visual scoring resulted in an absolute agreement of 82% and a relative agreement of 97%. CONCLUSIONS The proposed diagnostic method can serve as a useful adjunct to reference visual assessment by experienced echocardiographers for global wall motion estimation and for classification of RWM abnormalities of the LV wall on a three-point scale in clinical evaluations.
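The core idea above (embedding a cycle of frames into a low-dimensional manifold with Isomap and reading motion off the embedded coordinates) can be sketched as below. This is a toy illustration with simulated frames and the standard Euclidean Isomap, not the authors' nonrigid-registration distance metric.

```python
# Hedged sketch: embed a sequence of frames into a 2D manifold with
# Isomap; each frame becomes one point on the constructed manifold.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 60)              # one simulated "cardiac cycle"
frames = np.outer(np.sin(t), rng.random(256))  # 60 fake frames of 256 pixels
frames += 0.01 * rng.standard_normal(frames.shape)

# Each row of `embedding` is the 2D manifold coordinate of one frame
embedding = Isomap(n_neighbors=8, n_components=2).fit_transform(frames)
print(embedding.shape)
```

In the paper, the Euclidean distance used by Isomap's neighborhood graph is replaced by a registration-based distance, so that the manifold reflects anatomical deformation rather than raw intensity differences.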
Affiliation(s)
- Ahmad Shalbaf
- Department of Biomedical Engineering, School of Electrical Engineering, Iran University of Science & Technology, Tehran 1684613114, Iran
|
16
|
Suzuki K. Machine Learning in Computer-aided Diagnosis of the Thorax and Colon in CT: A Survey. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS 2013; E96-D:772-783. [PMID: 24174708 PMCID: PMC3810349 DOI: 10.1587/transinf.e96.d.772] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Computer-aided detection (CADe) and diagnosis (CAD) has been a rapidly growing, active area of research in medical imaging. Machine learning (ML) plays an essential role in CAD, because objects such as lesions and organs may not be represented accurately by a simple equation; thus, medical pattern recognition essentially requires "learning from examples." One of the most popular uses of ML is the classification of objects such as lesion candidates into certain classes (e.g., abnormal or normal, and lesions or non-lesions) based on input features (e.g., contrast and area) obtained from segmented lesion candidates. The task of ML is to determine "optimal" boundaries for separating classes in the multidimensional feature space which is formed by the input features. ML algorithms for classification include linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), multilayer perceptrons, and support vector machines (SVM). Recently, pixel/voxel-based ML (PML) emerged in medical image processing/analysis, which uses pixel/voxel values in images directly, instead of features calculated from segmented lesions, as input information; thus, feature calculation or segmentation is not required. In this paper, ML techniques used in CAD schemes for detection and diagnosis of lung nodules in thoracic CT and for detection of polyps in CT colonography (CTC) are surveyed and reviewed.
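The feature-based classification setup the survey describes can be sketched in a few lines; this is a synthetic illustration (the two features here are stand-ins for hand-crafted measures such as contrast and area, not real lesion features).

```python
# Hedged sketch: two hand-crafted features per lesion candidate fed to
# LDA and an SVM, each of which learns a decision boundary separating
# "lesion" from "non-lesion" in feature space.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=2, n_redundant=0,
                           n_informative=2, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)   # linear boundary
svm = SVC(kernel="rbf").fit(X_tr, y_tr)              # nonlinear boundary
print(round(lda.score(X_te, y_te), 2), round(svm.score(X_te, y_te), 2))
```

The contrast with PML is that here a segmentation and feature-extraction stage must precede the classifier, whereas PML feeds pixel/voxel values in directly.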
Affiliation(s)
- Kenji Suzuki
- Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
|
17
|
Computer-aided diagnosis systems for lung cancer: challenges and methodologies. Int J Biomed Imaging 2013; 2013:942353. [PMID: 23431282 PMCID: PMC3570946 DOI: 10.1155/2013/942353] [Citation(s) in RCA: 116] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2012] [Accepted: 11/20/2012] [Indexed: 11/24/2022] Open
Abstract
This paper overviews one of the most important, interesting, and challenging problems in oncology, the problem of lung cancer diagnosis. Developing an effective computer-aided diagnosis (CAD) system for lung cancer is of great clinical importance and can increase the patient's chance of survival. For this reason, CAD systems for lung cancer have been investigated in a huge number of research studies. A typical CAD system for lung cancer diagnosis is composed of four main processing steps: segmentation of the lung fields, detection of nodules inside the lung fields, segmentation of the detected nodules, and diagnosis of the nodules as benign or malignant. This paper overviews the current state-of-the-art techniques that have been developed to implement each of these CAD processing steps. For each technique, various aspects of technical issues, implemented methodologies, training and testing databases, and validation methods, as well as achieved performances, are described. In addition, the paper addresses several challenges that researchers face in each implementation step and outlines the strengths and drawbacks of the existing approaches for lung cancer CAD systems.
|
18
|
Chen GH, Wachinger C, Golland P. Sparse projections of medical images onto manifolds. INFORMATION PROCESSING IN MEDICAL IMAGING : PROCEEDINGS OF THE ... CONFERENCE 2013; 23:292-303. [PMID: 24683977 PMCID: PMC3979531 DOI: 10.1007/978-3-642-38868-2_25] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Manifold learning has been successfully applied to a variety of medical imaging problems. Its use in real-time applications requires fast projection onto the low-dimensional space. To this end, out-of-sample extensions are applied by constructing an interpolation function that maps from the input space to the low-dimensional manifold. Commonly used approaches such as the Nyström extension and kernel ridge regression require using all training points. We propose an interpolation function that only depends on a small subset of the input training data. Consequently, in the testing phase each new point only needs to be compared against a small number of input training data in order to project the point onto the low-dimensional space. We interpret our method as an out-of-sample extension that approximates kernel ridge regression. Our method involves solving a simple convex optimization problem and has the attractive property of guaranteeing an upper bound on the approximation error, which is crucial for medical applications. Tuning this error bound controls the sparsity of the resulting interpolation function. We illustrate our method in two clinical applications that require fast mapping of input images onto a low-dimensional space.
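The idea of a fast out-of-sample projection that depends on only a few training points can be illustrated as below. This is a hedged sketch, not the authors' convex program with its error-bound guarantee: it simply restricts kernel ridge regression to a small random set of landmark points.

```python
# Illustrative sketch: map new images onto a learned low-dimensional
# space using only m landmark training points, so each test projection
# compares against m points instead of the full training set.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 20))        # 500 training images (flattened)
Y = X @ rng.standard_normal((20, 2))      # their 2D manifold coordinates

m = 40                                    # landmarks, m << 500
idx = rng.choice(len(X), size=m, replace=False)
sparse_map = KernelRidge(kernel="rbf", alpha=1e-3).fit(X[idx], Y[idx])

x_new = rng.standard_normal((1, 20))
y_hat = sparse_map.predict(x_new)         # fast: kernel against m points only
print(y_hat.shape)
```

The paper's contribution is choosing that subset (and its weights) via a convex optimization with a tunable upper bound on the approximation error, rather than by random sampling as done here.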
Affiliation(s)
- George H. Chen
- Massachusetts Institute of Technology, Cambridge MA 02139, USA
- Polina Golland
- Massachusetts Institute of Technology, Cambridge MA 02139, USA
|
19
|
Affiliation(s)
- Qingzhu Wang
- School of Information Engineering, Northeast Dianli University, Jilin 132012, China.
|
20
|
Suzuki K. A review of computer-aided diagnosis in thoracic and colonic imaging. Quant Imaging Med Surg 2012; 2:163-76. [PMID: 23256078 DOI: 10.3978/j.issn.2223-4292.2012.09.02] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2012] [Accepted: 09/19/2012] [Indexed: 12/24/2022]
Abstract
Medical imaging has been indispensable in medicine since the discovery of x-rays. Medical imaging offers useful information on patients' medical conditions and on the causes of their symptoms and diseases. As imaging technologies advance, a large number of medical images are produced which physicians/radiologists must interpret. Thus, computer aids are in demand and have become indispensable in physicians' decision making based on medical images. Consequently, computer-aided detection and diagnosis (CAD) has been investigated and has been an active research area in medical imaging. CAD is defined as detection and/or diagnosis made by a radiologist/physician who takes into account the computer output as a "second opinion". In CAD research, detection and diagnosis of lung and colorectal cancer in thoracic and colonic imaging constitute major areas, because lung and colorectal cancers are the leading and second leading causes, respectively, of cancer deaths in the U.S. and also in other countries. In this review, CAD of the thorax and colon, including CAD for detection and diagnosis of lung nodules in thoracic CT, and that for detection of polyps in CT colonography, are reviewed.
Affiliation(s)
- Kenji Suzuki
- Department of Radiology, The University of Chicago, 5841 South Maryland Avenue, Chicago, IL 60637, USA
|
21
|
McKenna MT, Wang S, Nguyen TB, Burns JE, Petrick N, Summers RM. Strategies for improved interpretation of computer-aided detections for CT colonography utilizing distributed human intelligence. Med Image Anal 2012; 16:1280-92. [PMID: 22705287 PMCID: PMC3443285 DOI: 10.1016/j.media.2012.04.007] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2012] [Revised: 03/21/2012] [Accepted: 04/24/2012] [Indexed: 01/07/2023]
Abstract
Computer-aided detection (CAD) systems have been shown to improve the diagnostic performance of CT colonography (CTC) in the detection of premalignant colorectal polyps. Despite the improvement, the overall system is not optimal. CAD annotations on true lesions are incorrectly dismissed, and false positives are misinterpreted as true polyps. Here, we conduct an observer performance study utilizing distributed human intelligence in the form of anonymous knowledge workers (KWs) to investigate human performance in classifying polyp candidates under different presentation strategies. We evaluated 600 polyp candidates from 50 patients, each case having at least one polyp ≥6 mm, from a large database of CTC studies. Each polyp candidate was labeled independently as a true or false polyp by 20 KWs and an expert radiologist. We asked each labeler to determine whether the candidate was a true polyp after looking at a single 3D-rendered image of the candidate and after watching a video fly-around of the candidate. We found that distributed human intelligence improved significantly when presented with the additional information in the video fly-around. We noted that performance degraded with increasing interpretation time and increasing difficulty, but distributed human intelligence performed better than our CAD classifier for "easy" and "moderate" polyp candidates. Further, we observed numerous parallels between the expert radiologist and the KWs. Both showed similar improvement in classification moving from single-image to video interpretation. Additionally, difficulty estimates obtained from the KWs using an expectation maximization algorithm correlated well with the difficulty rating assigned by the expert radiologist. Our results suggest that distributed human intelligence is a powerful tool that will aid in the development of CAD for CTC.
Affiliation(s)
- Matthew T. McKenna
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224, MSC 1182, Bethesda, MD 20892-1182
- Shijun Wang
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224, MSC 1182, Bethesda, MD 20892-1182
- Tan B. Nguyen
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224, MSC 1182, Bethesda, MD 20892-1182
- Joseph E. Burns
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224, MSC 1182, Bethesda, MD 20892-1182
- Department of Radiological Sciences, University of California, Irvine Medical Center, 101 The City Drive South, Orange, CA 92868
- Nicholas Petrick
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, MD 20993-0002
- Ronald M. Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224, MSC 1182, Bethesda, MD 20892-1182
|
22
|
Wang S, Petrick N, Van Uitert RL, Periaswamy S, Wei Z, Summers RM. Matching 3-D prone and supine CT colonography scans using graphs. IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE : A PUBLICATION OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY 2012; 16:676-82. [PMID: 22552585 PMCID: PMC3498489 DOI: 10.1109/titb.2012.2194297] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
In this paper, we propose a new registration method for prone and supine computed tomographic colonography scans using graph matching. We formulate 3-D colon registration as a graph matching problem and propose a new graph matching algorithm based on mean field theory. In the proposed algorithm, we solve the matching problem iteratively. In each step, we use mean field theory to find the matched pair of nodes with the highest probability. During iterative optimization, one-to-one matching constraints are added to the system step by step. Prominent matching pairs found in previous iterations are used to guide subsequent mean field calculations. The proposed method achieved the best performance, with the smallest standard deviation, compared with two baseline algorithms: normalized distance along the colon centerline (NDACC) with manual colon centerline correction (p = 0.17) and spectral matching (p < 1e-5). A major advantage of the proposed method is that it is fully automatic and does not require defining a colon centerline for registration. For the NDACC method, by contrast, user interaction is almost always needed for identifying the colon centerlines.
Affiliation(s)
- Shijun Wang
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, USA.
|
23
|
Wang S, McKenna MT, Nguyen TB, Burns JE, Petrick N, Sahiner B, Summers RM. Seeing is believing: video classification for computed tomographic colonography using multiple-instance learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2012; 31:1141-53. [PMID: 22552333 PMCID: PMC3480731 DOI: 10.1109/tmi.2012.2187304] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
In this paper, we present development and testing results for a novel colonic polyp classification method for use as part of a computed tomographic colonography (CTC) computer-aided detection (CAD) system. Inspired by the interpretative methodology of radiologists using 3-D fly-through mode in CTC reading, we have developed an algorithm which utilizes sequences of images (referred to here as videos) for classification of CAD marks. For each CAD mark, we created a video composed of a series of intraluminal, volume-rendered images visualizing the detection from multiple viewpoints. We then framed the video classification question as a multiple-instance learning (MIL) problem. Since a positive (negative) bag may contain negative (positive) instances, which in our case depends on the viewing angles and camera distance to the target, we developed a novel MIL paradigm to accommodate this class of problems. We solved the new MIL problem by maximizing a L2-norm soft margin using semidefinite programming, which can optimize relevant parameters automatically. We tested our method by analyzing a CTC data set obtained from 50 patients from three medical centers. Our proposed method showed significantly better performance compared with several traditional MIL methods.
Affiliation(s)
- Shijun Wang
- National Institutes of Health, Bethesda, MD, 20892 USA
- Tan B. Nguyen
- National Institutes of Health, Bethesda, MD, 20892 USA
- Joseph E. Burns
- Department of Radiological Sciences, University of California, Irvine, School of Medicine, Orange, CA 92868 USA
- Nicholas Petrick
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993 USA
- Berkman Sahiner
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993 USA
|
24
|
Atasoy S, Mateus D, Meining A, Yang GZ, Navab N. Endoscopic video manifolds for targeted optical biopsy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2012; 31:637-53. [PMID: 22057050 DOI: 10.1109/tmi.2011.2174252] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Gastro-intestinal (GI) endoscopy is a widely used clinical procedure for screening and surveillance of digestive tract diseases ranging from Barrett's Oesophagus to oesophageal cancer. The current surveillance protocol consists of periodic endoscopic examinations performed at 3-4 month intervals, including an expert's visual assessment and biopsies taken from suspicious tissue regions. The recent development of a new imaging technology, called probe-based confocal laser endomicroscopy (pCLE), has enabled the acquisition of in vivo optical biopsies without removing any tissue sample. Despite their several advantages (noninvasiveness, real-time in vivo feedback), optical biopsies pose a new challenge for the endoscopic expert. Due to their noninvasive nature, optical biopsies do not leave any scar on the tissue, and therefore recognition of previous optical biopsy sites in surveillance endoscopy becomes very challenging. In this work, we introduce a clustering and classification framework to facilitate retargeting of previous optical biopsy sites in surveillance upper GI endoscopies. A new representation of endoscopic videos based on manifold learning, "endoscopic video manifolds" (EVMs), is proposed. The low-dimensional EVM representation is adapted to facilitate two different clustering tasks, i.e., clustering of informative frames and of patient-specific endoscopic segments, only by changing the similarity measure. Each step of the proposed framework is validated on three in vivo patient datasets containing 1834, 3445, and 1546 frames, corresponding to endoscopic videos of 73.36, 137.80, and 61.84 s, respectively. Improvements achieved by the introduced EVM representation are demonstrated by quantitative analysis in comparison to the original image representation and principal component analysis.
Final experiments evaluating the complete framework demonstrate the feasibility of the proposed method as a promising step for assisting the endoscopic expert in retargeting the optical biopsy sites.
Affiliation(s)
- Selen Atasoy
- Chair for Computer Aided Medical Procedures, Technische Universität München, München, Germany.
|
25
|
Pixel-based machine learning in medical imaging. Int J Biomed Imaging 2012; 2012:792079. [PMID: 22481907 PMCID: PMC3299341 DOI: 10.1155/2012/792079] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2011] [Accepted: 11/14/2011] [Indexed: 11/24/2022] Open
Abstract
Machine learning (ML) plays an important role in the medical imaging field, including medical image analysis and computer-aided diagnosis, because objects such as lesions and organs may not be represented accurately by a simple equation; thus, medical pattern recognition essentially requires “learning from examples.” One of the most popular uses of ML is classification of objects such as lesions into certain classes (e.g., abnormal or normal, or lesions or nonlesions) based on input features (e.g., contrast and circularity) obtained from segmented object candidates. Recently, pixel/voxel-based ML (PML) emerged in medical image processing/analysis, which uses pixel/voxel values in images directly, instead of features calculated from segmented objects, as input information; thus, feature calculation or segmentation is not required. Because PML can avoid errors caused by inaccurate feature calculation and segmentation, which often occur for subtle or complex objects, its performance can potentially be higher for such objects than that of common classifiers (i.e., feature-based MLs). In this paper, PMLs are surveyed to make clear (a) classes of PMLs, (b) similarities and differences within (among) different PMLs and those between PMLs and feature-based MLs, (c) advantages and limitations of PMLs, and (d) their applications in medical imaging.
|
26
|
|
27
|
Wachinger C, Yigitsoy M, Rijkhorst EJ, Navab N. Manifold learning for image-based breathing gating in ultrasound and MRI. Med Image Anal 2011; 16:806-18. [PMID: 22226466 DOI: 10.1016/j.media.2011.11.008] [Citation(s) in RCA: 50] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2011] [Revised: 11/27/2011] [Accepted: 11/28/2011] [Indexed: 11/24/2022]
Abstract
Respiratory motion is a challenging factor for image acquisition and image-guided procedures in the abdominal and thoracic region. In order to address the issues arising from respiratory motion, it is often necessary to detect the respiratory signal. In this article, we propose a novel, purely image-based retrospective respiratory gating method for ultrasound and MRI. Further, we apply this technique to acquire breathing-affected 4D ultrasound with a wobbler probe and, similarly, to create 4D MR with a slice stacking approach. We achieve the gating with Laplacian eigenmaps, a manifold learning technique, to determine the low-dimensional manifold embedded in the high-dimensional image space. Since Laplacian eigenmaps assign to each image frame a coordinate in low-dimensional space by respecting the neighborhood relationship, they are well suited for analyzing the breathing cycle. We perform the image-based gating on several 2D and 3D ultrasound datasets over time, and quantify its very good performance by comparing it to measurements from an external gating system. For MRI, we perform the manifold learning on several datasets for various orientations and positions. We achieve very high correlations by a comparison to an alternative gating with diaphragm tracking.
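The gating principle described above can be sketched with scikit-learn's Laplacian-eigenmaps implementation (`SpectralEmbedding`). This is a toy illustration on simulated breathing-affected frames, not the authors' ultrasound/MRI pipeline; the recovered 1D coordinate serves as the respiratory signal.

```python
# Hedged sketch: image-based respiratory gating via Laplacian eigenmaps.
# Each frame is embedded to a 1D coordinate that respects neighborhood
# relationships, so it varies smoothly with the breathing cycle.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
phase = np.sin(np.linspace(0, 6 * np.pi, 120))   # 3 simulated breathing cycles
frames = np.outer(phase, rng.random(100))        # 120 fake 100-pixel frames
frames += 0.01 * rng.standard_normal(frames.shape)

coord = SpectralEmbedding(n_components=1, n_neighbors=10).fit_transform(frames)

# Agreement with the true respiratory phase (sign of embedding is arbitrary)
r = abs(np.corrcoef(coord.ravel(), phase)[0, 1])
print(coord.shape)
```

In practice the embedded coordinate would be compared against an external gating system or diaphragm tracking, as the authors do, before being used to sort frames into respiratory bins.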
Affiliation(s)
- Christian Wachinger
- Computer Aided Medical Procedures (CAMP), Technische Universität München, München, Germany.
|
28
|
Xu JW, Suzuki K. Massive-training support vector regression and Gaussian process for false-positive reduction in computer-aided detection of polyps in CT colonography. Med Phys 2011; 38:1888-902. [PMID: 21626922 DOI: 10.1118/1.3562898] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE A massive-training artificial neural network (MTANN) has been developed for the reduction of false positives (FPs) in computer-aided detection (CADe) of polyps in CT colonography (CTC). A major limitation of the MTANN is the long training time. To address this issue, the authors investigated the feasibility of two state-of-the-art regression models, namely, support vector regression (SVR) and Gaussian process regression (GPR) models, in the massive-training framework and developed massive-training SVR (MTSVR) and massive-training GPR (MTGPR) for the reduction of FPs in CADe of polyps. METHODS The authors applied SVR and GPR as volume-processing techniques in the distinction of polyps from FP detections in a CTC CADe scheme. Unlike artificial neural networks (ANNs), both SVR and GPR are memory-based methods that store a part of or the entire training data for testing. Therefore, their training is generally fast and they are able to improve the efficiency of the massive-training methodology. Rooted in a maximum margin property, SVR offers excellent generalization ability and robustness to outliers. On the other hand, GPR approaches nonlinear regression from a Bayesian perspective, which produces both the optimal estimated function and the covariance associated with the estimation. Therefore, both SVR and GPR, as the state-of-the-art nonlinear regression models, are able to offer a performance comparable or potentially superior to that of ANN, with highly efficient training. Both MTSVR and MTGPR were trained directly with voxel values from CTC images. A 3D scoring method based on a 3D Gaussian weighting function was applied to the outputs of MTSVR and MTGPR for distinction between polyps and nonpolyps. To test the performance of the proposed models, the authors compared them to the original MTANN in the distinction between actual polyps and various types of FPs in terms of training time reduction and FP reduction performance. 
The authors' CTC database consisted of 240 CTC data sets obtained from 120 patients in the supine and prone positions. The training set consisted of 27 patients, 10 of which had 10 polyps. The authors selected 10 nonpolyps (i.e., FP sources) from the training set. These ten polyps and ten nonpolyps were used for training the proposed models. The testing set consisted of 93 patients, including 19 polyps in 7 patients and 86 negative patients with 474 FPs produced by an original CADe scheme. RESULTS With the MTSVR, the training time was reduced by a factor of 190, while a FP reduction performance [by-polyp sensitivity of 94.7% (18/19) with 2.5 (230/93) FPs/patient] comparable to that of the original MTANN [the same sensitivity with 2.6 (244/93) FPs/patient] was achieved. The classification performance in terms of the area under the receiver-operating-characteristic curve value of the MTGPR (0.82) was statistically significantly higher than that of the original MTANN (0.77), with a two-sided p-value of 0.03. The MTGPR yielded a 94.7% (18/19) by-polyp sensitivity at a FP rate of 2.5 (235/93) per patient and reduced the training time by a factor of 1.3. CONCLUSIONS Both MTSVR and MTGPR improve the efficiency of the training in the massive-training framework while maintaining a comparable performance.
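The two regression models the authors plug into the massive-training framework can be contrasted on toy data as below. This is a synthetic sketch (1D inputs, not CTC voxel values); note that GPR additionally returns the predictive uncertainty, a point the abstract highlights.

```python
# Toy comparison of support vector regression (SVR) and Gaussian
# process regression (GPR), the two models used in place of the
# original massive-training ANN.
import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

svr = SVR(kernel="rbf", C=10.0).fit(X, y)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01).fit(X, y)

X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
y_svr = svr.predict(X_test)                       # point estimates only
mean, std = gpr.predict(X_test, return_std=True)  # estimate + uncertainty
print(y_svr.shape, mean.shape)
```

Both models are kernel methods trained without iterative backpropagation, which is the source of the training-time reduction reported over the original MTANN.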
Affiliation(s)
- Jian-Wu Xu
- Department of Radiology, The University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637, USA.
|