251
Wang H, Zhou Z, Li Y, Chen Z, Lu P, Wang W, Liu W, Yu L. Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images. EJNMMI Res 2017; 7:11. [PMID: 28130689] [PMCID: PMC5272853] [DOI: 10.1186/s13550-017-0260-9]
Abstract
BACKGROUND This study aimed to compare one state-of-the-art deep learning method and four classical machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer (NSCLC) from 18F-FDG PET/CT images. Another objective was to compare the discriminative power of the recently popular PET/CT texture features with the widely used diagnostic features such as tumor size, CT value, SUV, image contrast, and intensity standard deviation. The four classical machine learning methods were random forests, support vector machines, adaptive boosting, and artificial neural networks. The deep learning method was the convolutional neural network (CNN). The five methods were evaluated using 1397 lymph nodes collected from PET/CT images of 168 patients, with the corresponding pathology analysis results as the gold standard. The comparison was conducted using 10 times 10-fold cross-validation based on the criteria of sensitivity, specificity, accuracy (ACC), and area under the ROC curve (AUC). For each classical method, different input features were compared to select the optimal feature set. Based on the optimal feature set, the classical methods were compared with CNN, as well as with human doctors from our institute. RESULTS For the classical methods, the diagnostic features resulted in 81-85% ACC and 0.87-0.92 AUC, which were significantly higher than the results of the texture features. The sensitivity, specificity, ACC, and AUC of CNN were 84%, 88%, 86%, and 0.91, respectively. There was no significant difference between the results of CNN and the best classical method. The sensitivity, specificity, and ACC of the human doctors were 73%, 90%, and 82%, respectively. All five machine learning methods had higher sensitivities but lower specificities than the human doctors. CONCLUSIONS The present study shows that the performance of CNN is not significantly different from that of the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images. Because CNN does not require tumor segmentation or feature calculation, it is more convenient and more objective than the classical methods. However, CNN does not make use of the important diagnostic features, which have been proven more discriminative than the texture features for classifying small-sized lymph nodes. Therefore, incorporating the diagnostic features into CNN is a promising direction for future research.
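As a rough illustration of the evaluation protocol above (10 times 10-fold cross-validation comparing random forests, support vector machines, adaptive boosting, and a neural network on accuracy and AUC), the following sketch uses scikit-learn with synthetic stand-in data; the study's PET/CT features, class balance, and model settings are not reproduced and are assumptions.

```python
# Sketch: compare classical classifiers with 10x repeated 10-fold CV,
# mirroring the protocol in the abstract. Data are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

# Stand-in for "diagnostic features" (size, CT value, SUV, contrast, std. dev.)
X, y = make_classification(n_samples=1397, n_features=5, n_informative=4,
                           n_redundant=0, weights=[0.7, 0.3], random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)),
    "adaboost": AdaBoostClassifier(n_estimators=200, random_state=0),
    "mlp": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                       random_state=0)),
}

# RepeatedStratifiedKFold reproduces the "10 times 10-fold" scheme while
# keeping the metastasis/non-metastasis ratio constant across folds.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=cv, scoring=["accuracy", "roc_auc"])
    print(f"{name:14s} ACC={scores['test_accuracy'].mean():.3f} "
          f"AUC={scores['test_roc_auc'].mean():.3f}")
```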
Affiliation(s)
- Hongkai Wang
- Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, No. 2 Linggong Street, Ganjingzi District, Dalian, Liaoning, 116024, China
- Zongwei Zhou
- Department of Biomedical Informatics and the College of Health Solutions, Arizona State University, 13212 East Shea Boulevard, Scottsdale, AZ, 85259, USA
- Yingci Li
- Center of PET/CT, The Affiliated Tumor Hospital of Harbin Medical University, 150 Haping Road, Nangang District, Harbin, Heilongjiang Province, 150081, China
- Zhonghua Chen
- Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, No. 2 Linggong Street, Ganjingzi District, Dalian, Liaoning, 116024, China
- Peiou Lu
- Center of PET/CT, The Affiliated Tumor Hospital of Harbin Medical University, 150 Haping Road, Nangang District, Harbin, Heilongjiang Province, 150081, China
- Wenzhi Wang
- Center of PET/CT, The Affiliated Tumor Hospital of Harbin Medical University, 150 Haping Road, Nangang District, Harbin, Heilongjiang Province, 150081, China
- Wanyu Liu
- HIT-INSA Sino French Research Centre for Biomedical Imaging, Harbin Institute of Technology, Harbin, Heilongjiang, 150001, China
- Lijuan Yu
- Center of PET/CT, The Affiliated Tumor Hospital of Harbin Medical University, 150 Haping Road, Nangang District, Harbin, Heilongjiang Province, 150081, China
252
Affiliation(s)
- Michael F Byrne
- University of British Columbia, Vancouver, British Columbia, Canada
- Neal Shahidi
- University of British Columbia, Vancouver, British Columbia, Canada
- Douglas K Rex
- Indiana University Medical Center, Indianapolis, Indiana
253
Ioanovici AC, Feier AM, Țilea I, Dobru D. Computer-Aided Diagnosis in Colorectal Cancer: Current Concepts and Future Prospects. Journal of Interdisciplinary Medicine 2017. [DOI: 10.1515/jim-2017-0057]
Abstract
Colorectal cancer is an important health issue, both in terms of the number of people affected and the associated costs. Colonoscopy is an important screening method that has a positive impact on the survival of patients with colorectal cancer. The combination of colonoscopy with computer-aided diagnostic tools is currently a focus of research, as various methods have already been proposed and show great potential for better management of this disease. We performed a review of the literature and present a series of aspects, such as the basics of machine learning algorithms, different computational models and their benchmarks expressed through measures such as positive predictive value and detection accuracy, and the classification of colorectal polyps. Introducing computer-aided diagnostic tools can help clinicians obtain results with a high degree of confidence when performing colonoscopies. The growing field of machine learning in medicine will have a major impact on patient management in the future.
Affiliation(s)
- Ioan Țilea
- University of Medicine and Pharmacy, Tîrgu Mureș, Romania
- Department of Clinical Science-Internal Medicine, University of Medicine and Pharmacy, Tîrgu Mureș, Romania
- Daniela Dobru
- University of Medicine and Pharmacy, Tîrgu Mureș, Romania
- Department of Gastroenterology, County Emergency Clinical Hospital, Tîrgu Mureș, Romania
254
Pogorelov K, Riegler M, Eskeland SL, de Lange T, Johansen D, Griwodz C, Schmidt PT, Halvorsen P. Efficient disease detection in gastrointestinal videos – global features versus neural networks. Multimedia Tools and Applications 2017; 76:22493-22525. [DOI: 10.1007/s11042-017-4989-y]
255
Riegler M, Pogorelov K, Eskeland SL, Schmidt PT, Albisser Z, Johansen D, Griwodz C, Halvorsen P, Lange TD. From Annotation to Computer-Aided Diagnosis. ACM Transactions on Multimedia Computing, Communications, and Applications 2017; 13:1-26. [DOI: 10.1145/3079765]
Abstract
Holistic medical multimedia systems covering end-to-end functionality from data collection to aided diagnosis are highly needed but rare. In many hospitals, the potential value of multimedia data collected through routine examinations is not recognized. Moreover, the availability of the data is limited, as health care personnel may not have direct access to stored data. However, medical specialists interact with multimedia content daily through their everyday work and have an increasing interest in finding ways to use it to facilitate their work processes. In this article, we present a novel, holistic multimedia system aiming to tackle automatic analysis of video from gastrointestinal (GI) endoscopy. The proposed system comprises the whole pipeline, including data collection, processing, analysis, and visualization. It combines filters using machine learning, image recognition, and extraction of global and local image features. The novelty lies primarily in this holistic approach and its real-time performance, where we automate a complete algorithmic GI screening process. We built the system in a modular way to make it easily extendable for analyzing various abnormalities, and we made it efficient so that it runs in real time. The experimental evaluation shows that the detection and localization accuracy are comparable to, or even better than, those of existing systems, while the system leads by far in terms of real-time performance and efficient resource consumption.
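The filter stage described above (global image features fed to machine-learning classifiers, applied frame by frame) can be illustrated with a minimal sketch; the color-histogram feature, random-forest classifier, and toy frames below are assumptions, not the system's actual components.

```python
# Sketch: a frame-level "global feature" filter of the kind such a pipeline
# chains together -- a color histogram per frame fed to a trained classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def global_color_histogram(frame_rgb: np.ndarray, bins: int = 8) -> np.ndarray:
    """Concatenated per-channel histogram, normalized to sum to 1."""
    feats = []
    for c in range(3):
        hist, _ = np.histogram(frame_rgb[..., c], bins=bins, range=(0, 255))
        feats.append(hist)
    feats = np.concatenate(feats).astype(float)
    return feats / max(feats.sum(), 1.0)

# Toy training data: random frames standing in for labeled endoscopy frames.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(200, 64, 64, 3), dtype=np.uint8)
labels = rng.integers(0, 2, size=200)   # 1 = "finding", 0 = "normal"

X = np.stack([global_color_histogram(f) for f in frames])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Per-frame screening, as a real-time pipeline would do frame by frame.
new_frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print("flagged" if clf.predict([global_color_histogram(new_frame)])[0] else "clean")
```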
Affiliation(s)
- Michael Riegler
- Simula Research Laboratory and University of Oslo, Lysaker, Norway
- Peter Thelin Schmidt
- Karolinska Institutet, Department of Medicine, Solna and Karolinska University Hospital, Center for Digestive Diseases, Stockholm, Sweden
- Zeno Albisser
- Simula Research Laboratory and University of Oslo, Lysaker, Norway
- Carsten Griwodz
- Simula Research Laboratory and University of Oslo, Lysaker, Norway
- Pål Halvorsen
- Simula Research Laboratory and University of Oslo, Lysaker, Norway
- Thomas De Lange
- Bærum Hospital, Vestre Viken Hospital Trust and Cancer Registry of Norway, Postboks Majorstuen, Oslo
256
Vázquez D, Bernal J, Sánchez FJ, Fernández-Esparrach G, López AM, Romero A, Drozdzal M, Courville A. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images. Journal of Healthcare Engineering 2017; 2017:4037190. [PMID: 29065595] [PMCID: PMC5549472] [DOI: 10.1155/2017/4037190]
Abstract
Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reducing CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) that help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset, and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
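A minimal sketch of the kind of fully convolutional baseline described above: per-pixel class scores from convolutional layers, upsampled back to the input resolution and trained with a pixel-wise cross-entropy loss. The tiny architecture and the 4-class labels are illustrative assumptions, not the paper's networks.

```python
# Sketch: a minimal fully convolutional network (FCN) for multi-class
# endoluminal scene segmentation. Layer sizes and the 4-class setup are
# assumptions for illustration only.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # 1x1 conv produces per-pixel class scores at 1/4 resolution...
        self.classifier = nn.Conv2d(64, n_classes, kernel_size=1)
        # ...and bilinear upsampling restores the input resolution.
        self.upsample = nn.Upsample(scale_factor=4, mode="bilinear",
                                    align_corners=False)

    def forward(self, x):
        return self.upsample(self.classifier(self.encoder(x)))

model = TinyFCN(n_classes=4)
frames = torch.randn(2, 3, 224, 224)          # stand-in colonoscopy frames
masks = torch.randint(0, 4, (2, 224, 224))    # stand-in ground-truth labels
logits = model(frames)                        # (2, 4, 224, 224)
loss = nn.CrossEntropyLoss()(logits, masks)   # per-pixel classification loss
loss.backward()
print(logits.shape, float(loss))
```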
Affiliation(s)
- David Vázquez
- Computer Vision Center, Computer Science Department, Universitat Autonoma de Barcelona, Barcelona, Spain
- Montreal Institute for Learning Algorithms, Université de Montréal, Montreal, QC, Canada
- Jorge Bernal
- Computer Vision Center, Computer Science Department, Universitat Autonoma de Barcelona, Barcelona, Spain
- F. Javier Sánchez
- Computer Vision Center, Computer Science Department, Universitat Autonoma de Barcelona, Barcelona, Spain
- Gloria Fernández-Esparrach
- Endoscopy Unit, Gastroenterology Service, CIBERHED, IDIBAPS, Hospital Clinic, Universidad de Barcelona, Barcelona, Spain
- Antonio M. López
- Computer Vision Center, Computer Science Department, Universitat Autonoma de Barcelona, Barcelona, Spain
- Montreal Institute for Learning Algorithms, Université de Montréal, Montreal, QC, Canada
- Adriana Romero
- Montreal Institute for Learning Algorithms, Université de Montréal, Montreal, QC, Canada
- Michal Drozdzal
- École Polytechnique de Montréal, Montréal, QC, Canada
- Imagia Inc., Montréal, QC, Canada
- Aaron Courville
- Montreal Institute for Learning Algorithms, Université de Montréal, Montreal, QC, Canada
257
A Saliency-based Unsupervised Method for Angiectasia Detection in Endoscopic Video Frames. J Med Biol Eng 2017. [DOI: 10.1007/s40846-017-0299-0]
258
Bernal J, Tajbakhsh N, Sanchez FJ, Matuszewski BJ, Angermann Q, Romain O, Rustad B, Balasingham I, Pogorelov K, Debard Q, Maier-Hein L, Speidel S, Stoyanov D, Brandao P, Cordova H, Sanchez-Montes C, Gurudu SR, Fernandez-Esparrach G, Dray X, Histace A. Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results From the MICCAI 2015 Endoscopic Vision Challenge. IEEE Transactions on Medical Imaging 2017; 36:1231-1249. [PMID: 28182555] [DOI: 10.1109/tmi.2017.2664042]
Abstract
Colonoscopy is the gold standard for colon cancer screening, though some polyps are still missed, thus preventing early disease detection and treatment. Several computational systems have been proposed to assist polyp detection during colonoscopy, but so far without consistent evaluation. The lack of publicly available annotated databases has made it difficult to compare methods and to assess whether they achieve performance levels acceptable for clinical use. The Automatic Polyp Detection sub-challenge, conducted as part of the Endoscopic Vision Challenge (http://endovis.grand-challenge.org) at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2015, was an effort to address this need. In this paper, we report the results of this comparative evaluation of polyp detection methods, and we describe additional experiments to further explore differences between methods. We define performance metrics and provide evaluation databases that allow comparison of multiple methodologies. Results show that convolutional neural networks are the state of the art. Nevertheless, it is also demonstrated that combining different methodologies can lead to improved overall performance.
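Frame-level detection evaluation of the kind used to compare challenge entries can be sketched as follows; the centre-in-box matching rule and the toy data are assumptions, not the challenge's exact protocol.

```python
# Sketch: frame-level polyp-detection metrics. A detection counts as a true
# positive here if its centre falls inside an annotated polyp bounding box
# (an assumption; the challenge used its own matching criteria).
from typing import List, Tuple

Box = Tuple[int, int, int, int]           # x_min, y_min, x_max, y_max
Point = Tuple[int, int]                   # detection centre (x, y)

def inside(p: Point, b: Box) -> bool:
    return b[0] <= p[0] <= b[2] and b[1] <= p[1] <= b[3]

def frame_metrics(detections: List[Point], truths: List[Box]):
    tp = sum(any(inside(d, b) for b in truths) for d in detections)
    fp = len(detections) - tp
    fn = sum(not any(inside(d, b) for d in detections) for b in truths)
    return tp, fp, fn

# Aggregate over a toy "video" of three frames.
video = [
    ([(50, 60)], [(40, 40, 80, 80)]),     # hit
    ([(10, 10)], [(40, 40, 80, 80)]),     # miss + false alarm
    ([], []),                             # true negative frame
]
TP = FP = FN = 0
for dets, gts in video:
    tp, fp, fn = frame_metrics(dets, gts)
    TP, FP, FN = TP + tp, FP + fp, FN + fn

precision = TP / (TP + FP) if TP + FP else 0.0
recall = TP / (TP + FN) if TP + FN else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```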
259
On the Necessity of Fine-Tuned Convolutional Neural Networks for Medical Imaging. Deep Learning and Convolutional Neural Networks for Medical Image Computing 2017. [DOI: 10.1007/978-3-319-42999-1_11]
260
Abstract
Video capsule endoscopy (VCE) is widely used nowadays for visualizing the gastrointestinal (GI) tract. Capsule endoscopy exams are usually prescribed as an additional monitoring mechanism and can help in identifying polyps, bleeding, and other findings. To analyze the large-scale video data produced by VCE exams, automatic image processing, computer vision, and learning algorithms are required. Recently, automatic polyp detection algorithms have been proposed with various degrees of success. Although polyp detection in colonoscopy and other traditional endoscopy images is becoming a mature field, detecting polyps automatically in VCE remains a hard problem because of its unique imaging characteristics. We review different polyp detection approaches for VCE imagery and provide a systematic analysis of the challenges faced by standard image processing and computer vision methods.
Affiliation(s)
- V. Prasath
- Computational Imaging and VisAnalysis (CIVA) Lab, Department of Computer Science, University of Missouri-Columbia, Columbia, MO 65211, USA
261
Integrating Online and Offline Three-Dimensional Deep Learning for Automated Polyp Detection in Colonoscopy Videos. IEEE J Biomed Health Inform 2016; 21:65-75. [PMID: 28114049] [DOI: 10.1109/jbhi.2016.2637004]
Abstract
Automated polyp detection in colonoscopy videos has been demonstrated to be a promising way for colorectal cancer prevention and diagnosis. Traditional manual screening is time consuming, operator dependent, and error prone; hence, an automated detection approach is in high demand in clinical practice. However, automated polyp detection is very challenging due to high intraclass variations in polyp size, color, shape, and texture, and low interclass variations between polyps and hard mimics. In this paper, we propose a novel offline and online three-dimensional (3-D) deep learning integration framework that leverages a 3-D fully convolutional network (3D-FCN) to tackle this challenging problem. Compared with previous methods employing hand-crafted features or a 2-D convolutional neural network, the 3D-FCN is capable of learning more representative spatio-temporal features from colonoscopy videos and hence has more powerful discrimination capability. More importantly, we propose a novel online learning scheme to deal with the problem of limited training data by harnessing the specific information of an input video in the learning process. We integrate offline and online learning to effectively reduce the number of false positives generated by the offline network and further improve detection performance. Extensive experiments on the dataset of the MICCAI 2015 Challenge on Polyp Detection demonstrated the better performance of our method when compared with other competitors.
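The spatio-temporal feature learning the abstract attributes to the 3D-FCN can be illustrated with a small 3-D convolutional block over a clip of consecutive frames; the block below is a generic sketch, not the paper's architecture.

```python
# Sketch: a small 3-D convolutional block that consumes a short clip of
# consecutive colonoscopy frames, illustrating the spatio-temporal features
# a 3D network exploits. Shapes and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

clip = torch.randn(1, 3, 16, 112, 112)   # (batch, channels, frames, H, W)

block = nn.Sequential(
    nn.Conv3d(3, 16, kernel_size=3, padding=1),   # convolves over time too
    nn.ReLU(inplace=True),
    nn.MaxPool3d(kernel_size=(1, 2, 2)),          # pool space, keep frames
    nn.Conv3d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool3d(1),                      # global spatio-temporal pool
    nn.Flatten(),
    nn.Linear(32, 2),                             # polyp vs. background score
)

print(block(clip).shape)   # torch.Size([1, 2])
```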
262
Zhang R, Zheng Y, Mak TWC, Yu R, Wong SH, Lau JYW, Poon CCY. Automatic Detection and Classification of Colorectal Polyps by Transferring Low-Level CNN Features From Nonmedical Domain. IEEE J Biomed Health Inform 2016; 21:41-47. [PMID: 28114040] [DOI: 10.1109/jbhi.2016.2635662]
Abstract
Colorectal cancer (CRC) is a leading cause of cancer deaths worldwide. Although polypectomy at an early stage reduces CRC incidence, 90% of polyps are small and diminutive, and removing them poses risks to patients that may outweigh the benefits. Correctly detecting and predicting polyp type during colonoscopy allows endoscopists to resect and discard the tissue without submitting it for histology, saving time and costs. Nevertheless, human visual observation of early-stage polyps varies. Therefore, this paper aims at developing a fully automatic algorithm to detect and classify hyperplastic and adenomatous colorectal polyps. Adenomatous polyps should be removed, whereas distal diminutive hyperplastic polyps are considered clinically insignificant and may be left in situ. A novel transfer learning application is proposed that utilizes features learned from large nonmedical datasets of 1.4-2.5 million images using a deep convolutional neural network. The endoscopic images collected for the experiment were taken under random lighting conditions, zooming, and optical magnification, including 1104 endoscopic nonpolyp images taken under both white-light and narrowband imaging (NBI) endoscopy and 826 NBI endoscopic polyp images, of which 263 were hyperplasia and 563 were adenoma as confirmed by histology. The proposed method first identified polyp images from nonpolyp images and then predicted the polyp histology. When compared with visual inspection by endoscopists, the results of this study show that the proposed method has similar precision (87.3% versus 86.4%) but a higher recall rate (87.6% versus 77.0%) and a higher accuracy (85.9% versus 74.3%). In conclusion, automatic algorithms can assist endoscopists in identifying polyps that are adenomatous but have been incorrectly judged as hyperplasia and, therefore, enable timely resection of these polyps at an early stage before they develop into invasive cancer.
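The transfer-learning recipe described here, reusing features learned on large nonmedical image collections and training a light classifier on top, can be sketched as follows. ResNet-18 with ImageNet weights stands in for the backbone, an SVM for the classifier, and random tensors for the NBI crops; all of these are assumptions, and the `weights=` argument requires a recent torchvision.

```python
# Sketch: reuse features from a CNN pre-trained on a large nonmedical image
# set and train a light classifier on top. Backbone, classifier, and data
# are stand-ins, not the paper's setup.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()          # keep the 512-d pooled features
backbone.eval()

def extract_features(images: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return backbone(images)      # (N, 512)

# Stand-in data: N normalized 224x224 RGB crops with hyperplastic/adenoma labels.
images = torch.randn(40, 3, 224, 224)
labels = torch.randint(0, 2, (40,))

feats = extract_features(images).numpy()
clf = SVC(kernel="rbf", probability=True).fit(feats, labels.numpy())
print("predicted class of first crop:", clf.predict(feats[:1])[0])
```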
263
Abstract
Colonoscopy is currently the best technique available for the detection of colon cancer, colorectal polyps, and other precursor lesions. Computer-aided detection (CAD) is based on very complex pattern recognition. Local binary patterns (LBPs) are strong, illumination-invariant texture primitives. Histograms of binary patterns computed across regions are used to describe textures; every pixel is contrasted against the gray levels of its neighbouring pixels. In this study, colorectal polyp detection was performed on colonoscopy video frames, with classification via J48 and fuzzy classifiers. Features such as color, the discrete cosine transform (DCT), and LBP were used, confirming the superiority of the proposed method in colorectal polyp detection. The performance was better than that of other current methods.
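The LBP descriptor the abstract relies on is easy to make concrete: each pixel is compared with its eight neighbours, the comparison bits form an 8-bit code, and a histogram of codes over a region is the texture feature. A small sketch of the basic (non-uniform, non-rotation-invariant) variant follows; the toy frame is a stand-in.

```python
# Sketch: basic 3x3 local binary pattern (LBP) codes and their histogram.
import numpy as np

def lbp_codes(gray: np.ndarray) -> np.ndarray:
    """8-neighbour LBP codes for the interior pixels of a 2-D grayscale image."""
    c = gray[1:-1, 1:-1]                      # centre pixels
    # Neighbours in a fixed clockwise order, each shifted view aligned with c.
    neighbours = [
        gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
        gray[1:-1, 2:], gray[2:, 2:],   gray[2:, 1:-1],
        gray[2:, :-2],  gray[1:-1, :-2],
    ]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes

def lbp_histogram(gray: np.ndarray) -> np.ndarray:
    hist = np.bincount(lbp_codes(gray).ravel(), minlength=256).astype(float)
    return hist / hist.sum()                  # normalized 256-bin descriptor

# Toy frame: random grayscale patch standing in for a colonoscopy frame region.
frame = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
print(lbp_histogram(frame).shape)             # (256,)
```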
Affiliation(s)
- Geetha K
- Department of Information Technology, Excel Engineering College, India.
264
Exploring Deep Learning and Transfer Learning for Colonic Polyp Classification. Computational and Mathematical Methods in Medicine 2016; 2016:6584725. [PMID: 27847543] [PMCID: PMC5101370] [DOI: 10.1155/2016/6584725]
Abstract
Recently, deep learning, especially through convolutional neural networks (CNNs), has been widely used to enable the extraction of highly representative features. This is done among the network layers by filtering, selecting, and using these features in the last fully connected layers for pattern classification. However, CNN training for automated endoscopic image classification still poses a challenge due to the lack of large and publicly available annotated databases. In this work we explore deep learning for the automated classification of colonic polyps using different configurations for training CNNs from scratch (full training) and distinct architectures of pretrained CNNs tested on eight HD endoscopic image databases acquired using different modalities. We compare our results with some commonly used features for colonic polyp classification, and the good results suggest that features learned by CNNs trained from scratch and “off-the-shelf” CNN features can be highly relevant for automated classification of colonic polyps. Moreover, we also show that the combination of classical features and “off-the-shelf” CNN features can be a good approach to further improve the results.
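The feature-combination result mentioned last, classical descriptors concatenated with “off-the-shelf” CNN features before classification, amounts to simple early fusion; a sketch with stand-in feature matrices (the dimensions, classifier, and data are assumptions):

```python
# Sketch: early fusion of classical features and pretrained-CNN features,
# followed by a linear classifier. All inputs here are random stand-ins.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 200
classical = rng.random((n, 48))      # e.g. color/wavelet features per image
cnn_feats = rng.random((n, 512))     # e.g. pooled features from a pretrained CNN
labels = rng.integers(0, 2, n)       # polyp class labels (toy)

fused = np.hstack([classical, cnn_feats])     # simple early fusion
clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000)).fit(fused, labels)
print("training accuracy on toy data:", clf.score(fused, labels))
```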
265
Tajbakhsh N, Shin JY, Gurudu SR, Hurst RT, Kendall CB, Gotway MB. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning? IEEE Transactions on Medical Imaging 2016; 35:1299-1312. [PMID: 26978662] [DOI: 10.1109/tmi.2016.2535302]
Abstract
Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following central question in the context of medical image analysis: Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch? To address this question, we considered four distinct medical imaging applications in three specialties (radiology, cardiology, and gastroenterology) involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner. Our experiments consistently demonstrated that 1) the use of a pre-trained CNN with adequate fine-tuning outperformed or, in the worst case, performed as well as a CNN trained from scratch; 2) fine-tuned CNNs were more robust to the size of training sets than CNNs trained from scratch; 3) neither shallow tuning nor deep tuning was the optimal choice for a particular application; and 4) our layer-wise fine-tuning scheme could offer a practical way to reach the best performance for the application at hand based on the amount of available data.
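The layer-wise fine-tuning scheme can be sketched as progressively unfreezing blocks of a pre-trained network from the top down and retraining only the unfrozen part; ResNet-18, the two-class head, and the depth schedule below are assumptions, not the paper's setup, and the `weights=` argument assumes a recent torchvision.

```python
# Sketch: layer-wise fine-tuning -- start from a pre-trained CNN, freeze
# everything, then unfreeze blocks from the top down and train only those.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # new task-specific head

def set_trainable_depth(model: nn.Module, depth: int) -> None:
    """depth=0: train only the head; deeper values unfreeze later blocks too."""
    blocks = [model.layer4, model.layer3, model.layer2, model.layer1]
    for p in model.parameters():
        p.requires_grad = False
    for p in model.fc.parameters():
        p.requires_grad = True
    for block in blocks[:depth]:
        for p in block.parameters():
            p.requires_grad = True

# "Shallow" tuning first; deepen if validation performance plateaus.
set_trainable_depth(model, depth=1)
optimizer = optim.SGD((p for p in model.parameters() if p.requires_grad),
                      lr=1e-3, momentum=0.9)
print(sum(p.numel() for p in model.parameters() if p.requires_grad),
      "trainable parameters")
```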