301. Calculating the target exposure index using a deep convolutional neural network and a rule base. Phys Med 2020; 71:108-114. PMID: 32114324. DOI: 10.1016/j.ejmp.2020.02.012.
Abstract
PURPOSE The objective of this study is to determine the quality of chest X-ray images using a deep convolutional neural network (DCNN) and a rule base without performing any visual assessment. A method is proposed for determining the minimum diagnosable exposure index (EI) and the target exposure index (EIt). METHODS The proposed method involves transfer learning to assess the lung fields, mediastinum, and spine using GoogLeNet, which is a type of DCNN that has been trained using conventional images. Three detectors were created, and the image quality of local regions was rated. Subsequently, the results were used to determine the overall quality of chest X-ray images using a rule-based technique that was in turn based on expert assessment. The minimum EI required for diagnosis was calculated based on the distribution of the EI values, which were classified as either suitable or non-suitable and then used to ascertain the EIt. RESULTS The accuracy rate using the DCNN and the rule base was 81%. The minimum EI required for diagnosis was 230, and the EIt was 288. CONCLUSION The results indicated that the proposed method using the DCNN and the rule base could discriminate different image qualities without any visual assessment; moreover, it could determine both the minimum EI required for diagnosis and the EIt.
302. Agarwala S, Kale M, Kumar D, Swaroop R, Kumar A, Kumar Dhara A, Basu Thakur S, Sadhu A, Nandi D. Deep learning for screening of interstitial lung disease patterns in high-resolution CT images. Clin Radiol 2020; 75:481.e1-481.e8. PMID: 32075744. DOI: 10.1016/j.crad.2020.01.010.
Abstract
AIM To develop a screening tool for the detection of interstitial lung disease (ILD) patterns using a deep-learning method. MATERIALS AND METHODS A fully convolutional network was used for semantic segmentation of several ILD patterns. Improved segmentation of ILD patterns was achieved using multi-scale feature extraction, and dilated convolution was used to maintain the resolution of the feature maps while enlarging the receptive field. The proposed method was evaluated on a publicly available ILD database (MedGIFT) and a private clinical research database. Several metrics, such as success rate, sensitivity, and false positives per section, were used for quantitative evaluation. RESULTS Sections with fibrosis and emphysema were detected with similar success rates and sensitivity on both databases, but detection performance was lower for consolidation than for fibrosis and emphysema. CONCLUSION Automatic identification of ILD patterns in high-resolution computed tomography (CT) images was implemented using a deep-learning framework. Pre-training a model on natural images and subsequently applying transfer learning to a particular database gives acceptable results.
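The dilated-convolution idea in this abstract (keeping feature-map resolution while enlarging the receptive field) can be illustrated with a small sketch; the kernel size and dilation schedule below are illustrative assumptions, not values from the paper:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolution layers."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d  # each layer adds (k-1)*dilation to the span
    return rf

# Three plain 3x3 layers vs. three dilated layers (dilations 1, 2, 4):
plain = receptive_field([3, 3, 3], [1, 1, 1])    # -> 7
dilated = receptive_field([3, 3, 3], [1, 2, 4])  # -> 15
```

The dilated stack sees more than twice the context with the same number of parameters and no downsampling, which is the trade-off the abstract alludes to.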
Affiliation(s)
- S Agarwala, Department of Computer Science and Engineering, National Institute of Technology Durgapur, Durgapur, 713209, India
- M Kale, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, Kharagpur, 721302, India
- D Kumar, Department of Computer Science and Engineering, National Institute of Technology Durgapur, Durgapur, 713209, India
- R Swaroop, Department of Computer Science and Engineering, National Institute of Technology Durgapur, Durgapur, 713209, India
- A Kumar, School of Computer and Information Science, University of Hyderabad, Hyderabad, 500046, India
- A Kumar Dhara, Department of Electrical Engineering, National Institute of Technology Durgapur, Durgapur, 713209, India
- S Basu Thakur, Department of Chest Medicine, Medical College Kolkata, 700073, India
- A Sadhu, Department of Radiology, Medical College Kolkata, 700073, India
- D Nandi, Department of Computer Science and Engineering, National Institute of Technology Durgapur, Durgapur, 713209, India
303. Wang X, Liang G, Zhang Y, Blanton H, Bessinger Z, Jacobs N. Inconsistent Performance of Deep Learning Models on Mammogram Classification. J Am Coll Radiol 2020; 17:796-803. PMID: 32068005. DOI: 10.1016/j.jacr.2020.01.006.
Abstract
OBJECTIVES Performance of recently developed deep learning models for image classification surpasses that of radiologists. However, there are questions about model performance consistency and generalization in unseen external data. The purpose of this study was to determine whether the high performance of deep learning on mammograms can be transferred to external data with a different data distribution. MATERIALS AND METHODS Six deep learning models (three published models with high performance and three models designed by us) were evaluated on four different mammogram data sets, including three public (Digital Database for Screening Mammography, INbreast, and Mammographic Image Analysis Society) and one private data set (UKy). The models were trained and validated on either Digital Database for Screening Mammography alone or a combined data set that included Digital Database for Screening Mammography. The models were then tested on the three external data sets. The area under the receiver operating characteristic curve (auROC) was used to evaluate model performance. RESULTS The three published models reported validation auROC scores between 0.88 and 0.95 on the validation data set. Our models achieved between 0.71 (95% confidence interval [CI]: 0.70-0.72) and 0.79 (95% CI: 0.78-0.80) auROC on the same validation data set. However, under the same evaluation criteria, the performance of all six models decreased markedly on the three external test data sets, to between 0.44 (95% CI: 0.43-0.45) and 0.65 (95% CI: 0.64-0.66). CONCLUSION Our results demonstrate performance inconsistency across the data sets and models, indicating that the high performance of deep learning models on one data set cannot be readily transferred to unseen external data sets, and these models need further assessment and validation before being applied in clinical practice.
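The auROC metric this study relies on can be sketched via its rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. The labels and scores below are synthetic illustrations, not data from the paper:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank statistic."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Well-separated scores (an "internal" validation set) give a high auROC;
# overlapping scores (a shifted "external" set) drive it toward chance (0.5).
internal = auroc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])  # -> 1.0
external = auroc([1, 1, 0, 0], [0.6, 0.3, 0.5, 0.4])  # -> 0.5
```

The second call mimics the distribution shift the authors observed: the same model, scored on data it was not calibrated for, can fall to near-random discrimination.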
Affiliation(s)
- Xiaoqin Wang, Department of Radiology, University of Kentucky, Lexington, Kentucky; Markey Cancer Center, University of Kentucky, Lexington, Kentucky
- Gongbo Liang, Department of Computer Science, University of Kentucky, Lexington, Kentucky
- Yu Zhang, Department of Computer Science, University of Kentucky, Lexington, Kentucky
- Hunter Blanton, Department of Computer Science, University of Kentucky, Lexington, Kentucky
- Zachary Bessinger, Department of Computer Science, University of Kentucky, Lexington, Kentucky
- Nathan Jacobs, Department of Computer Science, University of Kentucky, Lexington, Kentucky
304. Fehling MK, Grosch F, Schuster ME, Schick B, Lohscheller J. Fully automatic segmentation of glottis and vocal folds in endoscopic laryngeal high-speed videos using a deep Convolutional LSTM Network. PLoS One 2020; 15:e0227791. PMID: 32040514. PMCID: PMC7010264. DOI: 10.1371/journal.pone.0227791.
Abstract
The objective investigation of the dynamic properties of vocal fold vibrations demands the recording and quantitative analysis of laryngeal high-speed video (HSV). Quantifying vocal fold vibration patterns requires, as a first step, segmentation of the glottal area within each video frame, from which the vibrating edges of the vocal folds are usually derived. Consequently, the outcome of any further vibration analysis depends on the quality of this initial segmentation. In this work we propose, for the first time, a procedure to fully automatically segment not only the time-varying glottal area but also the vocal fold tissue directly from laryngeal HSV using a deep Convolutional Neural Network (CNN) approach. Eighteen different CNN configurations were trained and evaluated on a total of 13,000 HSV frames obtained from 56 healthy and 74 pathologic subjects. The segmentation quality of the best-performing CNN model, which uses Long Short-Term Memory (LSTM) cells to also take the temporal context into account, was investigated in depth on 15 test video sequences comprising 100 consecutive images each. The Dice Coefficient (DC) and the precision of four anatomical landmark positions were used as performance measures. Over all test data, a mean DC of 0.85 was obtained for the glottis, and 0.91 and 0.90 for the right and left vocal fold (VF), respectively. The grand average precision of the identified landmarks amounts to 2.2 pixels and is in the same range as comparable manual expert segmentations, which can be regarded as the gold standard. The proposed method requires no user interaction and overcomes the limitations of current semiautomatic or computationally expensive approaches. It therefore also allows the analysis of long HSV sequences and holds the promise of facilitating the objective analysis of vocal fold vibrations in clinical routine. The dataset used here, including the ground truth, will be provided freely to all scientific groups to allow quantitative benchmarking of segmentation approaches in the future.
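The Dice Coefficient used as the main performance measure above can be sketched for binary segmentation masks; the tiny masks below are synthetic stand-ins for real HSV frames:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice Coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])   # predicted glottal-area mask
truth = np.array([[1, 1, 0],
                  [0, 0, 0]])  # expert-annotated ground truth
score = dice_coefficient(pred, truth)  # 2*2 / (3+2) = 0.8
```

A score of 1.0 means perfect overlap; the 0.85 mean DC reported for the glottis indicates substantial but imperfect agreement with expert segmentations.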
Affiliation(s)
- Mona Kirstin Fehling, Department of Computer Science, Trier University of Applied Sciences, Schneidershof, Trier, Germany
- Fabian Grosch, Department of Computer Science, Trier University of Applied Sciences, Schneidershof, Trier, Germany
- Maria Elke Schuster, Department of Otorhinolaryngology and Head and Neck Surgery, University of Munich, Campus Grosshadern, München, Germany
- Bernhard Schick, Department of Otorhinolaryngology, Saarland University Hospital, Homburg/Saar, Germany
- Jörg Lohscheller, Department of Computer Science, Trier University of Applied Sciences, Schneidershof, Trier, Germany
305. Prayer F, Röhrich S, Pan J, Hofmanninger J, Langs G, Prosch H. [Artificial intelligence in lung imaging]. Radiologe 2020; 60:42-47. PMID: 31754738. DOI: 10.1007/s00117-019-00611-2.
Abstract
CLINICAL/METHODICAL ISSUE Artificial intelligence (AI) has the potential to improve diagnostic accuracy and management in patients with lung disease through automated detection, quantification, classification, and prediction of disease progression. STANDARD RADIOLOGICAL METHODS Owing to unspecific symptoms, few well-defined CT disease patterns, and varying prognosis, interstitial lung disease represents a focus of AI-based research. METHODICAL INNOVATIONS Supervised and unsupervised machine learning can identify CT disease patterns using features which may allow the analysis of associations with specific diseases and outcomes. PERFORMANCE Machine learning on the one hand improves computer-aided detection of pulmonary nodules. On the other hand, it enables further characterization of pulmonary nodules, which may improve resource effectiveness in lung cancer screening programs. ACHIEVEMENTS There are several challenges regarding AI-based CT data analysis. Besides the need for powerful algorithms, expert annotations and extensive training data sets that reflect physiologic and pathologic variability are required for effective machine learning. Comparability and reproducibility of AI research deserve consideration due to a lack of standardization in this emerging field. PRACTICAL RECOMMENDATIONS This review article presents the state of the art and the challenges concerning AI in lung imaging, with special consideration of interstitial lung disease and the detection and assessment of pulmonary nodules.
Affiliation(s)
- F Prayer, Universitätsklinik für Radiologie und Nuklearmedizin, Medizinische Universität Wien, Währinger Gürtel 18-20, 1090 Vienna, Austria
- S Röhrich, Universitätsklinik für Radiologie und Nuklearmedizin, Medizinische Universität Wien, Währinger Gürtel 18-20, 1090 Vienna, Austria
- J Pan, Computational Imaging and Research Lab, Universitätsklinik für Radiologie und Nuklearmedizin, Medizinische Universität Wien, Vienna, Austria
- J Hofmanninger, Computational Imaging and Research Lab, Universitätsklinik für Radiologie und Nuklearmedizin, Medizinische Universität Wien, Vienna, Austria
- G Langs, Computational Imaging and Research Lab, Universitätsklinik für Radiologie und Nuklearmedizin, Medizinische Universität Wien, Vienna, Austria
- H Prosch, Universitätsklinik für Radiologie und Nuklearmedizin, Medizinische Universität Wien, Währinger Gürtel 18-20, 1090 Vienna, Austria
306. Gerard SE, Herrmann J, Kaczka DW, Musch G, Fernandez-Bustamante A, Reinhardt JM. Multi-resolution convolutional neural networks for fully automated segmentation of acutely injured lungs in multiple species. Med Image Anal 2020; 60:101592. PMID: 31760194. PMCID: PMC6980773. DOI: 10.1016/j.media.2019.101592.
Abstract
Segmentation of lungs with acute respiratory distress syndrome (ARDS) is a challenging task due to diffuse opacification in dependent regions, which results in little to no contrast at the lung boundary. For segmentation of severely injured lungs, local intensity and texture information, as well as global contextual information, are important factors for consistent inclusion of intrapulmonary structures. In this study, we propose a deep learning framework which uses a novel multi-resolution convolutional neural network (ConvNet) for automated segmentation of lungs in multiple mammalian species with injury models similar to ARDS. The multi-resolution model eliminates the need to trade off between high resolution and global context by using a cascade of low-resolution to high-resolution networks. Transfer learning is used to accommodate the limited number of training datasets. The model was initially pre-trained on human CT images, and subsequently fine-tuned on canine, porcine, and ovine CT images with lung injuries similar to ARDS. The multi-resolution model was compared to both high-resolution and low-resolution networks alone. The multi-resolution model outperformed both the low- and high-resolution models, achieving an overall mean Jaccard index of 0.963 ± 0.025 compared to 0.919 ± 0.027 and 0.950 ± 0.036, respectively, for the animal dataset (N=287). The multi-resolution model achieves an overall average symmetric surface distance of 0.438 ± 0.315 mm, compared to 0.971 ± 0.368 mm and 0.657 ± 0.519 mm for the low-resolution and high-resolution models, respectively. We conclude that the multi-resolution model produces accurate segmentations in severely injured lungs, which is attributed to the inclusion of both local and global features.
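The Jaccard index reported above (intersection over union) can be sketched for binary lung masks; it relates to the Dice coefficient as J = D / (2 - D). The masks below are synthetic illustrations:

```python
import numpy as np

def jaccard_index(pred, truth):
    """Jaccard index (intersection over union) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

pred = np.array([1, 1, 1, 0])   # predicted lung mask (flattened)
truth = np.array([1, 1, 0, 0])  # reference mask
j = jaccard_index(pred, truth)  # 2 / 3
```

A Jaccard index of 0.963, as reported for the multi-resolution model, corresponds to near-complete voxel overlap with the reference segmentation.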
Affiliation(s)
- Sarah E Gerard, Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA
- Jacob Herrmann, Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA; Department of Anesthesia, University of Iowa, Iowa City, IA, USA
- David W Kaczka, Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA; Department of Radiology, University of Iowa, Iowa City, IA, USA; Department of Anesthesia, University of Iowa, Iowa City, IA, USA
- Guido Musch, Department of Anesthesiology, Washington University, St. Louis, MO, USA
- Joseph M Reinhardt, Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA; Department of Radiology, University of Iowa, Iowa City, IA, USA
307. Chen J, Zhou S, Kang Z, Wen Q. Locality-constrained group lasso coding for microvessel image classification. Pattern Recognit Lett 2020. DOI: 10.1016/j.patrec.2019.02.011.
308. Guan Q, Huang Y. Multi-label chest X-ray image classification via category-wise residual attention learning. Pattern Recognit Lett 2020. DOI: 10.1016/j.patrec.2018.10.027.
309. Huang S, Lee F, Miao R, Si Q, Lu C, Chen Q. A deep convolutional neural network architecture for interstitial lung disease pattern classification. Med Biol Eng Comput 2020; 58:725-737. DOI: 10.1007/s11517-019-02111-w.
310. Lin BS, Chen JL, Tu YH, Shih YX, Lin YC, Chi WL, Wu YC. Using Deep Learning in Ultrasound Imaging of Bicipital Peritendinous Effusion to Grade Inflammation Severity. IEEE J Biomed Health Inform 2020; 24:1037-1045. PMID: 31985446. DOI: 10.1109/jbhi.2020.2968815.
Abstract
Inflammation of the long head of the biceps tendon is a common cause of shoulder pain. Bicipital peritendinous effusion (BPE) is the most common biceps tendon abnormality and is related to various shoulder injuries. Physicians usually use ultrasound imaging to grade the inflammation severity of the long head of the biceps tendon. However, obtaining a clear and accurate ultrasound image is difficult for inexperienced attending physicians. To reduce physicians' workload and avoid errors, an automated BPE recognition system was developed in this article for classifying inflammation into the following categories: normal and mild, moderate, and severe. An ultrasound image serves as the input in the proposed system; the system determines whether the ultrasound image contains biceps. If the image depicts biceps, then the system predicts BPE severity. In this study, two crucial methods were used for solving problems associated with computer-aided detection. First, a faster region-based convolutional neural network (Faster R-CNN) was used for region of interest (ROI) identification, and the influence of dataset scale and spatial image context on performance was evaluated. Second, various CNN architectures were evaluated and explored. Model performance was analyzed using various network configurations, parameters, and training sample sizes. The proposed system was used for three-class BPE classification and achieved 75% accuracy. The results obtained for the proposed system were comparable to those of other related state-of-the-art methods.
311. Bermejo-Peláez D, Ash SY, Washko GR, San José Estépar R, Ledesma-Carbayo MJ. Classification of Interstitial Lung Abnormality Patterns with an Ensemble of Deep Convolutional Neural Networks. Sci Rep 2020; 10:338. PMID: 31941918. PMCID: PMC6962320. DOI: 10.1038/s41598-019-56989-5.
Abstract
Subtle interstitial changes in the lung parenchyma of smokers, known as Interstitial Lung Abnormalities (ILA), have been associated with clinical outcomes, including mortality, even in the absence of Interstitial Lung Disease (ILD). Although several methods have been proposed for the automatic identification of more advanced ILD patterns, few have tackled ILA, which likely precedes the development of ILD in some cases. In this context, we propose a novel methodology for automated identification and classification of ILA patterns in computed tomography (CT) images. The proposed method is an ensemble of deep convolutional neural networks (CNNs) that detect more discriminative features by incorporating two-, two-and-a-half-, and three-dimensional architectures, thereby enabling more accurate classification. This technique is implemented by first training each individual CNN, and then combining its output responses to form the overall ensemble output. To train and test the system we used 37,424 radiographic tissue samples corresponding to eight different parenchymal feature classes from 208 CT scans. The resulting ensemble performance, including an average sensitivity of 91.41% and average specificity of 98.18%, suggests it is potentially a viable method to identify radiographic patterns that precede the development of ILD.
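The ensembling step described here (combining member-network responses into an overall output) is commonly done by averaging class-probability vectors and taking the argmax; the probability vectors below are synthetic stand-ins for the outputs of the 2D, 2.5D, and 3D members over the eight parenchymal classes, and the exact combination rule is an assumption for illustration:

```python
import numpy as np

# Hypothetical per-class probabilities from three member networks.
member_probs = np.array([
    [0.10, 0.55, 0.05, 0.05, 0.05, 0.05, 0.05, 0.10],  # 2D network
    [0.05, 0.40, 0.25, 0.05, 0.05, 0.05, 0.05, 0.10],  # 2.5D network
    [0.05, 0.60, 0.10, 0.05, 0.05, 0.05, 0.02, 0.08],  # 3D network
])

ensemble_probs = member_probs.mean(axis=0)      # average member responses
predicted_class = int(ensemble_probs.argmax())  # -> 1
```

Averaging tends to damp individual-network errors: a class must score well across architectures that see different spatial contexts to win the vote.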
Affiliation(s)
- David Bermejo-Peláez, Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid & CIBER-BBN, Madrid, Spain
- Samuel Y Ash, Division of Pulmonary and Critical Care Medicine, Department of Medicine, Brigham and Women's Hospital, Boston, MA, USA
- George R Washko, Division of Pulmonary and Critical Care Medicine, Department of Medicine, Brigham and Women's Hospital, Boston, MA, USA
- Raúl San José Estépar, Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA
- María J Ledesma-Carbayo, Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid & CIBER-BBN, Madrid, Spain
312. Ebner L, Christodoulidis S, Stathopoulou T, Geiser T, Stalder O, Limacher A, Heverhagen JT, Mougiakakou SG, Christe A. Meta-analysis of the radiological and clinical features of Usual Interstitial Pneumonia (UIP) and Nonspecific Interstitial Pneumonia (NSIP). PLoS One 2020; 15:e0226084. PMID: 31929532. PMCID: PMC6957301. DOI: 10.1371/journal.pone.0226084.
Abstract
PURPOSE To conduct a meta-analysis to determine specific computed tomography (CT) patterns and clinical features that discriminate between nonspecific interstitial pneumonia (NSIP) and usual interstitial pneumonia (UIP). MATERIALS AND METHODS The PubMed/Medline and Embase databases were searched for studies describing the radiological patterns of UIP and NSIP in chest CT images. Only studies involving histologically confirmed diagnoses and a consensus diagnosis by an interstitial lung disease (ILD) board were included in this analysis. The radiological patterns and patient demographics were extracted from suitable articles. We used random-effects meta-analysis by DerSimonian and Laird and calculated pooled odds ratios for binary data and pooled mean differences for continuous data. RESULTS Of the 794 search results, 33 articles describing 2,318 patients met the inclusion criteria. Twelve of these studies included both NSIP (338 patients) and UIP (447 patients). Patients with NSIP were significantly younger (NSIP: median age 54.8 years, UIP: 59.7 years; mean difference (MD) -4.4; p = 0.001; 95% CI: -6.97 to -1.77), less often male (NSIP: median 52.8%, UIP: 73.6%; pooled odds ratio (OR) 0.32; p<0.001; 95% CI: 0.17 to 0.60), and less often smokers (NSIP: median 55.1%, UIP: 73.9%; OR 0.42; p = 0.005; 95% CI: 0.23 to 0.77) than patients with UIP. The CT findings from patients with NSIP revealed significantly lower levels of the honeycombing pattern (NSIP: median 28.9%, UIP: 73.4%; OR 0.07; p<0.001; 95% CI: 0.02 to 0.30) with less peripheral predominance (NSIP: median 41.8%, UIP: 83.3%; OR 0.21; p<0.001; 95% CI: 0.11 to 0.38) and more subpleural sparing (NSIP: median 40.7%, UIP: 4.3%; OR 16.3; p = 0.005; 95% CI: 2.28 to 117). CONCLUSION Honeycombing with a peripheral predominance was significantly associated with a diagnosis of UIP. The NSIP pattern showed more subpleural sparing. The UIP pattern was predominantly observed in elderly males with a history of smoking, whereas NSIP occurred in a younger patient population.
Affiliation(s)
- Lukas Ebner, Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Thomai Stathopoulou, ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland
- Thomas Geiser, Department for Pulmonary Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Odile Stalder, CTU Bern and Institute of Social and Preventive Medicine (ISPM), University of Bern, Switzerland
- Andreas Limacher, CTU Bern and Institute of Social and Preventive Medicine (ISPM), University of Bern, Switzerland
- Johannes T. Heverhagen, Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Stavroula G. Mougiakakou, Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland; ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland
- Andreas Christe, Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
313. An effective approach for CT lung segmentation using mask region-based convolutional neural networks. Artif Intell Med 2020; 103:101792. PMID: 32143797. DOI: 10.1016/j.artmed.2020.101792.
Abstract
Computer vision systems provide numerous tools to assist in various medical fields, notably image diagnosis. Computed tomography (CT) is the principal imaging method used to assist in the diagnosis of diseases such as bone fractures, lung cancer, heart disease, and emphysema, among others. Lung cancer is one of the four main causes of death in the world. The lung regions in CT images are marked manually by a specialist, and this initial step is a significant challenge for computer vision techniques. Once defined, the lung regions are segmented for clinical diagnosis. This work proposes automatic segmentation of the lungs in CT images, using the Mask R-CNN convolutional neural network to specialize the model for lung region mapping, combined with supervised and unsupervised machine learning methods (Bayes, Support Vector Machine (SVM), K-means, and Gaussian Mixture Models (GMMs)). Our approach using Mask R-CNN with the K-means kernel produced the best results for lung segmentation, reaching an accuracy of 97.68 ± 3.42% and an average runtime of 11.2 s. We compared our results against other works for validation purposes; our approach had the highest accuracy and was faster than some state-of-the-art methods.
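One plausible reading of combining Mask R-CNN with a K-means kernel is clustering the pixel scores of a predicted soft mask into two groups (lung vs. background); the abstract does not detail the role of K-means in the pipeline, so this binarization step, the function below, and the synthetic probability map are all illustrative assumptions:

```python
import numpy as np

def kmeans_two_clusters(values, iters=20):
    """1-D two-cluster k-means; True where values join the high cluster."""
    lo, hi = float(values.min()), float(values.max())
    for _ in range(iters):
        high = np.abs(values - hi) < np.abs(values - lo)
        if high.all() or (~high).all():  # degenerate split; stop early
            break
        lo, hi = values[~high].mean(), values[high].mean()
    return high

prob_map = np.array([0.05, 0.10, 0.20, 0.80, 0.90, 0.95])  # soft mask scores
lung_mask = kmeans_two_clusters(prob_map)
```

Unlike a fixed 0.5 threshold, the cluster boundary adapts to each image's score distribution, which can help when mask confidence varies across scans.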
314. Sinha P, Tuteja M, Saxena S. Medical image segmentation: hard and soft computing approaches. SN Applied Sciences 2020. DOI: 10.1007/s42452-020-1956-4.
315. A Deep Learning Model for Estimation of Patients with Undiagnosed Diabetes. Applied Sciences (Basel) 2020. DOI: 10.3390/app10010421.
Abstract
A screening model for undiagnosed diabetes mellitus (DM) is important for early medical care. Little research has been carried out on developing a screening model for undiagnosed DM using machine learning techniques. Thus, the primary objective of this study was to develop a screening model for patients with undiagnosed DM using a deep neural network. We conducted a cross-sectional study using data from the Korean National Health and Nutrition Examination Survey (KNHANES) 2013–2016. A total of 11,456 participants were selected, excluding those with diagnosed DM, an age < 20 years, or missing data. KNHANES 2013–2015 was used as a training dataset and analyzed to develop a deep learning model (DLM) for undiagnosed DM. The DLM was evaluated with 4444 participants who were surveyed in the 2016 KNHANES. The DLM was constructed using seven non-invasive variables: age, waist circumference, body mass index, gender, smoking status, hypertension, and family history of diabetes. The model showed appropriate performance (area under the curve (AUC): 80.11) compared with existing screening models. The DLM developed in this study for patients with undiagnosed diabetes could contribute to early medical care.
316. Choi BK, Madusanka N, Choi HK, So JH, Kim CH, Park HG, Bhattacharjee S, Prakash D. Convolutional Neural Network-based MR Image Analysis for Alzheimer's Disease Classification. Curr Med Imaging 2020; 16:27-35. DOI: 10.2174/1573405615666191021123854.
Abstract
Background:
In this study, we used a convolutional neural network (CNN) to classify
Alzheimer’s disease (AD), mild cognitive impairment (MCI), and normal control (NC) subjects
based on images of the hippocampus region extracted from magnetic resonance (MR) images of
the brain.
Materials and Methods:
The datasets used in this study were obtained from the Alzheimer's Disease Neuroimaging
Initiative (ADNI). To segment the hippocampal region automatically, the patient brain MR
images were matched to the International Consortium for Brain Mapping template (ICBM) using
3D-Slicer software. Using prior knowledge and anatomical annotation label information,
the hippocampal region was automatically extracted from the brain MR images.
Results:
The area of the hippocampus in each image was preprocessed using local entropy minimization
with a bi-cubic spline model (LEMS), an intensity inhomogeneity correction method.
To train the CNN model, we separated the dataset into three groups, namely AD/NC, AD/MCI,
and MCI/NC. The prediction model achieved an accuracy of 92.3% for AD/NC, 85.6% for
AD/MCI, and 78.1% for MCI/NC.
Conclusion:
The results of this study were compared with those of previous studies, and summarized
and analyzed to facilitate more flexible analyses based on additional experiments. The classification
accuracy obtained by the proposed method is high. These findings suggest
that this approach is efficient and may be a promising strategy for obtaining good AD, MCI, and
NC classification performance using small patch images of the hippocampus instead of whole-slide
images.
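The three two-class comparisons above (AD/NC, AD/MCI, MCI/NC) can be generated mechanically from one labelled dataset; a small sketch, with sample identifiers hypothetical:

```python
def pairwise_tasks(samples, pairs=(("AD", "NC"), ("AD", "MCI"), ("MCI", "NC"))):
    """Split one labelled dataset into the three binary tasks used in the
    study; the first class of each pair is coded 1, the second 0."""
    tasks = {}
    for a, b in pairs:
        tasks[f"{a}/{b}"] = [(x, 1 if y == a else 0) for x, y in samples if y in (a, b)]
    return tasks
```

Each resulting subset can then be fed to its own binary CNN classifier, mirroring the per-pair accuracies reported.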
Affiliation(s)
- Boo-Kyeong Choi
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae, Korea
- Nuwan Madusanka
- Department of Computer Engineering, u-AHRC, Inje University, Gimhae, Korea
- Heung-Kook Choi
- Department of Computer Engineering, u-AHRC, Inje University, Gimhae, Korea
- Jae-Hong So
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae, Korea
- Cho-Hee Kim
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae, Korea
- Hyeon-Gyun Park
- Department of Computer Engineering, u-AHRC, Inje University, Gimhae, Korea
- Deekshitha Prakash
- Department of Computer Engineering, u-AHRC, Inje University, Gimhae, Korea
|
317
|
Liu B, Chi W, Li X, Li P, Liang W, Liu H, Wang W, He J. Evolving the pulmonary nodules diagnosis from classical approaches to deep learning-aided decision support: three decades' development course and future prospect. J Cancer Res Clin Oncol 2020; 146:153-185. [PMID: 31786740 DOI: 10.1007/s00432-019-03098-5] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Accepted: 11/25/2019] [Indexed: 02/06/2023]
Abstract
PURPOSE Lung cancer is the most common cause of cancer deaths worldwide, and its mortality can be reduced significantly by early diagnosis and screening. Since the 1960s, driven by the pressing need to interpret accurately and efficiently the massive volume of chest images generated daily, computer-assisted diagnosis of pulmonary nodules has opened up new opportunities to relax the limitations imposed by physicians' subjectivity, experience, and fatigue. Fair access to reliable and affordable computer-assisted diagnosis would also help fight the inequalities in incidence and mortality between populations. Significant and remarkable advances have been achieved since the 1980s, and consistent endeavors have been devoted to the grand challenges of accurately detecting pulmonary nodules with high sensitivity at a low false-positive rate and of precisely differentiating between benign and malignant nodules. However, there has been no comprehensive examination of the techniques whose development has evolved the diagnosis of pulmonary nodules from classical approaches to machine learning-assisted decision support. The main goal of this investigation is to provide a comprehensive state-of-the-art review of the computer-assisted nodule detection and benign-malignant classification techniques developed over three decades, which have evolved from the complicated ad hoc analysis pipelines of conventional approaches to the simplified, seamlessly integrated deep learning techniques. This review also identifies challenges and highlights opportunities for future work in learning models, learning algorithms, and enhancement schemes for bridging the current state to future prospects and satisfying future demand. CONCLUSION This is the first literature review of the past 30 years' development in computer-assisted diagnosis of lung nodules.
The challenges identified and the research opportunities highlighted in this survey are significant for bridging the current state to future prospects and satisfying future demand. The multifaceted driving forces and multidisciplinary research acknowledged here will help the computer-assisted diagnosis of pulmonary nodules enter the mainstream of clinical medicine, raise the state of the art of clinical applications, and increase the welfare of both physicians and patients. We hold the vision that fair access to reliable, faithful, and affordable computer-assisted diagnosis for early cancer diagnosis would fight the inequalities in incidence and mortality between populations and save more lives.
Affiliation(s)
- Bo Liu
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China
- Wenhao Chi
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Xinran Li
- Department of Mathematics, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Peng Li
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China
- Wenhua Liang
- Department of Thoracic Surgery and Oncology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- China State Key Laboratory of Respiratory Disease, Guangzhou, China
- Haiping Liu
- PET/CT Center, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- China State Key Laboratory of Respiratory Disease, Guangzhou, China
- Wei Wang
- Department of Thoracic Surgery and Oncology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- China State Key Laboratory of Respiratory Disease, Guangzhou, China
- Jianxing He
- Department of Thoracic Surgery and Oncology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- China State Key Laboratory of Respiratory Disease, Guangzhou, China
|
318
|
Lillington J, Brusaferri L, Kläser K, Shmueli K, Neji R, Hutton BF, Fraioli F, Arridge S, Cardoso MJ, Ourselin S, Thielemans K, Atkinson D. PET/MRI attenuation estimation in the lung: A review of past, present, and potential techniques. Med Phys 2020; 47:790-811. [PMID: 31794071 PMCID: PMC7027532 DOI: 10.1002/mp.13943] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2019] [Revised: 07/23/2019] [Accepted: 11/20/2019] [Indexed: 12/16/2022] Open
Abstract
Positron emission tomography/magnetic resonance imaging (PET/MRI) potentially offers several advantages over positron emission tomography/computed tomography (PET/CT), for example, no CT radiation dose and soft tissue images from MR acquired at the same time as the PET. However, obtaining accurate linear attenuation correction (LAC) factors for the lung remains difficult in PET/MRI. LACs depend on electron density and in the lung, these vary significantly both within an individual and from person to person. Current commercial practice is to use a single‐valued population‐based lung LAC, and better estimation is needed to improve quantification. Given the under‐appreciation of lung attenuation estimation as an issue, the inaccuracy of PET quantification due to the use of single‐valued lung LACs, the unique challenges of lung estimation, and the emerging status of PET/MRI scanners in lung disease, a review is timely. This paper highlights past and present methods, categorizing them into segmentation, atlas/mapping, and emission‐based schemes. Potential strategies for future developments are also presented.
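To see why a single-valued lung LAC biases quantification: PET attenuation correction along a line of response uses the factor exp(−∫μ dl), so any error in μ compounds over the path. A toy illustration follows; the μ values are illustrative assumptions, not clinical figures:

```python
import numpy as np

def attenuation_factor(mu_per_cm, step_cm):
    """Survival probability of a 511 keV photon pair along a discretised
    line of response: exp(-sum(mu_i) * step)."""
    return float(np.exp(-np.sum(mu_per_cm) * step_cm))

# 10 cm path through lung, 1 cm steps: a uniform population-based LAC
# versus a heterogeneous per-voxel map (both hypothetical values).
uniform = np.full(10, 0.030)
patient = np.array([0.02, 0.02, 0.05, 0.05, 0.05,
                    0.02, 0.02, 0.05, 0.05, 0.05])
```

The two paths yield different correction factors, which is exactly the quantification error the review attributes to single-valued lung LACs.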
Affiliation(s)
- Joseph Lillington
- Centre for Medical Imaging, University College London, London, W1W 7TS, UK
- Ludovica Brusaferri
- Institute of Nuclear Medicine, University College London, London, NW1 2BU, UK
- Kerstin Kläser
- Centre for Medical Image Computing, University College London, London, WC1E 7JE, UK
- Karin Shmueli
- Magnetic Resonance Imaging Group, Department of Medical Physics & Biomedical Engineering, University College London, London, WC1E 6BT, UK
- Radhouene Neji
- MR Research Collaborations, Siemens Healthcare Limited, Frimley, GU16 8QD, UK
- Brian F Hutton
- Institute of Nuclear Medicine, University College London, London, NW1 2BU, UK
- Francesco Fraioli
- Institute of Nuclear Medicine, University College London, London, NW1 2BU, UK
- Simon Arridge
- Centre for Medical Image Computing, University College London, London, WC1E 7JE, UK
- Manuel Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, SE1 7EH, UK
- Sebastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, SE1 7EH, UK
- Kris Thielemans
- Institute of Nuclear Medicine, University College London, London, NW1 2BU, UK
- David Atkinson
- Centre for Medical Imaging, University College London, London, W1W 7TS, UK
|
319
|
Automated detection of focal cortical dysplasia using a deep convolutional neural network. Comput Med Imaging Graph 2020; 79:101662. [DOI: 10.1016/j.compmedimag.2019.101662] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2018] [Revised: 07/12/2019] [Accepted: 10/01/2019] [Indexed: 11/30/2022]
|
320
|
|
321
|
Yang X, Wu L, Zhao K, Ye W, Liu W, Wang Y, Li J, Li H, Huang X, Zhang W, Huang Y, Chen X, Yao S, Liu Z, Liang C. Evaluation of human epidermal growth factor receptor 2 status of breast cancer using preoperative multidetector computed tomography with deep learning and handcrafted radiomics features. Chin J Cancer Res 2020; 32:175-185. [PMID: 32410795 PMCID: PMC7219093 DOI: 10.21147/j.issn.1000-9604.2020.02.05] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022] Open
Abstract
Objective To evaluate the human epidermal growth factor receptor 2 (HER2) status in patients with breast cancer using multidetector computed tomography (MDCT)-based handcrafted and deep radiomics features. Methods This retrospective study enrolled 339 female patients (primary cohort, n=177; validation cohort, n=162) with pathologically confirmed invasive breast cancer. Handcrafted and deep radiomics features were extracted from the MDCT images during the arterial phase. After the feature selection procedures, handcrafted and deep radiomics signatures and the combined model were built using multivariate logistic regression analysis. Performance was assessed by measures of discrimination, calibration, and clinical usefulness in the primary cohort and validated in the validation cohort. Results The handcrafted radiomics signature had a discriminative ability with a C-index of 0.739 [95% confidence interval (95% CI): 0.661−0.818] in the primary cohort and 0.695 (95% CI: 0.609−0.781) in the validation cohort. The deep radiomics signature also had a discriminative ability with a C-index of 0.760 (95% CI: 0.690−0.831) in the primary cohort and 0.777 (95% CI: 0.696−0.857) in the validation cohort. The combined model, which incorporated both the handcrafted and deep radiomics signatures, showed good discriminative ability with a C-index of 0.829 (95% CI: 0.767−0.890) in the primary cohort and 0.809 (95% CI: 0.740−0.879) in the validation cohort. Conclusions Handcrafted and deep radiomics features from MDCT images were associated with HER2 status in patients with breast cancer. Thus, these features could provide complementary aid for the radiological evaluation of HER2 status in breast cancer.
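A combined model of the kind described, with the two signature scores entering a multivariate logistic regression, can be sketched as below. The fitting schedule and data are hypothetical; only the model family (logistic regression) comes from the abstract:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=5000):
    """Multivariate logistic regression by gradient descent: the model
    family the study uses to merge handcrafted and deep signatures."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability
        grad = p - y                              # gradient of log-loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(X, w, b):
    """Classify at the 0.5 probability threshold."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
```

Here each row of `X` would hold the handcrafted and deep signature scores for one patient; the C-index of the fitted probabilities is then the reported discrimination measure.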
Affiliation(s)
- Xiaojun Yang
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- School of Medicine, South China University of Technology, Guangzhou 510006, China
- Lei Wu
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- School of Medicine, South China University of Technology, Guangzhou 510006, China
- Ke Zhao
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- School of Medicine, South China University of Technology, Guangzhou 510006, China
- Weitao Ye
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- Weixiao Liu
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- Yingyi Wang
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- Jiao Li
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- Hanxiao Li
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- School of Medicine, South China University of Technology, Guangzhou 510006, China
- Xiaomei Huang
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- Wen Zhang
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- Yanqi Huang
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- Xin Chen
- Department of Radiology, Guangzhou First People's Hospital, Guangzhou 510180, China
- Su Yao
- Department of Pathology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- School of Medicine, South China University of Technology, Guangzhou 510006, China
- Changhong Liang
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- School of Medicine, South China University of Technology, Guangzhou 510006, China
|
322
|
Gao F, Wu T, Chu X, Yoon H, Xu Y, Patel B. Deep Residual Inception Encoder–Decoder Network for Medical Imaging Synthesis. IEEE J Biomed Health Inform 2020; 24:39-49. [DOI: 10.1109/jbhi.2019.2912659] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
323
|
Hou Y. Breast cancer pathological image classification based on deep learning. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2020; 28:727-738. [PMID: 32390646 DOI: 10.3233/xst-200658] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The automatic classification of breast cancer pathological images has important clinical application value. However, developing a classification algorithm from manually extracted image features faces several challenges, including the professional domain knowledge required to extract and compute high-quality image features, which is often time-consuming, laborious, and difficult. To overcome these challenges, this study developed and applied an improved deep convolutional neural network model to perform automatic classification of breast cancer using pathological images. Specifically, data augmentation and transfer learning methods are used to effectively avoid the overfitting problems that arise when deep learning models are limited by the training image sample size. Experimental results show a 91% recognition accuracy when applying this improved deep learning model to the publicly available BreaKHis dataset. Compared with other previously used models, the new model yields good robustness and generalization.
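The data augmentation mentioned above typically means cheap, label-preserving geometric transforms; a minimal sketch with flips and 90° rotations (the paper's exact transform set is not stated, so this is an assumption):

```python
import numpy as np

def augment(image):
    """Return the original image plus five simple geometric variants:
    horizontal flip, vertical flip, and rotations by 90/180/270 degrees."""
    variants = [image, np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants
```

Applied to every training patch, this multiplies the effective sample size sixfold without touching the labels, which is the overfitting remedy the abstract alludes to.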
Affiliation(s)
- Yubao Hou
- School of Information and Mechanical Engineering, Hunan International Economics University, Changsha, China
|
324
|
Peng L, Lin L, Hu H, Zhang Y, Li H, Iwamoto Y, Han XH, Chen YW. Semi-Supervised Learning for Semantic Segmentation of Emphysema With Partial Annotations. IEEE J Biomed Health Inform 2019; 24:2327-2336. [PMID: 31902784 DOI: 10.1109/jbhi.2019.2963195] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Segmentation and quantification of each subtype of emphysema is helpful for monitoring chronic obstructive pulmonary disease. Due to the nature of emphysema (a diffuse pulmonary disease), it is very difficult for experts to allocate semantic labels to every pixel in CT images. In practice, partial annotation is a better choice for radiologists, as it reduces their workload. In this paper, we propose a new end-to-end trainable semi-supervised framework for semantic segmentation of emphysema with partial annotations, in which a segmentation network is trained on both annotated and unannotated areas. In addition, we present a new loss function, referred to as the Fisher loss, to enhance the discriminative power of the model, and successfully integrate it into our proposed framework. Our experimental results show that the proposed methods have superior performance over the baseline supervised approach (trained with only annotated areas) and outperform the state-of-the-art methods for emphysema segmentation.
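The supervised part of such a framework reduces to a loss evaluated only where labels exist. A minimal sketch of a partial-annotation cross-entropy follows; note that the paper's Fisher loss is a separate, additional term not reproduced here:

```python
import numpy as np

def masked_cross_entropy(probs, labels, annotated):
    """Pixelwise cross-entropy averaged over annotated pixels only, so
    unannotated areas contribute no supervised gradient.

    probs:     (n_pixels, n_classes) softmax outputs
    labels:    (n_pixels,) integer class labels (arbitrary where unannotated)
    annotated: (n_pixels,) boolean mask of labelled pixels
    """
    eps = 1e-12
    picked = probs[np.arange(len(labels)), labels]   # probability of the true class
    ce = -np.log(np.clip(picked, eps, None))
    return ce[annotated].mean()
```

The semi-supervised terms of the framework (including the Fisher loss) would be added to this masked supervised loss during training.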
|
325
|
Deep Learning Applications in Chest Radiography and Computed Tomography: Current State of the Art. J Thorac Imaging 2019; 34:75-85. [PMID: 30802231 DOI: 10.1097/rti.0000000000000387] [Citation(s) in RCA: 58] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Deep learning is a genre of machine learning that allows computational models to learn representations of data with multiple levels of abstraction using numerous processing layers. A distinctive feature of deep learning, compared with conventional machine learning methods, is that it can generate appropriate models for tasks directly from the raw data, removing the need for human-led feature extraction. Medical images are particularly suited for deep learning applications. Deep learning techniques have already demonstrated high performance in the detection of diabetic retinopathy on fundoscopic images and metastatic breast cancer cells on pathologic images. In radiology, deep learning has the opportunity to provide improved accuracy of image interpretation and diagnosis. Many groups are exploring the possibility of using deep learning-based applications to solve unmet clinical needs. In chest imaging, there has been a large effort to develop and apply computer-aided detection systems for the detection of lung nodules on chest radiographs and chest computed tomography. The essential limitation of computer-aided detection is an inability to learn from new information. To overcome this deficiency, many groups have turned to deep learning approaches with promising results. In addition to nodule detection, interstitial lung disease recognition, lesion segmentation, diagnosis, and patient outcomes have been addressed by deep learning approaches. The purpose of this review article is to cover the current state of the art of deep learning approaches, their limitations, and some of their potential impact on the field of radiology, with specific reference to chest imaging.
|
326
|
Ma J, Song Y, Tian X, Hua Y, Zhang R, Wu J. Survey on deep learning for pulmonary medical imaging. Front Med 2019; 14:450-469. [PMID: 31840200 DOI: 10.1007/s11684-019-0726-4] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2019] [Accepted: 10/12/2019] [Indexed: 12/27/2022]
Abstract
As a promising method in artificial intelligence, deep learning has been proven successful in several domains ranging from acoustics and images to natural language processing. With medical imaging becoming an important part of disease screening and diagnosis, deep learning-based approaches have emerged as powerful techniques in medical imaging. In this process, feature representations are learned directly and automatically from data, leading to remarkable breakthroughs in the medical field. Deep learning has been widely applied in medical imaging for improved image analysis. This paper reviews the major deep learning techniques in this time of rapid evolution and summarizes some of their key contributions and state-of-the-art outcomes. The topics include classification, detection, and segmentation tasks in medical image analysis with respect to pulmonary medical images, datasets, and benchmarks. A comprehensive overview of these methods as applied to various lung diseases, including pulmonary nodule diseases, pulmonary embolism, pneumonia, and interstitial lung disease, is also provided. Lastly, the application of deep learning techniques to medical images is discussed, along with an analysis of future challenges and potential directions.
Affiliation(s)
- Yang Song
- Dalian Municipal Central Hospital Affiliated to Dalian Medical University, Dalian, 116033, China
- Xi Tian
- InferVision, Beijing, 100020, China
- Jianlin Wu
- Affiliated Zhongshan Hospital of Dalian University, Dalian, 116001, China
|
327
|
Hallac RR, Lee J, Pressler M, Seaward JR, Kane AA. Identifying Ear Abnormality from 2D Photographs Using Convolutional Neural Networks. Sci Rep 2019; 9:18198. [PMID: 31796839 PMCID: PMC6890688 DOI: 10.1038/s41598-019-54779-7] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2019] [Accepted: 11/19/2019] [Indexed: 01/22/2023] Open
Abstract
Quantifying ear deformity using linear measurements and mathematical modeling is difficult due to the ear's complex shape. Machine learning techniques, such as convolutional neural networks (CNNs), are well suited to this role. CNNs are deep learning methods capable of finding complex patterns in medical images, automatically building solution models capable of machine diagnosis. In this study, we applied a CNN to automatically identify ear deformity from 2D photographs. Institutional review board (IRB) approval was obtained for this retrospective study to train and test the CNNs. Photographs of patients with and without ear deformity were obtained as standard of care in our photography studio. Profile photographs were obtained for one or both ears. A total of 671 profile pictures were used in this study, including 457 photographs of patients with ear deformity and 214 photographs of patients with normal ears. Photographs were cropped to the ear boundary and randomly divided into training (60%), validation (20%), and testing (20%) datasets. We modified the softmax classifier in the last layer of GoogLeNet, a deep CNN, to generate an ear deformity detection model in Matlab. All images were deemed of high quality and usable for training and testing. It took about 2 hours to train the system, and the training accuracy reached almost 100%. The test accuracy was about 94.1%. We demonstrate that deep learning has great potential for identifying ear deformity. These machine learning techniques hold promise for future use in evaluating treatment outcomes.
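The 60/20/20 split described above is straightforward to reproduce; a sketch (the seed and item representation are hypothetical, and the study itself used Matlab rather than Python):

```python
import random

def split_dataset(items, fractions=(0.6, 0.2, 0.2), seed=0):
    """Shuffle and partition items into train/validation/test subsets."""
    rng = random.Random(seed)            # fixed seed for a reproducible split
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_train = int(fractions[0] * len(shuffled))
    n_val = int(fractions[1] * len(shuffled))
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```

With the study's 671 photographs this yields roughly 402/134/135 images, every photograph landing in exactly one subset.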
Affiliation(s)
- Rami R Hallac
- Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd., Dallas, TX, 75390, United States
- Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, 1935 Medical District Dr., Dallas, Texas, 75235, United States
- Jeon Lee
- Department of Bioinformatics, UT Southwestern, 5323 Harry Hines Blvd., Dallas, TX, 75390, United States
- Mark Pressler
- Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd., Dallas, TX, 75390, United States
- James R Seaward
- Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd., Dallas, TX, 75390, United States
- Alex A Kane
- Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd., Dallas, TX, 75390, United States
- Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, 1935 Medical District Dr., Dallas, Texas, 75235, United States
|
328
|
Zou X, Xu S, Li S, Chen J, Zou W. Optimization of the Brillouin instantaneous frequency measurement using convolutional neural networks. OPTICS LETTERS 2019; 44:5723-5726. [PMID: 31774763 DOI: 10.1364/ol.44.005723] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/12/2019] [Accepted: 10/21/2019] [Indexed: 06/10/2023]
Abstract
The Brillouin instantaneous frequency measurement (B-IFM) is used to measure the instantaneous frequencies of an arbitrary signal with high frequency and broad bandwidth. However, the instantaneous frequencies measured using the B-IFM system always suffer from errors due to system defects. To address this, we adopt a convolutional neural network (CNN) that establishes a functional mapping between the measured and nominal instantaneous frequencies to obtain a more accurate instantaneous frequency, thus improving the frequency resolution, system sensitivity, and dynamic range of the B-IFM. Using the proposed CNN-optimized B-IFM system, the average maximum and root mean square errors between the optimized and nominal instantaneous frequencies are less than 26.3 and 15.5 MHz, reduced from up to 105.8 and 57.0 MHz, respectively. The system sensitivity is improved from 12.1 to 7.8 dBm for a 100 MHz frequency error, and the dynamic range is extended.
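At heart, the correction the CNN learns is a map from measured to nominal frequency. As a far simpler stand-in (not the paper's method), a least-squares polynomial calibration illustrates the idea; all values below are illustrative:

```python
import numpy as np

def fit_calibration(measured, nominal, degree=3):
    """Least-squares polynomial map from measured to nominal frequency:
    a simple stand-in for the learned CNN mapping."""
    return np.polyfit(measured, nominal, degree)

def correct(measured, coeffs):
    """Apply the fitted map to new measurements."""
    return np.polyval(coeffs, measured)
```

Where the systematic error is simple (e.g., gain and offset), even a low-degree fit removes it almost entirely; the CNN generalizes this to distortions with no convenient closed form.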
|
329
|
Li J, Chen X, Qu Y. Effects of cyclophosphamide combined with prednisone on TNF-α expression in treatment of patients with interstitial lung disease. Exp Ther Med 2019; 18:4443-4449. [PMID: 31777548 PMCID: PMC6862246 DOI: 10.3892/etm.2019.8099] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2018] [Accepted: 08/06/2019] [Indexed: 12/25/2022] Open
Abstract
The effects of cyclophosphamide combined with prednisone on TNF-α expression in the treatment of patients with interstitial lung disease (ILD), and their clinical significance, were investigated. A prospective analysis was performed on 198 patients with ILD in Jinan Central Hospital Affiliated to Shandong University from January 2010 to December 2017. Among them, 101 patients treated with cyclophosphamide combined with prednisone were assigned to the combined treatment group, and 97 patients treated with prednisone alone to the control group. The two groups were compared in terms of lung function, St. George's Respiratory Questionnaire (SGRQ) score, clinical efficacy, adverse reactions, and TNF-α expression levels before and after treatment. After treatment, patients in the combined treatment group had significantly higher forced vital capacity (FVC) and forced expiratory volume in the first second (FEV1) than the control group, but significantly lower diffusing capacity of the lung for carbon monoxide (DLCO) and DLCO% (P<0.05). In both groups, patients after treatment had higher FVC and FEV1 but lower DLCO and DLCO% than before treatment (P<0.05), while the SGRQ score before treatment was higher than that after treatment (P<0.05). Compared with the control group, the combined treatment group had significantly more patients with complete remission (CR) and a higher total effective rate, but fewer patients with stable disease (SD) (P<0.05). Fewer patients in the combined treatment group had adverse reactions than in the control group (P<0.05). After treatment, the TNF-α expression level in the combined treatment group was significantly lower than that in the control group (P<0.05), and in both groups TNF-α expression before treatment was higher than after treatment (P<0.05).
In conclusion, cyclophosphamide combined with prednisone is effective and safe in the treatment of ILD, without severe adverse reactions, reduces the TNF-α expression level, and is therefore worthy of clinical application.
Affiliation(s)
- Jun Li
- Department of Respiratory Medicine, Jinan Central Hospital Affiliated to Shandong University, Jinan, Shandong 250014, P.R. China
- Xiuling Chen
- Department of Gynaecology and Obstetrics, First People's Hospital of Jinan, Jinan, Shandong 250014, P.R. China
- Yunping Qu
- Department of Stomatology, First People's Hospital of Jinan, Jinan, Shandong 250014, P.R. China
|
330
|
Hassan A, Ghafoor M, Tariq SA, Zia T, Ahmad W. High Efficiency Video Coding (HEVC)-Based Surgical Telementoring System Using Shallow Convolutional Neural Network. J Digit Imaging 2019; 32:1027-1043. [PMID: 30980262 PMCID: PMC6841856 DOI: 10.1007/s10278-019-00206-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Abstract
Surgical telementoring systems have gained considerable interest, especially in remote locations. However, bandwidth constraints have been the primary bottleneck for efficient telementoring systems. This study aims to establish an efficient surgical telementoring system, in which a qualified surgeon (mentor) provides real-time guidance and technical assistance for surgical procedures to the on-spot physician (surgeon). High Efficiency Video Coding (HEVC/H.265)-based video compression has shown promising results for telementoring applications. However, there is a trade-off between the bandwidth resources required for video transmission and the quality of video received by the remote surgeon. To efficiently compress and transmit real-time surgical videos, a hybrid lossless-lossy approach is proposed in which the surgical incision region is coded in high quality, whereas the background region is coded in low quality based on its distance from the surgical incision region. For surgical incision region extraction, state-of-the-art deep learning (DL) architectures for semantic segmentation can be used. However, the computational complexity of these architectures is high, resulting in large training and inference times. For telementoring systems, encoding time is crucial; therefore, very deep architectures are not suitable for surgical incision extraction. In this study, we propose a shallow convolutional neural network (S-CNN)-based segmentation approach that consists of an encoder network only for surgical region extraction. The segmentation performance of the S-CNN is compared with that of one of the state-of-the-art image segmentation networks (SegNet), and the results demonstrate the effectiveness of the proposed network. The proposed telementoring system is efficient and explicitly considers the physiological nature of the human visual system to encode the video, providing good overall visual impact in the location of surgery.
The results of the proposed S-CNN-based segmentation demonstrated a pixel accuracy of 97% and a mean intersection over union accuracy of 79%. Similarly, HEVC experimental results showed that the proposed surgical region-based encoding scheme achieved an average bitrate reduction of 88.8% at high-quality settings in comparison with default full-frame HEVC encoding. The average gain in encoding performance (signal-to-noise) of the proposed algorithm is 11.5 dB in the surgical region. The bitrate saving and visual quality of the proposed optimal bit allocation scheme are compared with the mean shift segmentation-based coding scheme for fair comparison. The results show that the proposed scheme maintains high visual quality in surgical incision region along with achieving good bitrate saving. Based on comparison and results, the proposed encoding algorithm can be considered as an efficient and effective solution for surgical telementoring systems for low-bandwidth networks.
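The distance-based quality allocation described above can be sketched as a per-block quantization-parameter (QP) map. This is an illustrative reconstruction, not the authors' implementation; the function name, QP step, and cap are assumed values.

```python
def block_qp_map(region_blocks, grid_w, grid_h, base_qp=22, step=4, max_qp=46):
    """Assign an HEVC quantization parameter (QP) to each coding block:
    low QP (high quality) inside the surgical region, progressively
    higher QP (lower quality) with distance from it."""
    region = set(region_blocks)
    qp = {}
    for y in range(grid_h):
        for x in range(grid_w):
            # Chebyshev distance to the nearest surgical-region block
            d = min(max(abs(x - rx), abs(y - ry)) for rx, ry in region)
            qp[(x, y)] = min(base_qp + step * d, max_qp)
    return qp
```

In HEVC, a lower QP means finer quantization, so blocks in and near the incision region are encoded at high quality while distant background blocks absorb the bitrate savings.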
Affiliation(s)
- Ali Hassan, Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Mubeen Ghafoor, Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Syed Ali Tariq, Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Tehseen Zia, Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Waqas Ahmad, Department of Information Systems and Technology, Mid Sweden University, Sundsvall, Sweden
331
A method for hand-foot-mouth disease prediction using GeoDetector and LSTM model in Guangxi, China. Sci Rep 2019; 9:17928. [PMID: 31784625 PMCID: PMC6884467 DOI: 10.1038/s41598-019-54495-2] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Received: 06/03/2019] [Accepted: 11/14/2019] [Indexed: 12/14/2022]
Abstract
Hand-foot-mouth disease (HFMD) is a common infectious disease in children and is particularly severe in Guangxi, China. Meteorological conditions are known to play a pivotal role in HFMD transmission. Previous studies have reported numerous models to predict the incidence of HFMD. In this study, we propose a new method for HFMD prediction using GeoDetector and a Long Short-Term Memory neural network (LSTM). Daily meteorological factors and HFMD records in Guangxi during 2014–2015 were adopted. First, potential risk factors for the occurrence of HFMD were identified with GeoDetector. Then, region-specific prediction models were developed for the 14 administrative regions of Guangxi using an optimized three-layer LSTM model. The prediction results (R-squared ranging from 0.39 to 0.71) showed that the proposed model performs well in HFMD prediction. This model could support the prevention and control of HFMD, and it could also be extended to time-series prediction of other infectious diseases.
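The GeoDetector step amounts to computing the q-statistic, which measures how much of the variance of incidence a stratified factor explains. The sketch below is a generic illustration under assumed names; it does not reproduce the paper's exact stratification.

```python
def geodetector_q(values, strata):
    """GeoDetector q-statistic: the share of the variance of `values`
    (e.g. HFMD incidence) explained by a categorical factor `strata`
    (e.g. binned temperature). q lies in [0, 1]; larger means the
    factor explains more of the spatial variance."""
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    n, total_var = len(values), var(values)
    groups = {}
    for v, s in zip(values, strata):
        groups.setdefault(s, []).append(v)
    # within-stratum sum of squared deviations vs. the total
    within = sum(len(g) * var(g) for g in groups.values())
    return 1.0 - within / (n * total_var)
```

Factors with high q would then be fed to the LSTM as input features.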
332
Choudhary P, Hazra A. Chest disease radiography in twofold: using convolutional neural networks and transfer learning. Evolving Systems 2019. [DOI: 10.1007/s12530-019-09316-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Indexed: 11/25/2022]
333
Xu Z, Tao Y, Wenfang Z, Ne L, Zhengxing H, Jiquan L, Weiling H, Huilong D, Jianmin S. Upper gastrointestinal anatomy detection with multi-task convolutional neural networks. Healthc Technol Lett 2019; 6:176-180. [PMID: 32038853 PMCID: PMC6945683 DOI: 10.1049/htl.2019.0066] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Received: 09/16/2019] [Accepted: 10/02/2019] [Indexed: 12/11/2022]
Abstract
Esophagogastroduodenoscopy (EGD) is widely applied for gastrointestinal (GI) examinations, yet there is a lack of mature technology for evaluating the quality of the EGD inspection process. In this Letter, the authors design a multi-task anatomy detection convolutional neural network (MT-AD-CNN) that evaluates EGD inspection quality by combining a detection task over ten anatomical structures of the upper digestive tract with a classification task over video frames. The model eliminates non-informative frames of gastroscopic videos and detects the anatomies in real time. Specifically, a sub-branch added to the detection network classifies frames as narrow-band imaging (NBI), informative, or non-informative; detected boxes are displayed only on informative frames, which reduces the false-positive rate. The authors can thus determine the video frames on which each anatomical location is effectively examined and analyse the diagnosis quality. The method reaches 93.74% mean average precision on the detection task and 98.77% accuracy on the classification task. It can reflect the detailed circumstances of the gastroscopy examination process, showing application potential for improving the quality of examinations.
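The quality-evaluation logic described above, discarding detections on non-informative frames and then summarizing which anatomical sites were effectively examined, can be sketched as follows. The record layout and function name are illustrative assumptions, not the authors' code.

```python
def examination_coverage(frame_records, all_sites):
    """Summarize which anatomical sites were effectively examined.
    A site counts only when it is detected on an *informative* frame;
    detections on non-informative frames are discarded, mirroring the
    false-positive-reduction sub-branch described in the abstract."""
    seen = set()
    for rec in frame_records:
        if rec["label"] == "informative":
            seen.update(rec["detected"])
    return {"covered": sorted(seen),
            "missed": sorted(set(all_sites) - seen)}
```

Any site left in `missed` at the end of a procedure would flag an incompletely examined region.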
Affiliation(s)
- Zhang Xu, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou 310027, People's Republic of China
- Yu Tao, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou 310027, People's Republic of China
- Zheng Wenfang, Department of Gastroenterology, Sir Run Run Shaw Hospital, Medical School, Zhejiang University, Hangzhou 310016, People's Republic of China; Institute of Gastroenterology, Zhejiang University, Hangzhou 310029, People's Republic of China
- Lin Ne, Department of Gastroenterology, Sir Run Run Shaw Hospital, Medical School, Zhejiang University, Hangzhou 310016, People's Republic of China; Institute of Gastroenterology, Zhejiang University, Hangzhou 310029, People's Republic of China
- Huang Zhengxing, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou 310027, People's Republic of China
- Liu Jiquan, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou 310027, People's Republic of China
- Hu Weiling, Department of Gastroenterology, Sir Run Run Shaw Hospital, Medical School, Zhejiang University, Hangzhou 310016, People's Republic of China; Institute of Gastroenterology, Zhejiang University, Hangzhou 310029, People's Republic of China
- Duan Huilong, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou 310027, People's Republic of China
- Si Jianmin, Department of Gastroenterology, Sir Run Run Shaw Hospital, Medical School, Zhejiang University, Hangzhou 310016, People's Republic of China; Institute of Gastroenterology, Zhejiang University, Hangzhou 310029, People's Republic of China
334
Matsubara N, Teramoto A, Saito K, Fujita H. Bone suppression for chest X-ray image using a convolutional neural filter. Australas Phys Eng Sci Med 2019; 43:10.1007/s13246-019-00822-w. [PMID: 31773501 DOI: 10.1007/s13246-019-00822-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Received: 06/23/2019] [Accepted: 11/19/2019] [Indexed: 12/22/2022]
Abstract
Chest X-rays are used in mass screening for the early detection of lung cancer. However, lung nodules are often overlooked because of bones overlapping the lung fields. Bone suppression techniques based on artificial intelligence have been developed to solve this problem, but their accuracy still needs improvement. In this study, we propose a convolutional neural filter (CNF) for bone suppression, based on the convolutional neural networks that are frequently used in the medical field and perform well in image processing. The CNF takes the pixel values in the neighborhood of a target pixel as input and outputs the bone component of that pixel; processing every position in the input image yields a bone-extracted image. Finally, a bone-suppressed image is obtained by subtracting the bone-extracted image from the original chest X-ray image. Bone suppression was most accurate when using a CNF with six convolutional layers, yielding a bone suppression rate of 89.2%. In addition, abnormalities, when present, were imaged effectively because only the bone components were suppressed while soft tissue was maintained. These results suggest that the chances of missing abnormalities may be reduced by the proposed method, which is useful for bone suppression in chest X-ray images.
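The pipeline the abstract describes, estimating a bone component from each pixel's neighborhood and subtracting it from the original, can be sketched as below. The trained CNF is stood in for by an arbitrary `bone_filter` callable; the patch size and clamped border handling are assumptions, not details from the paper.

```python
def suppress_bone(image, bone_filter, k=1):
    """Patch-wise bone suppression in the spirit of a convolutional
    neural filter: for each pixel, `bone_filter` maps its (2k+1)x(2k+1)
    neighborhood (borders clamped) to an estimated bone component, and
    the resulting bone image is subtracted from the original."""
    h, w = len(image), len(image[0])
    bone = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            patch = [image[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                     for dy in range(-k, k + 1) for dx in range(-k, k + 1)]
            bone[y][x] = bone_filter(patch)
    return [[image[y][x] - bone[y][x] for x in range(w)] for y in range(h)]
```

With the real CNF, `bone_filter` would be the trained network applied to each neighborhood; here any patch-to-scalar function can be plugged in for experimentation.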
Affiliation(s)
- Naoki Matsubara, Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Atsushi Teramoto, Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Kuniaki Saito, Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Hiroshi Fujita, Department of Electrical, Electronic & Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu-city, Gifu, 501-1194, Japan
335
Learned and handcrafted features for early-stage laryngeal SCC diagnosis. Med Biol Eng Comput 2019; 57:2683-2692. [PMID: 31728933 DOI: 10.1007/s11517-019-02051-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Received: 04/02/2019] [Accepted: 09/21/2019] [Indexed: 01/08/2023]
Abstract
Squamous cell carcinoma (SCC) is the most common malignant laryngeal cancer. Early-stage diagnosis is of crucial importance to lower patient mortality and to preserve both the laryngeal anatomy and vocal-fold function. However, this may be challenging, as the initial modifications of the larynx, mainly concerning the mucosal vascular tree and the texture and color of the epithelium, are small and can pass unnoticed by the human eye. The primary goal of this paper was to investigate a learning-based approach to early-stage SCC diagnosis, comparing the use of (i) texture-based global descriptors, such as local binary patterns, and (ii) deep-learning-based descriptors. These features, extracted from endoscopic narrow-band images of the larynx, were classified with support vector machines to discriminate healthy, precancerous, and early-stage SCC tissues. When tested on a benchmark dataset, a median classification recall of 98% was obtained with the best feature combination, outperforming the state of the art (recall = 95%). Although further investigation is needed (e.g., testing on a larger dataset), the achieved results support the use of the developed methodology in actual clinical practice to provide accurate early-stage SCC diagnosis. Graphical abstract: Workflow of the proposed solution. Patches of laryngeal tissue are pre-processed and feature extraction is performed. These features are used in the laryngeal tissue classification.
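As an illustration of the texture-based global descriptors mentioned above, a basic (non-uniform, non-rotation-invariant) local binary pattern code for a single 3x3 patch can be computed as follows. The bit ordering is one common convention, not necessarily the one used in the paper.

```python
def lbp_code(patch):
    """8-bit local binary pattern for a 3x3 patch: each neighbor,
    visited clockwise from the top-left corner, contributes a 1 bit
    when its value is >= the center pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = patch[1][1]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if patch[1 + dy][1 + dx] >= center:
            code |= 1 << bit
    return code
```

A global descriptor for an image patch is then the normalized histogram of these per-pixel codes, which can be fed to an SVM.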
336
Xu R, Cong Z, Ye X, Hirano Y, Kido S, Gyobu T, Kawata Y, Honda O, Tomiyama N. Pulmonary Textures Classification via a Multi-Scale Attention Network. IEEE J Biomed Health Inform 2019; 24:2041-2052. [PMID: 31689221 DOI: 10.1109/jbhi.2019.2950006] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Indexed: 11/05/2022]
Abstract
Precise classification of pulmonary textures is crucial for developing a computer-aided diagnosis (CAD) system for diffuse lung diseases (DLDs). Although deep learning techniques have been applied to this task, the classification performance does not yet satisfy clinical requirements, since the commonly used deep networks built by stacking convolutional blocks cannot learn feature representations discriminative enough to distinguish complex pulmonary textures. To address this problem, we design a multi-scale attention network (MSAN) architecture comprising several stacked residual attention modules followed by a multi-scale fusion module. Our deep network can not only exploit powerful information at different scales but also automatically select the optimal features for a more discriminative representation. In addition, we develop visualization techniques to make the proposed deep model transparent to humans. The proposed method is evaluated on a large dataset. Experimental results show that our method achieved an average classification accuracy of 94.78% and an average F-value of 0.9475 over 7 categories of pulmonary textures, and the visualization results intuitively explain the working behavior of the deep network. The proposed method achieves state-of-the-art performance in classifying pulmonary textures on high-resolution CT images.
337
Wang Y, Li H, Huang J, Su C, Yang B, Qi C. An Improved Bar-Shaped Sliding Window CNN Tailored to Industrial Process Historical Data with Applications in Chemical Operational Optimizations. Ind Eng Chem Res 2019. [DOI: 10.1021/acs.iecr.9b03852] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Indexed: 11/28/2022]
Affiliation(s)
- Yongjian Wang, College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China; Department of Chemical and Biomolecular Engineering, University of California, Los Angeles, California 90095, United States
- Hongguang Li, College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
- Jingwen Huang, College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
- Chong Su, College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
- Bo Yang, College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
- Chu Qi, College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
338
Shanthi PB, Faruqi F, Hareesha KS, Kudva R. Deep Convolution Neural Network for Malignancy Detection and Classification in Microscopic Uterine Cervix Cell Images. Asian Pac J Cancer Prev 2019; 20:3447-3456. [PMID: 31759371 PMCID: PMC7062987 DOI: 10.31557/apjcp.2019.20.11.3447] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Received: 05/31/2019] [Indexed: 11/25/2022]
Abstract
Objective: Automated Pap smear cervical screening is one of the most effective imaging-based cancer detection tools for categorizing cervical cell images as normal or abnormal. Traditional classification methods depend on hand-engineered features and show limitations on large, diverse datasets. Effective feature extraction requires efficient image preprocessing and segmentation, which remains a prominent challenge in pathology. In this paper, deep learning is used for cell image classification on large datasets. Methods: The proposed method combines abstract and complicated representations of the data acquired in a hierarchical architecture. A Convolutional Neural Network (CNN) learns meaningful kernels that simulate the extraction of visual features such as edges, size, shape, and color for image classification. A deep prediction model built on such a CNN classifies the grades of cancer: normal, mild, moderate, severe, and carcinoma. It is an effective computational model that uses multiple processing layers to learn complex features. A large dataset was prepared for this study by systematically augmenting the images in the Herlev dataset. Results: Among the three sets considered, the first set, of single-cell enhanced original images, achieved accuracies of 94.1% for the 5-class, 96.2% for the 4-class, 94.8% for the 3-class, and 95.7% for the 2-class problems. The second set, of contour-extracted images, showed accuracies of 92.14%, 92.9%, 94.7%, and 89.9% for the 5-, 4-, 3-, and 2-class problems. The third set, of binary images, showed 85.07% for the 5-class, 84% for the 4-class, and 92.07% for the 3-class problems, and the highest accuracy of 99.97% for the 2-class problem. Conclusion: The experimental results of the proposed model showed effective classification of different grades of cancer in cervical cell images, exhibiting the extensive potential of deep learning in Pap smear cell image classification.
Affiliation(s)
- Shanthi P B, Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Udupi, Karnataka, India
- Faraz Faruqi, Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Udupi, Karnataka, India
- Hareesha K S, Department of Computer Applications, Manipal Institute of Technology, Manipal Academy of Higher Education, Udupi, Karnataka, India
- Ranjini Kudva, Department of Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Udupi, Karnataka, India
339
Li X, Thrall JH, Digumarthy SR, Kalra MK, Pandharipande PV, Zhang B, Nitiwarangkul C, Singh R, Khera RD, Li Q. Deep learning-enabled system for rapid pneumothorax screening on chest CT. Eur J Radiol 2019; 120:108692. [DOI: 10.1016/j.ejrad.2019.108692] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Received: 04/19/2019] [Revised: 09/13/2019] [Accepted: 09/19/2019] [Indexed: 11/26/2022]
340
Liu F, Samsonov A, Chen L, Kijowski R, Feng L. SANTIS: Sampling-Augmented Neural neTwork with Incoherent Structure for MR image reconstruction. Magn Reson Med 2019; 82:1890-1904. [PMID: 31166049 PMCID: PMC6660404 DOI: 10.1002/mrm.27827] [Citation(s) in RCA: 69] [Impact Index Per Article: 11.5] [Received: 02/26/2019] [Revised: 05/02/2019] [Accepted: 05/03/2019] [Indexed: 12/23/2022]
Abstract
PURPOSE To develop and evaluate a novel deep learning-based reconstruction framework called SANTIS (Sampling-Augmented Neural neTwork with Incoherent Structure) for efficient MR image reconstruction with improved robustness against sampling pattern discrepancy. METHODS In addition to combining a data cycle-consistent adversarial network, end-to-end convolutional neural network mapping, and data fidelity enforcement for reconstructing undersampled MR data, SANTIS uses a sampling-augmented training strategy that extensively varies the undersampling pattern during training, so that the network learns various aliasing structures and thereby removes undersampling artifacts more effectively and robustly. The performance of SANTIS was demonstrated for accelerated knee imaging using a Cartesian trajectory and liver imaging using a golden-angle radial trajectory. Quantitative metrics were used to assess its performance against different references. The feasibility of SANTIS for reconstructing dynamic contrast-enhanced images was also demonstrated using transfer learning. RESULTS Compared with conventional reconstruction that exploits image sparsity, SANTIS achieved consistently improved reconstruction performance (lower errors and greater image sharpness). Compared with standard learning-based methods without sampling augmentation (e.g., training with a fixed undersampling pattern), SANTIS provided comparable reconstruction performance but significantly improved robustness against sampling pattern discrepancy. SANTIS also achieved encouraging results for reconstructing liver images acquired at different contrast phases. CONCLUSION By extensively varying undersampling patterns, the sampling-augmented training strategy in SANTIS removes undersampling artifacts more robustly.
The novel concept behind SANTIS can particularly be useful for improving the robustness of deep learning-based image reconstruction against discrepancy between training and inference, an important, but currently less explored, topic.
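The sampling-augmented training strategy boils down to drawing a fresh undersampling mask at every training iteration instead of fixing one. A minimal sketch for a 1D Cartesian phase-encode mask is shown below; the center-line count, acceleration handling, and function name are illustrative assumptions, not SANTIS's actual sampling code.

```python
import random

def random_cartesian_mask(n_lines, accel, n_center=4, rng=None):
    """Draw a random 1D Cartesian undersampling mask: fully sample a
    small central block of phase-encode lines (low spatial frequencies)
    and choose the remaining kept lines uniformly at random, so each
    training iteration exposes the network to a different aliasing
    structure."""
    rng = rng or random.Random()
    center = set(range(n_lines // 2 - n_center // 2,
                       n_lines // 2 + n_center // 2))
    n_keep = max(len(center), n_lines // accel)
    others = [i for i in range(n_lines) if i not in center]
    sampled = center | set(rng.sample(others, n_keep - len(center)))
    return [1 if i in sampled else 0 for i in range(n_lines)]
```

A fixed-mask baseline would call this once and reuse the result; the sampling-augmented strategy calls it anew for every mini-batch.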
Affiliation(s)
- Fang Liu, Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Alexey Samsonov, Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Lihua Chen, Department of Radiology, Southwest Hospital, Chongqing, China
- Richard Kijowski, Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Li Feng, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
341
Peng L, Chen YW, Lin L, Hu H, Li H, Chen Q, Ling X, Wang D, Han X, Iwamoto Y. Classification and Quantification of Emphysema Using a Multi-Scale Residual Network. IEEE J Biomed Health Inform 2019; 23:2526-2536. [DOI: 10.1109/jbhi.2018.2890045] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Indexed: 11/05/2022]
|
342
|
Kim GB, Jung KH, Lee Y, Kim HJ, Kim N, Jun S, Seo JB, Lynch DA. Comparison of Shallow and Deep Learning Methods on Classifying the Regional Pattern of Diffuse Lung Disease. J Digit Imaging 2019; 31:415-424. [PMID: 29043528 DOI: 10.1007/s10278-017-0028-9] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Indexed: 10/18/2022]
Abstract
This study compared shallow and deep learning methods for classifying the patterns of interstitial lung diseases (ILDs). Using high-resolution computed tomography images, two experienced radiologists marked 1,200 regions of interest (ROIs): 600 ROIs were acquired with a GE scanner and 600 with a Siemens scanner, and each group of 600 consisted of 100 ROIs for each of six subregion classes (normal and five regional pulmonary disease patterns: ground-glass opacity, consolidation, reticular opacity, emphysema, and honeycombing). We employed a convolutional neural network (CNN) with six learnable layers, consisting of four convolution layers and two fully connected layers, and compared its classification results with those of a shallow learning method, a support vector machine (SVM). The CNN classifier showed significantly better accuracy than the SVM classifier, by 6-9%. As the number of convolution layers increased, the classification accuracy of the CNN improved from 81.27% to 95.12%. Especially in cases with pathological ambiguity, such as between normal and emphysema or between honeycombing and reticular opacity, adding convolution layers greatly reduced the misclassification rate between those classes. In conclusion, the CNN classifier showed significantly greater accuracy than the SVM classifier, and the results reflect structural characteristics inherent to the specific ILD patterns.
Affiliation(s)
- Guk Bae Kim, Biomedical Engineering Research Center, Asan Institute of Life Science, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul, Republic of Korea
- Kyu-Hwan Jung, VUNO, 6F, 507, Gangnamdae-ro, Seocho-gu, Seoul, Republic of Korea
- Yeha Lee, VUNO, 6F, 507, Gangnamdae-ro, Seocho-gu, Seoul, Republic of Korea
- Hyun-Jun Kim, VUNO, 6F, 507, Gangnamdae-ro, Seocho-gu, Seoul, Republic of Korea
- Namkug Kim, Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul, 138-736, Republic of Korea
- Sanghoon Jun, Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul, 138-736, Republic of Korea
- Joon Beom Seo, Department of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul, 138-736, Republic of Korea
- David A Lynch, Department of Radiology, National Jewish Medical and Research Center, Denver, CO, USA
343
Li G, Watanabe K, Anzai H, Song X, Qiao A, Ohta M. Pulse-Wave-Pattern Classification with a Convolutional Neural Network. Sci Rep 2019; 9:14930. [PMID: 31624300 PMCID: PMC6797811 DOI: 10.1038/s41598-019-51334-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Received: 10/23/2018] [Accepted: 09/24/2019] [Indexed: 11/29/2022]
Abstract
Owing to the diversity of pulse-wave morphology, pulse-based diagnosis is difficult, especially pulse-wave-pattern classification (PWPC). A powerful method for PWPC is the convolutional neural network (CNN), which outperforms conventional methods in pattern classification because it extracts informative abstractions and features. Under previous PWPC criteria, the relationship between pulse types and disease types was not clear; to improve clinical practicability, a CNN model is needed that finds a one-to-one correspondence between pulse patterns and disease categories. In this study, five cardiovascular diseases (CVD) and complications were extracted from medical records as classification criteria to build pulse data set 1, and four physiological parameters closely related to the selected diseases were extracted as classification criteria to build data set 2. An optimized CNN model with stronger feature-extraction capability for pulse signals is proposed, which achieved PWPC accuracies of 95% on data set 1 and 89% on data set 2. This demonstrates that pulse waves are the result of multiple physiological parameters, and that a single physiological parameter has limitations in characterising the overall pulse pattern. The proposed CNN model achieves high PWPC accuracy when CVD and complication categories are used as the classification criteria.
Affiliation(s)
- Gaoyang Li, Institute of Fluid Science, Tohoku University, 2-1-1, Katahira, Aoba-ku, Sendai, Miyagi, 980-8577, Japan; Graduate School of Biomedical Engineering, Tohoku University, 6-6 Aramaki-aza-aoba, Aoba-ku, Sendai, Miyagi, 980-8579, Japan
- Kazuhiro Watanabe, Institute of Fluid Science, Tohoku University, 2-1-1, Katahira, Aoba-ku, Sendai, Miyagi, 980-8577, Japan; Graduate School of Biomedical Engineering, Tohoku University, 6-6 Aramaki-aza-aoba, Aoba-ku, Sendai, Miyagi, 980-8579, Japan
- Hitomi Anzai, Graduate School of Biomedical Engineering, Tohoku University, 6-6 Aramaki-aza-aoba, Aoba-ku, Sendai, Miyagi, 980-8579, Japan
- Xiaorui Song, Department of Radiology, Taishan Medical University, No.619 Greatwall Road, Daiyue District, Taian, Shandong, 271000, China
- Aike Qiao, College of Life Science and Bioengineering, Beijing University of Technology, No.100, Pingleyuan, Chaoyang District, Beijing, 100022, China
- Makoto Ohta, Graduate School of Biomedical Engineering, Tohoku University, 6-6 Aramaki-aza-aoba, Aoba-ku, Sendai, Miyagi, 980-8579, Japan; ELyTMaX UMI 3757, CNRS-Université de Lyon-Tohoku University, Sendai, Japan
344
Zha W, Fain SB, Schiebler ML, Nagle SK, Liu F. Deep convolutional neural networks with multiplane consensus labeling for lung function quantification using UTE proton MRI. J Magn Reson Imaging 2019; 50:1169-1181. [PMID: 30945385 PMCID: PMC7039686 DOI: 10.1002/jmri.26734] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Received: 12/18/2018] [Revised: 03/15/2019] [Accepted: 03/15/2019] [Indexed: 12/23/2022]
Abstract
BACKGROUND Ultrashort echo time (UTE) proton MRI has gained popularity for assessing lung structure and function in pulmonary imaging; however, the development of rapid biomarker extraction and regional quantification has lagged behind due to labor-intensive lung segmentation. PURPOSE To evaluate a deep learning (DL) approach to automated lung segmentation for extracting image-based biomarkers from functional lung imaging using 3D radial UTE oxygen-enhanced (OE) MRI. STUDY TYPE Retrospective study evaluating a technical development. POPULATION Forty-five human subjects, including 16 healthy volunteers, 5 patients with asthma, and 24 patients with cystic fibrosis. FIELD STRENGTH/SEQUENCE 1.5T MRI, 3D radial UTE (TE = 0.08 msec) sequence. ASSESSMENT Two 3D radial UTE volumes were acquired sequentially under normoxic (21% O2) and hyperoxic (100% O2) conditions. Automated segmentation of the lungs using a 2D convolutional encoder-decoder DL method, followed by functional quantification via adaptive K-means, was compared with the results obtained from the reference method, supervised region growing. STATISTICAL TESTS Relative to the reference method, the performance of DL on volumetric quantification was assessed using the Dice coefficient with 95% confidence interval (CI) for accuracy, the two-sided Wilcoxon signed-rank test for computation time, and Bland-Altman analysis of the functional measure derived from the OE images. RESULTS The DL method produced strong agreement with supervised region growing for the right (Dice: 0.97; 95% CI = [0.96, 0.97]; P < 0.001) and left lungs (Dice: 0.96; 95% CI = [0.96, 0.97]; P < 0.001). The DL method averaged 46 seconds to generate the automatic segmentations, in contrast to 1.93 hours using the reference method (P < 0.001). Bland-Altman analysis showed nonsignificant intermethod differences in volumetric (P ≥ 0.12) and functional measurements (P ≥ 0.34) in the left and right lungs.
DATA CONCLUSION DL provides rapid, automated, and robust lung segmentation for quantification of regional lung function using UTE proton MRI. LEVEL OF EVIDENCE 2 Technical Efficacy: Stage 1 J. Magn. Reson. Imaging 2019;50:1169-1181.
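The volumetric agreement metric reported above is the Dice coefficient, which for two binary masks is twice the overlap divided by the total foreground. A minimal sketch over flattened masks:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary segmentation masks
    (flattened 0/1 sequences): 2|A ∩ B| / (|A| + |B|).
    Two empty masks are treated as a perfect match."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

A value of 0.97, as reported for the right lung, means the DL and region-growing masks overlap almost completely.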
Affiliation(s)
- Wei Zha, Department of Medical Physics, University of Wisconsin-Madison
- Sean B. Fain, Department of Medical Physics; Department of Radiology; Department of Biomedical Engineering, University of Wisconsin-Madison
- Scott K. Nagle, Department of Medical Physics; Department of Radiology; Department of Pediatrics, University of Wisconsin-Madison
- Fang Liu, Department of Radiology, University of Wisconsin-Madison
345
Zhang F, Li Z, Zhang B, Du H, Wang B, Zhang X. Multi-modal deep learning model for auxiliary diagnosis of Alzheimer’s disease. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.04.093] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Indexed: 10/26/2022]
|
346
|
Chen Q, Wang W, Wu F, De S, Wang R, Zhang B, Huang X. A Survey on an Emerging Area: Deep Learning for Smart City Data. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE 2019. [DOI: 10.1109/tetci.2019.2907718] [Citation(s) in RCA: 59] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
347
|
He Y, Guo J, Ding X, van Ooijen PMA, Zhang Y, Chen A, Oudkerk M, Xie X. Convolutional neural network to predict the local recurrence of giant cell tumor of bone after curettage based on pre-surgery magnetic resonance images. Eur Radiol 2019; 29:5441-5451. [PMID: 30859281 DOI: 10.1007/s00330-019-06082-2] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2018] [Revised: 01/24/2019] [Accepted: 02/07/2019] [Indexed: 02/07/2023]
Abstract
OBJECTIVE To predict the local recurrence of giant cell tumor of bone (GCTB) after curettage, based on MR features and clinical characteristics, using a deep convolutional neural network (CNN). METHODS MR images were collected from 56 patients with histopathologically confirmed GCTB after curettage who were followed up for 5.8 years (range, 2.0 to 9.5 years). The Inception v3 CNN architecture was fine-tuned on two categories of MR datasets (recurrent and non-recurrent GCTB) obtained through data augmentation and was validated using fourfold cross-validation to evaluate its generalization ability. Twenty-eight cases (50%) were chosen as the training dataset for the CNN and four radiologists, while the remaining 28 cases (50%) were used as the test dataset. A binary logistic regression model was established to predict recurrent GCTB by combining the CNN prediction with patient features (age and tumor location). Accuracy and sensitivity were used to evaluate the prediction performance. RESULTS The accuracies of the CNN and CNN-regression models were 75.5% (95% CI 55.1 to 89.3%) and 78.6% (59.0 to 91.7%), respectively, both higher than the 64.3% (44.1 to 81.4%) accuracy of the radiologists. The corresponding sensitivities were 85.7% (42.1 to 99.6%) and 87.5% (47.3 to 99.7%), higher than the 58.3% (27.7 to 84.8%) sensitivity of the radiologists (p < 0.05). CONCLUSION The CNN has the potential to predict recurrent GCTB after curettage. A binary regression model combined with patient characteristics improves its prediction accuracy. KEY POINTS • A convolutional neural network (CNN) can be trained successfully on a limited number of pre-surgery MR images by fine-tuning a pre-trained CNN architecture. • The CNN has an accuracy of 75.5% in predicting post-surgery recurrence of giant cell tumor of bone, surpassing the 64.3% accuracy of human observation.
• A binary logistic regression model combining the CNN prediction rate, patient age, and tumor location improves the accuracy of predicting post-surgery recurrence to 78.6%.
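The combined model described above has the familiar binary logistic form: the CNN's recurrence score enters as one covariate alongside age and tumor location. A minimal sketch follows; the coefficient values and the binary encoding of tumor location are illustrative placeholders, not the fitted values from the study:

```python
import math

def recurrence_probability(cnn_score, age, distal_site,
                           b0=-2.0, b_cnn=3.5, b_age=-0.02, b_site=1.0):
    """Binary logistic model combining the CNN output with patient features.
    All coefficients here are hypothetical, for illustration only."""
    # Linear predictor over CNN score, age (years), and a binary site indicator
    z = b0 + b_cnn * cnn_score + b_age * age + b_site * distal_site
    # Logistic (sigmoid) link maps the linear predictor to a probability
    return 1.0 / (1.0 + math.exp(-z))

# With a positive CNN coefficient, a higher CNN score raises the combined probability
print(recurrence_probability(cnn_score=0.9, age=25, distal_site=1))
```

In the study itself the coefficients would be estimated by fitting the regression on the training cases, with recurrence status as the outcome.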
Affiliation(s)
- Yifeng He
- Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, HaiNing Rd.100, Shanghai, 200080, China
- Radiology Department, RuiJin Hospital, Shanghai Jiao Tong University School of Medicine, RuiJin No.2 Rd.197, Shanghai, 200025, China
- Jiapan Guo
- University Medical Center Groningen, Center for Medical Imaging - North East Netherlands, University of Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Xiaoyi Ding
- Radiology Department, RuiJin Hospital, Shanghai Jiao Tong University School of Medicine, RuiJin No.2 Rd.197, Shanghai, 200025, China
- Peter M A van Ooijen
- University Medical Center Groningen, Center for Medical Imaging - North East Netherlands, University of Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Yaping Zhang
- Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, HaiNing Rd.100, Shanghai, 200080, China
- An Chen
- Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, HaiNing Rd.100, Shanghai, 200080, China
- Matthijs Oudkerk
- University Medical Center Groningen, Center for Medical Imaging - North East Netherlands, University of Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Xueqian Xie
- Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, HaiNing Rd.100, Shanghai, 200080, China.
|
348
|
Lyu J, Ling SH. Using Multi-level Convolutional Neural Network for Classification of Lung Nodules on CT images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2018:686-689. [PMID: 30440489 DOI: 10.1109/embc.2018.8512376] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Lung cancer is one of the four major cancers in the world. Accurate diagnosis of lung cancer at an early stage plays an important role in increasing the survival rate. Computed tomography (CT) is an effective method to help doctors detect lung cancer. In this paper, we developed a multi-level convolutional neural network (ML-CNN) to investigate the problem of lung nodule malignancy classification. ML-CNN consists of three CNNs for extracting multi-scale features from lung nodule CT images. Furthermore, we flatten the output of the last pooling layer into a one-dimensional vector for every level and then concatenate the vectors; this strategy helps to improve the performance of our model. The ML-CNN is applied to ternary classification of lung nodules (benign, indeterminate, and malignant). The experimental results show that ML-CNN achieves 84.81% accuracy without any additional hand-crafted preprocessing algorithm, and that the model achieves the best result in ternary classification.
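The flatten-and-concatenate fusion step the abstract describes can be sketched in a few lines. The branch output shapes below are made up for illustration; only the fusion strategy itself comes from the abstract:

```python
import numpy as np

# Illustrative final pooling outputs from three CNN branches at different input scales
# (shapes are hypothetical; the abstract does not specify them)
f1 = np.zeros((4, 4, 32))
f2 = np.zeros((2, 2, 64))
f3 = np.zeros((1, 1, 128))

def fuse(branches):
    """Flatten each branch's last pooling output to 1-D and concatenate them
    into a single feature vector for the classifier head."""
    return np.concatenate([b.reshape(-1) for b in branches])

fused = fuse([f1, f2, f3])
print(fused.shape)  # (896,) = 4*4*32 + 2*2*64 + 1*1*128
```

The fused vector would then feed fully connected layers that produce the three-way (benign / indeterminate / malignant) output.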
|
349
|
Gao X, Yan X, Gao P, Gao X, Zhang S. Automatic detection of epileptic seizure based on approximate entropy, recurrence quantification analysis and convolutional neural networks. Artif Intell Med 2019; 102:101711. [PMID: 31980085 DOI: 10.1016/j.artmed.2019.101711] [Citation(s) in RCA: 51] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2019] [Revised: 08/09/2019] [Accepted: 08/30/2019] [Indexed: 01/21/2023]
Abstract
Epilepsy is the most common neurological disorder in humans. The electroencephalogram (EEG) is a prevalent tool for diagnosing epileptic seizure activity in clinical practice and provides valuable information for understanding the physiological mechanisms behind epileptic disorders. Approximate entropy and recurrence quantification analysis are nonlinear analysis tools that quantify the complexity and recurrence behavior of non-stationary signals, respectively, and convolutional neural networks are a powerful class of models. In this paper, a new method for automatic detection of epileptic EEG recordings, based on approximate entropy and recurrence quantification analysis combined with a convolutional neural network, is proposed. The Bonn dataset was used to assess the proposed approach. The results indicated that epileptic seizure detection by approximate entropy and recurrence quantification analysis performs well (all sensitivities, specificities, and accuracies are greater than 80%); in particular, the sensitivity, specificity, and accuracy of the recurrence rate reached 92.17%, 91.75%, and 92.00%, respectively. When the approximate entropy and recurrence quantification analysis features are combined with a convolutional neural network to automatically differentiate seizure EEG from normal recordings, these figures reach 98.84%, 99.35%, and 99.26%. This makes automatic detection of epileptic recordings possible and would be a valuable tool for the clinical diagnosis and treatment of epilepsy.
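One of the two nonlinear features above, approximate entropy, admits a compact implementation. This is a generic Pincus-style sketch, not the authors' code; the defaults m = 2 and r = 0.2 × SD are common conventions, not parameters taken from the paper:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D signal.
    Lower values indicate more regular (predictable) signals."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()  # common convention: tolerance as a fraction of the SD

    def phi(m):
        # All overlapping length-m template vectors
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev (max-abs) distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Fraction of templates within tolerance r of each template (self-match included)
        c = (dist <= r).mean(axis=1)
        return np.log(c).mean()

    return phi(m) - phi(m + 1)
```

A perfectly periodic signal yields an ApEn near zero, while white noise yields a much larger value; features like this, computed per EEG segment, would then be fed to the classifier.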
Affiliation(s)
- Xiaozeng Gao
- Affiliated Hospital of North China University of Science and Technology, Tangshan, 063009, China
- Xiaoyan Yan
- Affiliated Hospital of North China University of Science and Technology, Tangshan, 063009, China
- Ping Gao
- Affiliated Hospital of North China University of Science and Technology, Tangshan, 063009, China
- Xiujiang Gao
- Affiliated Hospital of North China University of Science and Technology, Tangshan, 063009, China
- Shubo Zhang
- Affiliated Hospital of North China University of Science and Technology, Tangshan, 063009, China.
|
350
|
Nensa F, Demircioglu A, Rischpler C. Artificial Intelligence in Nuclear Medicine. J Nucl Med 2019; 60:29S-37S. [DOI: 10.2967/jnumed.118.220590] [Citation(s) in RCA: 55] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Accepted: 05/16/2019] [Indexed: 02/06/2023] Open
|