1. Chen Y, Zhang C, Ding CHQ, Liu L. Generating and Weighting Semantically Consistent Sample Pairs for Ultrasound Contrastive Learning. IEEE Transactions on Medical Imaging 2023; 42:1388-1400. PMID: 37015698. DOI: 10.1109/tmi.2022.3228254.
Abstract
Well-annotated medical datasets enable deep neural networks (DNNs) to extract lesion-related features effectively. Building such large and well-designed medical datasets is costly due to the need for high-level expertise. Model pre-training on ImageNet is a common practice to gain better generalization when the amount of data is limited. However, it suffers from the domain gap between natural and medical images. In this work, we pre-train DNNs on ultrasound (US) data instead of ImageNet to reduce the domain gap in medical US applications. To learn US image representations from unlabeled US videos, we propose a novel meta-learning-based contrastive learning method, namely Meta Ultrasound Contrastive Learning (Meta-USCL). To tackle the key challenge of obtaining semantically consistent sample pairs for contrastive learning, we present a positive-pair generation module along with an automatic sample weighting module based on meta-learning. Experimental results on multiple computer-aided diagnosis (CAD) problems, including pneumonia detection, breast cancer classification, and breast tumor segmentation, show that the proposed self-supervised method reaches state-of-the-art (SOTA) performance. The code is available at https://github.com/Schuture/Meta-USCL.
2. Seo J, Nguon LS, Park S. Vascular wall motion detection models based on long short-term memory in plane-wave-based ultrasound imaging. Phys Med Biol 2023; 68:075005. PMID: 36881926. DOI: 10.1088/1361-6560/acc238.
Abstract
Objective. Vascular wall motion can be used to diagnose cardiovascular diseases. In this study, long short-term memory (LSTM) neural networks were used to track vascular wall motion in plane-wave-based ultrasound imaging. Approach. The proposed LSTM and convolutional LSTM (ConvLSTM) models were trained using ultrasound data from simulations and tested experimentally using a tissue-mimicking vascular phantom and an in vivo study of a carotid artery. The performance of the models in the simulation was evaluated using the mean square error of axial and lateral motions and compared with the cross-correlation (XCorr) method. Statistical analysis was performed using the Bland-Altman plot, Pearson correlation coefficient, and linear regression in comparison with the manually annotated ground truth. Main results. For the in vivo data, the median error and 95% limit of agreement from the Bland-Altman analysis were (0.01, 0.13), (0.02, 0.19), and (0.03, 0.18), the Pearson correlation coefficients were 0.97, 0.94, and 0.94, and the linear equations from linear regression were 0.89x + 0.02, 0.84x + 0.03, and 0.88x + 0.03 for the ConvLSTM model, LSTM model, and XCorr method, respectively. In the longitudinal and transverse views of the carotid artery, the LSTM-based models outperformed the XCorr method. Overall, the ConvLSTM model was superior to the LSTM model and the XCorr method. Significance. This study demonstrated that vascular wall motion can be tracked accurately and precisely using plane-wave-based ultrasound imaging and the proposed LSTM-based models.
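The agreement statistics this entry reports (Bland-Altman bias and 95% limits of agreement, Pearson correlation) are straightforward to compute. A minimal numpy sketch, with made-up displacement values standing in for the study's tracking and annotation data:

```python
import numpy as np

def bland_altman_stats(measured, reference):
    """Bland-Altman agreement statistics for paired measurements.

    Returns the median difference (bias) and the half-width of the
    95% limits of agreement (1.96 * SD of the differences).
    """
    diff = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    bias = float(np.median(diff))
    loa = float(1.96 * diff.std(ddof=1))
    return bias, loa

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired series."""
    return float(np.corrcoef(x, y)[0, 1])

# Illustrative values: tracked wall displacement vs. manual annotation (mm);
# these numbers are made up, not taken from the study.
tracked = np.array([0.10, 0.22, 0.31, 0.42, 0.55])
manual  = np.array([0.11, 0.20, 0.33, 0.40, 0.52])
bias, loa = bland_altman_stats(tracked, manual)
r = pearson_r(tracked, manual)
```

In practice the bias/limit pairs such as (0.01, 0.13) in the abstract come from exactly this kind of computation on the paired in vivo measurements.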
Affiliation(s)
- Jeongwung Seo, Leang Sim Nguon, Suhyun Park: Department of Electronic and Electrical Engineering, Ewha Womans University, Seoul 03760, Republic of Korea

3. An Optimization-Linked Intelligent Security Algorithm for Smart Healthcare Organizations. Healthcare (Basel) 2023; 11:580. PMID: 36833114. PMCID: PMC9956199. DOI: 10.3390/healthcare11040580.
Abstract
IoT-enabled healthcare apps provide significant value to society by offering cost-effective patient monitoring solutions in IoT-enabled buildings. However, with a large number of users and sensitive personal information readily available in today's fast-paced, internet- and cloud-based environment, the security of these healthcare systems must be a top priority. The idea of storing a patient's health data safely in electronic format raises issues of patient data privacy and security. Furthermore, processing large amounts of data is a difficult challenge for traditional classifiers. Several computational intelligence approaches are useful for effectively categorizing massive quantities of data for this goal. For these reasons, this study proposes a novel healthcare monitoring system that tracks disease processes and forecasts diseases based on data obtained from patients in remote communities. The proposed framework consists of three major stages, namely data collection, secured storage, and disease detection. The data are collected using IoT sensor devices. After that, a homomorphic encryption (HE) model is used for secured data storage. Finally, the disease detection framework is designed with the help of a Centered Convolutional Restricted Boltzmann Machine-based whale optimization (CCRBM-WO) algorithm. The experiment is conducted on a Python-based cloud tool. According to the experimental findings, the proposed system outperforms current e-healthcare solutions: the accuracy, precision, F1-measure, and recall of the suggested technique are 96.87%, 97.45%, 97.78%, and 98.57%, respectively.
4. Simović A, Lutovac-Banduka M, Lekić S, Kuleto V. Smart Visualization of Medical Images as a Tool in the Function of Education in Neuroradiology. Diagnostics (Basel) 2022; 12:3208. PMID: 36553215. PMCID: PMC9777748. DOI: 10.3390/diagnostics12123208.
Abstract
The smart visualization of medical images (SVMI) model is based on multi-detector computed tomography (MDCT) data sets and can provide a clearer view of changes in the brain, such as tumors (expansive changes), bleeding, and ischemia on native imaging (i.e., a non-contrast MDCT scan). The new SVMI method provides a more precise representation of the brain image by hiding pixels that are not carrying information and rescaling and coloring the range of pixels essential for detecting and visualizing the disease. In addition, SVMI can be used to avoid the additional exposure of patients to ionizing radiation, which can lead to the occurrence of allergic reactions due to the contrast media administration. Results of the SVMI model were compared with the final diagnosis of the disease after additional diagnostics and confirmation by neuroradiologists, who are highly trained physicians with many years of experience. The application of the realized and presented SVMI model can optimize the engagement of material, medical, and human resources and has the potential for general application in medical training, education, and clinical research.
Affiliation(s)
- Aleksandar Simović, Valentin Kuleto: Department of Information Technology, Information Technology School ITS, 11000 Belgrade, Serbia
- Maja Lutovac-Banduka: RT-RK Institute for Computer Based Systems, 21000 Novi Sad, Serbia
- Snežana Lekić: Department of Emergency Neuroradiology, University Clinical Centre of Serbia UKCS, 11000 Belgrade, Serbia

5. Wan C, Fang L, Cao S, Luo J, Jiang Y, Wei Y, Lv C, Si W. Research on classification algorithm of cerebral small vessel disease based on convolutional neural network. Journal of Intelligent & Fuzzy Systems 2022. DOI: 10.3233/jifs-213212.
Abstract
The investigation of brain magnetic resonance imaging (MRI) classification algorithms for cerebral small vessel disease (CSVD) based on deep learning is particularly important in medical image analysis and has not previously been reported. This paper proposes an MRI classification algorithm based on a convolutional neural network (MRINet) for accurately classifying CSVD and improving classification performance. The method comprises five main stages: fabricating the dataset, designing the network model, configuring the training options, training the model, and testing performance. Actual training and testing datasets of MRI scans of CSVD are fabricated, the MRINet model is designed to extract more detailed features, a smoothed categorical cross-entropy loss function and the Adam optimization algorithm are adopted, and appropriate training parameters are set. The network model is trained and tested on the fabricated datasets, and the classification performance for CSVD is fully investigated. Experimental results show that the loss and accuracy curves demonstrate good classification behavior during training. The confusion matrices confirm that the designed network model achieves good classification results, especially for lacunar infarction. The average classification accuracy of MRINet reaches 80.95% when classifying MRI scans of CSVD, demonstrating superior classification performance over other methods. This work provides a sound experimental foundation for further improving classification accuracy and enhancing practical application in medical image analysis.
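One common reading of the "smooth" categorical cross-entropy mentioned in this entry is label smoothing, where hard one-hot targets are mixed with a uniform distribution. A minimal numpy sketch under that assumption; the smoothing factor and toy logits are illustrative, not values from the paper:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def smoothed_cross_entropy(logits, labels, n_classes, eps=0.1):
    """Categorical cross-entropy with label smoothing.

    Targets are t = (1 - eps) * one_hot + eps / n_classes, so the model is
    discouraged from becoming over-confident on the training labels.
    """
    probs = softmax(logits)
    one_hot = np.eye(n_classes)[labels]
    targets = (1.0 - eps) * one_hot + eps / n_classes
    return float(-(targets * np.log(probs + 1e-12)).sum(axis=1).mean())

# Toy 3-class example (two samples); eps=0 recovers the plain cross-entropy
logits = np.array([[4.0, 0.0, 0.0], [0.0, 3.0, 0.5]])
labels = np.array([0, 1])
loss = smoothed_cross_entropy(logits, labels, n_classes=3)
```

With confident, correct predictions the smoothed loss is larger than the plain one, which is exactly the regularizing effect smoothing is meant to provide.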
Affiliation(s)
- Chenxia Wan, Jiaji Luo, Weijian Si: College of Information and Communication Engineering, Harbin Engineering University, Harbin, China
- Liqun Fang, Shaodong Cao, Yijing Jiang, Yuanxiao Wei, Cancan Lv: Fourth Affiliated Hospital of Harbin Medical University, Harbin, China

6. Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. Multimedia Systems 2022; 28:881-914. PMID: 35079207. PMCID: PMC8776556. DOI: 10.1007/s00530-021-00884-5.
Abstract
Medical images are a rich source of invaluable information used by clinicians. Recent technologies have introduced many advancements for exploiting this information to the fullest and using it to generate better analyses. Deep learning (DL) techniques have empowered medical image analysis in computer-assisted imaging contexts, presenting many solutions and improvements for the analysis of these images by radiologists and other specialists. In this paper, we present a survey of DL techniques used for a variety of tasks across the different medical imaging modalities, to provide a critical review of recent developments in this direction. We have organized the paper to present the significant traits of deep learning and explain its concepts, which is in turn helpful for non-experts in the medical community. Then, we present several applications of deep learning (e.g., segmentation, classification, detection, etc.) that are commonly used for clinical purposes at different anatomical sites, and we also present the main key terms for DL attributes, such as basic architecture, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be the mainstream in the coming years, and novel DL techniques are predicted to become the core of medical image analysis. We conclude by addressing some research challenges and the solutions suggested for them in the literature, as well as future promises and directions for further development.
Affiliation(s)
- Rammah Yousef, Gaurav Gupta: Yogananda School of AI, Computer and Data Sciences, Shoolini University, Solan, 173229, Himachal Pradesh, India
- Nabhan Yousef: Electronics and Communication Engineering, Marwadi University, Rajkot, Gujarat, India
- Manju Khari: Jawaharlal Nehru University, New Delhi, India

7. Homayoun H, Ebrahimpour-komleh H. Automated Segmentation of Abnormal Tissues in Medical Images. J Biomed Phys Eng 2021; 11:415-424. PMID: 34458189. PMCID: PMC8385212. DOI: 10.31661/jbpe.v0i0.958.
Abstract
Nowadays, medical image modalities are available almost everywhere. These modalities are the basis for diagnosing various diseases sensitive to specific tissue types. Physicians usually look for abnormalities in these modalities during diagnostic procedures. The count and volume of abnormalities are very important for the optimal treatment of patients. Segmentation is a preliminary step for these measurements and for further analysis. Manual segmentation of abnormalities is cumbersome, error-prone, and subjective. As a result, automated segmentation of abnormal tissue is needed. In this study, representative techniques for the segmentation of abnormal tissues are reviewed. The main focus is on the segmentation of multiple sclerosis lesions, breast cancer masses, lung nodules, and skin lesions. As experimental results demonstrate, methods based on deep learning techniques perform better than other methods, which are usually based on handcrafted feature engineering. Finally, the most common measures used to evaluate automated abnormal tissue segmentation methods are reported.
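The most common segmentation-evaluation measures this review refers to typically include the Dice coefficient and the Jaccard index (IoU). A minimal numpy sketch on toy binary masks; the 4x4 masks below are illustrative, not data from the study:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * inter / denom  # empty masks agree

def jaccard_index(pred, truth):
    """Jaccard index (IoU): |A∩B| / |A∪B| for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    return 1.0 if union == 0 else np.logical_and(pred, truth).sum() / union

# Toy 4x4 "lesion" masks: prediction has one extra pixel
pred  = np.array([[0,1,1,0],[0,1,1,0],[0,0,0,0],[0,0,0,0]])
truth = np.array([[0,1,1,0],[0,1,0,0],[0,0,0,0],[0,0,0,0]])
```

Here the intersection is 3 pixels against mask sizes 4 and 3, giving Dice 6/7 and IoU 3/4; Dice is always at least as large as IoU for the same pair of masks.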
Affiliation(s)
- Hassan Homayoun, Hossein Ebrahimpour-komleh: Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Kashan, Kashan, Iran

8.

9. Security and Privacy of Cloud- and IoT-Based Medical Image Diagnosis Using Fuzzy Convolutional Neural Network. Computational Intelligence and Neuroscience 2021; 2021:6615411. PMID: 33790958. PMCID: PMC7997756. DOI: 10.1155/2021/6615411.
Abstract
In recent times, security in cloud computing has become a significant concern in healthcare services, specifically in medical data storage and disease prediction. A large volume of data is produced in the healthcare environment every day due to developments in medical devices. Thus, cloud computing technology is utilised for storing, processing, and handling these large volumes of data in a highly secure manner, protected from various attacks. This paper focuses on disease classification utilising image processing in a secured cloud computing environment, using an extended zigzag image encryption scheme with high tolerance to different data attacks. Secondly, a fuzzy convolutional neural network (FCNN) algorithm is proposed for effective classification of images. The decrypted images are used for classification of cancer levels with different layers of training. After classification, the results are transferred to the concerned doctors and patients for the further treatment process. The experimental process is carried out utilising a standard dataset. The results conclude that the proposed algorithm performs better than other existing algorithms and can be effectively utilised for medical image diagnosis.
10. Raza K, Singh NK. A Tour of Unsupervised Deep Learning for Medical Image Analysis. Curr Med Imaging 2021; 17:1059-1077. PMID: 33504314. DOI: 10.2174/1573405617666210127154257.
Abstract
BACKGROUND Interpretation of medical images for the diagnosis and treatment of complex diseases from high-dimensional and heterogeneous data remains a key challenge in transforming healthcare. In the last few years, both supervised and unsupervised deep learning have achieved promising results in the area of medical image analysis. Several reviews of supervised deep learning have been published, but hardly any rigorous review of unsupervised deep learning for medical image analysis is available. OBJECTIVES The objective of this review is to systematically present various unsupervised deep learning models, tools, and benchmark datasets applied to medical image analysis. The discussed models include autoencoders and their variants, restricted Boltzmann machines (RBM), deep belief networks (DBN), deep Boltzmann machines (DBM), and generative adversarial networks (GAN). Further, future research opportunities and challenges of unsupervised deep learning techniques for medical image analysis are also discussed. CONCLUSION Currently, interpretation of medical images for diagnostic purposes is usually performed by human experts, who may be replaced by computer-aided diagnosis due to advancements in machine learning techniques, including deep learning, and the availability of cheap computing infrastructure through cloud computing. Both supervised and unsupervised machine learning approaches are widely applied in medical image analysis, each with certain pros and cons. Since human supervision is not always available, adequate, or unbiased, unsupervised learning algorithms offer great promise and many advantages for biomedical image analysis.
Affiliation(s)
- Khalid Raza: Department of Computer Science, Jamia Millia Islamia, New Delhi, India

11. Angulakshmi M, Deepa M. A Review on Deep Learning Architecture and Methods for MRI Brain Tumour Segmentation. Curr Med Imaging 2021; 17:695-706. PMID: 33423651. DOI: 10.2174/1573405616666210108122048.
Abstract
BACKGROUND The automatic segmentation of brain tumours from MRI medical images is the main topic covered in this review. Recently, deep learning-based approaches have provided state-of-the-art performance in image classification, segmentation, object detection, and tracking tasks. INTRODUCTION The core feature of the deep learning approach is the hierarchical representation of features learned from images, thus avoiding domain-specific handcrafted features. METHODS In this review paper, we survey deep learning architectures and methods for MRI brain tumour segmentation. First, we discuss the basic architectures and approaches of deep learning methods. Secondly, we review the literature on MRI brain tumour segmentation using deep learning methods and its multimodality fusion. Then, the advantages and disadvantages of each method are analyzed, and finally, we conclude with a discussion of the merits and challenges of deep learning techniques. RESULTS A review of brain tumour identification using deep learning techniques is presented. CONCLUSION The review may help researchers to better focus their work on this topic.
Affiliation(s)
- M Angulakshmi, M Deepa: School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India

12. A machine learning-based framework for diagnosis of COVID-19 from chest X-ray images. Interdiscip Sci 2021; 13:103-117. PMID: 33387306. PMCID: PMC7776293. DOI: 10.1007/s12539-020-00403-6.
Abstract
Coronavirus disease (COVID-19) has been acknowledged as a pandemic by the WHO, and people all over the world are vulnerable to this virus. Alternative tools are needed that can help in the diagnosis of the coronavirus. The authors of this article investigated the potential of machine learning methods for automatic diagnosis of the coronavirus with high accuracy from X-ray images. Two of the most commonly used classifiers were selected: logistic regression (LR) and convolutional neural networks (CNN). The main reason was to make the system fast and efficient. Moreover, a dimensionality reduction approach based on principal component analysis (PCA) was also investigated to further speed up the learning process and improve the classification accuracy by selecting highly discriminative features. Deep learning-based methods demand large amounts of training data compared to conventional approaches, yet an adequate amount of labelled training samples was not available for COVID-19 X-ray images. Therefore, a data augmentation technique using a generative adversarial network (GAN) was employed to increase the number of training samples and reduce the overfitting problem. We used an online available dataset and incorporated GAN-generated images to obtain 500 X-ray images in total for this study. Both CNN and LR showed encouraging results for COVID-19 patient identification. For positive-case identification, the LR and CNN models showed 95.2-97.6% overall accuracy without PCA and 97.6-100% with PCA, respectively.
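The PCA-plus-classifier pipeline described in this entry can be sketched with plain numpy: project the features onto the top principal components, then fit a logistic-regression classifier on the projections. Random blobs stand in for flattened X-ray features; the sizes, learning rate, and epoch count below are illustrative assumptions:

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """Center X and project it onto its top principal components via SVD."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components], mean

def train_logistic(X, y, lr=0.5, epochs=500):
    """Plain gradient-descent binary logistic regression."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * (p - y).mean()
    return w, b

# Two well-separated Gaussian blobs stand in for COVID/normal feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 10)), rng.normal(1, 0.3, (20, 10))])
y = np.array([0] * 20 + [1] * 20)

Z, components, mean = pca_fit_transform(X, n_components=2)
w, b = train_logistic(Z, y)
pred = (1.0 / (1.0 + np.exp(-(Z @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
```

Reducing to a few components before classification is what speeds up learning and, when the discarded directions are mostly noise, can also improve accuracy, as the abstract reports.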
13. Cui Y, Zhao S, Chen Y, Han J, Guo L, Xie L, Liu T. Modeling Brain Diverse and Complex Hemodynamic Response Patterns via Deep Recurrent Autoencoder. IEEE Trans Cogn Dev Syst 2020. DOI: 10.1109/tcds.2019.2949195.

14. Ramakrishna RR, Abd Hamid Z, Wan Zaki WMD, Huddin AB, Mathialagan R. Stem cell imaging through convolutional neural networks: current issues and future directions in artificial intelligence technology. PeerJ 2020; 8:e10346. PMID: 33240655. PMCID: PMC7680049. DOI: 10.7717/peerj.10346.
Abstract
Stem cells are primitive and precursor cells with the potential to reproduce into diverse mature and functional cell types in the body throughout the developmental stages of life. Their remarkable potential has led to numerous medical discoveries and breakthroughs in science. As a result, stem cell-based therapy has emerged as a new subspecialty in medicine. One promising stem cell being investigated is the induced pluripotent stem cell (iPSC), which is obtained by genetically reprogramming mature cells to convert them into embryonic-like stem cells. These iPSCs are used to study the onset of disease, drug development, and medical therapies. However, functional studies on iPSCs involve the analysis of iPSC-derived colonies through manual identification, which is time-consuming, error-prone, and training-dependent. Thus, an automated instrument for the analysis of iPSC colonies is needed. Recently, artificial intelligence (AI) has emerged as a novel technology to tackle this challenge. In particular, deep learning, a subfield of AI, offers an automated platform for analyzing iPSC colonies and other colony-forming stem cells. Deep learning rectifies data features using a convolutional neural network (CNN), a type of multi-layered neural network that can play an innovative role in image recognition. CNNs are able to distinguish cells with high accuracy based on morphologic and textural changes. Therefore, CNNs have the potential to create a future field of deep learning tasks aimed at solving various challenges in stem cell studies. This review discusses the progress and future of CNNs in stem cell imaging for therapy and research.
Affiliation(s)
- Ramanaesh Rao Ramakrishna, Zariyantey Abd Hamid, Ramya Mathialagan: Biomedical Science Programme and Centre for Diagnostic, Therapeutic and Investigative Science, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
- Wan Mimi Diyana Wan Zaki, Aqilah Baseri Huddin: Department of Electrical, Electronic & Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia

15. Peña-Solórzano CA, Albrecht DW, Bassed RB, Burke MD, Dimmock MR. Findings from machine learning in clinical medical imaging applications - Lessons for translation to the forensic setting. Forensic Sci Int 2020; 316:110538. PMID: 33120319. PMCID: PMC7568766. DOI: 10.1016/j.forsciint.2020.110538.
Abstract
Machine learning (ML) techniques are increasingly being used in clinical medical imaging to automate distinct processing tasks. In post-mortem forensic radiology, the use of these algorithms presents significant challenges due to variability in organ position, structural changes from decomposition, inconsistent body placement in the scanner, and the presence of foreign bodies. Existing ML approaches in clinical imaging can likely be transferred to the forensic setting with careful consideration to account for the increased variability and temporal factors that affect the data used to train these algorithms. Additional steps are required to deal with these issues, by incorporating the possible variability into the training data through data augmentation, or by using atlases as a pre-processing step to account for death-related factors. A key application of ML would be then to highlight anatomical and gross pathological features of interest, or present information to help optimally determine the cause of death. In this review, we highlight results and limitations of applications in clinical medical imaging that use ML to determine key implications for their application in the forensic setting.
Affiliation(s)
- Carlos A Peña-Solórzano, Matthew R Dimmock: Department of Medical Imaging and Radiation Sciences, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- David W Albrecht: Clayton School of Information Technology, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- Richard B Bassed, Michael D Burke: Victorian Institute of Forensic Medicine, 57-83 Kavanagh St., Southbank, Melbourne, VIC 3006, Australia; Department of Forensic Medicine, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia

16. Xi X, Meng X, Qin Z, Nie X, Yin Y, Chen X. IA-net: informative attention convolutional neural network for choroidal neovascularization segmentation in OCT images. Biomedical Optics Express 2020; 11:6122-6136. PMID: 33282479. PMCID: PMC7687935. DOI: 10.1364/boe.400816.
Abstract
Choroidal neovascularization (CNV) is a characteristic feature of wet age-related macular degeneration (AMD). Quantification of CNV is useful to clinicians in the diagnosis and treatment of CNV disease. Before quantification, the CNV lesion must be delineated by automatic CNV segmentation technology. Recently, deep learning methods have achieved significant success in medical image segmentation. However, some CNVs are small objects that are hard to discriminate, resulting in performance degradation. In addition, it is difficult to train an effective network for accurate segmentation due to the complicated characteristics of CNV in OCT images. To tackle these two challenges, this paper proposes a novel Informative Attention Convolutional Neural Network (IA-net) for automatic CNV segmentation in OCT images. Considering that the attention mechanism can enhance the discriminative power of the interesting regions in the feature maps, an attention enhancement block is developed by introducing an additional attention constraint. It forces the model to pay high attention to CNV in the learned feature maps, improving the discriminative ability of the learned CNV features, which is useful for improving segmentation performance on small CNV. For accurate pixel classification, a novel informative loss is proposed that incorporates an informative attention map. It focuses training on a set of informative samples that are difficult to predict. The trained model therefore learns enough information to classify these informative samples, further improving performance. Experimental results on our database demonstrate that the proposed method outperforms traditional CNV segmentation methods.
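The idea of focusing training on hard, informative pixels can be illustrated with a per-pixel weighted cross-entropy. The focal-style weighting below is a simplified stand-in for the paper's informative loss, not its actual formulation, and all values are toy assumptions:

```python
import numpy as np

def weighted_pixel_bce(probs, truth, gamma=2.0):
    """Per-pixel binary cross-entropy re-weighted toward hard pixels.

    Weights follow (1 - p_correct)**gamma, so confidently correct pixels
    contribute little while uncertain pixels dominate the loss. This is a
    focal-style weighting used here only to illustrate the general idea.
    """
    probs = np.clip(np.asarray(probs, dtype=float), 1e-7, 1 - 1e-7)
    p_correct = np.where(truth == 1, probs, 1 - probs)
    weights = (1 - p_correct) ** gamma
    return float((weights * -np.log(p_correct)).mean())

# Toy 2x2 mask: the uncertain pixel (p = 0.55) dominates the weighted loss
probs = np.array([[0.95, 0.90], [0.55, 0.05]])  # predicted foreground probability
truth = np.array([[1, 1], [1, 0]])              # ground-truth binary mask
loss_focused = weighted_pixel_bce(probs, truth)            # gamma = 2
loss_plain = weighted_pixel_bce(probs, truth, gamma=0.0)   # ordinary BCE
```

Because every weight is below one, the focused loss is smaller in absolute terms, but its gradient is concentrated on the pixels the model is unsure about, which is the training behavior the paper aims for.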
Collapse
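The informative loss sketched in this abstract weights the per-pixel loss by an attention map so that hard, informative pixels dominate training. A minimal numpy sketch of that idea (the function name, the error-based attention map, and the normalization are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def informative_loss(probs, labels, attention, eps=1e-12):
    """Attention-weighted binary cross-entropy over a segmentation map.

    probs     : (H, W) predicted foreground probabilities
    labels    : (H, W) binary ground truth
    attention : (H, W) non-negative weights; informative (hard) pixels
                get larger weights so training focuses on them
    """
    ce = -(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))
    w = attention / (attention.sum() + eps)   # normalize weights to sum to 1
    return float((w * ce).sum())

# Hard pixels (large error on the true class) can drive the attention map:
probs = np.array([[0.9, 0.2], [0.6, 0.1]])
labels = np.array([[1.0, 0.0], [1.0, 0.0]])
attention = np.abs(labels - probs)            # larger where the model errs
loss = informative_loss(probs, labels, attention)
```

With uniform attention the loss reduces to the plain mean cross-entropy; concentrating the weights on poorly predicted pixels raises their contribution to the total.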
Affiliation(s)
- Xiaoming Xi, School of Computer Science and Technology, Shandong Jianzhu University, 250101, China
- Xianjing Meng, School of Computer Science and Technology, Shandong University of Finance and Economics, 250014, China
- Zheyun Qin, School of Software, Shandong University, 250101, China
- Xiushan Nie, School of Computer Science and Technology, Shandong Jianzhu University, 250101, China
- Yilong Yin, School of Software, Shandong University, 250101, China
- Xinjian Chen, School of Electronic and Information Engineering, Soochow University, 215006, China
Collapse
|
17
|
Hassan M, Ali S, Alquhayz H, Safdar K. Developing intelligent medical image modality classification system using deep transfer learning and LDA. Sci Rep 2020; 10:12868. [PMID: 32732962 PMCID: PMC7393510 DOI: 10.1038/s41598-020-69813-2] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2019] [Accepted: 07/19/2020] [Indexed: 01/07/2023] Open
Abstract
Rapid advances in imaging technology generate an enormous amount of heterogeneous medical data for disease diagnosis and rehabilitation. Radiologists may require related clinical cases from medical archives for analysis and disease diagnosis. Retrieving the associated clinical cases automatically, efficiently, and accurately from a substantial medical image archive is challenging due to the diversity of diseases and imaging modalities. We propose an efficient and accurate approach to medical image modality classification that can be used for retrieval of clinical cases from large medical repositories. The approach applies transfer learning with a pre-trained ResNet50 deep learning model for optimized feature extraction, followed by linear discriminant analysis classification (TLRN-LDA). Extensive experiments are performed on the challenging standard benchmark ImageCLEF-2012 dataset of 31 classes. The developed approach yields an improved average classification accuracy of 87.91%, up to 10% higher than state-of-the-art approaches on the same dataset. Moreover, hand-crafted features are extracted for comparison. The performance of the TLRN-LDA system demonstrates its effectiveness over state-of-the-art systems. The approach may be deployed in diagnostic centers to assist practitioners with accurate and efficient clinical case retrieval and disease diagnosis.
Collapse
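The pipeline above classifies frozen deep features with linear discriminant analysis. A self-contained two-class Fisher LDA in plain numpy (a sketch of the classification stage only; the ResNet50 features are replaced here by synthetic vectors, and the regularization term is an added assumption for numerical stability):

```python
import numpy as np

def fit_lda(X0, X1, reg=1e-6):
    """Fisher's linear discriminant for two classes.

    Returns a projection vector w and threshold t:
    predict class 1 when x @ w > t.
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter, regularized for invertibility
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)
    Sw += reg * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, mu1 - mu0)
    t = w @ (mu0 + mu1) / 2.0
    return w, t

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(200, 16))   # stand-in "deep features", class 0
X1 = rng.normal(1.5, 1.0, size=(200, 16))   # class 1, shifted means
w, t = fit_lda(X0, X1)
acc = 0.5 * ((X0 @ w <= t).mean() + (X1 @ w > t).mean())
```

For the real system, the rows of `X0`/`X1` would be pooled activations from the pre-trained network rather than Gaussian draws.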
Affiliation(s)
- Mehdi Hassan, Department of Computer Science, Air University, PAF Complex Sector E-9, Islamabad, Pakistan
- Safdar Ali, Directorate General National Repository, Islamabad, Pakistan
- Hani Alquhayz, Department of Computer Science and Information, College of Science in Zulfi, Majmaah University, Al-Majmaah, 11952, Saudi Arabia
- Khushbakht Safdar, Al Nafees Medical College and Teaching Hospital, ISRA University, Lehtrar Road, Islamabad, Pakistan
Collapse
|
18
|
A review of deep learning with special emphasis on architectures, applications and recent trends. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.105596] [Citation(s) in RCA: 121] [Impact Index Per Article: 24.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
19
|
Yang X, Lin Y, Wang Z, Li X, Cheng KT. Bi-Modality Medical Image Synthesis Using Semi-Supervised Sequential Generative Adversarial Networks. IEEE J Biomed Health Inform 2020; 24:855-865. [DOI: 10.1109/jbhi.2019.2922986] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
20
|
Wu Y, Lu X, Hong J, Lin W, Chen S, Mou S, Feng G, Yan R, Cheng Z. Detection of extremity chronic traumatic osteomyelitis by machine learning based on computed-tomography images: A retrospective study. Medicine (Baltimore) 2020; 99:e19239. [PMID: 32118728 PMCID: PMC7478522 DOI: 10.1097/md.0000000000019239] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/17/2023] Open
Abstract
Despite the availability of a series of tests, detection of chronic traumatic osteomyelitis remains laborious in clinical practice. We hypothesized that machine learning based on computed tomography (CT) images would provide better diagnostic performance for extremity traumatic chronic osteomyelitis than serological biomarkers alone. A retrospective study was carried out to collect medical data from patients with extremity traumatic osteomyelitis according to the criteria of the Musculoskeletal Infection Society. In each patient, serum levels of C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), and D-dimer were measured, and a CT scan of the extremity was conducted 7 days after admission, preoperatively. A deep residual network (ResNet) machine learning model was established for recognition of bone lesions on the CT images. A total of 28,718 CT images from 163 adult patients were included. We randomly extracted 80% of the CT images from each patient for training, 10% for validation, and 10% for testing. Our results showed that machine learning (83.4%) outperformed CRP (53.2%), ESR (68.8%), and D-dimer (68.1%) in accuracy. Machine learning (88.0%) also demonstrated the highest sensitivity compared with CRP (50.6%), ESR (73.0%), and D-dimer (51.7%). In specificity, machine learning (77.0%) was better than CRP (59.4%) and ESR (62.2%), but not D-dimer (83.8%). Our findings indicate that machine learning based on CT images is an effective and promising avenue for the detection of chronic traumatic osteomyelitis in the extremity.
Collapse
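The accuracy, sensitivity, and specificity figures compared in this abstract follow the standard confusion-matrix definitions, which a small helper makes concrete (illustrative code, not the authors'):

```python
def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity (TPR), and specificity (TNR) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # recall on diseased cases
    specificity = tn / (tn + fp) if tn + fp else 0.0   # recall on healthy cases
    return accuracy, sensitivity, specificity

# 1 = osteomyelitis, 0 = no osteomyelitis (toy labels)
acc, sens, spec = diagnostic_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

Note that a test can trade one metric for another, which is why D-dimer can beat the model on specificity while losing on sensitivity and accuracy.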
Affiliation(s)
- Yifan Wu, Department of Surgery, Zhejiang University Hospital
- Xin Lu, College of Information Science & Electronic Engineering, Key Lab. of Advanced Micro/Nano Electronics Devices & Smart Systems of Zhejiang, Zhejiang University
- Jianqiao Hong, Department of Orthopedic Surgery, Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Weijie Lin, College of Information Science & Electronic Engineering, Key Lab. of Advanced Micro/Nano Electronics Devices & Smart Systems of Zhejiang, Zhejiang University
- Shiming Chen, Department of Surgery, Shaoxing Second Hospital, Shaoxing, Zhejiang Province, China
- Shenghong Mou, College of Information Science & Electronic Engineering, Key Lab. of Advanced Micro/Nano Electronics Devices & Smart Systems of Zhejiang, Zhejiang University
- Gang Feng, Department of Orthopedic Surgery, Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Ruijian Yan, Department of Orthopedic Surgery, Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Zhiyuan Cheng, College of Information Science & Electronic Engineering, Key Lab. of Advanced Micro/Nano Electronics Devices & Smart Systems of Zhejiang, Zhejiang University
Collapse
|
21
|
Liu B, Chi W, Li X, Li P, Liang W, Liu H, Wang W, He J. Evolving the pulmonary nodules diagnosis from classical approaches to deep learning-aided decision support: three decades' development course and future prospect. J Cancer Res Clin Oncol 2020; 146:153-185. [PMID: 31786740 DOI: 10.1007/s00432-019-03098-5] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Accepted: 11/25/2019] [Indexed: 02/06/2023]
Abstract
PURPOSE Lung cancer is the most common cause of cancer deaths worldwide, and its mortality can be reduced significantly by early diagnosis and screening. Since the 1960s, driven by the pressing need to accurately and effectively interpret the massive volume of chest images generated daily, computer-assisted diagnosis of pulmonary nodules has opened up new opportunities to relax the limitations imposed by physicians' subjectivity, experience, and fatigue. Fair access to reliable and affordable computer-assisted diagnosis will also help fight inequalities in incidence and mortality between populations. Significant and remarkable advances have been achieved since the 1980s, and consistent efforts have been devoted to the grand challenges of accurately detecting pulmonary nodules with high sensitivity at a low false-positive rate, and of precisely differentiating between benign and malignant nodules. However, there has been no comprehensive examination of the techniques whose development has evolved pulmonary nodule diagnosis from classical approaches to machine-learning-assisted decision support. The main goal of this investigation is to provide a comprehensive state-of-the-art review of the computer-assisted nodule detection and benign-malignant classification techniques developed over three decades, which have evolved from the complicated ad hoc analysis pipelines of conventional approaches to simplified, seamlessly integrated deep learning techniques. This review also identifies challenges and highlights opportunities for future work in learning models, learning algorithms, and enhancement schemes for bridging the current state to future prospects and satisfying future demand. CONCLUSION This is the first literature review of the past 30 years' development in computer-assisted diagnosis of lung nodules. The challenges identified and the research opportunities highlighted in this survey are significant for bridging the current state to future prospects and satisfying future demand. The value of multifaceted driving forces and multidisciplinary research is acknowledged; these will bring the computer-assisted diagnosis of pulmonary nodules into the mainstream of clinical medicine, raise the state of the art in clinical applications, and increase the welfare of both physicians and patients. We firmly hold the vision that fair access to reliable, faithful, and affordable computer-assisted diagnosis for early cancer detection will help fight inequalities in incidence and mortality between populations and save more lives.
Collapse
Affiliation(s)
- Bo Liu, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China
- Wenhao Chi, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Xinran Li, Department of Mathematics, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Peng Li, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China
- Wenhua Liang, Department of Thoracic Surgery and Oncology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China; China State Key Laboratory of Respiratory Disease, Guangzhou, China
- Haiping Liu, PET/CT Center, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China; China State Key Laboratory of Respiratory Disease, Guangzhou, China
- Wei Wang, Department of Thoracic Surgery and Oncology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China; China State Key Laboratory of Respiratory Disease, Guangzhou, China
- Jianxing He, Department of Thoracic Surgery and Oncology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China; China State Key Laboratory of Respiratory Disease, Guangzhou, China
Collapse
|
22
|
Choudhary P, Hazra A. Chest disease radiography in twofold: using convolutional neural networks and transfer learning. EVOLVING SYSTEMS 2019. [DOI: 10.1007/s12530-019-09316-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
23
|
Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09788-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
24
|
Xu R, Cong Z, Ye X, Hirano Y, Kido S, Gyobu T, Kawata Y, Honda O, Tomiyama N. Pulmonary Textures Classification via a Multi-Scale Attention Network. IEEE J Biomed Health Inform 2019; 24:2041-2052. [PMID: 31689221 DOI: 10.1109/jbhi.2019.2950006] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Precise classification of pulmonary textures is crucial to developing a computer-aided diagnosis (CAD) system for diffuse lung diseases (DLDs). Although deep learning techniques have been applied to this task, classification performance has not satisfied clinical requirements, since commonly used deep networks built by stacking convolutional blocks cannot learn feature representations discriminative enough to distinguish complex pulmonary textures. To address this problem, we design a multi-scale attention network (MSAN) architecture comprising several stacked residual attention modules followed by a multi-scale fusion module. Our deep network can not only exploit powerful information at different scales but also automatically select optimal features for a more discriminative feature representation. In addition, we develop visualization techniques to make the proposed deep model transparent to humans. The proposed method is evaluated on a large dataset. Experimental results show that our method achieves an average classification accuracy of 94.78% and an average F-value of 0.9475 in the classification of 7 categories of pulmonary textures. Visualization results intuitively explain the working behavior of the deep network. The proposed method achieves state-of-the-art performance in classifying pulmonary textures on high-resolution CT images.
Collapse
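The multi-scale fusion step, pooling features at several scales and letting attention weights choose among them, can be sketched in numpy; the softmax gating below is an illustrative stand-in for the paper's learned fusion module, not its actual architecture:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def avg_pool(x, k):
    """Non-overlapping k x k average pooling of a 2-D map x."""
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def multi_scale_fuse(x, scales=(1, 2, 4), logits=None):
    """Pool x at several scales, upsample back, and blend with softmax weights."""
    if logits is None:
        logits = np.zeros(len(scales))          # uniform attention by default
    w = softmax(np.asarray(logits, dtype=float))
    fused = np.zeros_like(x, dtype=float)
    for wi, k in zip(w, scales):
        pooled = avg_pool(x, k)
        fused += wi * np.kron(pooled, np.ones((k, k)))  # nearest-neighbor upsample
    return fused

x = np.arange(16, dtype=float).reshape(4, 4)
y = multi_scale_fuse(x)
```

Driving the gating logits toward one scale recovers that scale's features alone; in the real network the logits would be produced by a learned attention branch.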
|
25
|
Cui Y, Zhao S, Wang H, Xie L, Chen Y, Han J, Guo L, Zhou F, Liu T. Identifying Brain Networks at Multiple Time Scales via Deep Recurrent Neural Network. IEEE J Biomed Health Inform 2019; 23:2515-2525. [PMID: 30475739 PMCID: PMC6914656 DOI: 10.1109/jbhi.2018.2882885] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
For decades, task functional magnetic resonance imaging (fMRI) has been a powerful noninvasive tool to explore the organizational architecture of human brain function. Researchers have developed a variety of brain network analysis methods for task fMRI data, including the general linear model, independent component analysis, and sparse representation methods. However, as more and more studies demonstrate, these shallow models are limited in faithfully reconstructing and modeling the hierarchical and temporal structures of brain networks. Recently, recurrent neural networks (RNNs) have exhibited a strong ability to model hierarchical and temporal dependence in the machine learning field, which may suit task fMRI data modeling. To explore these possible advantages of RNNs for task fMRI data, we propose a novel deep recurrent neural network (DRNN) framework to model functional brain networks from task fMRI data. Experimental results on the motor task fMRI data of the Human Connectome Project 900-subject release demonstrate that the proposed DRNN can not only faithfully reconstruct functional brain networks but also identify more meaningful brain networks at multiple time scales that are overlooked by traditional shallow models. In general, this work provides an effective and powerful approach to identifying functional brain networks at multiple time scales from task fMRI data.
Collapse
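The DRNN above stacks recurrent layers over fMRI time series; the core recurrence of a single vanilla RNN layer is compact in numpy (a generic sketch of the recurrence only, not the authors' architecture; the dimensions and random weights are illustrative):

```python
import numpy as np

def rnn_forward(X, Wx, Wh, b):
    """Vanilla RNN forward pass over a sequence.

    X  : (T, d_in)  input time series (e.g., fMRI signals per time point)
    Wx : (d_in, d_h), Wh : (d_h, d_h), b : (d_h,)
    Returns hidden states H of shape (T, d_h).
    """
    T, h = X.shape[0], Wh.shape[0]
    H = np.zeros((T, h))
    prev = np.zeros(h)
    for t in range(T):
        # each hidden state mixes the current input with the previous state
        prev = np.tanh(X[t] @ Wx + prev @ Wh + b)
        H[t] = prev
    return H

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))                  # 10 time points, 3 signals
H = rnn_forward(X, rng.normal(size=(3, 4)) * 0.5,
                rng.normal(size=(4, 4)) * 0.5, np.zeros(4))
```

Stacking such layers, with each layer consuming the hidden states of the one below, gives the "deep" recurrence that lets different layers track different time scales.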
Affiliation(s)
- Yan Cui, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, 310027, China
- Shijie Zhao, School of Automation, Northwestern Polytechnical University, Xi’an, 710072, China
- Han Wang, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, 310027, China
- Li Xie, College of Biomedical Engineering & Instrument Science, and the State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, 310027, China
- Yaowu Chen, College of Biomedical Engineering & Instrument Science, and Zhejiang Provincial Key Laboratory for Network Multimedia Technologies, Zhejiang University, Hangzhou, 310027, China
- Junwei Han, School of Automation, Northwestern Polytechnical University, Xi’an, 710072, China
- Lei Guo, School of Automation, Northwestern Polytechnical University, Xi’an, 710072, China
- Fan Zhou, College of Biomedical Engineering & Instrument Science, Zhejiang University, and the Zhejiang University Embedded System Engineering Research Center, Ministry of Education of China, Hangzhou, 310027, China
- Tianming Liu, Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, 30602, USA
Collapse
|
26
|
Kim GB, Jung KH, Lee Y, Kim HJ, Kim N, Jun S, Seo JB, Lynch DA. Comparison of Shallow and Deep Learning Methods on Classifying the Regional Pattern of Diffuse Lung Disease. J Digit Imaging 2019; 31:415-424. [PMID: 29043528 DOI: 10.1007/s10278-017-0028-9] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022] Open
Abstract
This study aimed to compare shallow and deep learning for classifying the patterns of interstitial lung diseases (ILDs). Using high-resolution computed tomography images, two experienced radiologists marked 1200 regions of interest (ROIs), of which 600 ROIs were acquired with a GE scanner and 600 with a Siemens scanner; each group of 600 ROIs consisted of 100 ROIs for each of six subregion classes: normal and five regional pulmonary disease patterns (ground-glass opacity, consolidation, reticular opacity, emphysema, and honeycombing). We employed a convolutional neural network (CNN) with six learnable layers: four convolution layers and two fully connected layers. The classification results were compared with those of a shallow learning method, a support vector machine (SVM). The CNN classifier showed significantly better accuracy than the SVM classifier, by 6-9%. As the number of convolution layers increased, the classification accuracy of the CNN improved from 81.27% to 95.12%. Especially in cases showing pathological ambiguity, such as between normal and emphysema or between honeycombing and reticular opacity, adding convolution layers greatly reduced the misclassification rate between the cases. In conclusion, the CNN classifier showed significantly greater accuracy than the SVM classifier, and the results imply structural characteristics inherent to the specific ILD patterns.
Collapse
Affiliation(s)
- Guk Bae Kim, Biomedical Engineering Research Center, Asan Institute of Life Science, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul, Republic of Korea
- Kyu-Hwan Jung, VUNO, 6F, 507, Gangnamdae-ro, Seocho-gu, Seoul, Republic of Korea
- Yeha Lee, VUNO, 6F, 507, Gangnamdae-ro, Seocho-gu, Seoul, Republic of Korea
- Hyun-Jun Kim, VUNO, 6F, 507, Gangnamdae-ro, Seocho-gu, Seoul, Republic of Korea
- Namkug Kim, Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul, 138-736, Republic of Korea
- Sanghoon Jun, Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul, 138-736, Republic of Korea
- Joon Beom Seo, Department of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul, 138-736, Republic of Korea
- David A Lynch, Department of Radiology, National Jewish Medical and Research Center, Denver, CO, USA
Collapse
|
27
|
Gao Q, Rohr K. A Global Method for Non-Rigid Registration of Cell Nuclei in Live Cell Time-Lapse Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2259-2270. [PMID: 30835217 DOI: 10.1109/tmi.2019.2901918] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Non-rigid registration of cell nuclei in time-lapse microscopy images can be achieved by estimating deformation fields using optical flow methods. In contrast to the local optical flow models employed in existing non-rigid registration methods, we introduce approaches based on a global optical flow model. Our registration model consists of a data fidelity term and a regularization term. We compared different regularizers for the deformation fields and found that a convex quadratic function is more suitable than non-convex ones. To improve robustness, we propose an adaptive weighting scheme based on the statistics of the noise in fluorescence microscopy images, as well as a combined local-global scheme. Moreover, we extend the global method by exploiting high-order image features. The most suitable high-order features are determined by learning two generative image models, namely fields of experts and the convolutional Gaussian restricted Boltzmann machine, whose formulations are both consistent with the assumption of high-order feature constancy in the registration model. Using multiple datasets of real 2D and 3D live cell microscopy image sequences as well as synthetic image data, we demonstrate that our proposed approach outperforms previous methods in both registration accuracy and computational efficiency.
Collapse
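A global optical flow model with a convex quadratic regularizer is the classical Horn-Schunck formulation; a compact numpy version of its Jacobi-style iteration (a textbook sketch under simplifying assumptions, not the paper's adaptively weighted or high-order variant):

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=50):
    """Global (Horn-Schunck) optical flow with quadratic regularization.

    Minimizes sum (Ix*u + Iy*v + It)^2 + alpha^2 * (|grad u|^2 + |grad v|^2)
    via the classical Jacobi-style update. Returns the flow fields (u, v).
    """
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    Iy, Ix = np.gradient(I1)          # spatial derivatives (axis 0 = rows)
    It = I2 - I1                      # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def avg(f):                       # 4-neighbor average, edges replicated
        p = np.pad(f, 1, mode='edge')
        return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

    for _ in range(n_iter):
        ua, va = avg(u), avg(v)
        common = (Ix * ua + Iy * va + It) / (alpha**2 + Ix**2 + Iy**2)
        u = ua - Ix * common
        v = va - Iy * common
    return u, v

I = np.zeros((16, 16))
I[4:8, 4:8] = 1.0                     # synthetic "nucleus"
u, v = horn_schunck(I, I)             # identical frames: zero motion expected
```

The quadratic regularizer is what makes each update a simple linear smoothing step; the paper's contributions (noise-adaptive weights, combined local-global data terms, learned high-order features) would replace the data term and weighting here.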
|
28
|
Brain tumor classification for MR images using transfer learning and fine-tuning. Comput Med Imaging Graph 2019; 75:34-46. [DOI: 10.1016/j.compmedimag.2019.05.001] [Citation(s) in RCA: 195] [Impact Index Per Article: 32.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2018] [Revised: 02/09/2019] [Accepted: 05/13/2019] [Indexed: 01/19/2023]
|
29
|
Jansen MJA, Kuijf HJ, Veldhuis WB, Wessels FJ, Viergever MA, Pluim JPW. Automatic classification of focal liver lesions based on MRI and risk factors. PLoS One 2019; 14:e0217053. [PMID: 31095624 PMCID: PMC6522218 DOI: 10.1371/journal.pone.0217053] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2018] [Accepted: 05/03/2019] [Indexed: 12/13/2022] Open
Abstract
OBJECTIVES Accurate classification of focal liver lesions is an important part of liver disease diagnostics. In clinical practice, the lesion type is often determined from the abdominal MR examination, which includes T2-weighted and dynamic contrast-enhanced (DCE) MR images. To date, only T2-weighted images have been exploited for automatic classification of focal liver lesions. In this study, additional MR sequences and risk factors are used for automatic classification to improve the results and to take a step toward a clinically useful aid for radiologists. MATERIALS AND METHODS Clinical MRI data sets of 95 patients with a total of 125 benign lesions (40 adenomas, 29 cysts, and 56 hemangiomas) and 88 malignant lesions (30 hepatocellular carcinomas (HCC) and 58 metastases) were included in this study. Contrast curve, gray level histogram, and gray level co-occurrence matrix texture features were extracted from the DCE-MR and T2-weighted images. In addition, risk factors including the presence of steatosis, cirrhosis, and a known primary tumor were used as features. The fifty features with the highest ANOVA F-scores were selected and fed to an extremely randomized trees classifier. The classifier evaluation was performed using the leave-one-out principle and receiver operating characteristic (ROC) curve analysis. RESULTS The overall accuracy for the classification of the five major focal liver lesion types is 0.77. The sensitivity/specificity is 0.80/0.78, 0.93/0.93, 0.84/0.82, 0.73/0.56, and 0.62/0.77 for adenoma, cyst, hemangioma, HCC, and metastasis, respectively. CONCLUSION The proposed classification system, using features derived from clinical DCE-MR and T2-weighted images together with additional risk factors, is able to differentiate five common types of lesions and is a step toward a clinically useful aid for focal liver lesion diagnosis.
Collapse
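The gray level co-occurrence matrix texture features used above can be computed directly; a minimal numpy GLCM for a single pixel offset with two common derived features (a simplified sketch of what libraries such as scikit-image provide, not the study's feature extractor):

```python
import numpy as np

def glcm(img, levels, dy=1, dx=0, symmetric=True):
    """Normalized gray-level co-occurrence matrix for one pixel offset (dy, dx)."""
    M = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1   # count the gray-level pair
    if symmetric:
        M += M.T                                     # count both pair orders
    return M / M.sum()

def glcm_features(P):
    """Contrast and energy of a normalized co-occurrence matrix P."""
    i, j = np.indices(P.shape)
    contrast = float(((i - j) ** 2 * P).sum())
    energy = float((P ** 2).sum())
    return contrast, energy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
contrast, energy = glcm_features(P)
```

In a real pipeline the features would be computed at several offsets and angles and concatenated with the histogram and contrast-curve features before feature selection.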
Affiliation(s)
- Mariëlle J. A. Jansen, Image Sciences Institute, University Medical Center Utrecht & Utrecht University, Utrecht, the Netherlands
- Hugo J. Kuijf, Image Sciences Institute, University Medical Center Utrecht & Utrecht University, Utrecht, the Netherlands
- Wouter B. Veldhuis, Department of Radiology, University Medical Center Utrecht, Utrecht, the Netherlands
- Frank J. Wessels, Department of Radiology, University Medical Center Utrecht, Utrecht, the Netherlands
- Max A. Viergever, Image Sciences Institute, University Medical Center Utrecht & Utrecht University, Utrecht, the Netherlands
- Josien P. W. Pluim, Image Sciences Institute, University Medical Center Utrecht & Utrecht University, Utrecht, the Netherlands
Collapse
|
30
|
Zhang Z, Sejdić E. Radiological images and machine learning: Trends, perspectives, and prospects. Comput Biol Med 2019; 108:354-370. [PMID: 31054502 PMCID: PMC6531364 DOI: 10.1016/j.compbiomed.2019.02.017] [Citation(s) in RCA: 77] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2018] [Revised: 02/19/2019] [Accepted: 02/19/2019] [Indexed: 01/18/2023]
Abstract
The application of machine learning to radiological images is an increasingly active research area that is expected to grow over the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as X-ray, computed tomography, magnetic resonance imaging, and positron emission tomography imaging. In many applications, machine-learning-based systems have shown performance comparable to human decision-making. The applications of machine learning are key ingredients of future clinical decision-making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, computer-aided systems, image registration, and content-based image retrieval systems. We also briefly discuss current challenges and future directions for the application of machine learning in radiological imaging. By giving insight into how to take advantage of machine-learning-powered applications, we expect that clinicians will be able to prevent and diagnose diseases more accurately and efficiently.
Collapse
Affiliation(s)
- Zhenwei Zhang, Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
- Ervin Sejdić, Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
Collapse
|
31
|
Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence. J Clin Med 2019; 8:jcm8040462. [PMID: 30959798 PMCID: PMC6518303 DOI: 10.3390/jcm8040462] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2019] [Revised: 04/02/2019] [Accepted: 04/03/2019] [Indexed: 02/07/2023] Open
Abstract
Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. This problem can be alleviated by exploring similar cases in an existing medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different medical imaging modalities. A medical doctor now typically refers to several imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound of various organs, for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for a CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which perform poorly on massive multimodal databases. Although a few previous studies use deep features for classification, the number of classes is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various imaging modalities using an artificial intelligence technique, an enhanced residual network (ResNet). Experimental results with 12 databases comprising 50 classes demonstrate that the accuracy and F1-score of our method are 81.51% and 82.42%, respectively, higher than those of the previous CBMIR method (accuracy of 69.71% and F1-score of 69.63%).
Collapse
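Classification-based retrieval ultimately ranks archive images by the similarity of their feature vectors to a query; a minimal cosine-similarity ranking in numpy (illustrative code; in the paper the feature vectors would come from the enhanced ResNet, not the toy 2-D vectors used here):

```python
import numpy as np

def retrieve(query, gallery, top_k=3):
    """Rank gallery feature vectors by cosine similarity to the query.

    Returns the indices of the top_k most similar gallery rows and
    their similarity scores, best first.
    """
    q = query / np.linalg.norm(query)
    G = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = G @ q                      # cosine similarity of every row to q
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]

gallery = np.array([[1.0, 0.0],       # archive "deep features"
                    [0.0, 1.0],
                    [0.7, 0.7],
                    [-1.0, 0.0]])
order, sims = retrieve(np.array([1.0, 0.1]), gallery, top_k=2)
```

Restricting the gallery to images that the classifier assigned to the query's predicted class is what turns this plain ranking into classification-based retrieval.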
|
32
|
Diamant A, Chatterjee A, Vallières M, Shenouda G, Seuntjens J. Deep learning in head & neck cancer outcome prediction. Sci Rep 2019; 9:2764. [PMID: 30809047 PMCID: PMC6391436 DOI: 10.1038/s41598-019-39206-1] [Citation(s) in RCA: 109] [Impact Index Per Article: 18.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2018] [Accepted: 01/15/2019] [Indexed: 12/21/2022] Open
Abstract
Traditional radiomics involves the extraction of quantitative texture features from medical images in an attempt to determine correlations with clinical endpoints. We hypothesize that convolutional neural networks (CNNs) could enhance the performance of traditional radiomics by detecting image patterns that may not be covered by a traditional radiomic framework. We test this hypothesis by training a CNN to predict treatment outcomes of patients with head and neck squamous cell carcinoma based solely on their pre-treatment computed tomography images. The training (194 patients) and validation sets (106 patients), which are mutually independent and span 4 institutions, come from The Cancer Imaging Archive. Compared to a traditional radiomic framework applied to the same patient cohort, our method achieves an AUC of 0.88 in predicting distant metastasis. When our model is combined with the previous model, the AUC improves to 0.92. Our framework yields models that are shown to explicitly recognize traditional radiomic features, can be directly visualized, and perform accurate outcome prediction.
Collapse
Affiliation(s)
- André Diamant
- Medical Physics Unit, McGill University and Cedars Cancer Center, 1001 Décarie Blvd, Montréal, QC, H4A 3J1, Canada.
| | - Avishek Chatterjee
- Medical Physics Unit, McGill University and Cedars Cancer Center, 1001 Décarie Blvd, Montréal, QC, H4A 3J1, Canada
| | - Martin Vallières
- Medical Physics Unit, McGill University and Cedars Cancer Center, 1001 Décarie Blvd, Montréal, QC, H4A 3J1, Canada
| | - George Shenouda
- Medical Physics Unit, McGill University and Cedars Cancer Center, 1001 Décarie Blvd, Montréal, QC, H4A 3J1, Canada
| | - Jan Seuntjens
- Medical Physics Unit, McGill University and Cedars Cancer Center, 1001 Décarie Blvd, Montréal, QC, H4A 3J1, Canada
| |
Collapse
|
33
|
Hu X, Yu Z. Diagnosis of mesothelioma with deep learning. Oncol Lett 2019; 17:1483-1490. [PMID: 30675203 PMCID: PMC6341823 DOI: 10.3892/ol.2018.9761] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2018] [Accepted: 10/03/2018] [Indexed: 12/14/2022] Open
Abstract
Malignant mesothelioma (MM) is a rare but aggressive cancer. The definitive diagnosis of MM is critical for effective treatment and has important medicolegal significance. However, the definitive diagnosis of MM is challenging due to its composite epithelial/mesenchymal pattern. The aim of the current study was to develop a deep learning method to automatically diagnose MM. A retrospective analysis of 324 participants with or without MM was performed. Significant features were selected using a genetic algorithm (GA) or a ReliefF algorithm performed in MATLAB software. Subsequently, the current study constructed and trained several models based on a backpropagation (BP) algorithm, extreme learning machine algorithm and stacked sparse autoencoder (SSAE) to diagnose MM. A confusion matrix, F-measure and a receiver operating characteristic (ROC) curve were used to evaluate the performance of each model. A total of 34 potential variables were analyzed, while the GA and ReliefF algorithms selected 19 and 5 effective features, respectively. The selected features were used as the inputs of the three models. SSAE and GA+SSAE demonstrated the highest performance in terms of classification accuracy, specificity, F-measure and the area under the ROC curve. Overall, the GA+SSAE model was the preferred model since it required a shorter CPU time and fewer variables. Therefore, the SSAE with GA feature selection was selected as the most accurate model for the diagnosis of MM. The deep learning methods developed based on the GA+SSAE model may assist physicians with the diagnosis of MM.
Collapse
Affiliation(s)
- Xue Hu
- Department of Blood Transfusion, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, P.R. China
| | - Zebo Yu
- Department of Blood Transfusion, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, P.R. China
| |
Collapse
|
34
|
Xu M, Qi S, Yue Y, Teng Y, Xu L, Yao Y, Qian W. Segmentation of lung parenchyma in CT images using CNN trained with the clustering algorithm generated dataset. Biomed Eng Online 2019; 18:2. [PMID: 30602393 PMCID: PMC6317251 DOI: 10.1186/s12938-018-0619-9] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2018] [Accepted: 12/19/2018] [Indexed: 11/24/2022] Open
Abstract
Background Lung segmentation constitutes a critical procedure for any clinical-decision support system aimed at improving the early diagnosis and treatment of lung diseases. Abnormal lungs mainly comprise lung parenchyma, which shares commonalities on CT images across subjects, diseases and CT scanners, and lung lesions, which present various appearances. Segmentation of lung parenchyma can help locate and analyze the neighboring lesions, but is not well studied in the framework of machine learning. Methods We proposed to segment lung parenchyma using a convolutional neural network (CNN) model. To reduce the workload of manually preparing the dataset for training the CNN, a clustering-based method is first proposed. Specifically, after splitting CT slices into image patches, the k-means clustering algorithm with two categories is performed twice, using the mean and the minimum intensity of each image patch, respectively. A cross-shaped verification, a volume intersection, a connected component analysis and a patch expansion are then applied to generate the final dataset. Secondly, we design a CNN architecture consisting of only one convolutional layer with six kernels, followed by one maximum pooling layer and two fully connected layers. Using the generated dataset, a variety of CNN models are trained and optimized, and their performance is evaluated by eightfold cross-validation. A separate validation experiment is further conducted using a dataset of 201 subjects (4.62 billion patches) with lung cancer or chronic obstructive pulmonary disease, scanned by CT or PET/CT. The segmentation results of our method are compared with those yielded by manual segmentation and some available methods. Results A total of 121,728 patches are generated to train and validate the CNN models. After parameter optimization, our CNN model achieves an average F-score of 0.9917 and an area under the curve of up to 0.9991 for classification of lung parenchyma versus non-lung-parenchyma. The obtained model can segment the lung parenchyma accurately for 201 subjects with heterogeneous lung diseases and CT scanners. The overlap ratio between the manual segmentation and that of our method reaches 0.96. Conclusions The results demonstrate that the proposed clustering-based method can generate the training dataset for CNN models. The obtained CNN model can segment lung parenchyma with very satisfactory performance and has the potential to locate and analyze lung lesions.
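The two-category k-means step described in this abstract can be illustrated with a minimal sketch; this is not the authors' code, and the patch intensities below are made up for demonstration (dark, lung-like patches versus bright, non-lung patches).

```python
import numpy as np

def kmeans_two_class(values, n_iter=50):
    """Simple two-category k-means on a 1-D patch statistic
    (e.g., the mean or minimum intensity of each image patch)."""
    values = np.asarray(values, dtype=float)
    # Initialize the two centroids at the extreme values.
    centroids = np.array([values.min(), values.max()])
    for _ in range(n_iter):
        # Assign each patch to its nearest centroid.
        labels = (np.abs(values - centroids[0]) >
                  np.abs(values - centroids[1])).astype(int)
        # Recompute each centroid as the mean of its assigned values.
        new = np.array([values[labels == k].mean() if np.any(labels == k)
                        else centroids[k] for k in (0, 1)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Hypothetical patch mean intensities in Hounsfield units.
means = np.array([-900, -880, -870, -100, -50, -80], dtype=float)
labels, cents = kmeans_two_class(means)
```

In the paper this clustering is run twice, once on the mean and once on the minimum patch intensity, before the verification and expansion steps refine the labels.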
Collapse
Affiliation(s)
- Mingjie Xu
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China
| | - Shouliang Qi
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China. .,Key Laboratory of Medical Image Computing of Northeastern University (Ministry of Education), Shenyang, China.
| | - Yong Yue
- Department of Radiology, Shengjing Hospital of China Medical University, No. 36 Sanhao Street, Shenyang, 110004, China
| | - Yueyang Teng
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China
| | - Lisheng Xu
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China
| | - Yudong Yao
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China.,Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, 07030, USA
| | - Wei Qian
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China.,College of Engineering, University of Texas at El Paso, 500 W University, El Paso, TX, 79902, USA
| |
Collapse
|
35
|
Liu J, Chen F, Wang D. Data Compression Based on Stacked RBM-AE Model for Wireless Sensor Networks. SENSORS 2018; 18:s18124273. [PMID: 30518155 PMCID: PMC6308808 DOI: 10.3390/s18124273] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/06/2018] [Revised: 11/27/2018] [Accepted: 12/01/2018] [Indexed: 11/16/2022]
Abstract
Data compression is very important in wireless sensor networks (WSNs) because of the limited energy of sensor nodes. Data communication accounts for most of the energy consumption, so the lifetime of sensor nodes is usually prolonged by reducing data transmission and reception. In this paper, we propose a new Stacked RBM Auto-Encoder (Stacked RBM-AE) model to compress sensing data, composed of an encoding layer, in which the sensing data is compressed, and a decoding layer, in which the sensing data is reconstructed. Both layers are built from four standard Restricted Boltzmann Machines (RBMs). We also provide an energy optimization method that further reduces the energy consumption of model storage and computation by pruning the parameters of the model. We test the performance of the model using the environment data collected by Intel Lab. When the compression ratio of the model is 10, the average Percentage RMS Difference value is 10.04%, and the average temperature reconstruction error is 0.2815 °C. The node communication energy consumption in WSNs can be reduced by 90%. Compared with the traditional method, the proposed model has better compression efficiency and reconstruction accuracy at the same compression ratio. Our experimental results show that the new neural network model not only applies to data compression for WSNs, but also offers high compression efficiency and good transfer-learning ability.
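The Percentage RMS Difference (PRD) figure quoted in this abstract is a standard reconstruction-error metric; a minimal sketch follows, where the sensor readings and their reconstruction are invented for illustration.

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage RMS Difference between a signal x and its
    reconstruction x_hat: 100 * sqrt(sum((x - x_hat)^2) / sum(x^2)).
    Lower is better; 0 means a perfect reconstruction."""
    x = np.asarray(original, dtype=float)
    x_hat = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

# Hypothetical temperature readings and an imperfect reconstruction.
x = np.array([20.0, 21.0, 22.0, 21.5])
x_hat = np.array([20.1, 20.9, 22.2, 21.4])
```

A PRD of 10.04% at compression ratio 10, as reported above, means the RMS reconstruction error is about one-tenth of the signal's RMS magnitude.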
Collapse
Affiliation(s)
- Jianlin Liu
- School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan 430074, China.
| | - Fenxiong Chen
- School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan 430074, China.
| | - Dianhong Wang
- School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan 430074, China.
| |
Collapse
|
36
|
Anwar SM, Majid M, Qayyum A, Awais M, Alnowami M, Khan MK. Medical Image Analysis using Convolutional Neural Networks: A Review. J Med Syst 2018; 42:226. [DOI: 10.1007/s10916-018-1088-1] [Citation(s) in RCA: 247] [Impact Index Per Article: 35.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2018] [Accepted: 09/25/2018] [Indexed: 01/03/2023]
|
37
|
Wu J, Mazur TR, Ruan S, Lian C, Daniel N, Lashmett H, Ochoa L, Zoberi I, Anastasio MA, Gach HM, Mutic S, Thomas M, Li H. A deep Boltzmann machine-driven level set method for heart motion tracking using cine MRI images. Med Image Anal 2018; 47:68-80. [PMID: 29679848 PMCID: PMC6501847 DOI: 10.1016/j.media.2018.03.015] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2017] [Revised: 03/21/2018] [Accepted: 03/26/2018] [Indexed: 11/19/2022]
Abstract
Heart motion tracking for radiation therapy treatment planning can result in effective motion management strategies to minimize radiation-induced cardiotoxicity. However, automatic heart motion tracking is challenging due to factors that include the complex spatial relationship between the heart and its neighboring structures, dynamic changes in heart shape, and limited image contrast, resolution, and volume coverage. In this study, we developed and evaluated a deep generative shape model-driven level set method to address these challenges. The proposed heart motion tracking method makes use of a heart shape model that characterizes the statistical variations in heart shapes present in a training data set. This heart shape model was established by training a three-layered deep Boltzmann machine (DBM) in order to characterize both local and global heart shape variations. During the tracking phase, a distance regularized level-set evolution (DRLSE) method was applied to delineate the heart contour on each frame of a cine MRI image sequence. The trained shape model was embedded into the DRLSE method as a shape prior term to constrain an evolutional shape to reach the desired heart boundary. Frame-by-frame heart motion tracking was achieved by iteratively mapping the obtained heart contour for each frame to the next frame as a reliable initialization, and performing a level-set evolution. The performance of the proposed motion tracking method was demonstrated using thirty-eight coronal cine MRI image sequences.
Collapse
Affiliation(s)
- Jian Wu
- Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
| | - Thomas R Mazur
- Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
| | - Su Ruan
- Laboratoire LITIS (EA 4108), Equipe Quantif, University of Rouen, Rouen 76183, France
| | - Chunfeng Lian
- Laboratoire LITIS (EA 4108), Equipe Quantif, University of Rouen, Rouen 76183, France
| | - Nalini Daniel
- Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
| | - Hilary Lashmett
- Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
| | - Laura Ochoa
- Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
| | - Imran Zoberi
- Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
| | - Mark A Anastasio
- Department of Biomedical Engineering, Washington University, St. Louis, MO 63110, USA
| | - H Michael Gach
- Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
| | - Sasa Mutic
- Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
| | - Maria Thomas
- Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
| | - Hua Li
- Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA.
| |
Collapse
|
38
|
He JY, Wu X, Jiang YG, Peng Q, Jain R. Hookworm Detection in Wireless Capsule Endoscopy Images With Deep Learning. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:2379-2392. [PMID: 29470172 DOI: 10.1109/tip.2018.2801119] [Citation(s) in RCA: 74] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
As one of the most common human helminths, hookworm is a leading cause of maternal and child morbidity and seriously threatens human health. Recently, wireless capsule endoscopy (WCE) has been applied to automatic hookworm detection; unfortunately, this remains a challenging task. In recent years, deep convolutional neural networks (CNNs) have demonstrated impressive performance in various image and video analysis tasks. In this paper, a novel deep hookworm detection framework is proposed for WCE images, which simultaneously models the visual appearance and tubular patterns of hookworms. This is the first deep learning framework specifically designed for hookworm detection in WCE images. Two CNNs, an edge extraction network and a hookworm classification network, are seamlessly integrated in the proposed framework, which avoids edge-feature caching and speeds up classification. Two edge pooling layers are introduced to integrate the tubular regions induced from the edge extraction network with the feature maps from the hookworm classification network, leading to enhanced feature maps emphasizing the tubular regions. Experiments conducted on one of the largest WCE datasets demonstrate the effectiveness of the proposed hookworm detection framework, which significantly outperforms state-of-the-art approaches. The high sensitivity and accuracy of the proposed method in detecting hookworms show its potential for clinical application.
Collapse
|
39
|
Pereira S, Meier R, McKinley R, Wiest R, Alves V, Silva CA, Reyes M. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation. Med Image Anal 2017; 44:228-244. [PMID: 29289703 DOI: 10.1016/j.media.2017.12.009] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2017] [Revised: 10/15/2017] [Accepted: 12/12/2017] [Indexed: 12/19/2022]
Abstract
Machine learning systems are achieving better performance at the cost of becoming increasingly complex. As a result, they become less interpretable, which may cause some distrust by the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation learning techniques are general methods for automatic feature computation; nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding whether the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology on brain tumor segmentation and penumbra estimation in ischemic stroke lesions, showing its ability to unveil information regarding relationships between imaging modalities, the extracted features, and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images.
Collapse
Affiliation(s)
- Sérgio Pereira
- CMEMS-UMinho Research Unit, University of Minho, Guimarães, Portugal; Centro Algoritmi, University of Minho, Braga, Portugal.
| | - Raphael Meier
- Institute for Surgical Technology and Biomechanics, University of Bern, Switzerland.
| | - Richard McKinley
- Support Center for Advanced Neuroimaging - Institute for Diagnostic and Interventional Neuroradiology, University Hospital and University of Bern, Switzerland.
| | - Roland Wiest
- Support Center for Advanced Neuroimaging - Institute for Diagnostic and Interventional Neuroradiology, University Hospital and University of Bern, Switzerland.
| | - Victor Alves
- Centro Algoritmi, University of Minho, Braga, Portugal.
| | - Carlos A Silva
- CMEMS-UMinho Research Unit, University of Minho, Guimarães, Portugal.
| | - Mauricio Reyes
- Institute for Surgical Technology and Biomechanics, University of Bern, Switzerland.
| |
Collapse
|
40
|
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42:60-88. [PMID: 28778026 DOI: 10.1016/j.media.2017.07.005] [Citation(s) in RCA: 4777] [Impact Index Per Article: 597.1] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 07/24/2017] [Accepted: 07/25/2017] [Indexed: 02/07/2023]
Affiliation(s)
- Geert Litjens
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands.
| | - Thijs Kooi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | | | | | - Francesco Ciompi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Mohsen Ghafoorian
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | | | - Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Clara I Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| |
Collapse
|
41
|
Qayyum A, Anwar SM, Awais M, Majid M. Medical image retrieval using deep convolutional neural network. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.05.025] [Citation(s) in RCA: 109] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
42
|
Antunes S, Esposito A, Palmisanov A, Colantoni C, de Cobelli F, Del Maschio A. Characterization of normal and scarred myocardium based on texture analysis of cardiac computed tomography images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2016:4161-4164. [PMID: 28269199 DOI: 10.1109/embc.2016.7591643] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Accurate detection of myocardial scar using cardiac CT may have a strong clinical impact; however, the main drawback is the insufficient contrast-to-noise ratio of delayed iodine-enhanced (DIE) CT images, which makes accurate segmentation (manual as well as automatic) difficult. In this work, we investigate texture parameters applied to the different scans in order to identify the scans and features that best differentiate normal from scarred myocardium. Experiments on 7 cases of myocarditis show the accuracy of the energy parameter across all scans, as well as the good performance of the angiographic scan (which has higher spatial resolution) with different parameters for the segmentation purpose. Moreover, the best performance was obtained on the baseline scan with the energy feature, with an accuracy of 94%.
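The "energy" texture parameter evaluated in that study is conventionally the sum of squared entries of the normalized gray-level co-occurrence matrix (GLCM); a minimal sketch for a single horizontal pixel offset, using made-up binary patches, is shown below (the patch contents and gray-level count are illustrative, not from the paper).

```python
import numpy as np

def glcm_energy(img, levels, offset=(0, 1)):
    """Energy (angular second moment) of the gray-level
    co-occurrence matrix for one pixel offset (row, col)."""
    img = np.asarray(img)
    dr, dc = offset
    glcm = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    # Count co-occurring gray-level pairs at the given offset.
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[img[r, c], img[r + dr, c + dc]] += 1
    glcm /= glcm.sum()            # normalize to a joint probability
    return float(np.sum(glcm ** 2))

# A perfectly uniform patch concentrates all co-occurrence mass in
# one GLCM cell (energy 1.0); a checkerboard spreads it over two.
uniform = np.zeros((4, 4), dtype=int)
textured = np.array([[0, 1, 0, 1],
                     [1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [1, 0, 1, 0]])
```

High energy thus indicates a homogeneous texture, which is consistent with its use above to separate normal from scarred myocardium.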
Collapse
|
43
|
A Comparison of Texture Features Versus Deep Learning for Image Classification in Interstitial Lung Disease. ACTA ACUST UNITED AC 2017. [DOI: 10.1007/978-3-319-60964-5_65] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register]
|
44
|
Abstract
This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.
Collapse
Affiliation(s)
- Dinggang Shen
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599;
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea;
| | - Guorong Wu
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599;
| | - Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea;
| |
Collapse
|
45
|
Sun W, Zheng B, Qian W. Automatic feature learning using multichannel ROI based on deep structured algorithms for computerized lung cancer diagnosis. Comput Biol Med 2017; 89:530-539. [PMID: 28473055 DOI: 10.1016/j.compbiomed.2017.04.006] [Citation(s) in RCA: 89] [Impact Index Per Article: 11.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2017] [Revised: 03/10/2017] [Accepted: 04/11/2017] [Indexed: 12/21/2022]
Abstract
This study aimed to analyze the ability of automatically generated features extracted by deep structured algorithms to diagnose lung nodules in CT images, and to compare their performance with traditional computer-aided diagnosis (CADx) systems using hand-crafted features. All 1018 cases were acquired from the public Lung Image Database Consortium (LIDC) lung cancer database. The nodules were segmented according to four radiologists' markings, and 13,668 samples were generated by rotating every slice of the nodule images. Three multichannel-ROI-based deep structured algorithms were designed and implemented in this study: a convolutional neural network (CNN), a deep belief network (DBN), and a stacked denoising autoencoder (SDAE). For comparison, we also implemented a CADx system using hand-crafted features including density, texture and morphological features. The performance of every scheme was evaluated using 10-fold cross-validation, with the area under the receiver operating characteristic curve (AUC) as the assessment index. The highest observed AUC, 0.899±0.018, was achieved by the CNN, significantly higher than the traditional CADx with an AUC of 0.848±0.026. The result of the DBN was also slightly higher than that of the CADx, while that of the SDAE was slightly lower. By visualizing the automatically generated features, we found some meaningful detectors, such as curvy-stroke detectors, in the deep structured schemes. The results show that deep structured algorithms with automatically generated features can achieve desirable performance in lung nodule diagnosis. With well-tuned parameters and a large enough dataset, deep learning algorithms can outperform currently popular CADx systems. We believe deep learning algorithms with a similar data preprocessing procedure can be used in other medical image analysis areas as well.
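The AUC values compared in this abstract can be computed directly from raw classifier scores via the rank-statistic (Mann-Whitney) formulation of AUC; the sketch below uses invented scores, not the study's data.

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive
    (e.g., malignant) sample scores higher than a randomly chosen
    negative one, counting ties as one half."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    # Compare every positive score against every negative score.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical classifier outputs for malignant vs. benign nodules.
malignant = [0.9, 0.8, 0.6]
benign = [0.4, 0.7, 0.2]
```

An AUC of 0.899, as reported for the CNN above, means a malignant nodule outscores a benign one in roughly 90% of such pairwise comparisons.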
Collapse
Affiliation(s)
- Wenqing Sun
- College of Engineering, University of Texas at El Paso, El Paso, TX, United States
| | - Bin Zheng
- College of Engineering, University of Oklahoma, Norman, OK, United States
| | - Wei Qian
- College of Engineering, University of Texas at El Paso, El Paso, TX, United States.
| |
Collapse
|
46
|
Wang Q, Zheng Y, Yang G, Jin W, Chen X, Yin Y. Multiscale Rotation-Invariant Convolutional Neural Networks for Lung Texture Classification. IEEE J Biomed Health Inform 2017; 22:184-195. [PMID: 28333649 DOI: 10.1109/jbhi.2017.2685586] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
We propose a new multiscale rotation-invariant convolutional neural network (MRCNN) model for classifying various lung tissue types on high-resolution computed tomography. MRCNN employs the Gabor local binary pattern, which introduces a desirable property in image analysis: invariance to image scale and rotation. In addition, we offer an approach to the class-imbalance problem present in most existing works, accomplished by changing the overlap between adjacent patches. Experimental results on a public interstitial lung disease database show the superior performance of the proposed method compared with the state of the art.
Collapse
|
47
|
Ksieniewicz P, Graña M, Woźniak M. Paired feature multilayer ensemble – concept and evaluation of a classifier. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2017. [DOI: 10.3233/jifs-169139] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Paweł Ksieniewicz
- Department of Systems and Computer Networks, Faculty of Electronics Wrocław University of Science and Technology, Wrocław, Poland
| | - Manuel Graña
- University of the Basque Country, Leioa, Bizkaia, Spain
| | - Michał Woźniak
- Department of Systems and Computer Networks, Faculty of Electronics Wrocław University of Science and Technology, Wrocław, Poland
| |
Collapse
|
48
|
Christodoulidis S, Anthimopoulos M, Ebner L, Christe A, Mougiakakou S. Multisource Transfer Learning With Convolutional Neural Networks for Lung Pattern Analysis. IEEE J Biomed Health Inform 2016; 21:76-84. [PMID: 28114048 DOI: 10.1109/jbhi.2016.2636929] [Citation(s) in RCA: 119] [Impact Index Per Article: 13.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Early diagnosis of interstitial lung diseases is crucial for their treatment, but even experienced physicians find it difficult, as their clinical manifestations are similar. In order to assist with the diagnosis, computer-aided diagnosis systems have been developed. These commonly rely on a fixed scale classifier that scans CT images, recognizes textural lung patterns, and generates a map of pathologies. In a previous study, we proposed a method for classifying lung tissue patterns using a deep convolutional neural network (CNN), with an architecture designed for the specific problem. In this study, we present an improved method for training the proposed network by transferring knowledge from the similar domain of general texture classification. Six publicly available texture databases are used to pretrain networks with the proposed architecture, which are then fine-tuned on the lung tissue data. The resulting CNNs are combined in an ensemble and their fused knowledge is compressed back to a network with the original architecture. The proposed approach resulted in an absolute increase of about 2% in the performance of the proposed CNN. The results demonstrate the potential of transfer learning in the field of medical image analysis, indicate the textural nature of the problem and show that the method used for training a network can be as important as designing its architecture.
Collapse
|