1
Zhang K, Hu X. Unsupervised separation of nonlinearly mixed event-related potentials using manifold clustering and non-negative matrix factorization. Comput Biol Med 2024; 178:108700. PMID: 38852400. DOI: 10.1016/j.compbiomed.2024.108700.
Abstract
Event-related potentials (ERPs) quantify brain responses and can reveal the neural mechanisms of sensory perception. However, ERPs often reflect nonlinearly mixed responses to multiple sources of sensory stimuli, and accurately separating the response to each stimulus remains a challenge. This study aimed to separate the ERP into nonlinearly mixed source components specific to individual stimuli. We developed an unsupervised learning method based on clustering of the manifold structures of mixture signals, combined with channel optimization for signal source reconstruction using non-negative matrix factorization (NMF). Specifically, we first applied manifold learning based on Local Tangent Space Alignment (LTSA) to extract the spatial manifold structure of multi-resolution sub-signals separated via the wavelet packet transform. We then used fuzzy entropy to characterize the dynamics of the manifold structures and performed k-means clustering to separate the different sources. Lastly, we used NMF to obtain the optimal contributions of multiple channels to ensure accurate source reconstruction. We evaluated the approach on a simulated ERP dataset with known ground truth for two components of the ERP mixture signal. The correlation coefficient between the reconstructed and true source signals was 92.8%, and the separation accuracy in ERP amplitude was 91.6%, showing that our unsupervised approach can accurately separate ERP signals into their nonlinearly mixed source components. The outcomes provide a promising way to isolate brain responses to multiple stimulus sources during multisensory perception.
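The final NMF step of the pipeline above can be illustrated with a minimal sketch (not the authors' implementation): a non-negative channel-by-time mixture of two toy "ERP-like" waveforms is factored into channel weights and source time courses via Lee-Seung multiplicative updates. All signals, shapes, and the rank choice here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-negative multi-channel data: V (channels x time) mixes two
# non-negative "ERP-like" components. The paper's full pipeline first
# isolates sources via wavelet packets, LTSA, fuzzy entropy, and k-means;
# only the NMF reconstruction step is sketched here.
t = np.linspace(0, 1, 200)
s1 = np.exp(-((t - 0.3) ** 2) / 0.005)       # early component
s2 = np.exp(-((t - 0.6) ** 2) / 0.010)       # late component
S_true = np.vstack([s1, s2])                 # 2 x 200
A_true = rng.uniform(0.1, 1.0, size=(8, 2))  # per-channel contributions
V = A_true @ S_true                          # 8 channels x 200 samples

def nmf(V, r, n_iter=500, eps=1e-9):
    """Basic NMF via Lee-Seung multiplicative updates: V ~= W @ H."""
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, size=(n, r))
    H = rng.uniform(0.1, 1.0, size=(r, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(V, r=2)
recon_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the toy mixture is exactly rank-2 and non-negative, the factorization recovers it almost perfectly; real ERP data would add noise and model mismatch.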
Affiliation(s)
- Kai Zhang
- Department of Mechanical Engineering, Pennsylvania State University, University Park, USA
- Xiaogang Hu
- Department of Mechanical Engineering, Pennsylvania State University, University Park, USA; Department of Kinesiology, Pennsylvania State University, University Park, USA; Department of Physical Medicine & Rehabilitation, Pennsylvania State Hershey College of Medicine, USA; Huck Institutes of the Life Sciences, Pennsylvania State University, University Park, USA; Center for Neural Engineering, Pennsylvania State University, University Park, USA.
2
Kirimtat A, Krejcar O. GPU-Based Parallel Processing Techniques for Enhanced Brain Magnetic Resonance Imaging Analysis: A Review of Recent Advances. Sensors (Basel) 2024; 24:1591. PMID: 38475138. DOI: 10.3390/s24051591.
Abstract
GPU (graphics processing unit)-based parallel processing uses many processing units in parallel to manage the computational complexity of medical imaging workloads. It is extremely important for medical imaging techniques such as image classification, object detection, image segmentation, registration, and content-based image retrieval, since it allows software to carry out many computations at once and thus compute time-efficiently. Magnetic resonance imaging (MRI), in turn, is a non-invasive imaging technology that can depict the anatomy and physiological processes of the human body. Implementing GPU-based parallel processing approaches in brain MRI analysis with these medical imaging techniques might be helpful in achieving immediate and timely image capture. Therefore, this extended review (an extension of the IWBBIO2023 conference paper) offers a thorough overview of the literature, with an emphasis on the expanding use of GPU-based parallel processing methods for the medical analysis of brain MRIs with the imaging techniques mentioned above, given the need for quicker computation to obtain early and real-time feedback in medicine. We examined articles published between 2019 and 2023, organizing them in a literature matrix covering tasks, techniques, MRI sequences, and processing results. The methods discussed in this review demonstrate the advances achieved so far in minimizing computing runtime, as well as the obstacles and problems still to be solved.
Affiliation(s)
- Ayca Kirimtat
- Center for Basic and Applied Research, Faculty of Informatics and Management, University of Hradec Kralove, Rokitanskeho 62, 500 03 Hradec Kralove, Czech Republic
- Ondrej Krejcar
- Center for Basic and Applied Research, Faculty of Informatics and Management, University of Hradec Kralove, Rokitanskeho 62, 500 03 Hradec Kralove, Czech Republic
3
Ryali S, Zhang Y, de los Angeles C, Supekar K, Menon V. Deep learning models reveal replicable, generalizable, and behaviorally relevant sex differences in human functional brain organization. Proc Natl Acad Sci U S A 2024; 121:e2310012121. PMID: 38377194. PMCID: PMC10907309. DOI: 10.1073/pnas.2310012121.
Abstract
Sex plays a crucial role in human brain development, aging, and the manifestation of psychiatric and neurological disorders. However, our understanding of sex differences in human functional brain organization and their behavioral consequences has been hindered by inconsistent findings and a lack of replication. Here, we address these challenges using a spatiotemporal deep neural network (stDNN) model to uncover latent functional brain dynamics that distinguish male and female brains. Our stDNN model accurately differentiated male and female brains, demonstrating consistently high cross-validation accuracy (>90%), replicability, and generalizability across multisession data from the same individuals and three independent cohorts (N ~ 1,500 young adults aged 20 to 35). Explainable AI (XAI) analysis revealed that brain features associated with the default mode network, striatum, and limbic network consistently exhibited significant sex differences (effect sizes > 1.5) across sessions and independent cohorts. Furthermore, XAI-derived brain features accurately predicted sex-specific cognitive profiles, a finding that was also independently replicated. Our results demonstrate that sex differences in functional brain dynamics are not only highly replicable and generalizable but also behaviorally relevant, challenging the notion of a continuum in male-female brain organization. Our findings underscore the crucial role of sex as a biological determinant in human brain organization, have significant implications for developing personalized sex-specific biomarkers in psychiatric and neurological disorders, and provide innovative AI-based computational tools for future research.
Affiliation(s)
- Srikanth Ryali
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94305
- Yuan Zhang
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94305
- Carlo de los Angeles
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94305
- Kaustubh Supekar
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94305
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA 94305
- Stanford Institute for Human-Centered Artificial Intelligence, Stanford University, Stanford, CA 94305
- Vinod Menon
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94305
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA 94305
- Stanford Institute for Human-Centered Artificial Intelligence, Stanford University, Stanford, CA 94305
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA 94305
4
Haque SBU, Zafar A. Robust Medical Diagnosis: A Novel Two-Phase Deep Learning Framework for Adversarial Proof Disease Detection in Radiology Images. J Imaging Inform Med 2024; 37:308-338. PMID: 38343214. PMCID: PMC11266337. DOI: 10.1007/s10278-023-00916-8.
Abstract
In the realm of medical diagnostics, the utilization of deep learning techniques, notably in the context of radiology images, has emerged as a transformative force. The significance of artificial intelligence (AI), specifically machine learning (ML) and deep learning (DL), lies in their capacity to rapidly and accurately diagnose diseases from radiology images. This capability was particularly vital during the COVID-19 pandemic, when rapid and precise diagnosis played a pivotal role in managing the spread of the virus. DL models, trained on vast datasets of radiology images, have showcased remarkable proficiency in distinguishing between normal and COVID-19-affected cases, offering a ray of hope amidst the crisis. However, as with any technological advancement, vulnerabilities emerge. Deep learning-based diagnostic models, although proficient, are not immune to adversarial attacks. These attacks, characterized by carefully crafted perturbations to input data, can disrupt a model's decision-making process. In the medical context, such vulnerabilities could have dire consequences, leading to misdiagnoses and compromised patient care. To address this, we propose a two-phase defense framework that combines advanced adversarial learning and adversarial image filtering techniques. We use a modified adversarial learning algorithm to enhance the model's resilience against adversarial examples during the training phase. During the inference phase, we apply JPEG compression to mitigate perturbations that cause misclassification. We evaluate our approach on three models based on ResNet-50, VGG-16, and Inception-V3. These models perform exceptionally well in classifying radiology images (X-ray and CT) of lung regions into normal, pneumonia, and COVID-19 pneumonia categories. We then assess the vulnerability of these models to three targeted adversarial attacks: the fast gradient sign method (FGSM), projected gradient descent (PGD), and the basic iterative method (BIM). The results show a significant drop in model performance after the attacks. However, our defense framework greatly improves the models' resistance to adversarial attacks, maintaining high accuracy on adversarial examples. Importantly, our framework ensures the reliability of the models in diagnosing COVID-19 from clean images.
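FGSM, the first of the attacks listed above, perturbs each input by a small step in the sign of the loss gradient with respect to that input. A minimal sketch on a toy logistic-regression classifier (NumPy only; the paper attacks deep CNNs on radiology images, which this does not reproduce, and all data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linearly separable data standing in for image features.
n, d = 200, 20
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true)            # labels in {-1, +1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic regression by plain gradient descent on the logistic loss.
w = np.zeros(d)
for _ in range(300):
    grad = -(y * sigmoid(-y * (X @ w))) @ X / n
    w -= 0.5 * grad

def fgsm(X, y, w, eps):
    """FGSM: x_adv = x + eps * sign(d loss / d x)."""
    grad_x = -(y * sigmoid(-y * (X @ w)))[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

acc_clean = np.mean(np.sign(X @ w) == y)
X_adv = fgsm(X, y, w, eps=0.5)
acc_adv = np.mean(np.sign(X_adv @ w) == y)
```

Even for this simple linear model, a bounded sign-of-gradient perturbation collapses accuracy, which is the failure mode the paper's two-phase defense targets.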
Affiliation(s)
- Sheikh Burhan Ul Haque
- Department of Computer Science, Aligarh Muslim University, Aligarh 202002, Uttar Pradesh, India
- Aasim Zafar
- Department of Computer Science, Aligarh Muslim University, Aligarh 202002, Uttar Pradesh, India
5
Abd Algani YM, Vidhya S, Ghai B, Acharjee PB, Kathiravan MN, Dwivedi VK. Innovative Method for Alzheimer Disease Prediction using GP-ELM-RNN. 2023 2nd International Conference on Applied Artificial Intelligence and Computing (ICAAIC), 2023. DOI: 10.1109/icaaic56838.2023.10140571.
Affiliation(s)
- S. Vidhya
- RMK College of Engineering and Technology, S&H (Mathematics), Chennai, Tamil Nadu, India
- Bhupaesh Ghai
- CCSIT, Teerthankar Mahaveer University, Moradabad, Uttar Pradesh, India
- Vijay Kumar Dwivedi
- Vishwavidyalaya Engineering College, Department of Mathematics, Surguja, Chhattisgarh, India
6
Kashyap A, Callison-Burch C, Boland MR. A deep learning method to detect opioid prescription and opioid use disorder from electronic health records. Int J Med Inform 2023; 171:104979. PMID: 36621078. PMCID: PMC9898169. DOI: 10.1016/j.ijmedinf.2022.104979.
Abstract
OBJECTIVE: As the opioid epidemic continues across the United States, methods are needed to accurately and quickly identify patients at risk for opioid use disorder (OUD). The purpose of this study is to develop two predictive algorithms: one to predict opioid prescription and one to predict OUD.
MATERIALS AND METHODS: We developed an informatics algorithm that trains two deep learning models over patient Electronic Health Records (EHRs) using the MIMIC-III database. We utilize both the structured and unstructured parts of the EHR and show that it is possible to predict both challenging outcomes.
RESULTS: Our deep learning models incorporate elements from EHRs to predict opioid prescription with an F1-score of 0.88 ± 0.003 and an AUC-ROC of 0.93 ± 0.002. We also constructed a model to predict OUD diagnosis, achieving an F1-score of 0.82 ± 0.05 and an AUC-ROC of 0.94 ± 0.008.
DISCUSSION: Our model for OUD prediction outperformed prior algorithms in specificity, F1-score, and AUC-ROC while achieving equivalent sensitivity. This demonstrates the importance of (a) deep learning approaches in predicting OUD and (b) incorporating both structured and unstructured data for this prediction task. No prediction models with opioid prescription as an outcome were found in the literature, so our model is the first to predict opioid prescribing behavior.
CONCLUSION: Algorithms such as those described in this paper will become increasingly important for understanding the drivers underlying this national epidemic.
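For reference, the F1-scores quoted above are the harmonic mean of precision and recall; a minimal computation on hypothetical labels (not the study's data) looks like:

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical predictions for a binary "OUD diagnosis" label.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
# tp=3, fp=1, fn=1 -> precision=0.75, recall=0.75, F1=0.75
f1 = f1_score(y_true, y_pred)
```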
Affiliation(s)
- Aditya Kashyap
- Department of Computer Science, University of Pennsylvania, United States of America
- Chris Callison-Burch
- Department of Computer Science, University of Pennsylvania, United States of America
- Mary Regina Boland
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, United States of America; Institute for Biomedical Informatics, University of Pennsylvania, United States of America; Center for Excellence in Environmental Toxicology, University of Pennsylvania, United States of America; Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, United States of America.
7
Chauvin L, Kumar K, Desrosiers C, Wells W, Toews M. Efficient Pairwise Neuroimage Analysis Using the Soft Jaccard Index and 3D Keypoint Sets. IEEE Trans Med Imaging 2022; 41:836-845. PMID: 34699353. PMCID: PMC9022638. DOI: 10.1109/tmi.2021.3123252.
Abstract
We propose a novel pairwise distance measure between image keypoint sets for the purpose of large-scale medical image indexing. Our measure generalizes the Jaccard index to account for soft set equivalence (SSE) between keypoint elements, via an adaptive kernel framework modeling uncertainty in keypoint appearance and geometry. A new kernel is proposed to quantify the variability of keypoint geometry in location and scale. Our distance measure may be estimated between O(N²) image pairs in [Formula: see text] operations via keypoint indexing. Experiments report the first results for the task of predicting family relationships from medical images, using 1010 T1-weighted MRI brain volumes of 434 families including monozygotic and dizygotic twins, siblings, and half-siblings sharing 100%-25% of their polymorphic genes. Soft set equivalence and the keypoint geometry kernel improve upon standard hard set equivalence (HSE) and appearance kernels alone in predicting family relationships. Monozygotic twin identification is near 100%, and three subjects with uncertain genotyping are automatically paired with their self-reported families, the first reported practical application of image-based family identification. Our distance measure can also be used to predict group categories; for example, sex is predicted with an AUC of 0.97. Software is provided for efficient fine-grained curation of large, generic image datasets.
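The soft Jaccard idea can be sketched as follows. This is one plausible instantiation with a Gaussian kernel over keypoint locations, not necessarily the paper's exact kernel or normalization: kernel-weighted best matches replace the hard intersection count, and the measure reduces to the ordinary Jaccard index when the kernel returns only 0 or 1.

```python
import math

def gaussian_kernel(a, b, sigma=1.0):
    """Soft equivalence between two 3D keypoint locations (1 = identical)."""
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2 / (2 * sigma ** 2))

def soft_jaccard(A, B, sigma=1.0):
    """Soft Jaccard: J = |A ∩ B| / |A ∪ B| with a kernel-weighted intersection."""
    inter = sum(max(gaussian_kernel(a, b, sigma) for b in B) for a in A)
    return inter / (len(A) + len(B) - inter)

A = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
B_far = [(50.0, 50.0, 50.0)]
assert abs(soft_jaccard(A, A) - 1.0) < 1e-9   # identical sets score 1
```

In the paper the kernel additionally models keypoint appearance and scale; the location-only version above is just the simplest illustration of soft set equivalence.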
8
Early Detection of Alzheimer’s Disease using Bottleneck Transformers. Int J Intell Inf Technol 2022. DOI: 10.4018/ijiit.296268.
Abstract
Early detection of Alzheimer’s Disease (AD) and its prodromal state, Mild Cognitive Impairment (MCI), is crucial for providing suitable treatment and preventing the disease from progressing. It can also help researchers and clinicians identify early biomarkers and administer new treatments, which have been a subject of extensive research. The application of deep learning techniques to structural Magnetic Resonance Imaging (MRI) has shown promising results in diagnosing the disease. In this research, we introduce a novel approach that uses an ensemble of self-attention-based Bottleneck Transformers with a sharpness-aware minimizer for early detection of Alzheimer’s Disease. The proposed approach has been tested on the widely accepted ADNI dataset and evaluated using accuracy, precision, recall, F1 score, and ROC-AUC score as the performance metrics.
9
Seo Y, Jang H, Lee H. Potential Applications of Artificial Intelligence in Clinical Trials for Alzheimer’s Disease. Life (Basel) 2022; 12:275. PMID: 35207561. PMCID: PMC8879055. DOI: 10.3390/life12020275.
Abstract
Clinical trials for Alzheimer’s disease (AD) face multiple challenges, such as the high screen failure rate and the even allocation of heterogeneous participants. Artificial intelligence (AI), which has become a potent tool of modern science with the expansion in the volume, variety, and velocity of biological data, offers promising potential to address these issues in AD clinical trials. In this review, we introduce the current status of AD clinical trials and the topic of machine learning. Then, a comprehensive review is focused on the potential applications of AI in the steps of AD clinical trials, including the prediction of protein and MRI AD biomarkers in the prescreening process during eligibility assessment and the likelihood stratification of AD subjects into rapid and slow progressors in randomization. Finally, this review provides challenges, developments, and the future outlook on the integration of AI into AD clinical trials.
Affiliation(s)
- Hyejoo Lee
- Correspondence: Tel.: +82-2-3410-1233; Fax: +82-2-3410-0052
10
Tufail AB, Ullah K, Khan RA, Shakir M, Khan MA, Ullah I, Ma YK, Ali MS. On Improved 3D-CNN-Based Binary and Multiclass Classification of Alzheimer's Disease Using Neuroimaging Modalities and Data Augmentation Methods. J Healthc Eng 2022; 2022:1302170. PMID: 35186220. PMCID: PMC8856791. DOI: 10.1155/2022/1302170.
Abstract
Alzheimer's disease (AD) is an irreversible illness of the brain that impacts the functional and daily activities of the elderly population worldwide. Neuroimaging modalities such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) measure the pathological changes in the brain associated with this disorder, especially in its early stages. Deep learning (DL) architectures such as Convolutional Neural Networks (CNNs) are successfully used in recognition, classification, segmentation, detection, and other domains for data interpretation. Data augmentation schemes work alongside DL techniques and may impact the final task performance positively or negatively. In this work, we studied and compared the impact of three data augmentation techniques on the final performance of 3D CNN architectures for the early diagnosis of AD, covering both binary and multiclass classification problems with MRI and PET neuroimaging modalities. We found random zoomed in/out augmentation to perform best among all the augmentation methods. We also observed that combining different augmentation methods may degrade performance on the classification tasks. Furthermore, architecture engineering had less impact on the final classification performance than the data manipulation schemes, and deeper architectures did not necessarily provide performance advantages over their shallower counterparts. Finally, we observed that these augmentation schemes do not alleviate the class imbalance issue.
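Random zoomed in/out augmentation, the best-performing scheme in this study, can be sketched for a 3D volume using nearest-neighbour resampling. This is a simplified stand-in for the authors' augmentation code; the zoom range and volume size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_zoom_3d(vol, zoom_range=(0.9, 1.1)):
    """Randomly zoom a 3D volume in or out about its center, then resample
    back to the original shape with nearest-neighbour interpolation, so the
    augmented sample stays compatible with a fixed-input-size 3D CNN."""
    z = rng.uniform(*zoom_range)
    # For each axis, map output voxel index i to input index round(i/z + shift),
    # where the shift keeps the zoom centered; clip to stay inside the volume.
    out_idx = [
        np.clip(np.round(np.arange(n) / z + (n - n / z) / 2).astype(int), 0, n - 1)
        for n in vol.shape
    ]
    return vol[np.ix_(*out_idx)]

vol = rng.normal(size=(16, 16, 16))  # toy stand-in for an MRI/PET volume
aug = random_zoom_3d(vol)
```

Library implementations (e.g. dedicated medical-imaging augmentation toolkits) would typically use trilinear interpolation; nearest-neighbour keeps the sketch dependency-free.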
Affiliation(s)
- Ahsan Bin Tufail
- School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
- Department of Electrical and Computer Engineering, COMSATS University Islamabad Sahiwal Campus, Sahiwal, Pakistan
- Kalim Ullah
- Department of Zoology, Kohat University of Science and Technology, Kohat 26000, Pakistan
- Rehan Ali Khan
- Department of Electrical Engineering, University of Science and Technology Bannu, Bannu 28100, Pakistan
- Mustafa Shakir
- Department of Electrical Engineering, Superior University, Lahore 54000, Pakistan
- Muhammad Abbas Khan
- Department of Electrical Engineering, Balochistan University of Information Technology, Engineering and Management Sciences, Quetta, Balochistan 87300, Pakistan
- Inam Ullah
- College of Internet of Things (IoT) Engineering, Hohai University (HHU), Changzhou Campus 213022, China
- Yong-Kui Ma
- School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
- Md. Sadek Ali
- Communication Research Laboratory, Department of Information and Communication Technology, Islamic University, Kushtia-7003, Bangladesh
11
Gao Z, Liu W, McDonough DJ, Zeng N, Lee JE. The Dilemma of Analyzing Physical Activity and Sedentary Behavior with Wrist Accelerometer Data: Challenges and Opportunities. J Clin Med 2021; 10:5951. PMID: 34945247. PMCID: PMC8706489. DOI: 10.3390/jcm10245951.
Abstract
Physical behaviors (e.g., physical activity and sedentary behavior) have been the focus among many researchers in the biomedical and behavioral science fields. The recent shift from hip- to wrist-worn accelerometers in these fields has signaled the need to develop novel approaches to process raw acceleration data of physical activity and sedentary behavior. However, there is currently no consensus regarding the best practices for analyzing wrist-worn accelerometer data to accurately predict individuals' energy expenditure and the times spent in different intensities of free-living physical activity and sedentary behavior. To this end, accurately analyzing and interpreting wrist-worn accelerometer data has become a major challenge facing many clinicians and researchers. In response, this paper attempts to review different methodologies for analyzing wrist-worn accelerometer data and offer cutting edge, yet appropriate analysis plans for wrist-worn accelerometer data in the assessment of physical behavior. In this paper, we first discuss the fundamentals of wrist-worn accelerometer data, followed by various methods of processing these data (e.g., cut points, steps per minute, machine learning), and then we discuss the opportunities, challenges, and directions for future studies in this area of inquiry. This is the most comprehensive review paper to date regarding the analysis and interpretation of free-living physical activity data derived from wrist-worn accelerometers, aiming to help establish a blueprint for processing wrist-derived accelerometer data.
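The cut-point method mentioned above maps each epoch of accelerometer counts to an intensity category via fixed thresholds. A minimal sketch with hypothetical thresholds (real cut points are device-, wear-site-, and population-specific, which is exactly the calibration problem the review discusses):

```python
# Illustrative count thresholds per minute (hypothetical, not a validated set).
CUT_POINTS = [
    (0, "sedentary"),
    (100, "light"),
    (2000, "moderate"),
    (6000, "vigorous"),
]

def classify_minute(counts):
    """Map one minute of accelerometer counts to an intensity category:
    the label of the highest threshold the count reaches."""
    label = CUT_POINTS[0][1]
    for threshold, name in CUT_POINTS:
        if counts >= threshold:
            label = name
    return label

minutes = [50, 350, 2500, 7000, 10]
labels = [classify_minute(c) for c in minutes]
# Derived summary: minutes of moderate-to-vigorous physical activity (MVPA).
mvpa_minutes = sum(1 for lab in labels if lab in ("moderate", "vigorous"))
```

Machine-learning approaches replace this fixed lookup with classifiers trained on raw acceleration features, which is the methodological shift the review contrasts against cut points.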
Affiliation(s)
- Zan Gao
- School of Kinesiology, University of Minnesota-Twin Cities, 1900 University Ave. SE, Minneapolis, MN 55455, USA
- Wenxi Liu
- Department of Physical Education, Shanghai Jiao Tong University, Shanghai 200240, China
- Daniel J McDonough
- Division of Epidemiology and Community Health, School of Public Health, University of Minnesota-Twin Cities, 420 Delaware St. SE, Minneapolis, MN 55455, USA
- Nan Zeng
- Prevention Research Center, Department of Pediatrics, University of New Mexico Health Sciences Center, Albuquerque, NM 87131, USA
- Jung Eun Lee
- Department of Applied Human Sciences, University of Minnesota-Duluth, 1216 Ordean Court SpHC 109, Duluth, MN 55812, USA
12
Zhang L, Liu J. Research Progress of ECG Monitoring Equipment and Algorithms Based on Polymer Materials. Micromachines (Basel) 2021; 12:1282. PMID: 34832693. PMCID: PMC8624836. DOI: 10.3390/mi12111282.
Abstract
Heart diseases such as myocardial ischemia (MI) are among the leading causes of death. Predicting MI and arrhythmia is an effective route to the early detection, diagnosis, and treatment of heart disease. For the rapid detection of arrhythmia and myocardial ischemia, the electrocardiogram (ECG) is widely used in clinical diagnosis, and its detection equipment and algorithms are constantly being optimized. This paper introduces the current progress of portable ECG monitoring equipment, including the use of polymer-material sensors and deep learning algorithms. It first introduces the latest portable ECG monitoring equipment and the polymer-material sensors it uses, and then focuses on reviewing progress in detection algorithms. We introduce the basic structure of existing deep learning methods and enumerate the internationally recognized ECG datasets. The paper outlines the deep learning algorithms used for ECG diagnosis, compares the prediction results of different classifiers, and summarizes two open problems in ECG detection technology: class imbalance and high computational overhead. Finally, we propose using generative adversarial networks (GANs) to improve the quality of ECG databases and lightweight ECG diagnosis algorithms to suit portable ECG monitoring equipment as directions for future development.
Affiliation(s)
- Jihong Liu
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
13
Kumar Y, Gupta S, Singla R, Hu YC. A Systematic Review of Artificial Intelligence Techniques in Cancer Prediction and Diagnosis. Arch Comput Methods Eng 2021; 29:2043-2070. PMID: 34602811. PMCID: PMC8475374. DOI: 10.1007/s11831-021-09648-w.
Abstract
Artificial intelligence has aided the advancement of healthcare research. The availability of open-source healthcare data has prompted researchers to create applications that aid cancer detection and prognosis, and deep learning and machine learning models provide a reliable, rapid, and effective means of dealing with such challenging diseases. PRISMA guidelines were used to select articles published in Web of Science, EBSCO, and EMBASE between 2009 and 2021. In this study, we performed an efficient search and included the research articles that employed AI-based learning approaches for cancer prediction. A total of 185 papers were considered impactful for cancer prediction using conventional machine and deep learning-based classification. The survey also discusses the work done by different researchers, highlights the limitations of the existing literature, and compares studies using parameters such as prediction rate, accuracy, sensitivity, specificity, Dice score, detection rate, area under the curve, precision, recall, and F1-score. Five research questions were formulated and answered. Although multiple techniques recommended in the literature have achieved strong prediction results, cancer mortality has still not been reduced; thus, more extensive research on the challenges of cancer prediction is required.
Affiliation(s)
- Yogesh Kumar
- Department of Computer Engineering, Indus Institute of Technology & Engineering, Indus University, Rancharda, Via Shilaj, Ahmedabad, Gujarat 382115, India
- Surbhi Gupta
- School of Computer Science and Engineering, Model Institute of Engineering and Technology, Kot Bhalwal, Jammu, J&K 181122, India
- Ruchi Singla
- Department of Research, Innovations, Sponsored Projects and Entrepreneurship, Chandigarh Group of Colleges, Landran, Mohali, India
- Yu-Chen Hu
- Department of Computer Science and Information Management, Providence University, Taichung City, Taiwan, ROC
14
Wang F, Diao X, Chang S, Xu L. Recent Progress of Deep Learning in Drug Discovery. Curr Pharm Des 2021; 27:2088-2096. PMID: 33511933. DOI: 10.2174/1381612827666210129123231.
Abstract
Deep learning, an emerging field of artificial intelligence based on neural networks in machine learning, has been applied in various fields and is highly valued. Herein, we mainly review several mainstream architectures in deep learning, including deep neural networks, convolutional neural networks and recurrent neural networks in the field of drug discovery. The applications of these architectures in molecular de novo design, property prediction, biomedical imaging and synthetic planning have also been explored. Apart from that, we further discuss the future direction of the deep learning approaches and the main challenges we need to address.
Affiliation(s)
- Feng Wang
- College of Information Science and Engineering, Huaide College of Changzhou University, Taizhou 214500, China
- XiaoMin Diao
- College of Information Science and Engineering, Huaide College of Changzhou University, Taizhou 214500, China
- Shan Chang
- Institute of Bioinformatics and Medical Engineering, Jiangsu University of Technology, Changzhou 213001, China
- Lei Xu
- Institute of Bioinformatics and Medical Engineering, Jiangsu University of Technology, Changzhou 213001, China
15
Zhao X, Ang CKE, Acharya UR, Cheong KH. Application of Artificial Intelligence techniques for the detection of Alzheimer’s disease using structural MRI images. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.02.006]
16
Wei Y, Huang N, Liu Y, Zhang X, Wang S, Tang X. Hippocampal and Amygdalar Morphological Abnormalities in Alzheimer's Disease Based on Three Chinese MRI Datasets. Curr Alzheimer Res 2021; 17:1221-1231. [PMID: 33602087] [DOI: 10.2174/1567205018666210218150223]
Abstract
BACKGROUND Early detection of Alzheimer's disease (AD) and its early stage, mild cognitive impairment (MCI), has important scientific, clinical and social significance. Magnetic resonance imaging (MRI) based statistical shape analysis provides an opportunity to detect regional structural abnormalities of brain structures caused by AD and MCI. OBJECTIVE In this work, we aimed to employ a well-established statistical shape analysis pipeline, in the framework of large deformation diffeomorphic metric mapping, to identify and quantify the regional shape abnormalities of the bilateral hippocampus and amygdala at different prodromal stages of AD, using three Chinese MRI datasets collected from different domestic hospitals. METHODS We analyzed the region-specific shape abnormalities at different stages of the neuropathology of AD by comparing the localized shape characteristics of the bilateral hippocampi and amygdalae between healthy controls and two disease groups (MCI and AD). In addition to group comparison analyses, we also investigated the association between the shape characteristics of each structure of interest and the Mini Mental State Examination (MMSE) in the disease group (MCI and AD combined), as well as the discriminative power of different morphometric biomarkers. RESULTS We found the strongest disease pathology (regional atrophy) at the subiculum and CA1 subregions of the hippocampus and the basolateral, basomedial and centromedial subregions of the amygdala. Furthermore, the shape characteristics of the hippocampal and amygdalar subregions exhibiting the strongest AD-related atrophy were found to have the most significant positive associations with the MMSE. Employing the shape deformation marker of the hippocampus or the amygdala for automated MCI or AD detection yielded a significant accuracy boost over the corresponding volume measurement.
CONCLUSION Our results suggest that amygdalar and hippocampal morphometrics, especially shape morphometrics, can be used as auxiliary indicators for monitoring the disease status of an AD patient.
Affiliation(s)
- Yuanyuan Wei
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, China
- Nianwei Huang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, China
- Yong Liu
- Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Xi Zhang
- Department of Neurology, Nanlou Division, Chinese PLA General Hospital; National Clinical Research Center for Geriatric Diseases, Beijing, China
- Silun Wang
- YIWEI Medical Technology Co., Ltd, Shenzhen, Guangdong, China
- Xiaoying Tang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, China
17
Zheng W, Liu S, Chai QW, Pan JS, Chu SC. Automatic Measurement of Pennation Angle from Ultrasound Images using Resnets. Ultrason Imaging 2021; 43:74-87. [PMID: 33563138] [DOI: 10.1177/0161734621989598]
Abstract
In this study, an automatic pennation angle measuring approach based on deep learning is proposed. Firstly, the Local Radon Transform (LRT) is used to detect the superficial and deep aponeuroses in the ultrasound image. Secondly, a reference line is introduced between the deep and superficial aponeuroses to assist detection of the orientation of muscle fibers. Deep Residual Networks (Resnets) are used to judge the relative orientation of the reference line and the muscle fibers. The reference line is then revised until it is parallel to the orientation of the muscle fibers. Finally, the pennation angle is obtained from the directions of the detected aponeuroses and muscle fibers. The angle detected by the proposed method differs by about 1° from the manually labeled angle. With a CPU, the average inference time for a single image of muscle fibers is around 1.6 s, compared to 0.47 s per image of a sequential image sequence. Experimental results show that the proposed method achieves accurate and robust measurement of the pennation angle.
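The revise-until-parallel step can be viewed as a one-dimensional search driven by the network's orientation judgement. The sketch below is a toy stand-in, not the authors' implementation: a simple comparison function replaces the Resnet classifier, and the bisection bracket of 90° is an assumption:

```python
def measure_pennation_angle(aponeurosis_angle, orient_classifier, tol=0.5):
    """Rotate a reference line until the classifier judges it parallel to the
    muscle fibers (bisection on the angle), then report the pennation angle
    relative to the deep aponeurosis. Angles in degrees."""
    lo, hi = aponeurosis_angle, aponeurosis_angle + 90.0  # assumed search bracket
    while hi - lo > tol:
        ref = (lo + hi) / 2.0
        # classifier returns +1 if the fibers lie at a larger angle than the line
        if orient_classifier(ref) > 0:
            lo = ref
        else:
            hi = ref
    return (lo + hi) / 2.0 - aponeurosis_angle

# toy classifier standing in for the Resnet: compares against a known fiber angle
true_fiber_angle = 18.0
classifier = lambda ref: 1 if true_fiber_angle > ref else -1
angle = measure_pennation_angle(0.0, classifier)
```

With this acceptance rule the search converges to within `tol` of the fiber orientation, mirroring the paper's iterative revision of the reference line.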
Affiliation(s)
- Weimin Zheng
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
- Shangkun Liu
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
- Qing-Wei Chai
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
- Jeng-Shyang Pan
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
- Shu-Chuan Chu
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
18
Raza K, Singh NK. A Tour of Unsupervised Deep Learning for Medical Image Analysis. Curr Med Imaging 2021; 17:1059-1077. [PMID: 33504314] [DOI: 10.2174/1573405617666210127154257]
Abstract
BACKGROUND Interpretation of medical images for the diagnosis and treatment of complex diseases from high-dimensional and heterogeneous data remains a key challenge in transforming healthcare. In the last few years, both supervised and unsupervised deep learning have achieved promising results in medical image analysis. Several reviews of supervised deep learning have been published, but hardly any rigorous review of unsupervised deep learning for medical image analysis is available. OBJECTIVES The objective of this review is to systematically present various unsupervised deep learning models, tools, and benchmark datasets applied to medical image analysis. The discussed models include autoencoders and their variants, restricted Boltzmann machines (RBM), deep belief networks (DBN), deep Boltzmann machines (DBM), and generative adversarial networks (GAN). Future research opportunities and challenges of unsupervised deep learning techniques for medical image analysis are also discussed. CONCLUSION Currently, interpretation of medical images for diagnostic purposes is usually performed by human experts, who may be replaced by computer-aided diagnosis owing to advances in machine learning, including deep learning, and the availability of cheap computing infrastructure through cloud computing. Both supervised and unsupervised machine learning approaches are widely applied in medical image analysis, each with certain pros and cons. Since human supervision is not always available, or may be inadequate or biased, unsupervised learning algorithms offer great promise and many advantages for biomedical image analysis.
Affiliation(s)
- Khalid Raza
- Department of Computer Science, Jamia Millia Islamia, New Delhi, India
19
Angulakshmi M, Deepa M. A Review on Deep Learning Architecture and Methods for MRI Brain Tumour Segmentation. Curr Med Imaging 2021; 17:695-706. [PMID: 33423651] [DOI: 10.2174/1573405616666210108122048]
Abstract
BACKGROUND The automatic segmentation of brain tumours from MRI medical images is the main focus of this review. Recently, deep learning-based approaches have provided state-of-the-art performance in image classification, segmentation, object detection, and tracking tasks. INTRODUCTION The core feature of the deep learning approach is the hierarchical representation of features learned from images, which avoids domain-specific handcrafted features. METHODS In this review paper, we survey deep learning architectures and methods for MRI brain tumour segmentation. First, we discuss the basic architectures and approaches of deep learning methods. Secondly, we review the literature on MRI brain tumour segmentation using deep learning methods and its multimodality fusion. Then, the advantages and disadvantages of each method are analyzed. RESULTS A review of brain tumour identification using deep learning techniques is presented. CONCLUSION This review may help researchers better focus their work on the merits and challenges of deep learning techniques for brain tumour segmentation.
Affiliation(s)
- M Angulakshmi
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- M Deepa
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
20
Jun E, Na K, Kang W, Lee J, Suk H, Ham B. Identifying resting‐state effective connectivity abnormalities in drug‐naïve major depressive disorder diagnosis via graph convolutional networks. Hum Brain Mapp 2020. [DOI: 10.1002/hbm.25175]
Affiliation(s)
- Eunji Jun
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Kyoung‐Sae Na
- Department of Psychiatry, Gachon University Gil Medical Center, Incheon, Republic of Korea
- Wooyoung Kang
- Department of Biomedical Sciences, Korea University College of Medicine, Seoul, Republic of Korea
- Jiyeon Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Heung‐Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
- Byung‐Joo Ham
- Department of Psychiatry, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea
21
Kumaravel SK, Subramani RK, Jayaraj Sivakumar TK, Madurai Elavarasan R, Manavalanagar Vetrichelvan A, Annam A, Subramaniam U. Investigation on the impacts of COVID-19 quarantine on society and environment: Preventive measures and supportive technologies. 3 Biotech 2020; 10:393. [PMID: 32821645] [PMCID: PMC7429420] [DOI: 10.1007/s13205-020-02382-3]
Abstract
The present outbreak of the novel coronavirus SARS-CoV-2, epicentered in China in December 2019, has spread to many other countries. All of humanity shares a vital responsibility to tackle this pandemic, and technology is helping people to a great extent. The purpose of this work is to bring precise scientific and general awareness to people all around the world who are currently fighting the war against COVID-19. The number of people infected is increasing day by day, and the medical community is working tirelessly to keep the situation under control. Beyond the negative effects caused by COVID-19, it is equally important for the public to understand some of the positive impacts it has directly or indirectly had on society. This work emphasizes the various impacts on society as well as the environment. In particular, some important key areas are highlighted, namely how modern technologies are aiding people during the period of social distancing. Some effective technological measures carried out by both information technology companies and educational institutions are highlighted. Several steps have also been taken by state and central governments in each country in adopting complete lockdown rules, primarily to protect people from the impact of COVID-19. Moreover, the lessons to be learned from the quarantine imposed to prevent further spread of this global pandemic, and the importance of carrying them into the future, are discussed in brief. Finally, the paper elucidates the general preventive measures that must be taken against this deadly coronavirus, and the role of technology in this pandemic situation is also discussed.
Affiliation(s)
- Santhosh Kumar Kumaravel
- Department of Electrical and Electronics Engineering, Sri Venkateswara College of Engineering, Chennai, Tamil Nadu 602117, India
- Ranjith Kumar Subramani
- Department of Electrical and Electronics Engineering, Sri Venkateswara College of Engineering, Chennai, Tamil Nadu 602117, India
- Tharun Kumar Jayaraj Sivakumar
- Department of Electrical and Electronics Engineering, Sri Venkateswara College of Engineering, Chennai, Tamil Nadu 602117, India
- Rajvikram Madurai Elavarasan
- Department of Electrical and Electronics Engineering, Sri Venkateswara College of Engineering, Chennai, Tamil Nadu 602117, India
- Electrical and Automotive Parts Manufacturing Unit, AA Industries, Chennai, Tamil Nadu 600123, India
- Annapurna Annam
- Department of Electrical and Electronics Engineering, Sri Venkateswara College of Engineering, Chennai, Tamil Nadu 602117, India
- Umashankar Subramaniam
- Renewable Energy Laboratory, Faculty of Engineering, Prince Sultan University, Riyadh 11586, Saudi Arabia
22
Jun E, Na KS, Kang W, Lee J, Suk HI, Ham BJ. Identifying resting-state effective connectivity abnormalities in drug-naïve major depressive disorder diagnosis via graph convolutional networks. Hum Brain Mapp 2020; 41:4997-5014. [PMID: 32813309] [PMCID: PMC7643383] [DOI: 10.1002/hbm.25175]
Abstract
Major depressive disorder (MDD) is a leading cause of disability; its symptoms interfere with social, occupational, interpersonal, and academic functioning. However, the diagnosis of MDD is still made by a phenomenological approach. The advent of neuroimaging techniques has allowed numerous studies to use resting-state functional magnetic resonance imaging (rs-fMRI) and estimate functional connectivity for brain-disease identification. Recently, attempts have been made to investigate effective connectivity (EC), which represents causal relations among regions of interest. In the meantime, to identify meaningful phenotypes for clinical diagnosis, graph-based approaches such as graph convolutional networks (GCNs) have recently been leveraged to explore complex pairwise similarities in imaging/nonimaging features among subjects. In this study, we validate the use of EC for MDD identification by estimating its measures via a group sparse representation along with a structured equation modeling approach in a whole-brain data-driven manner from rs-fMRI. To distinguish drug-naïve MDD patients from healthy controls, we utilize spectral GCNs based on a population graph to successfully integrate EC and nonimaging phenotypic information. Furthermore, we devise a novel sensitivity analysis method to investigate the discriminant connections for MDD identification in our trained GCNs. Our experimental results validated the effectiveness of our method in various scenarios, and we identified altered connectivities associated with the diagnosis of MDD.
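A population-graph GCN propagates each subject's (connectivity) features through a normalized graph adjacency before classification. A minimal NumPy sketch of one such propagation layer follows; the toy 4-subject graph, random features, and single-layer setup are assumptions for illustration, not the authors' model:

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_norm, X, W):
    """One graph-convolution layer: propagate subject features over the
    population graph, then apply a ReLU nonlinearity."""
    return np.maximum(A_norm @ X @ W, 0.0)

# hypothetical population graph of 4 subjects with 3-dim connectivity features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))   # per-subject feature vectors
W = rng.normal(size=(3, 2))   # learnable layer weights
H = gcn_layer(normalized_adjacency(A), X, W)
```

The spectral GCN used in the paper stacks such layers, with graph edges encoding phenotypic similarity between subjects so that neighbors share diagnostic evidence.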
Affiliation(s)
- Eunji Jun
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Kyoung-Sae Na
- Department of Psychiatry, Gachon University Gil Medical Center, Incheon, Republic of Korea
- Wooyoung Kang
- Department of Biomedical Sciences, Korea University College of Medicine, Seoul, Republic of Korea
- Jiyeon Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
- Byung-Joo Ham
- Department of Psychiatry, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea
23
Wu H, Yin H, Chen H, Sun M, Liu X, Yu Y, Tang Y, Long H, Zhang B, Zhang J, Zhou Y, Li Y, Zhang G, Zhang P, Zhan Y, Liao J, Luo S, Xiao R, Su Y, Zhao J, Wang F, Zhang J, Zhang W, Zhang J, Lu Q. A deep learning, image based approach for automated diagnosis for inflammatory skin diseases. Ann Transl Med 2020; 8:581. [PMID: 32566608] [PMCID: PMC7290553] [DOI: 10.21037/atm.2020.04.39]
Abstract
Background With the boom of the deep learning era, and especially the advances in convolutional neural networks (CNNs), CNNs have been applied in medical fields such as radiology and pathology. However, the application of CNNs in dermatology, which is also image-based, is very limited. Inflammatory skin diseases, such as psoriasis (Pso), eczema (Ecz), and atopic dermatitis (AD), are easily misdiagnosed in practice. Methods Based on the EfficientNet-b4 CNN architecture, we developed an artificial intelligence dermatology diagnosis assistant (AIDDA) for Pso, Ecz & AD and healthy skin (HC). The proposed CNN model was trained on 4,740 clinical images, and its performance was evaluated on expert-confirmed clinical images grouped into 3 different dermatologist-labelled diagnosis classifications (HC, Pso, Ecz & AD). Results The overall diagnosis accuracy of AIDDA is 95.80%±0.09%, with a sensitivity of 94.40%±0.12% and a specificity of 97.20%±0.06%. AIDDA showed an accuracy of 89.46% for Pso, with a sensitivity of 91.4% and a specificity of 95.48%, and an accuracy of 92.57% for AD & Ecz, with a sensitivity of 94.56% and a specificity of 94.41%. Conclusions AIDDA is thus already achieving an impact in the diagnosis of inflammatory skin diseases, highlighting how deep learning tools can help advance clinical practice.
Affiliation(s)
- Haijing Wu
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Heng Yin
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Yizhou Yu
- DeepWise AI Lab, Beijing 100080, China
- Yang Tang
- Guanlan Networks (Hangzhou) Co., Ltd., Hangzhou 310000, China
- Hai Long
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Bo Zhang
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Jing Zhang
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Ying Zhou
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Yaping Li
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Guiyuing Zhang
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Peng Zhang
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Yi Zhan
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Jieyue Liao
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Shuaihantian Luo
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Rong Xiao
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Yuwen Su
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
- Juanjuan Zhao
- Guanlan Networks (Hangzhou) Co., Ltd., Hangzhou 310000, China
- Fei Wang
- Guanlan Networks (Hangzhou) Co., Ltd., Hangzhou 310000, China
- Jing Zhang
- Guanlan Networks (Hangzhou) Co., Ltd., Hangzhou 310000, China
- Wei Zhang
- Guanlan Networks (Hangzhou) Co., Ltd., Hangzhou 310000, China
- Jin Zhang
- Guanlan Networks (Hangzhou) Co., Ltd., Hangzhou 310000, China
- Qianjin Lu
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha 410011, China
24
A manifold learning regularization approach to enhance 3D CT image-based lung nodule classification. Int J Comput Assist Radiol Surg 2019; 15:287-295. [PMID: 31768885] [DOI: 10.1007/s11548-019-02097-8]
Abstract
PURPOSE Diagnosis of lung cancer requires radiologists to review every lung nodule in CT images. Such a process can be very time-consuming, and its accuracy is affected by many factors, such as the experience of radiologists and the available diagnosis time. To address this problem, we proposed to develop a deep learning-based system to automatically classify benign and malignant lung nodules. METHODS The proposed method automatically determines benignity or malignancy from the 3D CT image patch of a lung nodule to assist the diagnosis process. Motivated by the fact that the real structure among data is often embedded on a low-dimensional manifold, we developed a novel manifold regularized classification deep neural network (MRC-DNN) that performs classification directly on the manifold representation of lung nodule images. The concise manifold representation revealing important data structure is expected to benefit classification, while the manifold regularization enforces strong but natural constraints on network training, preventing over-fitting. RESULTS The proposed method achieves accurate manifold learning with a reconstruction error of ~30 HU on real lung nodule CT image data. In addition, the classification accuracy on testing data is 0.90, with a sensitivity of 0.81 and a specificity of 0.95, which outperforms state-of-the-art deep learning methods. CONCLUSION The proposed MRC-DNN facilitates an accurate manifold learning approach for lung nodule classification based on 3D CT images. More importantly, MRC-DNN suggests a new and effective way of enforcing regularization for network training, with potential impact on a broad range of applications.
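The manifold regularization described here can be sketched as a classification loss plus a reconstruction penalty on the manifold representation. In the sketch below, the weighting `lam` and the squared-error penalty form are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np

def manifold_regularized_loss(logits, labels, x, x_recon, lam=0.1):
    """Softmax cross-entropy classification loss plus a manifold reconstruction
    penalty, in the spirit of MRC-DNN (lam and the MSE penalty are assumptions)."""
    # numerically stable softmax cross-entropy
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(labels)), labels].mean()
    # reconstruction error: how faithfully the manifold embedding reproduces x
    recon = np.mean((x - x_recon) ** 2)
    return ce + lam * recon

# toy batch: confident correct logits, perfect vs. degraded reconstruction
logits = np.array([[5.0, 0.0], [0.0, 5.0]])
labels = np.array([0, 1])
x = np.ones((2, 3))
loss_good = manifold_regularized_loss(logits, labels, x, x)          # perfect recon
loss_bad = manifold_regularized_loss(logits, labels, x, np.zeros((2, 3)))
```

Penalizing reconstruction error in this way constrains the network's internal representation to stay on the learned manifold, which is the over-fitting safeguard the abstract describes.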
25
Durstewitz D, Koppe G, Meyer-Lindenberg A. Deep neural networks in psychiatry. Mol Psychiatry 2019; 24:1583-1598. [PMID: 30770893] [DOI: 10.1038/s41380-019-0365-9]
Abstract
Machine and deep learning methods, today's core of artificial intelligence, have been applied with increasing success and impact in many commercial and research settings. They are powerful tools for large scale data analysis, prediction and classification, especially in very data-rich environments ("big data"), and have started to find their way into medical applications. Here we will first give an overview of machine learning methods, with a focus on deep and recurrent neural networks, their relation to statistics, and the core principles behind them. We will then discuss and review directions along which (deep) neural networks can be, or already have been, applied in the context of psychiatry, and will try to delineate their future potential in this area. We will also comment on an emerging area that so far has been much less well explored: by embedding semantically interpretable computational models of brain dynamics or behavior into a statistical machine learning context, insights into dysfunction beyond mere prediction and classification may be gained. Especially this marriage of computational models with statistical inference may offer insights into neural and behavioral mechanisms that could open completely novel avenues for psychiatric treatment.
Affiliation(s)
- Daniel Durstewitz
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, 68159 Mannheim, Germany
- Georgia Koppe
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, 68159 Mannheim, Germany
- Department of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, 68159 Mannheim, Germany
- Andreas Meyer-Lindenberg
- Department of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, 68159 Mannheim, Germany
26
Shen C, Gonzalez Y, Klages P, Qin N, Jung H, Chen L, Nguyen D, Jiang SB, Jia X. Intelligent inverse treatment planning via deep reinforcement learning, a proof-of-principle study in high dose-rate brachytherapy for cervical cancer. Phys Med Biol 2019; 64:115013. [PMID: 30978709] [DOI: 10.1088/1361-6560/ab18bf]
Abstract
Inverse treatment planning in radiation therapy is formulated as solving optimization problems. The objective function and constraints consist of multiple terms designed for different clinical and practical considerations. Weighting factors of these terms are needed to define the optimization problem. While a treatment planning optimization engine can solve the optimization problem with given weights, adjusting the weights to yield a high-quality plan is typically performed by a human planner. Yet the weight-tuning task is labor intensive, time consuming, and it critically affects the final plan quality. An automatic weight-tuning approach is strongly desired. The procedure of weight adjustment to improve the plan quality is essentially a decision-making problem. Motivated by the tremendous success in deep learning for decision making with human-level intelligence, we propose a novel framework to adjust the weights in a human-like manner. This study used inverse treatment planning in high-dose-rate brachytherapy (HDRBT) for cervical cancer as an example. We developed a weight-tuning policy network (WTPN) that observes dose volume histograms of a plan and outputs an action to adjust organ weighting factors, similar to the behaviors of a human planner. We trained the WTPN via end-to-end deep reinforcement learning. Experience replay was performed with the epsilon greedy algorithm. After training was completed, we applied the trained WTPN to guide treatment planning of five testing patient cases. It was found that the trained WTPN successfully learnt the treatment planning goals and was able to guide the weight tuning process. On average, the quality score of plans generated under the WTPN's guidance was improved by ~8.5% compared to the initial plan with arbitrarily set weights, and by 10.7% compared to the plans generated by human planners. 
To our knowledge, this was the first time that a tool was developed to adjust organ weights for the treatment planning optimization problem in a human-like fashion based on intelligence learnt from a training process, which was different from existing strategies based on pre-defined rules. The study demonstrated potential feasibility to develop intelligent treatment planning approaches via deep reinforcement learning.
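The weight-tuning loop described above can be caricatured as: observe a plan-quality score, apply a multiplicative adjustment to an organ weight, and keep the change only if quality improves, with occasional epsilon-greedy exploration. In the toy sketch below, the quadratic quality function, the action set, and the acceptance rule are all assumptions; the actual WTPN is a trained deep network acting on dose-volume histograms:

```python
import random

def tune_weight(score_fn, w0=1.0, actions=(0.8, 1.0, 1.25),
                steps=50, eps=0.2, seed=0):
    """Greedy weight tuning with epsilon-exploration: at each step, scale the
    organ weight by one multiplicative action; keep the change only if the
    plan-quality score does not decrease."""
    rng = random.Random(seed)
    w, best = w0, score_fn(w0)
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.choice(actions)                     # explore
        else:
            a = max(actions, key=lambda m: score_fn(w * m))  # act greedily
        cand = w * a
        s = score_fn(cand)
        if s >= best:                                   # accept only improvements
            w, best = cand, s
    return w, best

# hypothetical quality function peaking at an "ideal" weight of 2.0
w, best = tune_weight(lambda w: -(w - 2.0) ** 2)
```

Because candidates are accepted only when the score does not decrease, the loop climbs toward the quality peak regardless of the exploration seed, loosely mirroring how the trained WTPN steers weights toward higher plan-quality scores.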
Affiliation(s)
- Chenyang Shen
- Innovative Technology Of Radiotherapy Computation and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75287, United States of America
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75287, United States of America
27
Recent Deep Learning Techniques, Challenges and Its Applications for Medical Healthcare System: A Review. Neural Process Lett 2019. [DOI: 10.1007/s11063-018-09976-2]
28
Napel S, Mu W, Jardim‐Perassi BV, Aerts HJWL, Gillies RJ. Quantitative imaging of cancer in the postgenomic era: Radio(geno)mics, deep learning, and habitats. Cancer 2018; 124:4633-4649. [PMID: 30383900] [PMCID: PMC6482447] [DOI: 10.1002/cncr.31630]
Abstract
Although cancer often is referred to as "a disease of the genes," it is indisputable that the (epi)genetic properties of individual cancer cells are highly variable, even within the same tumor. Hence, preexisting resistant clones will emerge and proliferate after therapeutic selection that targets sensitive clones. Herein, the authors propose that quantitative image analytics, known as "radiomics," can be used to quantify and characterize this heterogeneity. Virtually every patient with cancer is imaged radiologically. Radiomics is predicated on the beliefs that these images reflect underlying pathophysiologies, and that they can be converted into mineable data for improved diagnosis, prognosis, prediction, and therapy monitoring. In the last decade, the radiomics of cancer has grown from a few laboratories to a worldwide enterprise. During this growth, radiomics has established a convention, wherein a large set of annotated image features (1-2000 features) is extracted from segmented regions of interest and used to build classifier models to separate individual patients into their appropriate class (eg, indolent vs aggressive disease). An extension of this conventional radiomics is the application of "deep learning," wherein convolutional neural networks can be used to detect the most informative regions and features without human intervention. A further extension of radiomics involves automatically segmenting informative subregions ("habitats") within tumors, which can be linked to underlying tumor pathophysiology. The goal of the radiomics enterprise is to provide informed decision support for the practice of precision oncology.
Affiliation(s)
- Sandy Napel
- Department of Radiology, Stanford University, Stanford, California
- Wei Mu
- Department of Cancer Physiology, H. Lee Moffitt Cancer Center, Tampa, Florida
- Hugo J. W. L. Aerts
- Dana‐Farber Cancer Institute, Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts
- Robert J. Gillies
- Department of Cancer Physiology, H. Lee Moffitt Cancer Center, Tampa, Florida
29
Miotto R, Wang F, Wang S, Jiang X, Dudley JT. Deep learning for healthcare: review, opportunities and challenges. Brief Bioinform 2018; 19:1236-1246. [PMID: 28481991] [PMCID: PMC6455466] [DOI: 10.1093/bib/bbx044]
Abstract
Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. Both steps are challenging when the data are complicated and sufficient domain knowledge is lacking. The latest advances in deep learning technologies provide new effective paradigms to obtain end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and the need for improved methods development and applications, especially in terms of ease of understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability.
Affiliation(s)
- Riccardo Miotto
- Institute for Next Generation Healthcare, Department of Genetics and Genomic Sciences at the Icahn School of Medicine at Mount Sinai, New York, NY
- Fei Wang
- Division of Health Informatics, Department of Healthcare Policy and Research at Weill Cornell Medicine at Cornell University, New York, NY
- Shuang Wang
- Department of Biomedical Informatics at the University of California San Diego, La Jolla, CA
- Xiaoqian Jiang
- Department of Biomedical Informatics at the University of California San Diego, La Jolla, CA
- Joel T Dudley
- Institute for Next Generation Healthcare and Department of Genetics and Genomic Sciences at the Icahn School of Medicine at Mount Sinai, New York, NY
30
Reading the (functional) writing on the (structural) wall: Multimodal fusion of brain structure and function via a deep neural network based translation approach reveals novel impairments in schizophrenia. Neuroimage 2018; 181:734-747. [PMID: 30055372] [DOI: 10.1016/j.neuroimage.2018.07.047]
Abstract
This work presents a novel approach to finding linkage/association between multimodal brain imaging data, such as structural MRI (sMRI) and functional MRI (fMRI). Motivated by the machine translation domain, we employ a deep learning model and treat two different imaging views of the same brain as two different languages conveying some common facts; that analogy enables finding linkages between the two modalities. The proposed translation-based fusion model contains a computing layer that learns "alignments" (or links) between dynamic connectivity features from fMRI data and static gray matter patterns from sMRI data. The approach is evaluated on a multi-site dataset consisting of eyes-closed resting state imaging data collected from 298 subjects (age- and gender-matched 154 healthy controls and 144 patients with schizophrenia). Results are further confirmed on an independent dataset consisting of eyes-open resting state imaging data from 189 subjects (age- and gender-matched 91 healthy controls and 98 patients with schizophrenia). We used dynamic functional connectivity (dFNC) states as the functional features and ICA-based sources from gray matter densities as the structural features. The dFNC states characterized by weakly correlated intrinsic connectivity networks (ICNs) were found to have a stronger association with putamen and insular gray matter patterns, while the dFNC states of profuse strongly correlated ICNs exhibited stronger links with the gray matter pattern in the precuneus, posterior cingulate cortex (PCC), and temporal cortex. Further investigation with the estimated link strength (or alignment score) showed significant group differences between healthy controls and patients with schizophrenia in several key regions, including the temporal lobe, and linked these to connectivity states showing less occupancy in healthy controls.
Moreover, this novel approach revealed significant correlation between a cognitive score (attention/vigilance) and the function/structure alignment score that was not detected when data modalities were considered separately.
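The "alignment" idea at the heart of a translation-style fusion layer can be illustrated with a plain softmax-normalised dot-product score between feature vectors from the two modalities. This is a generic sketch of the concept, not the authors' architecture; the feature vectors passed in are arbitrary placeholders.

```python
import math

def alignment_scores(functional_feats, structural_feats):
    """For each functional feature vector, compute softmax-normalised
    dot-product alignments against every structural feature vector.
    Returns one row of scores (summing to 1) per functional vector."""
    out = []
    for f in functional_feats:
        raw = [sum(a * b for a, b in zip(f, s)) for s in structural_feats]
        m = max(raw)                               # shift for numerical stability
        exps = [math.exp(r - m) for r in raw]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out
```

A structural vector that points in the same direction as the functional vector receives the highest alignment score, which is the sense in which such a layer "links" patterns across modalities.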
31
Mahmud M, Kaiser MS, Hussain A, Vassanelli S. Applications of Deep Learning and Reinforcement Learning to Biological Data. IEEE Trans Neural Netw Learn Syst 2018; 29:2063-2079. [PMID: 29771663] [DOI: 10.1109/tnnls.2018.2790388]
Abstract
Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains, such as omics, bioimaging, medical imaging, and (brain/body)-machine interfaces. These have generated novel opportunities for development of dedicated data-intensive machine learning techniques. In particular, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promises to revolutionize the future of artificial intelligence. The growth in computational power, accompanied by faster and increased data storage and declining computing costs, has already allowed scientists in various fields to apply these techniques on data sets that were previously intractable owing to their size and complexity. This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data. In addition, we compare the performances of DL techniques when applied to different data sets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.
32
Islam J, Zhang Y. Brain MRI analysis for Alzheimer's disease diagnosis using an ensemble system of deep convolutional neural networks. Brain Inform 2018; 5:2. [PMID: 29881892] [PMCID: PMC6170939] [DOI: 10.1186/s40708-018-0080-3]
Abstract
Alzheimer's disease is an incurable, progressive neurological brain disorder. Earlier detection of Alzheimer's disease can help with proper treatment and prevent brain tissue damage. Several statistical and machine learning models have been exploited by researchers for Alzheimer's disease diagnosis. Analyzing magnetic resonance imaging (MRI) is a common practice for Alzheimer's disease diagnosis in clinical research. Detection of Alzheimer's disease is challenging because of the similarity between Alzheimer's disease MRI data and the MRI data of healthy older people. Recently, advanced deep learning techniques have successfully demonstrated human-level performance in numerous fields including medical image analysis. We propose a deep convolutional neural network for Alzheimer's disease diagnosis using brain MRI data analysis. While most of the existing approaches perform binary classification, our model can identify different stages of Alzheimer's disease and obtains superior performance for early-stage diagnosis. We conducted ample experiments to demonstrate that our proposed model outperformed comparative baselines on the Open Access Series of Imaging Studies dataset.
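The ensembling step of such a system, combining the outputs of several trained networks, can be reduced to a majority vote, one common combination rule (the paper's exact rule may differ). In the sketch below the three "models" are toy callables rather than CNNs; only the voting logic is meant to be representative.

```python
from collections import Counter

def make_model(bias):
    """Hypothetical stand-in for a trained CNN branch: maps an image (a flat
    list of intensities) to one of four stage labels, 0-3."""
    def predict(image):
        # toy rule: mean intensity plus a model-specific bias, bucketed into 4 stages
        score = sum(image) / len(image) + bias
        return min(3, max(0, int(score)))
    return predict

# three slightly different "models", as in an ensemble of independently trained nets
models = [make_model(b) for b in (0.0, 0.4, -0.3)]

def ensemble_predict(image):
    """Majority vote across the individual classifiers."""
    votes = [m(image) for m in models]
    return Counter(votes).most_common(1)[0][0]
```

The vote lets the ensemble suppress the occasional misprediction of any single branch, which is the usual motivation for ensembling stage classifiers.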
Affiliation(s)
- Jyoti Islam
- Department of Computer Science, Georgia State University, Atlanta, GA 30302-5060 USA
- Yanqing Zhang
- Department of Computer Science, Georgia State University, Atlanta, GA 30302-5060 USA
33
Mohanty R, Sinha AM, Remsik AB, Dodd KC, Young BM, Jacobson T, McMillan M, Thoma J, Advani H, Nair VA, Kang TJ, Caldera K, Edwards DF, Williams JC, Prabhakaran V. Machine Learning Classification to Identify the Stage of Brain-Computer Interface Therapy for Stroke Rehabilitation Using Functional Connectivity. Front Neurosci 2018; 12:353. [PMID: 29896082] [PMCID: PMC5986965] [DOI: 10.3389/fnins.2018.00353]
Abstract
Interventional therapy using brain-computer interface (BCI) technology has shown promise in facilitating motor recovery in stroke survivors; however, the impact of this form of intervention on functional networks outside of the motor network specifically is not well-understood. Here, we investigated resting-state functional connectivity (rs-FC) in stroke participants undergoing BCI therapy across stages, namely pre- and post-intervention, to identify discriminative functional changes using a machine learning classifier with the goal of categorizing participants into one of the two therapy stages. Twenty chronic stroke participants with persistent upper-extremity motor impairment received neuromodulatory training using a closed-loop neurofeedback BCI device, and rs-functional MRI (rs-fMRI) scans were collected at four time points: pre-, mid-, post-, and 1 month post-therapy. To evaluate the peak effects of this intervention, rs-FC was analyzed from two specific stages, namely pre- and post-therapy. In total, 236 seeds spanning both motor and non-motor regions of the brain were computed at each stage. A univariate feature selection was applied to reduce the number of features followed by a principal component-based data transformation used by a linear binary support vector machine (SVM) classifier to classify each participant into a therapy stage. The SVM classifier achieved a cross-validation accuracy of 92.5% using a leave-one-out method. Outside of the motor network, seeds from the fronto-parietal task control, default mode, subcortical, and visual networks emerged as important contributors to the classification. Furthermore, a higher number of functional changes were observed to be strengthening from the pre- to post-therapy stage than the ones weakening, both of which involved motor and non-motor regions of the brain. 
These findings may provide new evidence supporting the potential clinical utility of BCI therapy as a form of stroke rehabilitation that not only benefits motor recovery but also facilitates recovery in other brain networks. Moreover, delineating the stronger and weaker changes may inform more optimal designs of BCI interventional therapy that reinforce strengthening changes and counteract weakening ones during recovery.
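The evaluation pipeline above (univariate feature selection, dimensionality reduction, a linear classifier, and leave-one-out scoring) can be sketched in a simplified, dependency-free form. To stay self-contained, the sketch substitutes a nearest-centroid classifier for the study's PCA + linear SVM (a plainly-named swap), and performs the feature selection inside each training fold to avoid leakage.

```python
def top_k_features(X, y, k):
    """Univariate selection: rank features by absolute difference of class means."""
    idx0 = [i for i, lab in enumerate(y) if lab == 0]
    idx1 = [i for i, lab in enumerate(y) if lab == 1]
    scores = []
    for j in range(len(X[0])):
        m0 = sum(X[i][j] for i in idx0) / len(idx0)
        m1 = sum(X[i][j] for i in idx1) / len(idx1)
        scores.append((abs(m1 - m0), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]

def loo_accuracy(X, y, k=1):
    """Leave-one-out CV: hold out each sample, select features and fit a
    nearest-centroid classifier on the rest, then test on the held-out sample."""
    correct = 0
    for i in range(len(X)):
        X_tr = X[:i] + X[i + 1:]
        y_tr = y[:i] + y[i + 1:]
        feats = top_k_features(X_tr, y_tr, k)  # selection inside the fold

        def centroid(label):
            rows = [X_tr[r] for r in range(len(X_tr)) if y_tr[r] == label]
            return [sum(row[j] for row in rows) / len(rows) for j in feats]

        c0, c1 = centroid(0), centroid(1)
        x = [X[i][j] for j in feats]
        d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
        d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
        correct += ((0 if d0 < d1 else 1) == y[i])
    return correct / len(X)
```

Fitting the feature selector only on each fold's training set mirrors the reported design: with 236 seeds and 20 participants, selecting features on the full dataset before cross-validation would inflate the accuracy estimate.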
Affiliation(s)
- Rosaleena Mohanty
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States; Department of Electrical Engineering, University of Wisconsin-Madison, Madison, WI, United States
- Anita M Sinha
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States; Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, United States
- Alexander B Remsik
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States; Department of Kinesiology, University of Wisconsin-Madison, Madison, WI, United States
- Keith C Dodd
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States; Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, United States
- Brittany M Young
- Medical Scientist Training Program, University of Wisconsin-Madison, Madison, WI, United States; Neuroscience Training Program, University of Wisconsin-Madison, Madison, WI, United States
- Tyler Jacobson
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States; Department of Psychology, University of Wisconsin-Madison, Madison, WI, United States
- Matthew McMillan
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States; Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, United States
- Jaclyn Thoma
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States; Neuroscience Training Program, University of Wisconsin-Madison, Madison, WI, United States
- Hemali Advani
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States
- Veena A Nair
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States
- Theresa J Kang
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States
- Kristin Caldera
- Department of Orthopedics and Rehabilitation, University of Wisconsin-Madison, Madison, WI, United States
- Dorothy F Edwards
- Department of Kinesiology, University of Wisconsin-Madison, Madison, WI, United States
- Justin C Williams
- Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, United States
- Vivek Prabhakaran
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States; Medical Scientist Training Program, University of Wisconsin-Madison, Madison, WI, United States; Neuroscience Training Program, University of Wisconsin-Madison, Madison, WI, United States; Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, United States
34
Bermudez C, Plassard AJ, Davis TL, Newton AT, Resnick SM, Landman BA. Learning Implicit Brain MRI Manifolds with Deep Learning. Proc SPIE Int Soc Opt Eng 2018; 10574:105741L. [PMID: 29887659] [PMCID: PMC5990281] [DOI: 10.1117/12.2293515]
Abstract
An important task in image processing and neuroimaging is to extract quantitative information from the acquired images in order to make observations about the presence of disease or markers of development in populations. Having a low-dimensional manifold of an image allows for easier statistical comparisons between groups and the synthesis of group representatives. Previous studies have sought to identify the best mapping of brain MRI to a low-dimensional manifold but have been limited by assumptions of explicit similarity measures. In this work, we use deep learning techniques to investigate implicit manifolds of normal brains and generate new, high-quality images. We explore implicit manifolds by addressing the problems of image synthesis and image denoising as important tools in manifold learning. First, we propose the unsupervised synthesis of T1-weighted brain MRI using a Generative Adversarial Network (GAN) learned from 528 examples of 2D axial slices of brain MRI. Synthesized images were first shown to be unique by performing a cross-correlation with the training set. Real and synthesized images were then assessed in a blinded manner by two imaging experts, who provided image quality scores of 1-5. The quality scores of the synthetic images showed substantial overlap with those of the real images. Moreover, we use an autoencoder with skip connections for image denoising, showing that the proposed method results in higher PSNR than FSL SUSAN after denoising. This work shows the power of artificial neural networks to synthesize realistic imaging data, which can be used to improve image processing techniques and provide a quantitative framework for assessing structural changes in the brain.
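The denoising comparison above is scored with PSNR, which is simply a log-scaled inverse of the mean squared error between a reference image and a test image. A minimal version over flat pixel lists, assuming an 8-bit intensity range:

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images given as flat
    pixel lists; higher means the test image is closer to the reference."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

With `max_val=255`, an MSE of 64 corresponds to roughly 30 dB; comparing the denoiser's output against a clean reference at higher PSNR than a baseline (here, FSL SUSAN) is the criterion the paper reports.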
Affiliation(s)
- Camilo Bermudez
- Department of Biomedical Engineering, Vanderbilt University, 2201 West End Ave, Nashville, TN, USA 37235
- Andrew J Plassard
- Department of Computer Science, Vanderbilt University, 2201 West End Ave, Nashville, TN, USA 37235
- Taylor L Davis
- Department of Radiology, Vanderbilt University Medical Center, 2201 West End Ave, Nashville, TN, USA 37235
- Allen T Newton
- Department of Radiology, Vanderbilt University Medical Center, 2201 West End Ave, Nashville, TN, USA 37235
- Susan M Resnick
- Department of Computer Science, Vanderbilt University, 2201 West End Ave, Nashville, TN, USA 37235
- Bennett A Landman
- Department of Biomedical Engineering, Vanderbilt University, 2201 West End Ave, Nashville, TN, USA 37235
35
Deep Convolutional Neural Networks for Automated Diagnosis of Alzheimer’s Disease and Mild Cognitive Impairment Using 3D Brain MRI. Brain Inform 2018. [DOI: 10.1007/978-3-030-05587-5_34]
36
Guo X, Dominick KC, Minai AA, Li H, Erickson CA, Lu LJ. Diagnosing Autism Spectrum Disorder from Brain Resting-State Functional Connectivity Patterns Using a Deep Neural Network with a Novel Feature Selection Method. Front Neurosci 2017; 11:460. [PMID: 28871217] [PMCID: PMC5566619] [DOI: 10.3389/fnins.2017.00460]
Abstract
The whole-brain functional connectivity (FC) pattern obtained from resting-state functional magnetic resonance imaging data is commonly applied to study neuropsychiatric conditions such as autism spectrum disorder (ASD) using different machine learning models. Recent studies indicate that both hyper- and hypo-aberrant ASD-associated FCs are widely distributed throughout the entire brain rather than confined to specific brain regions. Deep neural networks (DNN) with multiple hidden layers have shown the ability to systematically extract lower-to-higher level information from high-dimensional data across a series of neural hidden layers, significantly improving classification accuracy for such data. In this study, a DNN with a novel feature selection method (DNN-FS) is developed for the high-dimensional whole-brain resting-state FC pattern classification of ASD patients vs. typical development (TD) controls. The feature selection method helps the DNN generate low-dimensional, high-quality representations of the whole-brain FC patterns by selecting features with high discriminating power from multiple trained sparse auto-encoders. For comparison, a DNN without the feature selection method (DNN-woFS) is developed, and both are tested with different architectures (i.e., with different numbers of hidden layers/nodes). Results show that the best classification accuracy of 86.36% is achieved by the DNN-FS approach with 3 hidden layers and 150 hidden nodes (3/150). Remarkably, DNN-FS outperforms DNN-woFS for all architectures studied. The most significant accuracy improvement was 9.09% with the 3/150 architecture. The method also outperforms other feature selection methods, e.g., the two-sample t-test and the elastic net. In addition to improving the classification accuracy, a Fisher's score-based biomarker identification method built on the DNN is also developed and used to identify 32 FCs related to ASD.
These FCs come from or cross different pre-defined brain networks including the default-mode, cingulo-opercular, frontal-parietal, and cerebellum networks. Thirteen of them are statistically significant between the ASD and TD groups (two-sample t-test p < 0.05) while 19 are not. The relationship between the statistically significant FCs and the corresponding ASD behavioral symptoms is discussed based on the literature and clinicians' expert knowledge. A potential reason why the remaining 19 FCs did not reach statistical significance is also provided.
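The Fisher's score used for biomarker ranking has a simple two-class form: the squared difference of group means divided by the sum of within-group variances. The sketch below uses that common form; the exact variant used by the authors may differ.

```python
def fisher_score(feature, labels):
    """Two-class Fisher score of one feature:
    (difference of group means)^2 / (sum of within-group variances).
    Larger scores indicate features that separate the groups better."""
    g0 = [x for x, y in zip(feature, labels) if y == 0]
    g1 = [x for x, y in zip(feature, labels) if y == 1]
    m0 = sum(g0) / len(g0)
    m1 = sum(g1) / len(g1)
    v0 = sum((x - m0) ** 2 for x in g0) / len(g0)
    v1 = sum((x - m1) ** 2 for x in g1) / len(g1)
    return (m1 - m0) ** 2 / (v0 + v1)
```

Ranking all candidate FCs by this score and keeping the top-scoring ones is the essence of a Fisher's score-based biomarker selection.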
Affiliation(s)
- Xinyu Guo
- Division of Biomedical Informatics, Cincinnati Children's Hospital Research Foundation, Cincinnati, OH, United States
- Department of Electrical Engineering and Computing Systems, University of Cincinnati, Cincinnati, OH, United States
- Kelli C. Dominick
- The Kelly O'Leary Center for Autism Spectrum Disorders, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Ali A. Minai
- Department of Electrical Engineering and Computing Systems, University of Cincinnati, Cincinnati, OH, United States
- Hailong Li
- Division of Biomedical Informatics, Cincinnati Children's Hospital Research Foundation, Cincinnati, OH, United States
- Craig A. Erickson
- The Kelly O'Leary Center for Autism Spectrum Disorders, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Long J. Lu
- Division of Biomedical Informatics, Cincinnati Children's Hospital Research Foundation, Cincinnati, OH, United States
- Department of Electrical Engineering and Computing Systems, University of Cincinnati, Cincinnati, OH, United States
- School of Information Management, Wuhan University, Wuhan, China
- Department of Environmental Health, College of Medicine, University of Cincinnati, Cincinnati, OH, United States
37
Abstract
This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.
Affiliation(s)
- Dinggang Shen
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Guorong Wu
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
38
Suk HI, Lee SW, Shen D. Deep ensemble learning of sparse regression models for brain disease diagnosis. Med Image Anal 2017; 37:101-113. [PMID: 28167394] [PMCID: PMC5808465] [DOI: 10.1016/j.media.2017.01.008]
Abstract
Recent studies on brain imaging analysis witnessed the core roles of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Among various machine-learning techniques, sparse regression models have proved their effectiveness in handling high-dimensional data with a small number of training samples, especially in medical problems. In the meantime, deep learning methods have achieved great success, outperforming the state of the art in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each with a different value of the regularization control parameter. Our multiple sparse regression models thus potentially select different feature subsets from the original feature set, and thereby have different powers to predict the response values, i.e., the clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which we call the 'Deep Ensemble Sparse Regression Network.' To the best of our knowledge, this is the first work to combine sparse regression models with a deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared them with previous studies on the ADNI cohort in the literature.
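The first stage described above, multiple sparse regression models trained with different regularization strengths, can be illustrated with a minimal L1-regularised regression solved by proximal gradient descent (ISTA). This is a generic sketch of sparse regression with a swept penalty, not the authors' model; stacking each model's predictions per sample then plays the role of the "target-level representation" fed to the deep network.

```python
def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrink toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def lasso_ista(X, y, lam, lr=0.01, iters=3000):
    """L1-regularised least squares (lasso) via proximal gradient descent."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        # gradient of (1/2n) * ||Xw - y||^2
        resid = [sum(X[i][j] * w[j] for j in range(d)) - y[i] for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n)) / n for j in range(d)]
        # gradient step followed by the L1 proximal (soft-threshold) step
        w = [soft_threshold(w[j] - lr * grad[j], lr * lam) for j in range(d)]
    return w

def ensemble_representation(X, y, lambdas):
    """Train one sparse model per regularization value and stack their
    per-sample predictions as a learned representation."""
    models = [lasso_ista(X, y, lam) for lam in lambdas]
    return [[sum(xi * wi for xi, wi in zip(row, w)) for w in models] for row in X]
```

Larger penalties shrink the coefficients (and zero out weak ones), so models trained across a sweep of lambdas effectively select different feature subsets, which is the diversity the ensemble exploits.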
Affiliation(s)
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea.
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Dinggang Shen
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea; Biomedical Research Imaging Center and Department of Radiology, University of North Carolina at Chapel Hill, NC 27599, USA
39
Vieira S, Pinaya WHL, Mechelli A. Using deep learning to investigate the neuroimaging correlates of psychiatric and neurological disorders: Methods and applications. Neurosci Biobehav Rev 2017; 74:58-75. [PMID: 28087243] [DOI: 10.1016/j.neubiorev.2017.01.002]
Abstract
Deep learning (DL) is a family of machine learning methods that has gained considerable attention in the scientific community, breaking benchmark records in areas such as speech and visual recognition. DL differs from conventional machine learning methods by virtue of its ability to learn the optimal representation from the raw data through consecutive nonlinear transformations, achieving increasingly higher levels of abstraction and complexity. Given its ability to detect abstract and complex patterns, DL has been applied in neuroimaging studies of psychiatric and neurological disorders, which are characterised by subtle and diffuse alterations. Here we introduce the underlying concepts of DL and review studies that have used this approach to classify brain-based disorders. The results of these studies indicate that DL could be a powerful tool in the current search for biomarkers of psychiatric and neurologic disease. We conclude our review by discussing the main promises and challenges of using DL to elucidate brain-based disorders, as well as possible directions for future research.
Affiliation(s)
- Sandra Vieira
- Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, 16 De Crespigny Park, SE5 8AF, United Kingdom.
- Walter H L Pinaya
- Centre of Mathematics, Computation, and Cognition, Universidade Federal do ABC, Rua Arcturus, Jardim Antares, São Bernardo do Campo, SP CEP 09.606-070, Brazil
- Andrea Mechelli
- Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, 16 De Crespigny Park, SE5 8AF, United Kingdom
40
Parisot S, Ktena SI, Ferrante E, Lee M, Moreno RG, Glocker B, Rueckert D. Spectral Graph Convolutions for Population-Based Disease Prediction. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017. [DOI: 10.1007/978-3-319-66179-7_21]
41
A Novel Deep Learning Based Multi-class Classification Method for Alzheimer’s Disease Detection Using Brain MRI Data. Brain Inform 2017. [DOI: 10.1007/978-3-319-70772-3_20]
42
Shi J, Zhou S, Liu X, Zhang Q, Lu M, Wang T. Stacked deep polynomial network based representation learning for tumor classification with small ultrasound image dataset. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.01.074]
43
Calhoun VD, Sui J. Multimodal fusion of brain imaging data: A key to finding the missing link(s) in complex mental illness. BIOLOGICAL PSYCHIATRY. COGNITIVE NEUROSCIENCE AND NEUROIMAGING 2016; 1:230-244. [PMID: 27347565 PMCID: PMC4917230 DOI: 10.1016/j.bpsc.2015.12.005] [Citation(s) in RCA: 182] [Impact Index Per Article: 20.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
It is becoming increasingly clear that combining multimodal brain imaging data can provide more information about individual subjects by exploiting the rich joint information that exists across modalities. However, the number of studies that perform true multimodal fusion (i.e., capitalizing on joint information among modalities) remains remarkably small given the known benefits. In part, this is because multimodal studies require broader expertise in collecting and analyzing data, and in interpreting the results, than unimodal studies do. In this paper, we begin by introducing the basic reasons why multimodal data fusion is important, what it can do, and, importantly, how it can help us avoid wrong conclusions and compensate for imperfect brain imaging studies. We also discuss the challenges that must be confronted for such approaches to be more widely adopted by the community. We then review the diverse studies that have used multimodal data fusion (primarily focused on psychosis) and introduce some of the existing analytic approaches. Finally, we discuss some up-and-coming approaches to multimodal fusion, including deep learning and multimodal classification, which show considerable promise. Our conclusion is that multimodal data fusion is growing rapidly but remains underutilized. The complexity of the human brain, coupled with the incomplete measurement provided by existing imaging technology, makes multimodal fusion essential to mitigate misdirection and, hopefully, to provide a key to finding the missing link(s) in complex mental illness.
Affiliation(s)
- Vince D Calhoun
- The Mind Research Network & LBERI, Albuquerque, New Mexico; Dept. of ECE, University of New Mexico, Albuquerque, New Mexico
- Jing Sui
- The Mind Research Network & LBERI, Albuquerque, New Mexico; Brainnetome Center and National Laboratory of Pattern Recognition, Beijing, China; CAS Center for Excellence in Brain Science, Institute of Automation, Chinese Academy of Sciences, Beijing, China
44
Zheng C, Xia Y, Pan Y, Chen J. Automated identification of dementia using medical imaging: a survey from a pattern classification perspective. Brain Inform 2015; 3:17-27. [PMID: 27747596 PMCID: PMC4883162 DOI: 10.1007/s40708-015-0027-x] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2015] [Accepted: 11/26/2015] [Indexed: 11/28/2022] Open
Abstract
In this review, we summarize the automated dementia identification algorithms in the literature from a pattern classification perspective. Since most of these algorithms consist of both feature extraction and classification, we survey three categories of feature extraction methods, namely the voxel-, vertex-, and ROI-based approaches, and four categories of classifiers: linear discriminant analysis, Bayes classifiers, support vector machines, and artificial neural networks. We also compare the reported performance of many recently published dementia identification algorithms. Our comparison shows that many algorithms can differentiate Alzheimer's disease (AD) from normal elderly controls with largely satisfactory accuracy, whereas distinguishing mild cognitive impairment from AD or from normal elderly controls remains a major challenge.
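The two-stage pipeline this survey describes (region-wise feature extraction followed by a simple classifier) can be sketched in a few lines. The sketch below pairs ROI-averaged features with a nearest-centroid rule, which is the Bayes-optimal classifier under equal spherical class covariances; the data layout and class setup are entirely hypothetical, and this is a generic illustration, not any specific algorithm from the surveyed papers.

```python
import numpy as np

def roi_features(voxels, roi_labels, n_rois):
    # Collapse a voxel-wise image into one mean-intensity feature per ROI.
    return np.array([voxels[roi_labels == r].mean() for r in range(n_rois)])

class NearestCentroid:
    # Minimal Bayes classifier assuming equal spherical covariance per class:
    # assign each sample to the class with the closest mean feature vector.
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

Real pipelines in the surveyed literature use registered atlases and cross-validated SVMs; this toy stands in only to make the feature-extraction/classification split explicit.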
Affiliation(s)
- Chuanchuan Zheng
- Shaanxi Key Lab of Speech & Image Information Processing (SAIIP), School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, 710072, People's Republic of China
- Centre for Multidisciplinary Convergence Computing (CMCC), School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, 710072, People's Republic of China
- Yong Xia
- Shaanxi Key Lab of Speech & Image Information Processing (SAIIP), School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, 710072, People's Republic of China
- Centre for Multidisciplinary Convergence Computing (CMCC), School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, 710072, People's Republic of China
- Yongsheng Pan
- Shaanxi Key Lab of Speech & Image Information Processing (SAIIP), School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, 710072, People's Republic of China
- Centre for Multidisciplinary Convergence Computing (CMCC), School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, 710072, People's Republic of China
- Jinhu Chen
- Department of Radiation Oncology, Shandong Tumor Hospital, Jinan, 250117, People's Republic of China
45
Kim J, Calhoun VD, Shim E, Lee JH. Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia. Neuroimage 2015; 124:127-146. [PMID: 25987366 DOI: 10.1016/j.neuroimage.2015.05.018] [Citation(s) in RCA: 195] [Impact Index Per Article: 19.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2014] [Revised: 05/01/2015] [Accepted: 05/07/2015] [Indexed: 12/19/2022] Open
Abstract
Functional connectivity (FC) patterns obtained from resting-state functional magnetic resonance imaging data are commonly employed to study neuropsychiatric conditions by using pattern classifiers such as the support vector machine (SVM). Meanwhile, a deep neural network (DNN) with multiple hidden layers has shown its ability to systematically extract lower-to-higher level information of image and speech data from lower-to-higher hidden layers, markedly enhancing classification accuracy. The objective of this study was to adopt the DNN for whole-brain resting-state FC pattern classification of schizophrenia (SZ) patients vs. healthy controls (HCs) and identification of aberrant FC patterns associated with SZ. We hypothesized that the lower-to-higher level features learned via the DNN would significantly enhance the classification accuracy, and proposed an adaptive learning algorithm to explicitly control the weight sparsity in each hidden layer via L1-norm regularization. Furthermore, the weights were initialized via stacked autoencoder based pre-training to further improve the classification performance. Classification accuracy was systematically evaluated as a function of (1) the number of hidden layers/nodes, (2) the use of L1-norm regularization, (3) the use of the pre-training, (4) the use of framewise displacement (FD) removal, and (5) the use of anatomical/functional parcellation. Using FC patterns from anatomically parcellated regions without FD removal, an error rate of 14.2% was achieved by employing three hidden layers and 50 hidden nodes with both L1-norm regularization and pre-training, which was substantially lower than the error rate from the SVM (22.3%). Moreover, the trained DNN weights (i.e., the learned features) were found to represent the hierarchical organization of aberrant FC patterns in SZ compared with HC. 
Specifically, pairs of nodes extracted from the lower hidden layer represented sparse FC patterns implicated in SZ, as quantified using kurtosis/modularity measures, whereas features from the higher hidden layer showed holistic/global FC patterns differentiating SZ from HC. Our proposed schemes and the findings obtained using the DNN classifier and whole-brain FC data suggest that such approaches offer an improved ability to learn hidden patterns in brain imaging data, which may be useful for developing diagnostic tools for SZ and other neuropsychiatric disorders and for identifying the associated aberrant FC patterns.
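The weight-sparsity control via L1-norm regularization that this abstract describes can be sketched as a proximal (soft-thresholding) step applied after each gradient update. Everything below (network size, synthetic data, learning rate) is an illustrative assumption, not the paper's actual architecture, pre-training scheme, or training code.

```python
import numpy as np

def l1_prox(W, lam):
    # Soft-thresholding: the proximal operator of lam * ||W||_1.
    # Weights with magnitude below lam are driven exactly to zero.
    return np.sign(W) * np.maximum(np.abs(W) - lam, 0.0)

def train_sparse_mlp(X, y, hidden=8, lam=1e-2, lr=0.1, steps=200, seed=0):
    # One-hidden-layer network with an L1 penalty on the input weights,
    # trained by proximal gradient descent on a logistic loss.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden))
    w2 = rng.normal(scale=0.1, size=hidden)
    for _ in range(steps):
        h = np.tanh(X @ W1)                 # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ w2))) # sigmoid output
        g = (p - y) / n                     # logistic-loss gradient wrt logits
        gw2 = h.T @ g
        gh = np.outer(g, w2) * (1.0 - h**2) # backprop through tanh
        gW1 = X.T @ gh
        w2 -= lr * gw2
        W1 = l1_prox(W1 - lr * gW1, lr * lam)  # sparsity-inducing step
    return W1, w2
```

The proximal step is what makes the sparsity explicit: rather than merely shrinking weights, it zeroes out small ones each iteration, mirroring the abstract's goal of controlling weight sparsity per layer.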
Affiliation(s)
- Junghoe Kim
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Vince D Calhoun
- Department of Electrical and Computer Engineering, University of New Mexico, NM, USA; The Mind Research Network & LBERI, NM, USA
- Eunsoo Shim
- Samsung Advanced Institute of Technology, Samsung Electronics, Suwon, Republic of Korea
- Jong-Hwan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
46
Brosch T, Tam R. Efficient Training of Convolutional Deep Belief Networks in the Frequency Domain for Application to High-Resolution 2D and 3D Images. Neural Comput 2015; 27:211-27. [DOI: 10.1162/neco_a_00682] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Deep learning has traditionally been computationally expensive, and advances in training methods have been the prerequisite for improving its efficiency in order to expand its application to a variety of image classification problems. In this letter, we address the problem of efficient training of convolutional deep belief networks by learning the weights in the frequency domain, which eliminates the time-consuming calculation of convolutions. An essential consideration in the design of the algorithm is to minimize the number of transformations to and from frequency space. We have evaluated the running time improvements using two standard benchmark data sets, showing a speed-up of up to 8 times on 2D images and up to 200 times on 3D volumes. Our training algorithm makes training of convolutional deep belief networks on 3D medical images with a resolution of up to 128 × 128 × 128 voxels practical, which opens new directions for using deep learning for medical image analysis.
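The core idea behind frequency-domain training, that convolution becomes an elementwise product after a Fourier transform, can be checked in a few lines. This is a generic illustration of the convolution theorem, not the authors' batched convolutional deep belief network code.

```python
import numpy as np

def conv_circular_spatial(a, b):
    # Direct circular convolution: O(n^2) multiply-adds.
    n = len(a)
    return np.array([sum(a[j] * b[(i - j) % n] for j in range(n))
                     for i in range(n)])

def conv_circular_freq(a, b):
    # Convolution theorem: transform once, multiply elementwise (O(n)),
    # transform back. This is what eliminates explicit convolutions.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
```

As the abstract notes, the practical gain depends on minimizing round trips to and from frequency space, e.g. keeping filter weights in the frequency domain across training updates rather than transforming them at every step.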
Affiliation(s)
- Tom Brosch
- MS/MRI Research Group, Vancouver, BC V6T 2B5, Canada, and Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Roger Tam
- MS/MRI Research Group, Vancouver, BC V6T 2B5, Canada, and Department of Radiology, University of British Columbia, Vancouver, BC V5Z 1M9, Canada
47
Liu S, Liu S, Cai W, Che H, Pujol S, Kikinis R, Feng D, Fulham MJ. Multimodal neuroimaging feature learning for multiclass diagnosis of Alzheimer's disease. IEEE Trans Biomed Eng 2014; 62:1132-40. [PMID: 25423647 DOI: 10.1109/tbme.2014.2372011] [Citation(s) in RCA: 229] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
The accurate diagnosis of Alzheimer's disease (AD) is essential for patient care and will become increasingly important as disease-modifying agents become available early in the course of the disease. Although studies have applied machine learning methods to the computer-aided diagnosis of AD, previous methods showed a bottleneck in diagnostic performance due to the lack of efficient strategies for representing neuroimaging biomarkers. In this study, we designed a novel diagnostic framework with a deep learning architecture to aid the diagnosis of AD. The framework uses a zero-masking strategy for data fusion to extract complementary information from multiple data modalities. Compared with previous state-of-the-art workflows, our method can fuse multimodal neuroimaging features in one setting and has the potential to require less labeled data. A performance gain was achieved in both binary and multiclass classification of AD. The advantages and limitations of the proposed framework are discussed.
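The zero-masking fusion strategy can be sketched as a corruption step applied before autoencoder reconstruction: one modality's features are blanked, so the network must predict them from the other modality and thereby learns cross-modal structure. The modality layout and masking policy below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def zero_mask(x, modality_slices, rng):
    # Blank one randomly chosen modality; an autoencoder trained to
    # reconstruct the uncorrupted input must then infer the blanked
    # modality's features from the remaining one.
    x = x.copy()
    blanked = modality_slices[rng.integers(len(modality_slices))]
    x[..., blanked] = 0.0
    return x

# Hypothetical layout: MRI-derived features in columns 0-2, PET in 3-5.
modalities = [slice(0, 3), slice(3, 6)]
```

During training, each sample would be passed through `zero_mask` before encoding, while the reconstruction loss is computed against the original, uncorrupted feature vector.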
48
Unsupervised Pre-training Across Image Domains Improves Lung Tissue Classification. MEDICAL COMPUTER VISION: ALGORITHMS FOR BIG DATA 2014. [DOI: 10.1007/978-3-319-13972-2_8] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]