1. Chen Q, Wang L, Deng Z, Wang R, Wang L, Jian C, Zhu YM. Cooperative multi-task learning and interpretable image biomarkers for glioma grading and molecular subtyping. Med Image Anal 2025; 101:103435. PMID: 39778265. DOI: 10.1016/j.media.2024.103435.
Abstract
Deep learning methods have been widely used for various glioma prediction tasks. However, they are usually task-specific, segmentation-dependent and lack interpretable biomarkers. Accurately predicting glioma histological grade and molecular subtypes simultaneously, while providing reliable imaging biomarkers, remains challenging. To achieve this, we propose a novel cooperative multi-task learning network (CMTLNet) which consists of a task-common feature extraction (CFE) module, a task-specific unique feature extraction (UFE) module and a unique-common feature collaborative classification (UCFC) module. In CFE, a segmentation-free tumor feature perception (SFTFP) module is first designed to extract tumor-aware features in a classification manner rather than a segmentation manner. Based on the multi-scale tumor-aware features extracted by the SFTFP module, CFE then uses convolutional layers to further refine these features, from which the task-common features are learned. In UFE, task-specific unique features are extracted using orthogonal projection and conditional classification strategies. In UCFC, the unique and common features are fused with an attention mechanism to adapt them to different glioma prediction tasks. Finally, deep-feature-guided interpretable radiomic biomarkers for each glioma prediction task are explored by combining SHAP values and correlation analysis. Through comparisons with recently reported methods on a large multi-center dataset comprising over 1800 cases, we demonstrated the superiority of the proposed CMTLNet, with the mean Matthews correlation coefficient in the validation and test sets improved by (4.1%, 10.7%), (3.6%, 23.4%), and (2.7%, 22.7%) for the glioma grading, 1p/19q and IDH status prediction tasks, respectively.
In addition, we found that some radiomic features are highly related to uninterpretable deep features and that their variation trends are consistent across multi-center datasets; these can be taken as reliable imaging biomarkers for glioma diagnosis. The proposed CMTLNet provides an interpretable tool for glioma multi-task prediction, which is beneficial for precise diagnosis and personalized treatment of glioma.
Affiliation(s)
- Qijian Chen
- Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province, Engineering Research Center of Text Computing, Ministry of Education, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Lihui Wang
- Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province, Engineering Research Center of Text Computing, Ministry of Education, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Zeyu Deng
- Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province, Engineering Research Center of Text Computing, Ministry of Education, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Rongpin Wang
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang 550002, China
- Li Wang
- Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province, Engineering Research Center of Text Computing, Ministry of Education, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Caiqing Jian
- Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province, Engineering Research Center of Text Computing, Ministry of Education, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Yue-Min Zhu
- University Lyon, INSA Lyon, CNRS, Inserm, CREATIS UMR5220, U1206, Lyon 69621, France
2. You W, Feng J, Lu J, Chen T, Liu X, Wu Z, Gong G, Sui Y, Wang Y, Zhang Y, Ye W, Chen X, Lv J, Wei D, Tang Y, Deng D, Gui S, Lin J, Chen P, Wang Z, Gong W, Wang Y, Zhu C, Zhang Y, Saloner DA, Mitsouras D, Guan S, Li Y, Jiang Y, Wang Y. Diagnosis of intracranial aneurysms by computed tomography angiography using deep learning-based detection and segmentation. J Neurointerv Surg 2024; 17:e132-e138. PMID: 38238009. DOI: 10.1136/jnis-2023-021022.
Abstract
BACKGROUND Detecting and segmenting intracranial aneurysms (IAs) from angiographic images is a laborious task. OBJECTIVE To evaluate a novel deep-learning algorithm, named vessel attention (VA)-Unet, for the efficient detection and segmentation of IAs. METHODS This retrospective study was conducted using head CT angiography (CTA) examinations depicting IAs from two hospitals in China between 2010 and 2021. Training included cases with subarachnoid hemorrhage (SAH) and arterial stenosis, common accompanying vascular abnormalities. Testing was performed in cohorts with reference-standard digital subtraction angiography (cohort 1), with SAH (cohort 2), acquired outside the time interval of the training data (cohort 3), and in an external dataset (cohort 4). The algorithm's performance was evaluated using sensitivity, recall, false positives per case (FPs/case), and the Dice coefficient, with manual segmentation as the reference standard. RESULTS The study included 3190 CTA scans with 4124 IAs. Sensitivity, recall, and FPs/case for detection of IAs were, respectively, 98.58%, 96.17%, and 2.08 in cohort 1; 95.00%, 88.80%, and 3.62 in cohort 2; 96.00%, 93.77%, and 2.60 in cohort 3; and 96.17%, 94.05%, and 3.60 in external cohort 4. Segmentation accuracy, as measured by the Dice coefficient, was 0.78, 0.71, 0.71, and 0.66 for cohorts 1-4, respectively. VA-Unet detection recall, FPs/case and segmentation accuracy were affected by several clinical factors, including aneurysm size, bifurcation aneurysms, and the presence of arterial stenosis and SAH. CONCLUSIONS VA-Unet detected and segmented IAs in head CTA with accuracy comparable to expert interpretation. The proposed algorithm has significant potential to assist radiologists in efficiently detecting and segmenting IAs from CTA images.
Affiliation(s)
- Wei You
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Junqiang Feng
- Department of Neurosurgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Jing Lu
- Department of Radiology, Third Medical Center of Chinese PLA General Hospital, Beijing, China
- Ting Chen
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Xinke Liu
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Zhenzhou Wu
- Artificial Intelligence Research Center, China National Clinical Research Center for Neurological Diseases, Beijing, China
- Guoyang Gong
- Artificial Intelligence Research Center, China National Clinical Research Center for Neurological Diseases, Beijing, China
- Yutong Sui
- Artificial Intelligence Research Center, China National Clinical Research Center for Neurological Diseases, Beijing, China
- Yanwen Wang
- Artificial Intelligence Research Center, China National Clinical Research Center for Neurological Diseases, Beijing, China
- Yifan Zhang
- Artificial Intelligence Research Center, China National Clinical Research Center for Neurological Diseases, Beijing, China
- Wanxing Ye
- Artificial Intelligence Research Center, China National Clinical Research Center for Neurological Diseases, Beijing, China
- Xiheng Chen
- Department of Neurosurgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Jian Lv
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Dachao Wei
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Yudi Tang
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Dingwei Deng
- Department of Intervention, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Siming Gui
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Jun Lin
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Peike Chen
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Ziyao Wang
- Department of Interventional Neuroradiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Wentao Gong
- Department of Interventional Neuroradiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Yang Wang
- Department of Neurosurgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Chengcheng Zhu
- Department of Radiology, University of Washington, Seattle, Washington, USA
- Yue Zhang
- San Francisco Veterans Affairs Medical Center, San Francisco, California, USA
- David A Saloner
- San Francisco Veterans Affairs Medical Center, San Francisco, California, USA
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Dimitrios Mitsouras
- San Francisco Veterans Affairs Medical Center, San Francisco, California, USA
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Sheng Guan
- Department of Interventional Neuroradiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Youxiang Li
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Department of Neurointerventional Engineering and Technology (NO: BG0287), Beijing Engineering Research Center, Beijing, China
- Yuhua Jiang
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Department of Neurointerventional Engineering and Technology (NO: BG0287), Beijing Engineering Research Center, Beijing, China
- Yan Wang
- San Francisco Veterans Affairs Medical Center, San Francisco, California, USA
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
3. Doganay MT, Chakraborty P, Bommakanti SM, Jammalamadaka S, Battalapalli D, Madabhushi A, Draz MS. Artificial intelligence performance in testing microfluidics for point-of-care. Lab Chip 2024; 24:4998-5008. PMID: 39360887. PMCID: PMC11448392. DOI: 10.1039/d4lc00671b.
Abstract
Artificial intelligence (AI) is revolutionizing medicine by automating tasks like image segmentation and pattern recognition. These AI approaches support seamless integration with existing platforms, enhancing diagnostics, treatment, and patient care. While recent advancements have demonstrated the superiority of AI in advancing microfluidics for point-of-care (POC) diagnostics, a gap remains in comparative evaluations of AI algorithms for testing microfluidics. We conducted a comparative evaluation of AI models for the two-class classification problem of identifying the presence or absence of bubbles in microfluidic channels under various imaging conditions. Using a model microfluidic system with a single channel loaded with 3D transparent objects (bubbles), we challenged each of the tested machine learning (ML) (n = 6) and deep learning (DL) (n = 9) models across different background settings. Evaluation revealed that the random forest ML model achieved 95.52% sensitivity, 82.57% specificity, and 97% AUC, outperforming the other ML algorithms. Among DL models suitable for mobile integration, DenseNet169 demonstrated superior performance, achieving 92.63% sensitivity, 92.22% specificity, and 92% AUC. Remarkably, integration of DenseNet169 into a mobile POC system demonstrated exceptional accuracy (>0.84) in testing microfluidics under challenging imaging settings. Our study confirms the transformative potential of AI in healthcare, emphasizing its capacity to revolutionize precision medicine through accurate and accessible diagnostics. The integration of AI into healthcare systems holds promise for enhancing patient outcomes and streamlining healthcare delivery.
Affiliation(s)
- Mert Tunca Doganay
- Department of Medicine, Case Western Reserve University School of Medicine, Cleveland, OH 44106, USA
- Purbali Chakraborty
- Department of Medicine, Case Western Reserve University School of Medicine, Cleveland, OH 44106, USA
- Sri Moukthika Bommakanti
- Department of Medicine, Case Western Reserve University School of Medicine, Cleveland, OH 44106, USA
- Soujanya Jammalamadaka
- Department of Medicine, Case Western Reserve University School of Medicine, Cleveland, OH 44106, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Emory University, Atlanta, GA, USA
- Atlanta Veterans Administration Medical Center, Atlanta, GA, USA
- Mohamed S Draz
- Department of Medicine, Case Western Reserve University School of Medicine, Cleveland, OH 44106, USA
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Department of Biomedical Engineering, Cleveland Clinic, Cleveland, OH 44106, USA
4. Vujić A, Klasić M, Lauc G, Polašek O, Zoldoš V, Vojta A. Predicting Biochemical and Physiological Parameters: Deep Learning from IgG Glycome Composition. Int J Mol Sci 2024; 25:9988. PMID: 39337475. PMCID: PMC11432235. DOI: 10.3390/ijms25189988.
Abstract
In immunoglobulin G (IgG), N-glycosylation plays a pivotal role in structure and function. It is often altered in different diseases, suggesting that it could be a promising health biomarker. Studies indicate that IgG glycosylation not only associates with various diseases but also has predictive capabilities. Additionally, changes in IgG glycosylation correlate with physiological and biochemical traits known to reflect overall health state. This study aimed to investigate the power of IgG glycans to predict physiological and biochemical parameters. We developed two models using IgG N-glycan data as an input: a regression model using elastic net and a machine learning model using deep learning. Data were obtained from the Korčula and Vis cohorts. The Korčula cohort data were used to train both models, while the Vis cohort was used exclusively for validation. Our results demonstrated that IgG glycome composition effectively predicts several biochemical and physiological parameters, especially those related to lipid and glucose metabolism and cardiovascular events. Both models performed similarly on the Korčula cohort; however, the deep learning model showed a higher potential for generalization when validated on the Vis cohort. This study reinforces the idea that IgG glycosylation reflects individuals' health state and brings us one step closer to implementing glycan-based diagnostics in personalized medicine. Additionally, it shows that the predictive power of IgG glycans can be used for imputing missing covariate data in deep learning frameworks.
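The elastic-net arm of the study above can be illustrated with a minimal sketch. Note this is not the authors' code: the data below are synthetic stand-ins for IgG glycan abundances and a biochemical outcome, and the hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for IgG N-glycan relative abundances (samples x glycan peaks)
X = rng.random((300, 24))
# Synthetic target standing in for a biochemical parameter (e.g. a lipid measure)
y = X @ rng.normal(size=24) + rng.normal(scale=0.1, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Elastic net mixes L1 and L2 penalties; l1_ratio controls the mix
model = ElasticNet(alpha=0.01, l1_ratio=0.5).fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 3))  # R^2 on held-out data
```

In the paper's setting, the held-out evaluation would correspond to training on one cohort (Korčula) and validating on the other (Vis).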
Affiliation(s)
- Ana Vujić
- Department of Biology, Faculty of Science, University of Zagreb, 10000 Zagreb, Croatia
- Marija Klasić
- Department of Biology, Faculty of Science, University of Zagreb, 10000 Zagreb, Croatia
- Gordan Lauc
- Genos Glycoscience Research Laboratory, 10000 Zagreb, Croatia
- Faculty of Pharmacy and Biochemistry, University of Zagreb, 10000 Zagreb, Croatia
- Ozren Polašek
- Department of Public Health, University of Split School of Medicine, 21000 Split, Croatia
- Croatian Science Foundation, 10000 Zagreb, Croatia
- Vlatka Zoldoš
- Department of Biology, Faculty of Science, University of Zagreb, 10000 Zagreb, Croatia
- Aleksandar Vojta
- Department of Biology, Faculty of Science, University of Zagreb, 10000 Zagreb, Croatia
- Genos Glycoscience Research Laboratory, 10000 Zagreb, Croatia
5. Geng S, Zhai S, Ye J, Gao Y, Luo H, Li C, Liu X, Liu S. Decoupling and predicting natural gas deviation factor using machine learning methods. Sci Rep 2024; 14:21640. PMID: 39285210. PMCID: PMC11405880. DOI: 10.1038/s41598-024-72499-5.
Abstract
Accurately predicting the deviation factor (Z-factor) of natural gas is crucial for the estimation of natural gas reserves, evaluation of gas reservoir recovery, and assessment of natural gas transport in pipelines. Traditional machine learning algorithms, such as Support Vector Machine (SVM), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), Artificial Neural Network (ANN) and Bidirectional Long Short-Term Memory Neural Networks (BiLSTM), often lack accuracy and robustness because they fail to generalize across different gas components and temperature-pressure conditions. To address this limitation, we propose a novel and efficient machine learning framework for predicting the natural gas Z-factor. Our approach first applies a signal decomposition algorithm such as Variational Mode Decomposition (VMD), Empirical Fourier Decomposition (EFD) or Ensemble Empirical Mode Decomposition (EEMD) to decouple the Z-factor into multiple components. Traditional machine learning algorithms are then employed to predict each decomposed Z-factor component, with the combination of SVM and VMD achieving the best performance. Decoupling the Z-factor first and then predicting the decoupled components significantly improves the prediction accuracy of all traditional machine learning algorithms. We thoroughly evaluate the impact of the decoupling method and the number of decomposed components on the model's performance. Compared to traditional machine learning models without decomposition, our framework achieves an average correlation coefficient exceeding 0.99 and an average mean absolute percentage error below 0.83% on 10 datasets with different natural gas components, high temperatures, and pressures. These results indicate that the hybrid model effectively learns the patterns of Z-factor variation and can be applied to the prediction of natural gas Z-factors under various conditions.
This study significantly advances methodologies for predicting natural gas properties, offering a unified and robust solution for precise estimations, thereby benefiting the natural gas industry in resource estimation and reservoir management.
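The decouple-then-predict idea described above can be sketched as follows. This is a minimal illustration only: a moving-average trend/residual split stands in for VMD, a synthetic curve stands in for real Z-factor measurements, and the SVR hyperparameters are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic Z-factor curve over a pressure grid (stand-in for PVT data)
p = np.linspace(10.0, 60.0, 200)
z = 1.0 - 0.05 * np.sin(p / 4.0) + 0.002 * p

def moving_average(x, w=15):
    # Simple low-pass "decomposition": centred moving-average trend
    pad = np.pad(x, (w // 2, w - 1 - w // 2), mode="edge")
    return np.convolve(pad, np.ones(w) / w, mode="valid")

trend = moving_average(z)       # slowly varying component
resid = z - trend               # fast residual component

X = p.reshape(-1, 1)
# One regressor per decomposed component, then recombine the predictions
models = [SVR(C=100.0, epsilon=0.001).fit(X, comp) for comp in (trend, resid)]
z_hat = sum(m.predict(X) for m in models)

mape = float(np.mean(np.abs((z_hat - z) / z)) * 100)
print(f"MAPE: {mape:.3f}%")
```

Fitting each smoother component separately is what makes the hybrid easier to learn than the raw signal; a real implementation would substitute VMD modes for the trend/residual pair and evaluate on held-out pressure-temperature conditions.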
Affiliation(s)
- Shaoyang Geng
- Chengdu University of Technology, College of Energy, Chengdu 610059, China
- Shuo Zhai
- Chengdu University of Technology, College of Energy, Chengdu 610059, China
- Jianwen Ye
- Sinopec Southwest Oil and Gas Company, Chengdu 611930, China
- Yajie Gao
- PetroChina Southwest Oil and Gasfield Company, Chengdu 610051, China
- Hao Luo
- PetroChina Southwest Oil and Gasfield Company, Chengdu 610051, China
- Chengyong Li
- Chengdu University of Technology, College of Energy, Chengdu 610059, China
- Xianshan Liu
- Chengdu University of Technology, College of Energy, Chengdu 610059, China
- Shudong Liu
- Chengdu University of Technology, College of Energy, Chengdu 610059, China
6. Asghar R, Kumar S, Shaukat A, Hynds P. Classification of white blood cells (leucocytes) from blood smear imagery using machine and deep learning models: A global scoping review. PLoS One 2024; 19:e0292026. PMID: 38885231. PMCID: PMC11182552. DOI: 10.1371/journal.pone.0292026.
Abstract
Machine learning (ML) and deep learning (DL) models are being increasingly employed for medical imagery analyses, with both approaches used to enhance the accuracy of classification/prediction in the diagnoses of various cancers, tumors and bloodborne diseases. To date, however, no review of these techniques and their application(s) within the domain of white blood cell (WBC) classification in blood smear images has been undertaken, representing a notable knowledge gap with respect to model selection and comparison. Accordingly, the current study sought to comprehensively identify, explore and contrast ML and DL methods for classifying WBCs. Following development and implementation of a formalized review protocol, a cohort of 136 primary studies published between January 2006 and May 2023 was identified from the global literature, with the most widely used techniques and best-performing WBC classification methods subsequently ascertained. Studies were drawn from 26 countries, with the highest numbers from high-income countries including the United States (n = 32) and the Netherlands (n = 26). While WBC classification was originally rooted in conventional ML, there has been a notable shift toward the use of DL, and particularly convolutional neural networks (CNNs), with 54.4% of identified studies (n = 74) using CNNs, particularly in concurrence with larger datasets and bespoke features, e.g., parallel data pre-processing, feature selection, and extraction. While some conventional ML models achieved up to 99% accuracy, accuracy decreased with decreasing dataset size; DL models, by contrast, performed better on more extensive datasets, with accuracy increasing alongside dataset size. Availability of appropriate datasets remains a primary challenge, potentially resolvable using data augmentation techniques.
Moreover, medical training of computer science researchers is recommended to improve current understanding of leucocyte structure and subsequent selection of appropriate classification models. Likewise, it is critical that future health professionals be made aware of the power, efficacy, precision and applicability of computer science, soft computing and artificial intelligence contributions to medicine, and particularly in areas like medical imaging.
Affiliation(s)
- Rabia Asghar
- Spatiotemporal Environmental Epidemiology Research (STEER) Group, Technological University Dublin, Dublin, Ireland
- Sanjay Kumar
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Arslan Shaukat
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Paul Hynds
- Spatiotemporal Environmental Epidemiology Research (STEER) Group, Technological University Dublin, Dublin, Ireland
7. Wang CK, Wang TW, Yang YX, Wu YT. Deep Learning for Nasopharyngeal Carcinoma Segmentation in Magnetic Resonance Imaging: A Systematic Review and Meta-Analysis. Bioengineering (Basel) 2024; 11:504. PMID: 38790370. PMCID: PMC11118180. DOI: 10.3390/bioengineering11050504.
Abstract
Nasopharyngeal carcinoma (NPC) is a significant health challenge that is particularly prevalent in Southeast Asia and North Africa. MRI is the preferred diagnostic tool for NPC due to its superior soft tissue contrast. Accurate segmentation of NPC in MRI is crucial for effective treatment planning and prognosis. We conducted a search across PubMed, Embase, and Web of Science from inception up to 20 March 2024, adhering to the PRISMA 2020 guidelines. Eligibility criteria focused on studies utilizing deep learning (DL) for NPC segmentation in adults via MRI. Data extraction and meta-analysis were conducted to evaluate the performance of DL models, primarily measured by Dice scores. We assessed methodological quality using the CLAIM and QUADAS-2 tools, and statistical analysis was performed using random-effects models. The analysis incorporated 17 studies, demonstrating a pooled Dice score of 78% for DL models (95% confidence interval: 74% to 83%), indicating moderate to high segmentation accuracy. Significant heterogeneity and publication bias were observed among the included studies. Our findings reveal that DL models, particularly convolutional neural networks, offer moderately accurate NPC segmentation in MRI. This advancement holds potential for enhancing NPC management, necessitating further research toward integration into clinical practice.
Affiliation(s)
- Chih-Keng Wang
- School of Medicine, College of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Otolaryngology-Head and Neck Surgery, Taichung Veterans General Hospital, Taichung 407219, Taiwan
- Ting-Wei Wang
- School of Medicine, College of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan
- Ya-Xuan Yang
- Department of Otolaryngology-Head and Neck Surgery, Taichung Veterans General Hospital, Taichung 407219, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan
8. Ciet P, Eade C, Ho ML, Laborie LB, Mahomed N, Naidoo J, Pace E, Segal B, Toso S, Tschauner S, Vamyanmane DK, Wagner MW, Shelmerdine SC. The unintended consequences of artificial intelligence in paediatric radiology. Pediatr Radiol 2024; 54:585-593. PMID: 37665368. DOI: 10.1007/s00247-023-05746-y.
Abstract
Over the past decade, there has been a dramatic rise in interest in the application of artificial intelligence (AI) in radiology. Originally only 'narrow' AI tasks were possible; however, with the increasing availability of data, teamed with easy access to powerful computer processing capabilities, we are becoming more able to generate complex and nuanced prediction models and elaborate solutions for healthcare. Nevertheless, these AI models are not without their failings, and sometimes the intended use of these solutions may not lead to predictable impacts for patients, society or those working within the healthcare profession. In this article, we provide an overview of the latest opinions regarding AI ethics, bias, limitations, challenges and considerations that we should all contemplate in this exciting and expanding field, with special attention to how this applies to the unique aspects of a paediatric population. By embracing AI technology and fostering a multidisciplinary approach, it is hoped that we can harness the power AI brings whilst minimising harm and ensuring a beneficial impact on radiology practice.
Affiliation(s)
- Pierluigi Ciet
- Department of Radiology and Nuclear Medicine, Erasmus MC - Sophia Children's Hospital, Rotterdam, The Netherlands
- Department of Medical Sciences, University of Cagliari, Cagliari, Italy
- Mai-Lan Ho
- University of Missouri, Columbia, MO, USA
- Lene Bjerke Laborie
- Department of Radiology, Section for Paediatrics, Haukeland University Hospital, Bergen, Norway
- Department of Clinical Medicine, University of Bergen, Bergen, Norway
- Nasreen Mahomed
- Department of Radiology, University of Witwatersrand, Johannesburg, South Africa
- Jaishree Naidoo
- Paediatric Diagnostic Imaging, Dr J Naidoo Inc., Johannesburg, South Africa
- Envisionit Deep AI Ltd, Coveham House, Downside Bridge Road, Cobham, UK
- Erika Pace
- Department of Diagnostic Radiology, The Royal Marsden NHS Foundation Trust, London, UK
- Bradley Segal
- Department of Radiology, University of Witwatersrand, Johannesburg, South Africa
- Seema Toso
- Pediatric Radiology, Children's Hospital, University Hospitals of Geneva, Geneva, Switzerland
- Sebastian Tschauner
- Division of Paediatric Radiology, Department of Radiology, Medical University of Graz, Graz, Austria
- Dhananjaya K Vamyanmane
- Department of Pediatric Radiology, Indira Gandhi Institute of Child Health, Bangalore, India
- Matthias W Wagner
- Department of Diagnostic Imaging, Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Neuroradiology, University Hospital Augsburg, Augsburg, Germany
- Susan C Shelmerdine
- Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, Great Ormond Street, London WC1H 3JH, UK
- Great Ormond Street Hospital for Children, UCL Great Ormond Street Institute of Child Health, London, UK
- NIHR Great Ormond Street Hospital Biomedical Research Centre, 30 Guilford Street, Bloomsbury, London, UK
- Department of Clinical Radiology, St George's Hospital, London, UK
9. Abdollahifard S, Farrokhi A, Kheshti F, Jalali M, Mowla A. Application of convolutional network models in detection of intracranial aneurysms: A systematic review and meta-analysis. Interv Neuroradiol 2023; 29:738-747. PMID: 35549574. PMCID: PMC10680951. DOI: 10.1177/15910199221097475.
Abstract
INTRODUCTION Intracranial aneurysms are highly prevalent in the human population and carry a heavy burden of disease, with a high mortality rate in the case of rupture. The convolutional neural network (CNN) is a type of deep learning architecture that has proven powerful for detecting intracranial aneurysms. METHODS Four databases were searched using "artificial intelligence", "intracranial aneurysms", and synonyms to find eligible studies. Articles that applied CNNs to the detection of intracranial aneurysms were included in this review. The sensitivity and specificity of the models and of human readers with respect to modality, size, and location of aneurysms were extracted. A random-effects model in CMA 2 was preferred for the analyses to determine pooled sensitivity and specificity. RESULTS Overall, 20 studies were included in this review. Deep learning models could detect intracranial aneurysms with a sensitivity of 90.6% (CI: 87.2-93.2%) and a specificity of 94.6% (CI: 91.4-96.6%). CTA was the most sensitive modality (92.0%; CI: 85.2-95.8%). The overall sensitivity of the models was above 98% (98-100%) for aneurysms larger than 3 mm and 74.6% for aneurysms smaller than 3 mm. With the aid of AI, the clinicians' sensitivity increased by 12.8% and interrater agreement by 0.193. CONCLUSION CNN models had an acceptable sensitivity for the detection of intracranial aneurysms, surpassing human readers in some areas. The logical approach would be to apply deep learning models as highly capable assistants. In essence, deep learning models are a groundbreaking technology that can assist clinicians and allow them to diagnose intracranial aneurysms more accurately.
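The pooled estimates in this abstract come from a random-effects meta-analysis run in CMA 2. As a rough illustration of what such pooling involves (not the review's actual computation or data), here is a minimal DerSimonian-Laird sketch that pools per-study sensitivities on the logit scale; the (true positive, false negative) counts are hypothetical:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pooled_sensitivity(tp_fn_pairs):
    """DerSimonian-Laird random-effects pooling of per-study
    sensitivities on the logit scale, with a 95% CI."""
    ys, vs = [], []
    for tp, fn in tp_fn_pairs:
        tp, fn = tp + 0.5, fn + 0.5          # continuity correction
        ys.append(logit(tp / (tp + fn)))     # per-study logit sensitivity
        vs.append(1 / tp + 1 / fn)           # its approximate variance
    w = [1 / v for v in vs]                  # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c)  # between-study variance
    w_re = [1 / (v + tau2) for v in vs]       # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return inv_logit(pooled), (inv_logit(pooled - 1.96 * se),
                               inv_logit(pooled + 1.96 * se))

est, ci = pooled_sensitivity([(90, 10), (85, 15), (47, 3)])
```

Back-transforming from the logit scale keeps the pooled estimate and its confidence interval inside (0, 1).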
Affiliation(s)
- Saeed Abdollahifard
- Research center for neuromodulation and pain, Shiraz, Iran
- Student research committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Amirmohammad Farrokhi
- Research center for neuromodulation and pain, Shiraz, Iran
- Student research committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Fatemeh Kheshti
- Research center for neuromodulation and pain, Shiraz, Iran
- Student research committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Mahtab Jalali
- Research center for neuromodulation and pain, Shiraz, Iran
- Student research committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Ashkan Mowla
- Division of Stroke and Endovascular Neurosurgery, Department of Neurological Surgery, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA, USA

10
Zhu K, Yan B. Multifunctional Eu(III)-modified HOFs: roxarsone and aristolochic acid carcinogen monitoring and latent fingerprint identification based on artificial intelligence. MATERIALS HORIZONS 2023; 10:5782-5795. [PMID: 37814901 DOI: 10.1039/d3mh01253k] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/11/2023]
Abstract
The exploration of multifunctional materials and intelligent technologies for fluorescence sensing and latent fingerprint (LFP) identification is a research hotspot in materials science. In this study, an emerging crystalline luminescent material, a Eu3+-functionalized hydrogen-bonded organic framework (Eu@HOF-BTB, Eu@1), is successfully fabricated. Eu@1 emits purple-red fluorescence with a high photoluminescence quantum yield of 36.82%. Combined with artificial intelligence (AI) algorithms including support vector machines, principal component analysis, and hierarchical clustering analysis, Eu@1 as a sensor can concurrently distinguish two carcinogens, roxarsone and aristolochic acid, based on different mechanisms. The sensing process exhibits high selectivity, high efficiency, and excellent anti-interference. Meanwhile, Eu@1 is also an excellent eikonogen for LFP identification with high resolution and high contrast. Based on an automatic fingerprint identification system, the simultaneous differentiation of two fingerprint images is achieved. Moreover, a simulated criminal-arrest experiment is conducted. By virtue of an AlexNet-based AI fingerprint analysis platform, unknown LFPs can be compared with a database to identify the criminal within one second, with over 90% recognition accuracy. With AI technology, HOFs are applied for the first time in the LFP identification field, which provides a new material and solution for investigators to track criminal clues and handle cases efficiently.
Affiliation(s)
- Kai Zhu
- Shanghai Key Lab of Chemical Assessment and Sustainability, School of Chemical Science and Engineering, Tongji University, Siping Road 1239, Shanghai 200092, China.
- Bing Yan
- Shanghai Key Lab of Chemical Assessment and Sustainability, School of Chemical Science and Engineering, Tongji University, Siping Road 1239, Shanghai 200092, China.

11
Constant C, Aubin CE, Kremers HM, Garcia DVV, Wyles CC, Rouzrokh P, Larson AN. The use of deep learning in medical imaging to improve spine care: A scoping review of current literature and clinical applications. NORTH AMERICAN SPINE SOCIETY JOURNAL 2023; 15:100236. [PMID: 37599816 PMCID: PMC10432249 DOI: 10.1016/j.xnsj.2023.100236] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/20/2023] [Accepted: 06/14/2023] [Indexed: 08/22/2023]
Abstract
Background Artificial intelligence is a revolutionary technology that promises to assist clinicians in improving patient care. In radiology, deep learning (DL) is widely used in clinical decision aids due to its ability to analyze complex patterns and images, allowing for rapid, enhanced data and imaging analysis, from diagnosis to outcome prediction. The purpose of this study was to evaluate the current literature and clinical utilization of DL in spine imaging. Methods This study is a scoping review that followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to review the scientific literature from 2012 to 2021. A search of the PubMed, Web of Science, Embase, and IEEE Xplore databases, with syntax specific to DL and medical imaging in spine care applications, was conducted to collect all original publications on the subject. Specific data were extracted from the available literature, including algorithm application, algorithms tested, database type and size, algorithm training method, and outcome of interest. Results A total of 365 studies (total sample of 232,394 patients) were included and grouped into 4 general applications: diagnostic tools, clinical decision support tools, automated clinical/instrumentation assessment, and clinical outcome prediction. Notable disparities exist in the selected algorithms and in the training across multiple disparate databases. The most frequently used algorithms were U-Net and ResNet. A DL model was developed and validated in 92% of the included studies, while a pre-existing DL model was investigated in 8%. Of all developed models, only 15% have been externally validated. Conclusions Based on this scoping review, DL in spine imaging is used in a broad range of clinical applications, particularly for diagnosing spinal conditions. There is a wide variety of DL algorithms, database characteristics, and training methods. Future studies should focus on external validation of existing models before bringing them into clinical use.
Affiliation(s)
- Caroline Constant
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States
- Polytechnique Montreal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada
- AO Research Institute Davos, Clavadelerstrasse 8, CH 7270, Davos, Switzerland
- Carl-Eric Aubin
- Polytechnique Montreal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada
- Hilal Maradit Kremers
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States
- Diana V. Vera Garcia
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States
- Cody C. Wyles
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States
- Department of Orthopedic Surgery, Mayo Clinic, 200, 1st St Southwest, Rochester, MN, 55902, United States
- Pouria Rouzrokh
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States
- Radiology Informatics Laboratory, Mayo Clinic, 200, 1st St Southwest, Rochester, MN, 55902, United States
- Annalise Noelle Larson
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States
- Department of Orthopedic Surgery, Mayo Clinic, 200, 1st St Southwest, Rochester, MN, 55902, United States

12
Zhu Y, Wang M, Yin X, Zhang J, Meijering E, Hu J. Deep Learning in Diverse Intelligent Sensor Based Systems. SENSORS (BASEL, SWITZERLAND) 2022; 23:62. [PMID: 36616657 PMCID: PMC9823653 DOI: 10.3390/s23010062] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 12/06/2022] [Accepted: 12/14/2022] [Indexed: 05/27/2023]
Abstract
Deep learning has become a predominant method for solving data analysis problems in virtually all fields of science and engineering. The increasing complexity and the large volume of data collected by diverse sensor systems have spurred the development of deep learning methods and have fundamentally transformed the way the data are acquired, processed, analyzed, and interpreted. With the rapid development of deep learning technology and its ever-increasing range of successful applications across diverse sensor systems, there is an urgent need to provide a comprehensive investigation of deep learning in this domain from a holistic view. This survey paper aims to contribute to this by systematically investigating deep learning models/methods and their applications across diverse sensor systems. It also provides a comprehensive summary of deep learning implementation tips and links to tutorials, open-source codes, and pretrained models, which can serve as an excellent self-contained reference for deep learning practitioners and those seeking to innovate deep learning in this space. In addition, this paper provides insights into research topics in diverse sensor systems where deep learning has not yet been well-developed, and highlights challenges and future opportunities. This survey serves as a catalyst to accelerate the application and transformation of deep learning in diverse sensor systems.
Affiliation(s)
- Yanming Zhu
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia
- Min Wang
- School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2612, Australia
- Xuefei Yin
- School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2612, Australia
- Jue Zhang
- School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2612, Australia
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia
- Jiankun Hu
- School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2612, Australia

13
Kuroiwa T, Jagtap J, Starlinger J, Lui H, Akkus Z, Erickson B, Amadio P. Deep Learning Estimation of Median Nerve Volume Using Ultrasound Imaging in a Human Cadaver Model. ULTRASOUND IN MEDICINE & BIOLOGY 2022; 48:2237-2248. [PMID: 35961866 DOI: 10.1016/j.ultrasmedbio.2022.06.011] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Revised: 06/14/2022] [Accepted: 06/15/2022] [Indexed: 06/15/2023]
Abstract
Median nerve swelling is one of the features of carpal tunnel syndrome (CTS), and ultrasound measurement of maximum median nerve cross-sectional area is commonly used to diagnose CTS. We hypothesized that volume might be a more sensitive measure than cross-sectional area for CTS diagnosis. We therefore assessed the accuracy and reliability of 3-D volume measurements of the median nerve in human cadavers, comparing direct measurements with ultrasound images interpreted using deep learning algorithms. Ultrasound images of a 10-cm segment of the median nerve were used to train the U-Net model, which achieved an average volume similarity of 0.89 and area under the curve of 0.90 from the threefold cross-validation. Correlation coefficients were calculated using the areas measured by each method. The intraclass correlation coefficient was 0.86. Pearson's correlation coefficient R between the estimated volume from the manually measured cross-sectional area and the estimated volume of deep learning was 0.85. In this study using deep learning to segment the median nerve longitudinally, estimated volume had high reliability. We plan to assess its clinical usefulness in future clinical studies. The volume of the median nerve may provide useful additional information on disease severity, beyond maximum cross-sectional area.
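Two of the quantities used above can be made concrete in a few lines. The sketch below assumes the common definition of volume similarity, VS = 1 - |Vp - Vt| / (Vp + Vt), and a simple slice-summation volume estimate; the area values are invented for illustration and are not data from the study:

```python
def volume_similarity(v_pred, v_true):
    """Volume similarity: 1 when the two volumes agree exactly,
    approaching 0 as they diverge."""
    return 1 - abs(v_pred - v_true) / (v_pred + v_true)

def nerve_volume_from_slices(areas_mm2, slice_spacing_mm):
    """Approximate a nerve volume by summing cross-sectional areas
    over uniformly spaced ultrasound slices."""
    return sum(areas_mm2) * slice_spacing_mm

vol = nerve_volume_from_slices([9.5, 10.2, 11.0, 10.4], 1.0)  # mm^3
vs = volume_similarity(40.0, vol)
```

Unlike a single maximum cross-sectional area, the slice-summation estimate uses every slice along the scanned segment, which is the motivation for the volume comparison in the study.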
Affiliation(s)
- Tomoyuki Kuroiwa
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota, USA
- Jaidip Jagtap
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Julia Starlinger
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota, USA; Department for Orthopedics and Trauma Surgery, Medical University Vienna, Vienna, Austria
- Hayman Lui
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota, USA
- Zeynettin Akkus
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Peter Amadio
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota, USA.

14
Balwant M. A Review on Convolutional Neural Networks for Brain Tumor Segmentation: Methods, Datasets, Libraries, and Future Directions. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.05.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
15
Ahanin Z, Ismail MA. A multi-label emoji classification method using balanced pointwise mutual information-based feature selection. COMPUT SPEECH LANG 2022. [DOI: 10.1016/j.csl.2021.101330] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
16
An Integrated Analysis Framework of Convolutional Neural Network for Embedded Edge Devices. ELECTRONICS 2022. [DOI: 10.3390/electronics11071041] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Recently, IoT applications that deploy deep neural networks (DNNs) on embedded edge devices have been increasing. Generally, for DNN applications in an IoT system, training is mainly performed on the server, and inference is performed on the edge device. Embedded edge devices still bear a heavy load in inference operations due to their limited computing resources, so proper customization of the DNN through architectural exploration is required. However, there are few integrated frameworks that facilitate the exploration and customization of various DNN models and their operations on embedded edge devices. In this paper, we propose an integrated framework that can explore and customize the inference operations of DNN models on embedded edge devices. The framework consists of a GUI interface part, an inference engine part, and a hardware Deep Learning Accelerator (DLA) Virtual Platform (VP) part. Specifically, it focuses on convolutional neural networks (CNNs) and provides integrated interoperability for CNN models and neural network customization techniques such as quantization and cross-inference functions. In addition, performance estimation is possible through the hardware DLA VP for embedded edge devices. These features are provided as web-based GUI interfaces, so users can easily utilize them.
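Quantization, one of the customization techniques the framework exposes, can be illustrated with a minimal symmetric post-training int8 scheme (the framework's actual quantization method is not specified, so the details below are assumptions):

```python
def quantize_int8(weights):
    """Map float weights to int8 with a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid a zero scale
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [qi * scale for qi in q]

w = [0.52, -1.27, 0.003, 0.9]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# each reconstruction error is at most half a quantization step (s / 2)
```

Storing int8 values instead of 32-bit floats cuts weight memory by 4x, which is the kind of saving that matters on resource-limited edge devices.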
17
Asadifar S, Kahani M, Shekarpour S. Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs. PeerJ Comput Sci 2022; 8:e846. [PMID: 35494835 PMCID: PMC9044320 DOI: 10.7717/peerj-cs.846] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Accepted: 12/16/2021] [Indexed: 06/14/2023]
Abstract
Today, several attempts to manage question answering (QA) have been made in three separate areas: (1) knowledge-based (KB), (2) text-based, and (3) hybrid, which takes advantage of both prior areas in extracting the response. When answering questions over a large number of sources, on the other hand, source prediction is very important to ensure scalability. In this paper, a method for source prediction in hybrid QA is presented, involving several KB sources and a text source. The few existing hybrid methods for source selection, which include only one KB source in addition to the textual source, have relied on prioritization or heuristics that have not been evaluated so far. Most methods available in source-selection services are based on general metadata or triple instances; these methods are not suitable here because of the unstructured source in hybrid QA. In this research, detailed data characteristics are needed to predict the source. In addition, unlike federated KB methods that are based on triple instances, we build on the idea of a mediated schema to ensure data integration and scalability. Results from evaluations that consider word-, triple-, and question-level information show that the proposed approach performs well against several benchmarks. In addition, comparison of the proposed method with existing approaches in hybrid and KB source prediction, as well as in QA tasks, has shown a significant reduction in response time and increased accuracy.
Affiliation(s)
- Somayeh Asadifar
- Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad, Khorasan Razavi, Iran
- Mohsen Kahani
- Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad, Khorasan Razavi, Iran
- Saeedeh Shekarpour
- College of Arts and Sciences: Computer Science, University of Dayton, Dayton, Ohio, United States

18
Öksüz C, Urhan O, Güllü MK. Brain tumor classification using the fused features extracted from expanded tumor region. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103356] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
19
Deep convolutional neural networks for computer-aided breast cancer diagnostic: a survey. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06804-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
20
Qin K, Li J, Fang Y, Xu Y, Wu J, Zhang H, Li H, Liu S, Li Q. Convolution neural network for the diagnosis of wireless capsule endoscopy: a systematic review and meta-analysis. Surg Endosc 2022; 36:16-31. [PMID: 34426876 PMCID: PMC8741689 DOI: 10.1007/s00464-021-08689-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Accepted: 08/07/2021] [Indexed: 02/07/2023]
Abstract
BACKGROUND Wireless capsule endoscopy (WCE) is considered a powerful instrument for the diagnosis of intestinal diseases. The convolutional neural network (CNN) is a type of artificial intelligence that has the potential to assist the detection of findings in WCE images. We aimed to perform a systematic review of the current research progress on the application of CNNs to WCE. METHODS A search of PubMed, SinoMed, and Web of Science was conducted to collect all original publications about CNN implementation in WCE. Assessment of the risk of bias was performed with the Quality Assessment of Diagnostic Accuracy Studies-2 risk list. Pooled sensitivity and specificity were calculated by an exact binomial rendition of the bivariate mixed-effects regression model. I2 was used for the evaluation of heterogeneity. RESULTS 16 articles with 23 independent studies were included. CNN applications to WCE were divided into the detection of erosion/ulcer, gastrointestinal bleeding (GI bleeding), and polyps/cancer. The pooled sensitivity of CNN is 0.96 (95% CI 0.91-0.98) for erosion/ulcer, 0.97 (95% CI 0.93-0.99) for GI bleeding, and 0.97 (95% CI 0.82-0.99) for polyps/cancer. The corresponding specificity is 0.97 (95% CI 0.93-0.99) for erosion/ulcer, 1.00 (95% CI 0.99-1.00) for GI bleeding, and 0.98 (95% CI 0.92-0.99) for polyps/cancer. CONCLUSION Based on our meta-analysis, CNN-based diagnosis of erosion/ulcer, GI bleeding, and polyps/cancer achieved a high level of performance, with high sensitivity and specificity. Therefore, looking ahead, CNNs have the potential to become an important assistant for diagnosis with WCE.
Affiliation(s)
- Kaiwen Qin
- Nanfang Hospital (The First School of Clinical Medicine), Southern Medical University, Guangzhou, Guangdong, China
- Jianmin Li
- Guangzhou SiDe MedTech Co., Ltd, Guangzhou, Guangdong, China
- Yuxin Fang
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital, Southern Medical University, No. 1838, Guangzhou Avenue North, Guangzhou, Guangdong, China
- Yuyuan Xu
- State Key Laboratory of Organ Failure Research, Guangdong Provincial Key Laboratory of Viral Hepatitis Research, Department of Hepatology Unit and Infectious Diseases, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Jiahao Wu
- Guangzhou SiDe MedTech Co., Ltd, Guangzhou, Guangdong, China
- Haonan Zhang
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital, Southern Medical University, No. 1838, Guangzhou Avenue North, Guangzhou, Guangdong, China
- Haolin Li
- Nanfang Hospital (The First School of Clinical Medicine), Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital, Southern Medical University, No. 1838, Guangzhou Avenue North, Guangzhou, Guangdong, China
- Side Liu
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital, Southern Medical University, No. 1838, Guangzhou Avenue North, Guangzhou, Guangdong, China
- Qingyuan Li
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital, Southern Medical University, No. 1838, Guangzhou Avenue North, Guangzhou, Guangdong, China.

21
Payrovnaziri SN, Xing A, Salman S, Liu X, Bian J, He Z. Assessing the Impact of Imputation on the Interpretations of Prediction Models: A Case Study on Mortality Prediction for Patients with Acute Myocardial Infarction. AMIA JOINT SUMMITS ON TRANSLATIONAL SCIENCE PROCEEDINGS. AMIA JOINT SUMMITS ON TRANSLATIONAL SCIENCE 2021; 2021:465-474. [PMID: 34457162 PMCID: PMC8378616] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Acute myocardial infarction (AMI) poses significant health risks and a financial burden on healthcare and families. Prediction of mortality risk among AMI patients using rich electronic health record (EHR) data can potentially save lives and reduce healthcare costs. Nevertheless, EHR-based prediction models usually apply a missing-data imputation method without considering its impact on the performance and interpretability of the model, hampering real-world applicability in the healthcare setting. This study examines the impact of different methods for imputing missing values in EHR data on both the performance and the interpretations of predictive models. Our results showed that a small standard deviation in root mean squared error across different runs of an imputation method does not necessarily imply a small standard deviation in the prediction models' performance and interpretation. We also showed that the level of missingness and the imputation method used can have a significant impact on the interpretation of the models.
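The baseline against which such imputation effects are usually measured is simple mean imputation. A minimal sketch with synthetic data (invented for illustration; the study's actual imputation methods and EHR data are far richer than this):

```python
import math
import random

def mean_impute(column):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [x for x in column if x is not None]
    mu = sum(observed) / len(observed)
    return [mu if x is None else x for x in column]

def imputation_rmse(truth, imputed, mask):
    """Root mean squared error computed only over the masked (missing) entries."""
    errs = [(t - i) ** 2 for t, i, m in zip(truth, imputed, mask) if m]
    return math.sqrt(sum(errs) / len(errs))

random.seed(0)
truth = [random.gauss(50, 10) for _ in range(100)]
mask = [random.random() < 0.2 for _ in range(100)]  # ~20% missingness
mask[0] = True  # guarantee at least one missing entry for the demo
column = [None if m else x for x, m in zip(truth, mask)]
rmse = imputation_rmse(truth, mean_impute(column), mask)
```

Re-running with different seeds gives the spread of RMSE across runs; the abstract's point is that a small spread here need not translate into a small spread in downstream model performance or interpretation.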
Affiliation(s)
- Aiwen Xing
- Florida State University, Tallahassee, Florida, USA
- Xiuwen Liu
- Florida State University, Tallahassee, Florida, USA
- Jiang Bian
- University of Florida, Gainesville, Florida, USA
- Zhe He
- Florida State University, Tallahassee, Florida, USA

22
Örnek MN, Kahramanlı Örnek H. Developing a deep neural network model for predicting carrots volume. JOURNAL OF FOOD MEASUREMENT AND CHARACTERIZATION 2021. [DOI: 10.1007/s11694-021-00923-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
23
Abstract
Deep learning is transforming most areas of science and technology, including electron microscopy. This review paper offers a practical perspective aimed at developers with limited familiarity. For context, we review popular applications of deep learning in electron microscopy. Next, we discuss the hardware and software needed to get started with deep learning and to interface with electron microscopes. We then review neural network components, popular architectures, and their optimization. Finally, we discuss future directions of deep learning in electron microscopy.
24
Image Classification for the Automatic Feature Extraction in Human Worn Fashion Data. MATHEMATICS 2021. [DOI: 10.3390/math9060624] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
With the ever-increasing amount of image data, it has become a necessity to automatically search for and process information in these images. As fashion is captured in images, the fashion sector provides the perfect foundation to be supported by the integration of a service or application that is built on an image classification model. In this article, the state of the art for image classification is analyzed and discussed. Based on the elaborated knowledge, four different approaches are implemented to successfully extract features out of fashion data. For this purpose, a human-worn fashion dataset with 2567 images was created and then significantly enlarged by the applied image operations. The results show that convolutional neural networks are the undisputed standard for classifying images, and that TensorFlow is the best library to build them. Moreover, through the introduction of dropout layers, data augmentation, and transfer learning, model overfitting was successfully prevented, and it was possible to incrementally improve the validation accuracy on the created dataset from an initial 69% to a final validation accuracy of 84%. More distinct apparel such as trousers, shoes, and hats were better classified than other upper-body clothes.
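Of the three overfitting countermeasures mentioned, dropout is the simplest to show in isolation. Below is a plain-Python sketch of inverted dropout (illustrative only; the article's models would rely on TensorFlow's built-in layer):

```python
import random

def dropout(activations, rate, training=True, rng=random):
    """Inverted dropout: during training, zero each unit with
    probability `rate` and scale survivors by 1/(1-rate) so the
    expected activation is unchanged; at inference, pass through."""
    if not training or rate == 0.0:
        return list(activations)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

random.seed(1)
out = dropout([1.0] * 1000, rate=0.5)
# roughly half the units are zeroed, and the expected sum is preserved
```

Because survivors are rescaled during training, no compensation is needed at inference time, which is why the `training=False` path simply returns the activations unchanged.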
25
Yu Y, Wang J, Chun HE, Xu Y, Fong ELS, Wee A, Yu H. Implementation of Machine Learning-Aided Imaging Analytics for Histopathological Image Diagnosis. SYSTEMS MEDICINE 2021. [DOI: 10.1016/b978-0-12-801238-3.11388-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
26
Zhao H, Zhang HX, Cao QJ, Sun SJ, Han X, Palaoag TD. Design and Development of Image Recognition Toolkit Based on Deep Learning. INT J PATTERN RECOGN 2021. [DOI: 10.1142/s0218001421590023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Deep learning algorithms have shown superior performance over traditional algorithms when dealing with computationally intensive tasks in many fields. Algorithm models based on deep learning perform well and can improve recognition accuracy in relevant applications in the field of computer vision. TensorFlow is a flexible open-source machine learning platform proposed by Google, which can run on a variety of platforms, such as CPUs, GPUs, and mobile devices, and supports the current popular deep learning models. In this paper, an image recognition toolkit based on TensorFlow is designed and developed to simplify the development process of the growing number of image recognition applications. The toolkit uses a convolutional neural network to build the training model, which consists of two convolutional layers, each preceded by a batch normalization layer and followed by a pooling layer. The last two layers of the model are fully connected layers that output the recognition results. Mini-batch gradient descent is adopted as the optimization algorithm; it integrates the advantages of both full gradient descent and stochastic gradient descent, greatly reducing the number of iterations to convergence with little effect on the converged result. The total number of trainable parameters of the toolkit model reaches 1.7 million. To prevent overfitting, a dropout layer with a rate of 0.5 is added before each fully connected layer. The convolutional neural network model is trained and tested on the MNIST set with TensorFlow. The experimental results show that the toolkit achieves a recognition accuracy of 99% on the MNIST test set. The development of the toolkit provides powerful technical support for the development of various image recognition applications, reduces their difficulty, and improves the efficiency of resource utilization.
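The optimizer the abstract describes, combining the advantages of full gradient descent and stochastic gradient descent, is in effect mini-batch gradient descent. A minimal pure-Python sketch on a 1-D linear model (the toolkit's actual hyperparameters are not given, so the learning rate, batch size, and epoch count here are illustrative):

```python
import random

def minibatch_gd(xs, ys, lr=0.05, batch=8, epochs=200, seed=0):
    """Fit y = w*x + b by mini-batch gradient descent on MSE."""
    rng = random.Random(seed)
    w = b = 0.0
    idx = list(range(len(xs)))
    for _ in range(epochs):
        rng.shuffle(idx)                      # new batch split each epoch
        for start in range(0, len(idx), batch):
            chunk = idx[start:start + batch]
            # gradient of the mean squared error over this mini-batch
            gw = sum((w * xs[i] + b - ys[i]) * xs[i] for i in chunk) / len(chunk)
            gb = sum(w * xs[i] + b - ys[i] for i in chunk) / len(chunk)
            w -= lr * gw
            b -= lr * gb
    return w, b

xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 for x in xs]   # noiseless target: w = 2, b = 1
w, b = minibatch_gd(xs, ys)
```

Each update averages the gradient over a small batch, so steps are cheaper than full-batch descent but far less noisy than single-sample SGD, which is exactly the trade-off the abstract credits for the reduced iteration count.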
Collapse
Affiliation(s)
- Hui Zhao
- School of Information & Electrical Engineering, Hebei University of Engineering, Taiji Road 19, Handan, Hebei 056038, P. R. China
- College of Teacher Education, University of the Cordilleras, Governor Pack Rd., Baguio City, 2600, Philippines
| | - Hai-Xia Zhang
- College of Energy and Environmental Engineering, Hebei University of Engineering, Taiji Road 19, Handan, Hebei 056038, P. R. China
| | - Qing-Jiao Cao
- School of Water Conservancy and Hydroelectric Power, Hebei University of Engineering, Taiji Road 19, Handan, Hebei 056038, P. R. China
| | - Sheng-Juan Sun
- School of Information & Electrical Engineering, Hebei University of Engineering, Taiji Road 19, Handan, Hebei 056038, P. R. China
| | - Xuanzhe Han
- Library, Liupanshui Normal University, Minghu Road, Liupanshui, Guizhou province, 553004, P. R. China
| | - Thelma D. Palaoag
- College of Information Technology and Computer Science, University of the Cordilleras, Governor Pack Rd., Baguio City, 2600, Philippines
27
Jamasb AR, Day B, Cangea C, Liò P, Blundell TL. Deep Learning for Protein-Protein Interaction Site Prediction. Methods Mol Biol 2021; 2361:263-288. [PMID: 34236667 DOI: 10.1007/978-1-0716-1641-3_16] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Protein-protein interactions (PPIs) are central to cellular functions. Experimental methods for identifying PPIs are well developed but are time- and resource-expensive and suffer from high false-positive error rates at scale. Computational prediction of PPIs is therefore highly desirable for a mechanistic understanding of cellular processes and offers the potential to identify highly selective drug targets. This chapter outlines how to develop a deep learning approach to predicting which residues in a protein are involved in forming a PPI, a task known as PPI site prediction. The key decisions to be made in defining a supervised machine learning project in this domain are highlighted, and alternative training regimes for deep learning models are discussed that address shortcomings in existing approaches and provide starting points for further research. The chapter is written to serve as a companion to developing deep learning approaches to protein-protein interaction site prediction, and as an introduction to developing geometric deep learning projects operating on protein structure graphs.
Affiliation(s)
- Arian R Jamasb
- Department of Computer Science and Technology, University of Cambridge, Cambridge, UK; Department of Biochemistry, University of Cambridge, Cambridge, UK
- Ben Day
- Department of Computer Science and Technology, University of Cambridge, Cambridge, UK
- Cătălina Cangea
- Department of Computer Science and Technology, University of Cambridge, Cambridge, UK
- Pietro Liò
- Department of Computer Science and Technology, University of Cambridge, Cambridge, UK
- Tom L Blundell
- Department of Biochemistry, University of Cambridge, Cambridge, UK
28
An Experimental Analysis of Deep Learning Architectures for Supervised Speech Enhancement. ELECTRONICS 2020. [DOI: 10.3390/electronics10010017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Recent speech enhancement research has shown that deep learning techniques are very effective at removing background noise. Many deep neural networks have been proposed, showing promising results for improving overall speech perception. The Deep Multilayer Perceptron, Convolutional Neural Networks, and the Denoising Autoencoder are well-established architectures for speech enhancement; however, choosing between different deep learning models has been mainly empirical. Consequently, a comparative analysis between these three architecture types is needed to show the factors affecting their performance. This paper presents such an analysis by comparing seven deep learning models that belong to these three categories. The comparison evaluates the overall quality of the output speech using five objective evaluation metrics and a subjective evaluation with 23 listeners, along with the ability to deal with challenging noise conditions, generalization ability, complexity, and processing time. Further analysis is then provided using two approaches: the first investigates how performance is affected by changing network hyperparameters and the structure of the data, including the Lombard effect; the second interprets the results by visualizing the spectrogram of the output layer of all the investigated models and the spectrograms of the hidden layers of the convolutional neural network architecture. Finally, a general SWOC analysis of supervised deep learning-based speech enhancement discusses the technique's Strengths, Weaknesses, Opportunities, and Challenges. The results of this paper contribute to the understanding of how different deep neural networks perform the speech enhancement task, highlight the strengths and weaknesses of each architecture, provide recommendations for achieving better performance, and facilitate the development of better deep neural networks for speech enhancement in the future.
29
Abd-Elsalam NM, Fawzi SA, Kandil AH. Comparing Different Pre-Trained Models Based on Transfer Learning Technique in Classifying Mammogram Masses. 2020 30TH INTERNATIONAL CONFERENCE ON COMPUTER THEORY AND APPLICATIONS (ICCTA) 2020. [DOI: 10.1109/iccta52020.2020.9477663] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
30
Szczęsna A, Błaszczyszyn M, Kawala-Sterniuk A. Convolutional neural network in upper limb functional motion analysis after stroke. PeerJ 2020; 8:e10124. [PMID: 33083146 PMCID: PMC7549467 DOI: 10.7717/peerj.10124] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2020] [Accepted: 09/17/2020] [Indexed: 12/03/2022] Open
Abstract
In this work, a Convolutional Neural Network (CNN) was applied to the analysis of functional upper-limb movement patterns. The main aim of the study was to compare motion during selected activities of daily living between participants after stroke and healthy participants of similar age. An optical, marker-based motion capture system was used for data acquisition. To find differences in the motion pattern of the upper limb, the motion features of the dominant and non-dominant upper limbs of healthy participants were compared with those of the paretic and non-paretic upper limbs of participants after stroke. On the basis of the newly collected dataset, a new CNN application to the classification of motion data in two different class-label configurations was presented. Analyzing individual segments of the upper body, the arm turned out to be the most sensitive segment for capturing changes in the trajectory of object-lifting movements.
Affiliation(s)
- Agnieszka Szczęsna
- Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, Gliwice, Poland
- Monika Błaszczyszyn
- Faculty of Physical Education and Physiotherapy, Opole University of Technology, Opole, Poland
- Aleksandra Kawala-Sterniuk
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Opole, Poland
31
Abstract
The last decade has transformed the field of artificial intelligence, with deep learning at the forefront of this development. With its ability to 'self-learn' discriminative patterns directly from data, deep learning is a promising computational approach for automating the classification of visual, spatial and acoustic information in the context of environmental conservation. Here, we first highlight the current and future applications of supervised deep learning in environmental conservation. Next, we describe a number of technical and implementation-related challenges that can potentially impede the real-world adoption of this technology in conservation programmes. Lastly, to mitigate these pitfalls, we discuss priorities for guiding future research and hope that these recommendations will help make this technology more accessible to environmental scientists and conservation practitioners.
Affiliation(s)
- Aakash Lamba
- School of Biological Sciences, University of Adelaide, Adelaide, Australia
- Phillip Cassey
- School of Biological Sciences, University of Adelaide, Adelaide, Australia
- Lian Pin Koh
- School of Biological Sciences, University of Adelaide, Adelaide, Australia; Betty and Gordon Moore Center for Science, Conservation International, Arlington, VA, USA
32
Meijering E. A bird's-eye view of deep learning in bioimage analysis. Comput Struct Biotechnol J 2020; 18:2312-2325. [PMID: 32994890 PMCID: PMC7494605 DOI: 10.1016/j.csbj.2020.08.003] [Citation(s) in RCA: 64] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 07/26/2020] [Accepted: 08/01/2020] [Indexed: 02/07/2023] Open
Abstract
Deep learning of artificial neural networks has become the de facto standard approach to solving data analysis problems in virtually all fields of science and engineering. Also in biology and medicine, deep learning technologies are fundamentally transforming how we acquire, process, analyze, and interpret data, with potentially far-reaching consequences for healthcare. In this mini-review, we take a bird's-eye view at the past, present, and future developments of deep learning, starting from science at large, to biomedical imaging, and bioimage analysis in particular.
Affiliation(s)
- Erik Meijering
- School of Computer Science and Engineering & Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
33
Liu X, Zhang Y, Jing H, Wang L, Zhao S. Ore image segmentation method using U-Net and Res_Unet convolutional networks. RSC Adv 2020; 10:9396-9406. [PMID: 35497237 PMCID: PMC9050132 DOI: 10.1039/c9ra05877j] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2019] [Accepted: 02/11/2020] [Indexed: 11/24/2022] Open
Abstract
Image segmentation has been increasingly used to identify the particle size distribution of crushed ore; however, the adhesion of ore particles and dark areas in the images of blast heaps and conveyor belts usually result in lower segmentation accuracy. To overcome this issue, an image segmentation method called UR, based on the deep learning U-Net and Res_Unet networks, is proposed in this study. Gray-scale conversion, median filtering, and adaptive histogram equalization are used to preprocess the original ore images captured from an open-pit mine, reducing noise and extracting the target region. U-Net and Res_Unet are utilized to generate ore contour detection and optimization models, and the ore image segmentation result is rendered with OpenCV. The efficiency and accuracy of the newly proposed UR method are demonstrated and validated by comparison with existing image segmentation methods; accurately identifying ore particles in a complex environment is particularly important.
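The preprocessing steps named in this abstract (gray-scale, median filtering, histogram equalization) are typically done with OpenCV; as a dependency-free illustration, here is a numpy sketch in which plain global histogram equalization stands in for the adaptive variant the paper uses, and the function names and synthetic image are our own:

```python
import numpy as np

def median_filter(img, k=3):
    """Median-filter an 8-bit grayscale image with a k x k window."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # Stack every k*k shifted view, then take the per-pixel median.
    windows = np.stack([
        padded[i:i + img.shape[0], j:j + img.shape[1]]
        for i in range(k) for j in range(k)
    ])
    return np.median(windows, axis=0).astype(np.uint8)

def equalize_hist(img):
    """Global histogram equalization (simplified stand-in for CLAHE)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    lut = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
    return lut[img].astype(np.uint8)

# Synthetic low-contrast "ore" image with salt-and-pepper noise
rng = np.random.default_rng(1)
img = rng.integers(100, 156, size=(64, 64)).astype(np.uint8)
noise = rng.random((64, 64))
img[noise < 0.02] = 0      # pepper
img[noise > 0.98] = 255    # salt

# Denoise first, then stretch contrast over the full 0..255 range
out = equalize_hist(median_filter(img))
```

The median filter removes most of the impulse noise before equalization, which is the same ordering the paper's pipeline implies (denoise, then enhance contrast).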
Affiliation(s)
- Xiaobo Liu
- Intelligent Mine Research Center, Northeastern University, Shenyang 110819, China; National-local Joint Engineering Research Center of High-efficient Exploitation Technology for Refractory Iron Ore Resource, Northeastern University, Shenyang 110819, China
- Yuwei Zhang
- Intelligent Mine Research Center, Northeastern University, Shenyang 110819, China; National-local Joint Engineering Research Center of High-efficient Exploitation Technology for Refractory Iron Ore Resource, Northeastern University, Shenyang 110819, China
- Hongdi Jing
- Intelligent Mine Research Center, Northeastern University, Shenyang 110819, China; National-local Joint Engineering Research Center of High-efficient Exploitation Technology for Refractory Iron Ore Resource, Northeastern University, Shenyang 110819, China
- Liancheng Wang
- Intelligent Mine Research Center, Northeastern University, Shenyang 110819, China; National-local Joint Engineering Research Center of High-efficient Exploitation Technology for Refractory Iron Ore Resource, Northeastern University, Shenyang 110819, China
- Sheng Zhao
- Intelligent Mine Research Center, Northeastern University, Shenyang 110819, China; National-local Joint Engineering Research Center of High-efficient Exploitation Technology for Refractory Iron Ore Resource, Northeastern University, Shenyang 110819, China
34
Montagnon E, Cerny M, Cadrin-Chênevert A, Hamilton V, Derennes T, Ilinca A, Vandenbroucke-Menu F, Turcotte S, Kadoury S, Tang A. Deep learning workflow in radiology: a primer. Insights Imaging 2020; 11:22. [PMID: 32040647 PMCID: PMC7010882 DOI: 10.1186/s13244-019-0832-5] [Citation(s) in RCA: 86] [Impact Index Per Article: 17.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2019] [Accepted: 12/17/2019] [Indexed: 02/08/2023] Open
Abstract
Interest in deep learning in radiology has increased tremendously in the past decade due to the high achievable performance for various computer vision tasks such as detection, segmentation, classification, monitoring, and prediction. This article provides step-by-step practical guidance for conducting a project that involves deep learning in radiology, from defining specifications to deployment and scaling. Specifically, the objectives of this article are to provide an overview of clinical use cases of deep learning, describe the composition of a multi-disciplinary team, and summarize current approaches to patient, data, model, and hardware selection. Key ideas are illustrated by examples from a prototypical project on imaging of colorectal liver metastasis, covering the workflow for liver lesion detection, segmentation, classification, monitoring, and prediction of tumor recurrence and patient survival. Challenges are discussed, including ethical considerations, cohorting, data collection, anonymization, and availability of expert annotations. The practical guidance may be adapted to any project that requires automated medical image analysis.
Affiliation(s)
- Emmanuel Montagnon
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- Milena Cerny
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- Vincent Hamilton
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- Thomas Derennes
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- André Ilinca
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- Franck Vandenbroucke-Menu
- Department of Surgery, Hepatopancreatobiliary and Liver Transplantation Service, Centre Hospitalier de l'Université de Montréal (CHUM), Montréal, Quebec, Canada
- Simon Turcotte
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- Department of Surgery, Hepatopancreatobiliary and Liver Transplantation Service, Centre Hospitalier de l'Université de Montréal (CHUM), Montréal, Quebec, Canada
- An Tang
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- Department of Radiology, Radio-Oncology and Nuclear Medicine, Université Montréal and CRCHUM, 1058 rue Saint-Denis, Montréal, Québec, H2X 3 J4, Canada
35
Monitoring of Coral Reefs Using Artificial Intelligence: A Feasible and Cost-Effective Approach. REMOTE SENSING 2020. [DOI: 10.3390/rs12030489] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Ecosystem monitoring is central to effective management, where rapid reporting is essential to provide timely advice. While digital imagery has greatly improved the speed of underwater data collection for monitoring benthic communities, image analysis remains a bottleneck in reporting observations. In recent years, a rapid evolution of artificial intelligence in image recognition has been evident in its broad applications in modern society, offering new opportunities for increasing the capabilities of coral reef monitoring. Here, we evaluated the performance of Deep Learning Convolutional Neural Networks for automated image analysis, using a global coral reef monitoring dataset. The study demonstrates the advantages of automated image analysis for coral reef monitoring in terms of error and repeatability of benthic abundance estimations, as well as cost and benefit. We found unbiased and high agreement between expert and automated observations (97%). Repeated surveys and comparisons against existing monitoring programs also show that automated estimation of benthic composition is equally robust in detecting change and ensuring the continuity of existing monitoring data. Using this automated approach, data analysis and reporting can be accelerated by at least 200x and at a fraction of the cost (1%). Combining commonly used underwater imagery in monitoring with automated image annotation can dramatically improve how we measure and monitor coral reefs worldwide, particularly in terms of allocating limited resources, rapid reporting and data integration within and across management areas.
36
Leite AF, Vasconcelos KDF, Willems H, Jacobs R. Radiomics and Machine Learning in Oral Healthcare. Proteomics Clin Appl 2020; 14:e1900040. [PMID: 31950592 DOI: 10.1002/prca.201900040] [Citation(s) in RCA: 59] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2019] [Revised: 12/09/2019] [Indexed: 12/12/2022]
Abstract
The increasing storage of information, data, and forms of knowledge has led to the development of new technologies that can help to accomplish complex tasks in different areas, such as in dentistry. In this context, the role of computational methods, such as radiomics and Artificial Intelligence (AI) applications, has been progressing remarkably for dentomaxillofacial radiology (DMFR). These tools bring new perspectives for diagnosis, classification, and prediction of oral diseases, treatment planning, and for the evaluation and prediction of outcomes, minimizing the possibilities of human errors. A comprehensive review of the state-of-the-art of using radiomics and machine learning (ML) for imaging in oral healthcare is presented in this paper. Although the number of published studies is still relatively low, the preliminary results are very promising and in a near future, an augmented dentomaxillofacial radiology (ADMFR) will combine the use of radiomics-based and AI-based analyses with the radiologist's evaluation. In addition to the opportunities and possibilities, some challenges and limitations have also been discussed for further investigations.
Affiliation(s)
- André Ferreira Leite
- Department of Dentistry, Faculty of Health Sciences, University of Brasília, Brasília, 70910-900, Brazil; Omfsimpath Research Group, Department of Imaging and Pathology, Biomedical Sciences, KU Leuven and Dentomaxillofacial Imaging Department, University Hospitals Leuven, Leuven, 3000, Belgium
- Karla de Faria Vasconcelos
- Omfsimpath Research Group, Department of Imaging and Pathology, Biomedical Sciences, KU Leuven and Dentomaxillofacial Imaging Department, University Hospitals Leuven, Leuven, 3000, Belgium
- Holger Willems
- Relu, Innovatie-en incubatiecentrum KU Leuven, Leuven, 3000, Belgium
- Reinhilde Jacobs
- Omfsimpath Research Group, Department of Imaging and Pathology, Biomedical Sciences, KU Leuven and Dentomaxillofacial Imaging Department, University Hospitals Leuven, Leuven, 3000, Belgium; Department of Dental Medicine, Karolinska Institutet, Huddinge, 17177, Sweden
37
The Importance of Imaging Informatics and Informaticists in the Implementation of AI. Acad Radiol 2020; 27:113-116. [PMID: 31636003 DOI: 10.1016/j.acra.2019.10.002] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2019] [Accepted: 10/01/2019] [Indexed: 12/18/2022]
Abstract
Imaging informatics is critical to the success of AI implementation in radiology. An imaging informaticist is a unique individual who sits at the intersection of clinical radiology, data science, and information technology. With the ability to understand each of the different domains and translate between the experts in these domains, imaging informaticists are now essential players in the development, evaluation, and deployment of AI in the clinical environment.
38
Hierarchical Poincaré analysis for anaesthesia monitoring. J Clin Monit Comput 2019; 34:1321-1330. [DOI: 10.1007/s10877-019-00447-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Accepted: 12/14/2019] [Indexed: 02/07/2023]
39
Thafar M, Raies AB, Albaradei S, Essack M, Bajic VB. Comparison Study of Computational Prediction Tools for Drug-Target Binding Affinities. Front Chem 2019; 7:782. [PMID: 31824921 PMCID: PMC6879652 DOI: 10.3389/fchem.2019.00782] [Citation(s) in RCA: 60] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2019] [Accepted: 10/30/2019] [Indexed: 12/30/2022] Open
Abstract
Drug development is generally arduous and costly, and success rates are low. Thus, the identification of drug-target interactions (DTIs) has become a crucial step in the early stages of drug discovery. Consequently, the development of computational approaches capable of identifying potential DTIs with a minimal error rate is increasingly being pursued. These computational approaches aim to narrow down the search space for novel DTIs and shed light on the context of drug functioning. Most methods developed to date use binary classification to predict whether an interaction between a drug and its target exists. However, it is more informative, but also more challenging, to predict the strength of the binding between a drug and its target; if that strength is not sufficient, the DTI may not be useful. Therefore, methods developed to predict drug-target binding affinities (DTBA) are of great value. In this study, we provide a comprehensive overview of the existing methods that predict DTBA. We focus on the methods developed using artificial intelligence (AI), machine learning (ML), and deep learning (DL) approaches, as well as related benchmark datasets and databases. Furthermore, guidance and recommendations are provided that cover the gaps and directions of upcoming work in this research area. To the best of our knowledge, this is the first comprehensive comparative analysis of tools focused on DTBA with reference to AI/ML/DL.
Affiliation(s)
- Maha Thafar
- Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division, Computational Bioscience Research Center (CBRC), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Arwa Bin Raies
- Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division, Computational Bioscience Research Center (CBRC), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- Somayah Albaradei
- Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division, Computational Bioscience Research Center (CBRC), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Magbubah Essack
- Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division, Computational Bioscience Research Center (CBRC), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- Vladimir B. Bajic
- Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division, Computational Bioscience Research Center (CBRC), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
40
Abstract
There is recent popularity in applying machine learning to medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries to simplify their use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
41
Patwardhan RS, Hamadah HA, Patel KM, Hafiz RH, Al-Gwaiz MM. Applications of Advanced Analytics at Saudi Aramco: A Practitioners’ Perspective. Ind Eng Chem Res 2019. [DOI: 10.1021/acs.iecr.8b06205] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Affiliation(s)
- Rohit S. Patwardhan
- Process & Control Systems Department, Saudi Aramco, Dhahran 31311, Saudi Arabia
- Hamza A. Hamadah
- Process & Control Systems Department, Saudi Aramco, Dhahran 31311, Saudi Arabia
- Kalpesh M. Patel
- Process & Control Systems Department, Saudi Aramco, Dhahran 31311, Saudi Arabia
- Rayan H. Hafiz
- Process & Control Systems Department, Saudi Aramco, Dhahran 31311, Saudi Arabia
- Majid M. Al-Gwaiz
- Process & Control Systems Department, Saudi Aramco, Dhahran 31311, Saudi Arabia
42
Detection and classification of social media-based extremist affiliations using sentiment analysis techniques. HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES 2019. [DOI: 10.1186/s13673-019-0185-6] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Identification and classification of extremist-related tweets is a hot issue. Extremist gangs have been involved in using social media sites like Facebook and Twitter for propagating their ideology and recruitment of individuals. This work aims at proposing a terrorism-related content analysis framework with the focus on classifying tweets into extremist and non-extremist classes. Based on user-generated social media posts on Twitter, we develop a tweet classification system using deep learning-based sentiment analysis techniques to classify the tweets as extremist or non-extremist. The experimental results are encouraging and provide a gateway for future researchers.
43
Agajanian S, Oluyemi O, Verkhivker GM. Integration of Random Forest Classifiers and Deep Convolutional Neural Networks for Classification and Biomolecular Modeling of Cancer Driver Mutations. Front Mol Biosci 2019; 6:44. [PMID: 31245384 PMCID: PMC6579812 DOI: 10.3389/fmolb.2019.00044] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Accepted: 05/23/2019] [Indexed: 12/21/2022] Open
Abstract
Development of machine learning solutions for predicting the functional and clinical significance of cancer driver genes and mutations is paramount in modern biomedical research and has gained significant momentum in the past decade. In this work, we integrate different machine learning approaches, including tree-based methods, random forest and gradient boosted tree (GBT) classifiers, along with deep convolutional neural networks (CNNs), for prediction of cancer driver mutations in genomic datasets. The feasibility of CNNs in using raw nucleotide sequences for classification of cancer driver mutations was initially explored by employing label encoding, one-hot encoding, and embedding to preprocess the DNA information. These classifiers were benchmarked against their tree-based alternatives to evaluate performance on a relative scale. We then integrated DNA-based scores generated by the CNN with various categories of conservation, evolutionary, and functional features into a generalized random forest classifier. The results of this study demonstrate that CNNs can learn high-level features from genomic information that are complementary to the ensemble-based predictors often employed for classification of cancer mutations. By combining the deep learning-generated score with only two main ensemble-based functional features, we achieve superior performance across various machine learning classifiers. Our findings also suggest that the synergy of nucleotide-based deep learning scores and integrated metrics derived from protein sequence conservation scores allows robust classification of cancer driver mutations with a limited number of highly informative features.
Machine learning predictions are leveraged in molecular simulations, protein stability, and network-based analysis of cancer mutations in the protein kinase genes to obtain insights about molecular signatures of driver mutations and enhance the interpretability of cancer-specific classification models.
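The label and one-hot encodings of raw nucleotide sequences mentioned in this abstract can be sketched as follows; the function names are our own and the paper's exact preprocessing may differ:

```python
import numpy as np

BASES = "ACGT"

def label_encode(seq):
    """Map each nucleotide to an integer index: A->0, C->1, G->2, T->3."""
    lut = {b: i for i, b in enumerate(BASES)}
    return np.array([lut[b] for b in seq], dtype=np.int64)

def one_hot_encode(seq):
    """One 4-dim indicator vector per position, shape (len(seq), 4).
    Indexing the identity matrix by the label codes picks out rows."""
    return np.eye(4, dtype=np.float32)[label_encode(seq)]

labels = label_encode("GATTACA")
onehot = one_hot_encode("GATTACA")
print(labels.tolist())  # [2, 0, 3, 3, 0, 1, 0]
print(onehot.shape)     # (7, 4)
```

Label encoding imposes an artificial ordering on the bases, while one-hot encoding keeps them symmetric at the cost of a 4x wider input, which is one reason papers try both before settling on an input representation (an embedding layer, the third option mentioned, learns the representation instead).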
Affiliation(s)
- Steve Agajanian
- Graduate Program in Computational and Data Sciences, Schmid College of Science and Technology, Chapman University, Orange, CA, United States
- Odeyemi Oluyemi
- Graduate Program in Computational and Data Sciences, Schmid College of Science and Technology, Chapman University, Orange, CA, United States
- Gennady M Verkhivker
- Graduate Program in Computational and Data Sciences, Schmid College of Science and Technology, Chapman University, Orange, CA, United States; Department of Biomedical and Pharmaceutical Sciences, Chapman University School of Pharmacy, Irvine, CA, United States
44
Abdelhafiz D, Yang C, Ammar R, Nabavi S. Deep convolutional neural networks for mammography: advances, challenges and applications. BMC Bioinformatics 2019; 20:281. [PMID: 31167642 PMCID: PMC6551243 DOI: 10.1186/s12859-019-2823-4] [Citation(s) in RCA: 73] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND The limitations of traditional computer-aided detection (CAD) systems for mammography, the extreme importance of early detection of breast cancer, and the high impact of false diagnoses drive researchers to investigate deep learning (DL) methods for mammograms (MGs). Recent breakthroughs in DL, in particular convolutional neural networks (CNNs), have achieved remarkable advances in the medical fields. Specifically, CNNs are used in mammography for lesion localization and detection, risk assessment, image retrieval, and classification tasks. CNNs also help radiologists provide more accurate diagnoses by delivering precise quantitative analysis of suspicious lesions. RESULTS In this survey, we conducted a detailed review of the strengths, limitations, and performance of the most recent CNN applications in analyzing MG images. It summarizes 83 research studies applying CNNs to various tasks in mammography and focuses on finding the best practices used in these studies to improve diagnostic accuracy. This survey also provides deep insight into the architecture of CNNs used for various tasks, describes the most common publicly available MG repositories, and highlights their main features and strengths. CONCLUSIONS The mammography research community can utilize this survey as a basis for their current and future studies. The given comparison among common publicly available MG repositories guides the community to select the most appropriate database for their application(s). Moreover, this survey lists the best practices that improve the performance of CNNs, including the pre-processing of images and the use of multi-view images. In addition, other listed techniques like transfer learning (TL), data augmentation, batch normalization, and dropout are appealing solutions to reduce overfitting and increase the generalization of CNN models.
Finally, this survey identifies the research challenges and directions that require further investigations by the community.
Affiliation(s)
- Dina Abdelhafiz
- Department of Computer Science and Engineering, University of Connecticut, Storrs, 06269 CT USA
- The Informatics Research Institute (IRI), City of Scientific Research and Technological Application (SRTA-City), New Borg El-Arab, Egypt
- Clifford Yang
- Department of Diagnostic Imaging, University of Connecticut Health Center, Farmington, 06030 CT USA
- Reda Ammar
- Department of Computer Science and Engineering, University of Connecticut, Storrs, 06269 CT USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, 06269 CT USA
45
Levine AB, Schlosser C, Grewal J, Coope R, Jones SJM, Yip S. Rise of the Machines: Advances in Deep Learning for Cancer Diagnosis. Trends Cancer 2019; 5:157-169. [PMID: 30898263] [DOI: 10.1016/j.trecan.2019.02.002]
Abstract
Deep learning refers to a set of computer models that have recently been used to make unprecedented progress in the way computers extract information from images. These algorithms have been applied to tasks in numerous medical specialties, most extensively radiology and pathology, and in some cases have attained performance comparable to human experts. Furthermore, it is possible that deep learning could be used to extract data from medical images that would not be apparent by human analysis and could be used to inform on molecular status, prognosis, or treatment sensitivity. In this review, we outline the current developments and state-of-the-art in applying deep learning for cancer diagnosis, and discuss the challenges in adapting the technology for widespread clinical deployment.
Affiliation(s)
- Adrian B Levine
- Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Colin Schlosser
- Canada's Michael Smith Genome Sciences Centre, Vancouver, BC, Canada
- Jasleen Grewal
- Canada's Michael Smith Genome Sciences Centre, Vancouver, BC, Canada
- Robin Coope
- Canada's Michael Smith Genome Sciences Centre, Vancouver, BC, Canada
- Steve J M Jones
- Canada's Michael Smith Genome Sciences Centre, Vancouver, BC, Canada
- Stephen Yip
- Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
46
A Review on a Deep Learning Perspective in Brain Cancer Classification. Cancers (Basel) 2019; 11:111. [PMID: 30669406] [PMCID: PMC6356431] [DOI: 10.3390/cancers11010111]
Abstract
A February 2018 World Health Organization (WHO) report showed that the mortality rate due to brain or central nervous system (CNS) cancer is highest in the Asian continent. It is of critical importance that cancer be detected earlier so that many of these lives can be saved. Cancer grading is an important aspect of targeted therapy. As cancer diagnosis is highly invasive, time consuming and expensive, there is an immediate need to develop non-invasive, cost-effective and efficient tools for brain cancer characterization and grade estimation. Brain scans using magnetic resonance imaging (MRI), computed tomography (CT), and other imaging modalities are fast and safe methods for tumor detection. In this paper, we summarize the pathophysiology of brain cancer, imaging modalities for brain cancer and automated computer-assisted methods for brain cancer characterization within the machine learning and deep learning paradigms. Another objective of this paper is to identify current issues in existing engineering methods and to project a future paradigm. Further, we highlight the relationship between brain cancer and other brain disorders such as stroke, Alzheimer's, Parkinson's, and Wilson's disease, leukoaraiosis, and other neurological disorders in the context of the machine learning and deep learning paradigms.
47
Sichtermann T, Faron A, Sijben R, Teichert N, Freiherr J, Wiesmann M. Deep Learning-Based Detection of Intracranial Aneurysms in 3D TOF-MRA. AJNR Am J Neuroradiol 2018; 40:25-32. [PMID: 30573461] [DOI: 10.3174/ajnr.a5911]
Abstract
BACKGROUND AND PURPOSE The rupture of an intracranial aneurysm is a serious incident, causing subarachnoid hemorrhage associated with high fatality and morbidity rates. Because the demand for radiologic examinations is steadily growing, physician fatigue due to an increased workload is a real concern and may lead to mistaken diagnoses of potentially relevant findings. Our aim was to develop a sufficient system for automated detection of intracranial aneurysms. MATERIALS AND METHODS In a retrospective study, we established a system for the detection of intracranial aneurysms from 3D TOF-MRA data. The system is based on an open-source neural network, originally developed for segmentation of anatomic structures in medical images. Eighty-five datasets of patients with a total of 115 intracranial aneurysms were used to train the system and evaluate its performance. Manual annotation of aneurysms based on radiologic reports and critical revision of image data served as the reference standard. Sensitivity, false-positives per case, and positive predictive value were determined for different pipelines with modified pre- and postprocessing. RESULTS The highest overall sensitivity of our system for the detection of intracranial aneurysms was 90% with a sensitivity of 96% for aneurysms with a diameter of 3-7 mm and 100% for aneurysms of >7 mm. The best location-dependent performance was in the posterior circulation. Pre- and postprocessing sufficiently reduced the number of false-positives. CONCLUSIONS Our system, based on a deep learning convolutional network, can detect intracranial aneurysms with a high sensitivity from 3D TOF-MRA data.
Affiliation(s)
- T Sichtermann
- Department of Diagnostic and Interventional Neuroradiology, University Hospital RWTH Aachen, Aachen, Germany
- A Faron
- Department of Diagnostic and Interventional Neuroradiology, University Hospital RWTH Aachen, Aachen, Germany; Department of Radiology, University Hospital Bonn, Bonn, Germany
- R Sijben
- Department of Diagnostic and Interventional Neuroradiology, University Hospital RWTH Aachen, Aachen, Germany
- N Teichert
- Department of Diagnostic and Interventional Neuroradiology, University Hospital RWTH Aachen, Aachen, Germany; Department of Radiology, University Hospital Bonn, Bonn, Germany
- J Freiherr
- Department of Diagnostic and Interventional Neuroradiology, University Hospital RWTH Aachen, Aachen, Germany
- M Wiesmann
- Department of Diagnostic and Interventional Neuroradiology, University Hospital RWTH Aachen, Aachen, Germany
48
Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2018; 29:102-127. [PMID: 30553609] [DOI: 10.1016/j.zemedi.2018.11.002]
Abstract
What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has received a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we do not survey the entire landscape of applications, but place particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; and (iii) provide a starting point for people interested in experimenting with and perhaps contributing to the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.
Affiliation(s)
- Alexander Selvikvåg Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Department of Computing, Mathematics and Physics, Western Norway University of Applied Sciences, Norway
- Arvid Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Neuroinformatics and Image Analysis Laboratory, Department of Biomedicine, University of Bergen, Norway; Department of Health and Functioning, Western Norway University of Applied Sciences, Norway
49
Carpenter KA, Cohen DS, Jarrell JT, Huang X. Deep learning and virtual drug screening. Future Med Chem 2018; 10:2557-2567. [PMID: 30288997] [PMCID: PMC6563286] [DOI: 10.4155/fmc-2018-0314]
Abstract
Current drug development remains costly and slow despite tremendous technological advancements in drug discovery and medicinal chemistry. Using machine learning (ML) to virtually screen compound libraries promises to address this by generating drug leads more efficiently and accurately. Herein, we explain the broad basics and integration of both virtual screening (VS) and ML. We then discuss artificial neural networks (ANNs) and their use for VS. The ANN is emerging as the dominant classifier for ML in general, and has proven its utility for both structure-based and ligand-based VS. Techniques such as dropout, multitask learning and convolution improve the performance of ANNs and enable them to take on chemical meaning when learning about the drug-target-binding activity of compounds.
Affiliation(s)
- Kristy A Carpenter
- Neurochemistry Laboratory, Department of Psychiatry, Massachusetts General Hospital & Harvard Medical School, Charlestown, MA 02129, USA
- David S Cohen
- Neurochemistry Laboratory, Department of Psychiatry, Massachusetts General Hospital & Harvard Medical School, Charlestown, MA 02129, USA
- Juliet T Jarrell
- Neurochemistry Laboratory, Department of Psychiatry, Massachusetts General Hospital & Harvard Medical School, Charlestown, MA 02129, USA
- Xudong Huang
- Neurochemistry Laboratory, Department of Psychiatry, Massachusetts General Hospital & Harvard Medical School, Charlestown, MA 02129, USA
50
An Emotion-Aware Personalized Music Recommendation System Using a Convolutional Neural Networks Approach. Appl Sci (Basel) 2018. [DOI: 10.3390/app8071103]