1
Zeng Q, Yang L, Li Y, Xie L, Feng Y. RGVPSeg: multimodal information fusion network for retinogeniculate visual pathway segmentation. Med Biol Eng Comput 2025; 63:1397-1411. [PMID: 39743637] [DOI: 10.1007/s11517-024-03248-z]
Abstract
The segmentation of the retinogeniculate visual pathway (RGVP) enables quantitative analysis of its anatomical structure. Multimodal learning has exhibited considerable potential for segmenting the RGVP from structural MRI (sMRI) and diffusion MRI (dMRI). However, the intricate nature of the skull base environment and the slender morphology of the RGVP make it difficult for existing methods to adequately leverage the complementary information in each modality. In this study, we propose a multimodal information fusion network designed to optimize and select the complementary information across multiple modalities: T1-weighted (T1w) images, fractional anisotropy (FA) images, and fiber orientation distribution function (fODF) peaks; the modalities can supervise each other during this process. Specifically, we add a supervised master-assistant cross-modal learning framework between the encoder layers of the different modalities so that the characteristics of each modality can be exploited more fully, yielding a more accurate segmentation. We applied RGVPSeg to an MRI dataset of 102 subjects from the Human Connectome Project (HCP) and 10 subjects from a Multi-shell Diffusion MRI (MDM) dataset. The experimental results are promising and demonstrate that the proposed framework is feasible and outperforms the compared methods. Our code is freely available at https://github.com/yanglin9911/RGVPSeg .
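A minimal PyTorch sketch of the general multi-encoder idea behind such networks: one encoder per modality (T1w, FA, fODF peaks), features fused by concatenation, then a shared segmentation head. The layer sizes, the concatenation fusion, and the 9-channel peak input are illustrative assumptions, not the published RGVPSeg architecture (see the authors' repository for the real implementation).

```python
# Minimal sketch of multi-encoder feature fusion for segmentation (PyTorch).
# Channel counts and concatenation-based fusion are illustrative assumptions,
# not the published RGVPSeg architecture.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class MultimodalFusionSeg(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc_t1w = conv_block(1, 16)    # T1-weighted image
        self.enc_fa = conv_block(1, 16)     # fractional anisotropy image
        self.enc_peaks = conv_block(9, 16)  # fODF peaks (e.g., 3 peaks x 3 components)
        self.fuse = conv_block(48, 32)      # fuse concatenated modality features
        self.head = nn.Conv3d(32, n_classes, kernel_size=1)

    def forward(self, t1w, fa, peaks):
        feats = torch.cat([self.enc_t1w(t1w), self.enc_fa(fa), self.enc_peaks(peaks)], dim=1)
        return self.head(self.fuse(feats))

x_t1w = torch.randn(1, 1, 32, 32, 32)
x_fa = torch.randn(1, 1, 32, 32, 32)
x_peaks = torch.randn(1, 9, 32, 32, 32)
logits = MultimodalFusionSeg()(x_t1w, x_fa, x_peaks)
print(logits.shape)  # torch.Size([1, 2, 32, 32, 32])
```

Fusing at the feature level, rather than stacking modalities as raw input channels, is what allows cross-modal supervision to act on each encoder separately.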
Affiliation(s)
- Qingrun Zeng
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China
- Lin Yang
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China
- Yongqiang Li
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China
- Lei Xie
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China
- Yuanjing Feng
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China
2
Xena-Bosch C, Kodali S, Sahi N, Chard D, Llufriu S, Toosy AT, Martinez-Heras E, Prados F. Advances in MRI optic nerve segmentation. Mult Scler Relat Disord 2025; 98:106437. [PMID: 40220726] [DOI: 10.1016/j.msard.2025.106437]
Abstract
Understanding optic nerve structure and monitoring changes within it can provide insights into neurodegenerative diseases like multiple sclerosis, in which optic nerves are often damaged by inflammatory episodes of optic neuritis. Over the past decades, interest in the optic nerve has increased, particularly with advances in magnetic resonance technology and the advent of deep learning solutions. These advances have significantly improved the visualisation and analysis of optic nerves, making it possible to detect subtle changes that aid the early diagnosis and treatment of optic nerve-related diseases and the planning of radiotherapy interventions. Effective segmentation techniques, therefore, are crucial for enhancing the accuracy of predictive models and for planning interventions and treatment strategies. This comprehensive review, which includes 27 peer-reviewed articles published between 2007 and 2024, examines the evolution of optic nerve magnetic resonance imaging segmentation, tracing the development from intensity-based methods to the latest deep learning algorithms, including multi-atlas solutions using single or multiple image modalities.
Affiliation(s)
- Carla Xena-Bosch
- e-Health Center, Universitat Oberta de Catalunya, Barcelona, Spain.
- Srikirti Kodali
- Queen Square MS Centre, Department of Neuroinflammation, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London, United Kingdom
- Nitin Sahi
- Queen Square MS Centre, Department of Neuroinflammation, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London, United Kingdom
- Declan Chard
- Queen Square MS Centre, Department of Neuroinflammation, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London, United Kingdom; National Institute for Health Research (NIHR) University College London Hospitals (UCLH) Biomedical Research Centre, United Kingdom
- Sara Llufriu
- Neuroimmunology and Multiple Sclerosis Unit, Hospital Clínic de Barcelona, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS) and Universitat de Barcelona, Barcelona, Spain
- Ahmed T Toosy
- Queen Square MS Centre, Department of Neuroinflammation, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London, United Kingdom
- Eloy Martinez-Heras
- Neuroimmunology and Multiple Sclerosis Unit, Hospital Clínic de Barcelona, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS) and Universitat de Barcelona, Barcelona, Spain
- Ferran Prados
- e-Health Center, Universitat Oberta de Catalunya, Barcelona, Spain; Queen Square MS Centre, Department of Neuroinflammation, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London, United Kingdom; Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
3
Jiang Z, Parida A, Anwar SM, Tang Y, Roth HR, Fisher MJ, Packer RJ, Avery RA, Linguraru MG. Automatic Visual Acuity Loss Prediction in Children with Optic Pathway Gliomas using Magnetic Resonance Imaging. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. [PMID: 38083430] [PMCID: PMC11283911] [DOI: 10.1109/embc40787.2023.10339961]
Abstract
Children with optic pathway gliomas (OPGs), a low-grade brain tumor associated with neurofibromatosis type 1 (NF1-OPG), are at risk for permanent vision loss. While OPG size has been associated with vision loss, it is unclear how changes in the size, shape, and imaging features of OPGs relate to the likelihood of vision loss. This paper presents a fully automatic framework for accurate prediction of visual acuity loss using multi-sequence magnetic resonance images (MRIs). Our proposed framework includes a transformer-based segmentation network using transfer learning, statistical analysis of radiomic features, and a machine learning method for predicting vision loss. Our segmentation network was evaluated on multi-sequence MRIs acquired from 75 pediatric subjects with NF1-OPG and obtained an average Dice similarity coefficient of 0.791. The ability to predict vision loss was evaluated on a subset of 25 subjects with ground truth using cross-validation and achieved an average accuracy of 0.8. Multiple MRI features appear to be good indicators of vision loss, potentially permitting early treatment decisions. Clinical relevance: accurately determining which children with NF1-OPGs are at risk, and hence require preventive treatment before vision loss, remains challenging; towards this, we present a fully automatic deep-learning-based framework for vision outcome prediction.
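The final prediction stage, radiomic feature vectors classified under cross-validation, can be sketched with scikit-learn as below; the random data stands in for real radiomic features, and the logistic-regression classifier is an assumption, not necessarily the authors' choice.

```python
# Schematic of the prediction stage: cross-validated classification of
# radiomic feature vectors. Random data stands in for real features, and
# the choice of classifier is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(25, 40))        # 25 subjects x 40 radiomic features
y = rng.integers(0, 2, size=25)      # vision loss labels (0/1)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.2f}")
```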
4
Jin R, Cai Y, Zhang S, Yang T, Feng H, Jiang H, Zhang X, Hu Y, Liu J. Computational approaches for the reconstruction of optic nerve fibers along the visual pathway from medical images: a comprehensive review. Front Neurosci 2023; 17:1191999. [PMID: 37304011] [PMCID: PMC10250625] [DOI: 10.3389/fnins.2023.1191999]
Abstract
Optic nerve fibers in the visual pathway play a significant role in vision formation. Damage to optic nerve fibers is a biomarker for the diagnosis of various ophthalmological and neurological diseases, and there is also a need to protect optic nerve fibers from damage during neurosurgery and radiation therapy. Reconstruction of optic nerve fibers from medical images can facilitate all of these clinical applications. Although many computational methods have been developed for the reconstruction of optic nerve fibers, a comprehensive review of these methods is still lacking. This paper describes the two strategies for optic nerve fiber reconstruction applied in existing studies: image segmentation and fiber tracking. In comparison to image segmentation, fiber tracking can delineate more detailed structures of optic nerve fibers. For each strategy, both conventional and AI-based approaches are introduced, with the latter usually demonstrating better performance than the former. From the review, we conclude that AI-based methods are the trend for optic nerve fiber reconstruction and that new techniques like generative AI can help address the current challenges in the field.
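To make the fiber-tracking strategy concrete, here is a toy deterministic streamline tracker over a voxel-wise peak-direction field, using fixed-step Euler integration; the synthetic field and the stopping rules are illustrative assumptions, far simpler than the tractography algorithms the review covers.

```python
# Toy deterministic fiber tracking: follow a voxel-wise peak-direction field
# from a seed point with fixed-step Euler integration. The synthetic field
# and stopping rules are illustrative assumptions.
import numpy as np

def track(peaks, seed, step=0.5, max_steps=200):
    """peaks: (X, Y, Z, 3) unit direction per voxel; seed: (3,) in voxel coords."""
    pos = np.asarray(seed, dtype=float)
    line = [pos.copy()]
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, peaks.shape[:3])):
            break                      # left the volume
        d = peaks[idx]
        if np.linalg.norm(d) < 1e-6:
            break                      # no orientation information here
        if len(line) > 1 and np.dot(d, line[-1] - line[-2]) < 0:
            d = -d                     # keep propagation direction consistent
        pos = pos + step * d / np.linalg.norm(d)
        line.append(pos.copy())
    return np.array(line)

field = np.zeros((20, 20, 20, 3))
field[..., 0] = 1.0                    # synthetic field pointing along x
print(track(field, seed=(2, 10, 10)).shape)
```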
Affiliation(s)
- Richu Jin
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Yongning Cai
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
- Shiyang Zhang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Ting Yang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Haibo Feng
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Hongyang Jiang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Xiaoqing Zhang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Yan Hu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Jiang Liu
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
5
Xie L, Huang J, Yu J, Zeng Q, Hu Q, Chen Z, Xie G, Feng Y. CNTSeg: A multimodal deep-learning-based network for cranial nerves tract segmentation. Med Image Anal 2023; 86:102766. [PMID: 36812693] [DOI: 10.1016/j.media.2023.102766]
Abstract
The segmentation of cranial nerves (CNs) tracts based on diffusion magnetic resonance imaging (dMRI) provides a valuable quantitative tool for the analysis of the morphology and course of individual CNs. Tractography-based approaches can describe and analyze the anatomical area of CNs by selecting reference streamlines in combination with region-of-interest (ROI)-based or clustering-based methods. However, due to the slender structure of CNs and the complex anatomical environment, single-modality dMRI data cannot provide a complete and accurate description, resulting in low accuracy or even failure of current algorithms in performing individualized CNs segmentation. In this work, we propose CNTSeg, a novel multimodal deep-learning-based multi-class network for automated cranial nerves tract segmentation that does not use tractography, ROI placement, or clustering. Specifically, we introduce T1w images, fractional anisotropy (FA) images, and fiber orientation distribution function (fODF) peaks into the training data set, and design a back-end fusion module that uses the complementary information of interphase feature fusion to improve segmentation performance. CNTSeg achieves the segmentation of 5 pairs of CNs (i.e., optic nerve CN II, oculomotor nerve CN III, trigeminal nerve CN V, and facial-vestibulocochlear nerve CN VII/VIII). Extensive comparisons and ablation experiments show promising results that are anatomically convincing even for difficult tracts. The code will be openly available at https://github.com/IPIS-XieLei/CNTSeg.
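Segmentation quality in this line of work is typically reported as per-class Dice similarity coefficients; below is a small numpy sketch of that metric for a multi-class CN label map (random labels stand in for real predictions).

```python
# Per-class Dice similarity coefficient, the standard overlap metric used to
# evaluate multi-class CN tract segmentations (sketch, numpy only).
import numpy as np

def dice_per_class(pred, gt, n_classes):
    """pred, gt: integer label volumes of the same shape."""
    scores = {}
    for c in range(1, n_classes):      # skip background (label 0)
        p, g = pred == c, gt == c
        denom = p.sum() + g.sum()
        scores[c] = 2.0 * np.logical_and(p, g).sum() / denom if denom else np.nan
    return scores

rng = np.random.default_rng(0)
pred = rng.integers(0, 6, size=(32, 32, 32))   # 5 tract classes + background
gt = rng.integers(0, 6, size=(32, 32, 32))
print(dice_per_class(pred, gt, n_classes=6))
```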
Affiliation(s)
- Lei Xie
- Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China.
- Jiahao Huang
- Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
- Jiangli Yu
- Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
- Qingrun Zeng
- Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
- Qiming Hu
- Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
- Zan Chen
- Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China; Zhejiang Provincial United Key Laboratory of Embedded Systems, Hangzhou 310023, China
- Guoqiang Xie
- Nuclear Industry 215 Hospital of Shaanxi Province, Xianyang, 712000, China
- Yuanjing Feng
- Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China; Zhejiang Provincial United Key Laboratory of Embedded Systems, Hangzhou 310023, China
6
A review on voice pathology: Taxonomy, diagnosis, medical procedures and detection techniques, open challenges, limitations, and recommendations for future directions. J Intell Syst 2022. [DOI: 10.1515/jisys-2022-0058]
Abstract
Speech is a primary means of human communication and one of the most basic features of human conduct, and voice is an important component of it. A speech disorder is a condition that affects a person's ability to speak normally, which occasionally results in voice impairment with psychological and emotional consequences. Early detection of voice problems is crucial, and computer-based procedures are less costly and easier to administer for this purpose than traditional methods. This study highlights the following issues: recent studies, methods of voice pathology detection, machine learning and deep learning (DL) methods used in data classification, the main datasets utilized, and the role of Internet of Things (IoT) systems in voice pathology diagnosis. Moreover, this study presents different applications, open challenges, and recommendations for future directions of IoT systems and artificial intelligence (AI) approaches in voice pathology diagnosis. Finally, this study highlights some limitations of voice pathology datasets in relation to the role of IoT in the healthcare sector, which shows the urgent need for efficient approaches and simple, reliable diagnostic procedures and treatments for doctors and patients. This review covers voice pathology taxonomy, detection techniques, open challenges, limitations, and recommendations for future directions to provide a clear background for doctors and patients. Standard databases, including the Massachusetts Eye and Ear Infirmary, the Saarbruecken Voice Database, and the Arabic Voice Pathology Database, were used in most of the articles reviewed here. The classes, features, and main purposes of voice pathology identification are also highlighted. This study focuses on the extraction of voice pathology features, especially speech analysis; extended feature vectors comprising static and dynamic features; and the conversion of these extended feature vectors into solid vectors before passing them to the recognizer.
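As a concrete example of the static-plus-dynamic feature vectors described above, the sketch below computes MFCCs and their first- and second-order deltas with librosa; the library choice, the bundled example audio, and the 13-coefficient setting are illustrative assumptions, not the review's prescribed pipeline.

```python
# Sketch of a static + dynamic acoustic feature vector for voice pathology
# detection: MFCCs plus their first- and second-order deltas, averaged over
# time. librosa and the exact feature set are illustrative assumptions.
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"), sr=16000)  # stand-in for a voice recording
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # static features
d1 = librosa.feature.delta(mfcc)                       # dynamic: first-order deltas
d2 = librosa.feature.delta(mfcc, order=2)              # dynamic: second-order deltas
features = np.concatenate([mfcc, d1, d2], axis=0)      # extended feature matrix
vector = features.mean(axis=1)                         # fixed-length vector for a classifier
print(vector.shape)                                    # (39,)
```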
7
Zhang XQ, Hu Y, Xiao ZJ, Fang JS, Higashita R, Liu J. Machine Learning for Cataract Classification/Grading on Ophthalmic Imaging Modalities: A Survey. Mach Intell Res 2022; 19:184-208. [DOI: 10.1007/s11633-022-1329-0]
Abstract
Cataracts are the leading cause of visual impairment and blindness globally. Over the years, researchers have achieved significant progress in developing state-of-the-art machine learning techniques for automatic cataract classification and grading, aiming to detect cataracts early and improve clinicians' diagnostic efficiency. This paper provides a comprehensive survey of recent advances in machine learning techniques for cataract classification/grading based on ophthalmic images. We summarize the existing literature from two research directions: conventional machine learning methods and deep learning methods. The survey also provides insights into the merits and limitations of existing works. In addition, we discuss several challenges of automatic cataract classification/grading based on machine learning techniques and present possible solutions to these challenges for future research.
8
Yogesh MJ, Karthikeyan J. Health Informatics: Engaging Modern Healthcare Units: A Brief Overview. Front Public Health 2022; 10:854688. [PMID: 35570921] [PMCID: PMC9099090] [DOI: 10.3389/fpubh.2022.854688]
Abstract
With today's large amounts of unstructured data, health informatics is gaining traction, allowing healthcare units to derive meaningful insights that give doctors and decision-makers the relevant information to scale operations and predict the future course of treatments via information systems communication. Around the world, massive amounts of data are now being collected and analyzed for better patient diagnosis and treatment, improving public health systems and assisting government agencies in designing and implementing public health policies, instilling confidence in future generations who will rely on these systems. This article provides an overview of the HL7 FHIR architecture, including the workflow state, linkages, and various informatics approaches used in healthcare units. The article discusses future trends and directions in health informatics for successful application to public health safety. With the advancement of technology, healthcare units face new issues that must be addressed with appropriate adoption policies and standards.
Affiliation(s)
- M. J. Yogesh
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
9
Aljabri M, AlAmir M, AlGhamdi M, Abdel-Mottaleb M, Collado-Mesa F. Towards a better understanding of annotation tools for medical imaging: a survey. Multimed Tools Appl 2022; 81:25877-25911. [PMID: 35350630] [PMCID: PMC8948453] [DOI: 10.1007/s11042-022-12100-1]
Abstract
Medical imaging refers to several different technologies that are used to view the human body in order to diagnose, monitor, or treat medical conditions. It requires significant expertise to efficiently and correctly interpret the images generated by each of these technologies, which among others include radiography, ultrasound, and magnetic resonance imaging. Deep learning and machine learning techniques provide different solutions for medical image interpretation, including those associated with detection and diagnosis. Despite the huge success of deep learning algorithms in image analysis, training algorithms to reach human-level performance in these tasks depends on the availability of large amounts of high-quality training data, including high-quality annotations to serve as ground truth. Different annotation tools have been developed to assist with the annotation process. In this survey, we present the currently available annotation tools for medical imaging, including descriptions of their graphical user interfaces (GUI) and supporting instruments. The main contribution of this study is to provide an intensive review of popular annotation tools and show their successful usage in annotating medical imaging datasets, to guide researchers in this area.
Affiliation(s)
- Manar Aljabri
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlAmir
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlGhamdi
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Fernando Collado-Mesa
- Department of Radiology, University of Miami Miller School of Medicine, Miami, FL, USA
10
Li L, Zimmer VA, Schnabel JA, Zhuang X. AtrialJSQnet: A new framework for joint segmentation and quantification of left atrium and scars incorporating spatial and shape information. Med Image Anal 2022; 76:102303. [PMID: 34875581] [DOI: 10.1016/j.media.2021.102303]
Abstract
Left atrial (LA) and atrial scar segmentation from late gadolinium enhanced magnetic resonance imaging (LGE MRI) is an important task in clinical practice. Automatic segmentation is, however, still challenging due to the poor image quality, the variety of LA shapes, the thin atrial wall, and the surrounding enhanced regions. Previous methods normally solved the two tasks independently and ignored the intrinsic spatial relationship between LA and scars. In this work, we develop a new framework, namely AtrialJSQnet, in which LA segmentation, scar projection onto the LA surface, and scar quantification are performed simultaneously in an end-to-end style. We propose a mechanism of shape attention (SA) via an implicit surface projection to utilize the inherent correlation between the LA cavity and scars. Specifically, the SA scheme is embedded into a multi-task architecture to perform joint LA segmentation and scar quantification. In addition, a spatial encoding (SE) loss is introduced to incorporate continuous spatial information of the target and thereby reduce noisy patches in the predicted segmentation. We evaluated the proposed framework on 60 post-ablation LGE MRIs from the MICCAI2018 Atrial Segmentation Challenge. Moreover, we explored the domain generalization ability of the proposed AtrialJSQnet on 40 pre-ablation LGE MRIs from this challenge and 30 post-ablation multi-center LGE MRIs from another challenge (ISBI2012 Left Atrium Fibrosis and Scar Segmentation Challenge). Extensive experiments on public datasets demonstrated the effectiveness of the proposed AtrialJSQnet, which achieved performance competitive with the state-of-the-art. The relatedness between LA segmentation and scar quantification was explicitly explored and showed significant performance improvements for both tasks. The code has been released via https://zmiclab.github.io/projects.html.
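The SE loss encodes continuous spatial information via distance transform maps; the sketch below shows one plausible form of such a loss, weighting predicted probabilities by a signed distance map of the ground truth, and is an assumption rather than the paper's exact formulation.

```python
# Sketch of a distance-transform-based spatial loss: predictions are weighted
# by a signed distance map of the ground truth, penalizing foreground
# probability far from the target. One plausible form of a spatial-encoding
# loss, not necessarily the paper's exact formulation.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Negative inside the target, positive outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(1 - mask)
    return outside - inside

def spatial_encoding_loss(prob, mask):
    """prob: predicted foreground probability; mask: binary ground truth."""
    sdm = signed_distance(mask)
    sdm = sdm / (np.abs(sdm).max() + 1e-8)   # normalize to [-1, 1]
    return float(np.mean(prob * sdm))        # lower is better

mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:40, 20:40] = 1
prob = np.full((64, 64), 0.5)
print(spatial_encoding_loss(prob, mask))
```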
Affiliation(s)
- Lei Li
- School of Data Science, Fudan University, Shanghai, China; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China; School of Biomedical Engineering and Imaging Sciences, Kings College London, London, UK
- Veronika A Zimmer
- School of Biomedical Engineering and Imaging Sciences, Kings College London, London, UK; Technical University Munich, Munich, Germany
- Julia A Schnabel
- School of Biomedical Engineering and Imaging Sciences, Kings College London, London, UK; Technical University Munich, Munich, Germany; Helmholtz Center Munich, Germany
- Xiahai Zhuang
- School of Data Science, Fudan University, Shanghai, China
11
Golemati S, Yanni A, Tsiaparas NN, Lechareas S, Vlachos IS, Cokkinos DD, Krokidis M, Nikita KS, Perrea D, Chatziioannou A. Curvelet Transform-Based Texture Analysis of Carotid B-mode Ultrasound Images in Asymptomatic Men With Moderate and Severe Stenoses: A Preliminary Clinical Study. Ultrasound Med Biol 2022; 48:78-90. [PMID: 34666918] [DOI: 10.1016/j.ultrasmedbio.2021.09.005]
Abstract
The curvelet transform, which represents images in terms of their geometric and textural characteristics, was investigated to reveal differences between moderate (50%-69%, n = 11) and severe (70%-100%, n = 14) stenosis asymptomatic plaque on B-mode ultrasound. Texture features were estimated in original and curvelet-transformed images of atheromatous plaque (PL), the adjacent arterial wall (intima-media [IM]), and the plaque shoulder (SH; i.e., the boundary between plaque and wall), separately at end systole and end diastole. Seventeen features derived from the original images differed significantly between the two groups (4 for IM, 3 for PL and 10 for SH; 9 at end diastole and 8 at end systole); 19 of 234 features (2 for IM and 17 for SH; 8 at end systole and 11 at end diastole) derived from the curvelet-transformed images were significantly higher in the patients with severe stenosis, indicating higher magnitude, variation, and randomness of image gray levels. In these patients, lower body height and higher serum creatinine concentration were observed. Our findings suggest that (a) moderate and severe plaque have similar curvelet-based texture properties, and (b) IM and SH provide useful information about arterial wall pathophysiology, complementary to PL itself. The curvelet transform is promising for identifying novel indices of cardiovascular risk and warrants further investigation in larger cohorts.
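The group-comparison step can be sketched as below: first-order texture features per region, compared between the two stenosis groups with a nonparametric Mann-Whitney U test on synthetic data. The curvelet decomposition itself is omitted, since no standard Python implementation is assumed here; the feature set is an illustrative stand-in.

```python
# Sketch of the statistical comparison step: first-order texture features per
# image region, compared between moderate- and severe-stenosis groups with a
# nonparametric test. Synthetic data; the curvelet decomposition is omitted.
import numpy as np
from scipy.stats import mannwhitneyu

def first_order_features(roi):
    counts, _ = np.histogram(roi, bins=32)
    p = counts / counts.sum()
    p = p[p > 0]
    return {"mean": roi.mean(), "std": roi.std(),
            "entropy": float(-(p * np.log2(p)).sum())}

rng = np.random.default_rng(0)
moderate = [first_order_features(rng.normal(100, 20, (50, 50))) for _ in range(11)]
severe = [first_order_features(rng.normal(110, 30, (50, 50))) for _ in range(14)]

for name in ("mean", "std", "entropy"):
    u, p = mannwhitneyu([f[name] for f in moderate], [f[name] for f in severe])
    print(f"{name}: U={u:.1f}, p={p:.3f}")
```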
Affiliation(s)
- Spyretta Golemati
- Medical School, National and Kapodistrian University of Athens, Athens, Greece.
- Amalia Yanni
- Department of Nutrition and Dietetics, Harokopio University of Athens, Athens, Greece
- Nikolaos N Tsiaparas
- Biomedical Simulations and Imaging Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Symeon Lechareas
- Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Ioannis S Vlachos
- Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Miltiadis Krokidis
- Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Konstantina S Nikita
- Biomedical Simulations and Imaging Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Despina Perrea
- Medical School, National and Kapodistrian University of Athens, Athens, Greece
12
Liu J, Shen C, Aguilera N, Cukras C, Hufnagel RB, Zein WM, Liu T, Tam J. Active Cell Appearance Model Induced Generative Adversarial Networks for Annotation-Efficient Cell Segmentation and Identification on Adaptive Optics Retinal Images. IEEE Trans Med Imaging 2021; 40:2820-2831. [PMID: 33507868] [PMCID: PMC8548993] [DOI: 10.1109/tmi.2021.3055483]
Abstract
Data annotation is a fundamental precursor to establishing the large training sets needed to effectively apply deep learning methods to medical image analysis. For cell segmentation, obtaining high-quality annotations is an expensive process that usually requires manual grading by experts. This work introduces an approach to efficiently generate annotated images, called "A-GANs", created by combining an active cell appearance model (ACAM) with conditional generative adversarial networks (C-GANs). ACAM is a statistical model that captures a realistic range of cell characteristics and is used to ensure that the image statistics of generated cells are guided by real data. C-GANs utilize cell contours generated by ACAM to produce cells that match the input contours. By pairing ACAM-generated contours with the corresponding A-GANs-generated images, high-quality annotated images can be efficiently generated. Experimental results on adaptive optics (AO) retinal images showed that A-GANs robustly synthesize realistic, artificial images whose cell distributions are exquisitely specified by ACAM. The cell segmentation performance using as few as 64 manually annotated real AO images combined with 248 artificially generated images from A-GANs was similar to that using 248 manually annotated real images alone (Dice coefficients of 88% for both). Finally, application to rare diseases in which images exhibit never-seen characteristics demonstrated improvements in cell segmentation without the need to incorporate manual annotations from these new retinal images. Overall, A-GANs introduce a methodology for generating high-quality annotated data that statistically captures the characteristics of any desired dataset and can be used to more efficiently train deep-learning-based medical image analysis applications.
13
Yang S, Zhu F, Ling X, Liu Q, Zhao P. Intelligent Health Care: Applications of Deep Learning in Computational Medicine. Front Genet 2021; 12:607471. [PMID: 33912213] [PMCID: PMC8075004] [DOI: 10.3389/fgene.2021.607471]
Abstract
With the progress of medical technology, the biomedical field has ushered in the era of big data, on the basis of which, driven by artificial intelligence technology, computational medicine has emerged. Researchers need to extract the effective information contained in this biomedical big data to promote the development of precision medicine. Traditionally, machine learning methods are used to mine biomedical data for features; these methods generally rely on feature engineering and the domain knowledge of experts, requiring tremendous time and human resources. In contrast, deep learning, as a cutting-edge machine learning branch, can automatically learn complex and robust features from raw data without the need for feature engineering. We review the applications of deep learning in medical imaging, electronic health records, genomics, and drug development, and find that deep learning has a clear advantage in making full use of biomedical data and improving the level of medical care. Deep learning plays an increasingly important role in the field of medical health and has broad application prospects. However, problems and challenges of deep learning in computational medicine and health remain, including insufficient data, interpretability, data privacy, and heterogeneity. Analysis and discussion of these problems provide a reference for improving the application of deep learning in medical health.
Affiliation(s)
- Sijie Yang
- School of Computer Science and Technology, Soochow University, Suzhou, China
- Fei Zhu
- School of Computer Science and Technology, Soochow University, Suzhou, China
- Xinghong Ling
- School of Computer Science and Technology, Soochow University, Suzhou, China
- WenZheng College of Soochow University, Suzhou, China
- Quan Liu
- School of Computer Science and Technology, Soochow University, Suzhou, China
- Peiyao Zhao
- School of Computer Science and Technology, Soochow University, Suzhou, China
14
Raza K, Singh NK. A Tour of Unsupervised Deep Learning for Medical Image Analysis. Curr Med Imaging 2021; 17:1059-1077. [PMID: 33504314] [DOI: 10.2174/1573405617666210127154257]
Abstract
BACKGROUND Interpretation of medical images for the diagnosis and treatment of complex diseases from high-dimensional and heterogeneous data remains a key challenge in transforming healthcare. In the last few years, both supervised and unsupervised deep learning have achieved promising results in the area of medical image analysis. Several reviews of supervised deep learning have been published, but hardly any rigorous review of unsupervised deep learning for medical image analysis is available. OBJECTIVES The objective of this review is to systematically present various unsupervised deep learning models, tools, and benchmark datasets applied to medical image analysis. The discussed models include autoencoders and their variants, restricted Boltzmann machines (RBM), deep belief networks (DBN), deep Boltzmann machines (DBM), and generative adversarial networks (GAN). Future research opportunities and challenges of unsupervised deep learning techniques for medical image analysis are also discussed. CONCLUSION Currently, interpretation of medical images for diagnostic purposes is usually performed by human experts, who may be assisted or replaced by computer-aided diagnosis thanks to advances in machine learning techniques, including deep learning, and the availability of cheap computing infrastructure through cloud computing. Both supervised and unsupervised machine learning approaches are widely applied in medical image analysis, each with certain pros and cons. Since human supervision is not always available, or may be inadequate or biased, unsupervised learning algorithms offer great promise and many advantages for biomedical image analysis.
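A minimal convolutional autoencoder, the simplest of the unsupervised models listed above, trained purely on reconstruction error; this PyTorch sketch uses arbitrary patch sizes and layer widths and random data standing in for unlabeled medical image patches.

```python
# Minimal unsupervised autoencoder for 2D image patches (PyTorch sketch):
# the encoder compresses, the decoder reconstructs, and training minimizes
# reconstruction error without any labels.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(8, 1, 64, 64)             # batch of unlabeled patches
loss = nn.functional.mse_loss(model(x), x)
loss.backward(); opt.step()
print(float(loss))
```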
Affiliation(s)
- Khalid Raza
- Department of Computer Science, Jamia Millia Islamia, New Delhi, India
15
Liu Y, Gu X. Evaluation and comparison of global-feature-based and local-feature-based segmentation algorithms in intracranial visual pathway delineation. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1766-1769. [PMID: 33018340] [DOI: 10.1109/embc44109.2020.9175937]
Abstract
The intracranial visual pathway transmits visual signals to the brain. It is not only a target organ of disease but also an organ at risk in radiotherapy; its delineation therefore plays an important role in both diagnosis and treatment planning. Traditional manual segmentation is time- and labor-consuming and suffers from intra- and inter-rater variability. To overcome these problems, state-of-the-art segmentation models have been designed and various features extracted and utilized, but it is hard to judge their effectiveness for intracranial visual pathway delineation, because these methods were evaluated on different datasets with different training tricks. This study investigates the contributions of global and local features in delineating the intracranial visual pathway from MRI scans. Two typical segmentation models, 3D UNet and DeepMedic, were chosen because they focus on global and local features, respectively. We constructed a hybrid model by serially connecting the two models to assess the performance of combined global and local features. Validation results showed that the hybrid model outperformed the individual ones, demonstrating that multi-scale feature fusion is important for improving segmentation performance.
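A sketch of the serial-connection idea: the second (local) network receives the image together with the first (global) network's probability map as an extra channel. Both networks are reduced to toy conv blocks here, standing in for 3D UNet and DeepMedic; the exact coupling used in the study may differ.

```python
# Sketch of a serial hybrid: stage 2 refines stage 1's prediction by taking
# the image plus stage 1's probability map as input channels. Toy conv
# blocks stand in for 3D UNet (global) and DeepMedic (local).
import torch
import torch.nn as nn

class TinySeg(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 1),
        )

    def forward(self, x):
        return self.net(x)

global_model = TinySeg(in_ch=1)          # stand-in for 3D UNet
local_model = TinySeg(in_ch=2)           # stand-in for DeepMedic

image = torch.randn(1, 1, 32, 32, 32)
prob = torch.sigmoid(global_model(image))            # stage 1: global prediction
refined = local_model(torch.cat([image, prob], 1))   # stage 2: refine with local context
print(refined.shape)
```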
16
Ai D, Zhao Z, Fan J, Song H, Qu X, Xian J, Yang J. Spatial probabilistic distribution map-based two-channel 3D U-net for visual pathway segmentation. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2020.09.003]
17
Tor-Diez C, Porras AR, Packer RJ, Avery RA, Linguraru MG. Unsupervised MRI Homogenization: Application to Pediatric Anterior Visual Pathway Segmentation. Lect Notes Comput Sci 2020; 12436:180-188. [PMID: 34327515] [DOI: 10.1007/978-3-030-59861-7_19]
Abstract
Deep learning strategies have become ubiquitous optimization tools for medical image analysis. With an appropriate amount of data, these approaches outperform classic methodologies in a variety of image processing tasks. However, rare diseases and pediatric imaging often lack extensive data; in particular, MRIs are uncommon because they require sedation in young children. Moreover, the lack of standardization in MRI protocols introduces strong variability between datasets. In this paper, we present a general deep learning architecture for MRI homogenization that also provides the segmentation map of an anatomical region of interest. Homogenization is achieved using an unsupervised architecture based on a variational autoencoder with cycle generative adversarial networks, which learns a common space (i.e., a representation of the optimal imaging protocol) using an unpaired image-to-image translation network. The segmentation is simultaneously generated by a supervised learning strategy. We evaluated our method by segmenting the challenging anterior visual pathway using three brain T1-weighted MRI datasets (variable protocols and vendors). Our method significantly outperformed a non-homogenized multi-protocol U-Net.
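The cycle-consistency core of the unpaired translation component can be sketched as follows; the toy generators and loss weights are assumptions, and the paper's variational autoencoder, adversarial terms, and segmentation head are omitted.

```python
# Core losses of unpaired image-to-image translation (sketch): generators
# G_ab and G_ba map between MRI protocols a and b, and the cycle loss forces
# a round trip to reproduce the input. Toy generators; adversarial terms,
# the VAE, and the segmentation head from the paper are omitted.
import torch
import torch.nn as nn

def toy_generator():
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1))

G_ab, G_ba = toy_generator(), toy_generator()
x_a = torch.randn(4, 1, 64, 64)          # images from protocol a
x_b = torch.randn(4, 1, 64, 64)          # unpaired images from protocol b

l1 = nn.L1Loss()
cycle_loss = l1(G_ba(G_ab(x_a)), x_a) + l1(G_ab(G_ba(x_b)), x_b)
identity_loss = l1(G_ab(x_b), x_b) + l1(G_ba(x_a), x_a)
total = cycle_loss + 0.5 * identity_loss  # weight is an illustrative choice
print(float(total))
```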
Affiliation(s)
- Carlos Tor-Diez
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC 20010, USA
- Antonio R Porras
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC 20010, USA
- Roger J Packer
- Center for Neuroscience & Behavioral Health, Children's National Hospital, Washington, DC 20010, USA
- Gilbert Neurofibromatosis Institute, Children's National Hospital, Washington, DC 20010, USA
- Robert A Avery
- Division of Pediatric Ophthalmology, Children's Hospital of Philadelphia, Philadelphia, PA 19104, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC 20010, USA
- School of Medicine and Health Sciences, George Washington University, Washington, DC 20037, USA
18
Renard F, Guedria S, Palma ND, Vuillerme N. Variability and reproducibility in deep learning for medical image segmentation. Sci Rep 2020; 10:13724. [PMID: 32792540] [PMCID: PMC7426407] [DOI: 10.1038/s41598-020-69920-0]
Abstract
Medical image segmentation is an important tool for current clinical applications. It is the backbone of numerous clinical diagnosis methods, oncological treatments and computer-integrated surgeries. A new class of machine learning algorithms, deep learning, outperforms classical segmentation approaches in terms of accuracy. However, these techniques are complex and can exhibit a high degree of variability, calling the reproducibility of the results into question. In this article, through a literature review, we propose an original overview of the sources of variability to better understand the challenges and issues of reproducibility related to deep learning for medical image segmentation. Finally, we propose 3 main recommendations to address these potential issues: (1) an adequate description of the deep learning framework, (2) a suitable analysis of the different sources of variability within that framework, and (3) an efficient system for evaluating the segmentation results.
Affiliation(s)
- Félix Renard
- Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, 38000, Grenoble, France
- Univ. Grenoble Alpes, AGEIS, 38000, Grenoble, France
- Soulaimane Guedria
- Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, 38000, Grenoble, France
- Univ. Grenoble Alpes, AGEIS, 38000, Grenoble, France
- Noel De Palma
- Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, 38000, Grenoble, France
- Nicolas Vuillerme
- Univ. Grenoble Alpes, AGEIS, 38000, Grenoble, France
- Institut Universitaire de France, Paris, France
19
A Confrontation Decision-Making Method with Deep Reinforcement Learning and Knowledge Transfer for Multi-Agent System. Symmetry (Basel) 2020. [DOI: 10.3390/sym12040631]
Abstract
In this paper, deep reinforcement learning (DRL) and knowledge transfer are used to achieve effective control of a learning agent in multi-agent confrontation. First, a multi-agent Deep Deterministic Policy Gradient (DDPG) algorithm with parameter sharing is proposed for multi-agent confrontation decision-making. During training, information from the other agents is introduced into the critic network to improve the confrontation strategy, while the parameter sharing mechanism reduces the cost of experience storage. In the DDPG algorithm, we use four neural networks to generate the real-time action and Q-value function, and a momentum mechanism to optimize the training process and accelerate the convergence of the networks. Second, this paper introduces an auxiliary controller using a policy-based reinforcement learning (RL) method to provide assistant decision-making for the game agent. In addition, an effective reward function helps agents balance the losses of enemy and friendly units. Furthermore, a knowledge transfer method extends the learning model to more complex scenes and improves the generalization of the proposed confrontation model. Two confrontation decision-making experiments verify the effectiveness of the proposed method. In a small-scale task scenario, the trained agent successfully learns to fight the competitors and achieves a good winning rate. For large-scale confrontation scenarios, the knowledge transfer method gradually improves the decision-making level of the learning agent.
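A compact sketch of one DDPG update with the four networks mentioned above (actor, critic, and their target copies), a Bellman target for the critic, and the soft target update. Replay-buffer sampling, exploration noise, and the paper's parameter sharing and momentum mechanism are omitted, and the sizes are toy.

```python
# DDPG core update (sketch): four networks (actor, critic, and their slowly
# updated target copies), a Bellman target for the critic, and a soft target
# update. Buffer sampling and exploration noise are omitted; sizes are toy.
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 2
def mlp(i, o):
    return nn.Sequential(nn.Linear(i, 64), nn.ReLU(), nn.Linear(64, o))

actor, critic = mlp(obs_dim, act_dim), mlp(obs_dim + act_dim, 1)
actor_t, critic_t = mlp(obs_dim, act_dim), mlp(obs_dim + act_dim, 1)
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma, tau = 0.99, 0.005

def soft_update(net, target):
    for p, pt in zip(net.parameters(), target.parameters()):
        pt.data.mul_(1 - tau).add_(tau * p.data)

# one update step on a fake transition batch (s, a, r, s')
s, a = torch.randn(32, obs_dim), torch.randn(32, act_dim)
r, s2 = torch.randn(32, 1), torch.randn(32, obs_dim)

with torch.no_grad():
    y = r + gamma * critic_t(torch.cat([s2, actor_t(s2)], 1))   # Bellman target
critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], 1)), y)
opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

actor_loss = -critic(torch.cat([s, actor(s)], 1)).mean()        # deterministic policy gradient
opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

soft_update(actor, actor_t); soft_update(critic, critic_t)
print(float(critic_loss), float(actor_loss))
```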
20
Mansoor A, Cerrolaza JJ, Perez G, Biggs E, Okada K, Nino G, Linguraru MG. A Generic Approach to Lung Field Segmentation From Chest Radiographs Using Deep Space and Shape Learning. IEEE Trans Biomed Eng 2020; 67:1206-1220. [PMID: 31425015] [PMCID: PMC7293875] [DOI: 10.1109/tbme.2019.2933508]
Abstract
Computer-aided diagnosis (CAD) techniques for lung field segmentation from chest radiographs (CXR) have been proposed for adult cohorts, but rarely for pediatric subjects. Statistical shape models (SSMs), the workhorse of most state-of-the-art CXR-based lung field segmentation methods, do not efficiently accommodate the shape variation of the lung field during pediatric developmental stages. The main contributions of our work are: 1) a generic lung field segmentation framework from CXR accommodating large shape variation for adult and pediatric cohorts; 2) a deep representation learning detection mechanism, ensemble space learning, for robust object localization; and 3) marginal shape deep learning for shape deformation parameter estimation. Unlike the iterative approach of conventional SSMs, the proposed shape learning mechanism transforms the parameter space into marginal subspaces that are solvable efficiently using a recursive representation learning mechanism. Furthermore, our method is the first to include the challenging retro-cardiac region in CXR-based lung segmentation for accurate lung capacity estimation. The framework is evaluated on 668 CXRs of patients between 3 months and 89 years of age. We obtain a mean Dice similarity coefficient of 0.96 ± 0.03 (including the retro-cardiac region). For a given accuracy, the proposed approach is also faster than conventional SSM-based iterative segmentation methods. The computational simplicity of the proposed generic framework could be similarly applied to the fast segmentation of other deformable objects.
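For contrast with the proposed marginal shape deep learning, a conventional SSM can be built in a few lines: PCA on aligned landmark vectors yields a mean shape plus deformation modes, and new shapes are synthesized as the mean plus a weighted sum of modes. The landmarks below are synthetic and purely illustrative.

```python
# Conventional statistical shape model (sketch): PCA on aligned landmark
# coordinates gives a mean shape plus deformation modes; shapes are
# synthesized as mean + b @ modes. This is the classic SSM baseline the
# paper's marginal shape deep learning replaces.
import numpy as np

rng = np.random.default_rng(0)
shapes = rng.normal(size=(100, 2 * 50))          # 100 training shapes, 50 (x, y) landmarks

mean = shapes.mean(axis=0)
centered = shapes - mean
_, svals, vt = np.linalg.svd(centered, full_matrices=False)
k = 5
modes = vt[:k]                                   # top-k deformation modes
var = (svals[:k] ** 2) / (len(shapes) - 1)       # variance explained per mode

b = rng.normal(scale=np.sqrt(var))               # plausible shape parameters
new_shape = mean + b @ modes                     # synthesize a new contour
print(new_shape.reshape(50, 2).shape)
```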
22
Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2019; 46:e1-e36. [PMID: 30367497] [PMCID: PMC9560030] [DOI: 10.1002/mp.13264]
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and the strategies that researchers have taken to address them; and (c) identify some of the promising avenues for the future in terms of both applications and technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Affiliation(s)
- Berkman Sahiner
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Aria Pezeshk
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Xiaosong Wang
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
- Karen Drukker
- Department of Radiology, University of Chicago, Chicago, IL 60637, USA
- Kenny H. Cha
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Ronald M. Summers
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
23
Mahmud M, Kaiser MS, Hussain A, Vassanelli S. Applications of Deep Learning and Reinforcement Learning to Biological Data. IEEE Trans Neural Netw Learn Syst 2018; 29:2063-2079. [PMID: 29771663] [DOI: 10.1109/tnnls.2018.2790388]
Abstract
Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains, such as omics, bioimaging, medical imaging, and (brain/body)-machine interfaces. These have generated novel opportunities for development of dedicated data-intensive machine learning techniques. In particular, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promise to revolutionize the future of artificial intelligence. The growth in computational power accompanied by faster and increased data storage, and declining computing costs have already allowed scientists in various fields to apply these techniques on data sets that were previously intractable owing to their size and complexity. This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data. In addition, we compare the performances of DL techniques when applied to different data sets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.
24
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42:60-88. [PMID: 28778026] [DOI: 10.1016/j.media.2017.07.005]
Affiliation(s)
- Geert Litjens
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Thijs Kooi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Francesco Ciompi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Mohsen Ghafoorian
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Clara I Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
25
Mansoor A, Perez G, Nino G, Linguraru MG. Automatic tissue characterization of air trapping in chest radiographs using deep neural networks. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2016:97-100. [PMID: 28324924] [DOI: 10.1109/embc.2016.7590649]
Abstract
Significant progress has been made in recent years in computer-aided diagnosis of abnormal pulmonary textures from computed tomography (CT) images. Similar initiatives for chest radiographs (CXR), the common modality for pulmonary diagnosis, are much less developed. CXR is a fast, cost-effective, and low-radiation alternative to CT for diagnosis. However, the subtlety of textures in CXR makes them hard to discern even by the trained eye. We explore the performance of deep learning for abnormal tissue characterization from CXR. Prior studies have used CT imaging to characterize air trapping in subjects with pulmonary disease; however, the use of CT in children is not recommended, mainly due to concerns about radiation dosage. In this work, we present a stacked autoencoder (SAE) deep learning architecture for automated tissue characterization of air trapping from CXR. To the best of our knowledge, this is the first study applying a deep learning framework to this specific problem. On 51 CXRs, an F-score of ≈76.5% and a strong correlation with expert visual scoring (R = 0.93, p < 0.01) demonstrate the potential of the proposed method for the characterization of air trapping.
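A sketch of greedy layer-wise pretraining for a stacked autoencoder of the kind described: each layer learns to reconstruct the previous layer's codes, after which the encoders are stacked under a classifier head. Patch size and layer widths are illustrative, not the paper's configuration.

```python
# Greedy layer-wise pretraining of a stacked autoencoder (sketch): each layer
# is trained to reconstruct the previous layer's (frozen) codes, then the
# encoders are stacked for tissue classification. Toy data and sizes.
import torch
import torch.nn as nn

x = torch.rand(256, 32 * 32)                    # flattened CXR patches (toy data)
dims, codes, encoders = [32 * 32, 256, 64], x, []

for d_in, d_out in zip(dims[:-1], dims[1:]):
    enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
    dec = nn.Linear(d_out, d_in)
    opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
    for _ in range(50):                         # train this layer in isolation
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(codes)), codes)
        loss.backward(); opt.step()
    with torch.no_grad():
        codes = enc(codes)                      # codes feed the next layer
    encoders.append(enc)

sae = nn.Sequential(*encoders, nn.Linear(dims[-1], 2))  # add a classifier head
print(sae(x).shape)                             # logits: air trapping vs normal
```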
26
Abstract
Children with neurofibromatosis type 1 frequently manifest optic pathway gliomas, low-grade gliomas intrinsic to the visual pathway. This review describes the molecular and genetic mechanisms driving optic pathway gliomas as well as the clinical symptoms of this relatively common genetic condition. Recommendations for clinical management and descriptions of the newest imaging techniques are discussed.
Affiliation(s)
- Robert A Avery
- Division of Ophthalmology, The Children's Hospital of Philadelphia, Philadelphia, PA; Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
27
Ravi D, Wong C, Deligianni F, Berthelot M, Andreu-Perez J, Lo B, Yang GZ. Deep Learning for Health Informatics. IEEE J Biomed Health Inform 2016; 21:4-21. [PMID: 28055930] [DOI: 10.1109/jbhi.2016.2636665]
Abstract
With a massive influx of multimodality data, the role of data analytics in health informatics has grown rapidly in the last decade. This has also prompted increasing interest in generating analytical, data-driven models based on machine learning in health informatics. Deep learning, a technique with its foundation in artificial neural networks, has emerged in recent years as a powerful tool for machine learning, promising to reshape the future of artificial intelligence. Rapid improvements in computational power, fast data storage, and parallelization have also contributed to the rapid uptake of the technology, in addition to its predictive power and its ability to generate automatically optimized high-level features and semantic interpretation from input data. This article presents a comprehensive, up-to-date review of research employing deep learning in health informatics, providing a critical analysis of the relative merits and potential pitfalls of the technique as well as its future outlook. The paper mainly focuses on key applications of deep learning in the fields of translational bioinformatics, medical imaging, pervasive sensing, medical informatics, and public health.
Collapse
|
28
|
Avery RA, Mansoor A, Idrees R, Trimboli-Heidler C, Ishikawa H, Packer RJ, Linguraru MG. Optic pathway glioma volume predicts retinal axon degeneration in neurofibromatosis type 1. Neurology 2016; 87:2403-2407. [PMID: 27815398 DOI: 10.1212/wnl.0000000000003402] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2016] [Accepted: 08/19/2016] [Indexed: 02/04/2023] Open
Abstract
OBJECTIVE: To determine whether tumor size is associated with retinal nerve fiber layer (RNFL) thickness, a measure of axonal degeneration and an established biomarker of visual impairment in children with optic pathway gliomas (OPGs) secondary to neurofibromatosis type 1 (NF1).
METHODS: Children with NF1-OPGs involving the optic nerve (extension into the chiasm and tracts permitted) who underwent both volumetric MRI analysis and optical coherence tomography (OCT) within 2 weeks of each other were included. Volumetric measurement of the entire anterior visual pathway (AVP; optic nerve, chiasm, and tract) was performed using high-resolution T1-weighted MRI. OCT measured the average RNFL thickness around the optic nerve. Linear regression models evaluated the relationship between RNFL thickness and AVP dimensions and volume.
RESULTS: Thirty-eight participants contributed 55 study eyes. The mean age was 5.78 years. Twenty-two participants (58%) were female. RNFL thickness had a significant negative relationship to total AVP volume and total brain volume (p < 0.05, all comparisons). For every 1 mL increase in AVP volume, RNFL thickness declined by approximately 5 microns. A greater AVP volume of OPGs involving the optic nerve and chiasm, but not the tracts, was independently associated with a lower RNFL thickness (p < 0.05). All participants with an optic chiasm volume >1.3 mL demonstrated axonal damage (i.e., RNFL thickness <80 microns).
CONCLUSIONS: Greater OPG and AVP volume predicts axonal degeneration, a biomarker of vision loss, in children with NF1-OPGs. MRI volumetric measures may help stratify the risk of visual loss from NF1-OPGs.
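The reported slope (roughly a 5 micron drop in RNFL thickness per 1 mL increase in AVP volume) comes from an ordinary linear regression of RNFL thickness on AVP volume. A minimal sketch of that kind of fit is shown below; the synthetic data, baseline thickness, and noise level are assumptions for illustration and are not the study's measurements.

```python
# Illustrative ordinary least-squares fit of RNFL thickness against AVP
# volume, mirroring the kind of model reported above. The synthetic data,
# intercept, and noise level are assumptions, not the study's measurements.
import numpy as np

rng = np.random.default_rng(0)
avp_volume = rng.uniform(0.5, 3.0, size=55)  # mL, one value per study eye
# Assumed generating model: ~100 micron baseline, -5 microns per mL, noise.
rnfl = 100.0 - 5.0 * avp_volume + rng.normal(0.0, 3.0, size=55)

slope, intercept = np.polyfit(avp_volume, rnfl, deg=1)
r = np.corrcoef(avp_volume, rnfl)[0, 1]
print(f"RNFL ~ {intercept:.1f} + ({slope:.1f}) * AVP volume (r = {r:.2f})")

# The study flags RNFL thickness below 80 microns as axonal damage;
# here we simply count synthetic eyes below that threshold.
damaged = rnfl < 80.0
print(f"{damaged.sum()} of {len(rnfl)} synthetic eyes below 80 microns")
```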
Collapse
Affiliation(s)
- Robert A Avery
- From the Center for Neuroscience and Behavior (R.A.A., R.J.P.), The Gilbert Family Neurofibromatosis Institute (R.A.A., C.T.-H., R.J.P.), Sheikh Zayed Institute for Pediatric Surgical Innovation (A.M., M.G.L.), and The Brain Tumor Institute (R.J.P.), Children's National Health System; The George Washington University School of Medicine and Health Sciences (R.I., M.G.L.), Washington, DC; UPMC Eye Center, Eye and Ear Institute (H.I.), Ophthalmology and Visual Science Research Center, Department of Ophthalmology, University of Pittsburgh School of Medicine; and Department of Bioengineering (H.I.), Swanson School of Engineering, University of Pittsburgh, PA.
| | - Awais Mansoor
- From the Center for Neuroscience and Behavior (R.A.A., R.J.P.), The Gilbert Family Neurofibromatosis Institute (R.A.A., C.T.-H., R.J.P.), Sheikh Zayed Institute for Pediatric Surgical Innovation (A.M., M.G.L.), and The Brain Tumor Institute (R.J.P.), Children's National Health System; The George Washington University School of Medicine and Health Sciences (R.I., M.G.L.), Washington, DC; UPMC Eye Center, Eye and Ear Institute (H.I.), Ophthalmology and Visual Science Research Center, Department of Ophthalmology, University of Pittsburgh School of Medicine; and Department of Bioengineering (H.I.), Swanson School of Engineering, University of Pittsburgh, PA
| | - Rabia Idrees
- From the Center for Neuroscience and Behavior (R.A.A., R.J.P.), The Gilbert Family Neurofibromatosis Institute (R.A.A., C.T.-H., R.J.P.), Sheikh Zayed Institute for Pediatric Surgical Innovation (A.M., M.G.L.), and The Brain Tumor Institute (R.J.P.), Children's National Health System; The George Washington University School of Medicine and Health Sciences (R.I., M.G.L.), Washington, DC; UPMC Eye Center, Eye and Ear Institute (H.I.), Ophthalmology and Visual Science Research Center, Department of Ophthalmology, University of Pittsburgh School of Medicine; and Department of Bioengineering (H.I.), Swanson School of Engineering, University of Pittsburgh, PA
| | - Carmelina Trimboli-Heidler
- From the Center for Neuroscience and Behavior (R.A.A., R.J.P.), The Gilbert Family Neurofibromatosis Institute (R.A.A., C.T.-H., R.J.P.), Sheikh Zayed Institute for Pediatric Surgical Innovation (A.M., M.G.L.), and The Brain Tumor Institute (R.J.P.), Children's National Health System; The George Washington University School of Medicine and Health Sciences (R.I., M.G.L.), Washington, DC; UPMC Eye Center, Eye and Ear Institute (H.I.), Ophthalmology and Visual Science Research Center, Department of Ophthalmology, University of Pittsburgh School of Medicine; and Department of Bioengineering (H.I.), Swanson School of Engineering, University of Pittsburgh, PA
| | - Hiroshi Ishikawa
- From the Center for Neuroscience and Behavior (R.A.A., R.J.P.), The Gilbert Family Neurofibromatosis Institute (R.A.A., C.T.-H., R.J.P.), Sheikh Zayed Institute for Pediatric Surgical Innovation (A.M., M.G.L.), and The Brain Tumor Institute (R.J.P.), Children's National Health System; The George Washington University School of Medicine and Health Sciences (R.I., M.G.L.), Washington, DC; UPMC Eye Center, Eye and Ear Institute (H.I.), Ophthalmology and Visual Science Research Center, Department of Ophthalmology, University of Pittsburgh School of Medicine; and Department of Bioengineering (H.I.), Swanson School of Engineering, University of Pittsburgh, PA
| | - Roger J Packer
- From the Center for Neuroscience and Behavior (R.A.A., R.J.P.), The Gilbert Family Neurofibromatosis Institute (R.A.A., C.T.-H., R.J.P.), Sheikh Zayed Institute for Pediatric Surgical Innovation (A.M., M.G.L.), and The Brain Tumor Institute (R.J.P.), Children's National Health System; The George Washington University School of Medicine and Health Sciences (R.I., M.G.L.), Washington, DC; UPMC Eye Center, Eye and Ear Institute (H.I.), Ophthalmology and Visual Science Research Center, Department of Ophthalmology, University of Pittsburgh School of Medicine; and Department of Bioengineering (H.I.), Swanson School of Engineering, University of Pittsburgh, PA
| | - Marius George Linguraru
- From the Center for Neuroscience and Behavior (R.A.A., R.J.P.), The Gilbert Family Neurofibromatosis Institute (R.A.A., C.T.-H., R.J.P.), Sheikh Zayed Institute for Pediatric Surgical Innovation (A.M., M.G.L.), and The Brain Tumor Institute (R.J.P.), Children's National Health System; The George Washington University School of Medicine and Health Sciences (R.I., M.G.L.), Washington, DC; UPMC Eye Center, Eye and Ear Institute (H.I.), Ophthalmology and Visual Science Research Center, Department of Ophthalmology, University of Pittsburgh School of Medicine; and Department of Bioengineering (H.I.), Swanson School of Engineering, University of Pittsburgh, PA
| |
Collapse
|