1
Jiang B, Ozkara BB, Creeden S, Zhu G, Ding VY, Chen H, Lanzman B, Wolman D, Shams S, Trinh A, Li Y, Khalaf A, Parker JJ, Halpern CH, Wintermark M. Validation of a deep learning model for traumatic brain injury detection and NIRIS grading on non-contrast CT: a multi-reader study with promising results and opportunities for improvement. Neuroradiology 2023; 65:1605-1617. [PMID: 37269414] [DOI: 10.1007/s00234-023-03170-5]
Abstract
PURPOSE This study aimed to assess and externally validate the performance of a deep learning (DL) model for the interpretation of non-contrast computed tomography (NCCT) scans of patients with suspicion of traumatic brain injury (TBI). METHODS This retrospective, multi-reader study included patients with suspected TBI who were transported to the emergency department and underwent NCCT scans. Eight reviewers with varying levels of training and experience (two neuroradiology attendings, two neuroradiology fellows, two neuroradiology residents, one neurosurgery attending, and one neurosurgery resident) independently evaluated the NCCT head scans. The same scans were evaluated using version 5.0 of the DL model icobrain tbi. The ground truth was established by consensus among the study reviewers after a thorough assessment of all accessible clinical and laboratory data as well as follow-up imaging studies, including NCCT and magnetic resonance imaging. The outcomes of interest included neuroimaging radiological interpretation system (NIRIS) scores; the presence of midline shift, mass effect, hemorrhagic lesions, hydrocephalus, and severe hydrocephalus; and measurements of midline shift and hemorrhagic lesion volumes. Agreement was assessed using the weighted Cohen's kappa coefficient, diagnostic performance was compared with the McNemar test, and measurements were compared using Bland-Altman plots. RESULTS One hundred patients were included, of whom the DL model successfully categorized 77 scans. The median age was 48 for the total group, 44.5 for the omitted group, and 48 for the included group. The DL model demonstrated moderate agreement with the ground truth, trainees, and attendings. With the DL model's assistance, trainees' agreement with the ground truth improved. The DL model showed high specificity (0.88) and positive predictive value (0.96) in classifying NIRIS scores as 0-2 or 3-4. Trainees and attendings had the highest accuracy (0.95). The DL model's performance in classifying various TBI CT imaging common data elements was comparable to that of trainees and attendings. The average difference for the DL model in quantifying the volume of hemorrhagic lesions was 6.0 mL, with a wide 95% confidence interval (CI) of -68.32 to 80.22, and for midline shift the average difference was 1.4 mm with a 95% CI of -3.4 to 6.2. CONCLUSION While the DL model outperformed trainees in some aspects, attendings' assessments remained superior in most instances. Using the DL model as an assistive tool benefited trainees, improving their NIRIS score agreement with the ground truth. Although the DL model showed high potential in classifying some TBI CT imaging common data elements, further refinement and optimization are necessary to enhance its clinical utility.
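The agreement statistics named in this abstract (weighted Cohen's kappa for ordinal NIRIS scores, Bland-Altman bias and limits of agreement for volume and midline-shift measurements) can be illustrated with a minimal Python sketch. The data, the linear weighting, and the variable names below are illustrative assumptions, not the study's actual analysis.

```python
# Minimal sketch of the agreement statistics described above: weighted Cohen's
# kappa for ordinal NIRIS scores and Bland-Altman bias / limits of agreement for
# paired volume or midline-shift measurements. Data and names are illustrative.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def bland_altman(a, b):
    """Return bias and 95% limits of agreement for two paired measurement arrays."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

# Hypothetical paired data: model vs. ground-truth NIRIS scores (0-4) and volumes (mL)
model_niris = [0, 2, 3, 4, 1, 2, 3, 0]
truth_niris = [0, 2, 4, 4, 1, 1, 3, 0]
kappa = cohen_kappa_score(model_niris, truth_niris, weights="linear")  # weighted kappa

model_vol = [12.0, 35.5, 0.0, 80.2, 5.1]
truth_vol = [10.5, 30.0, 0.0, 72.0, 6.0]
bias, (lo, hi) = bland_altman(model_vol, truth_vol)
print(f"weighted kappa={kappa:.2f}, volume bias={bias:.1f} mL, 95% LoA=({lo:.1f}, {hi:.1f})")
```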
Affiliation(s)
- Bin Jiang
- Department of Radiology, Neuroradiology Division, Stanford University, Stanford, CA, USA
- Sean Creeden
- Department of Neuroradiology, University of Illinois College of Medicine Peoria, Peoria, IL, USA
- Guangming Zhu
- Department of Neurology, The University of Arizona, Tucson, AZ, USA
- Victoria Y Ding
- Department of Medicine, Stanford University, Stanford, CA, USA
- Hui Chen
- Department of Neuroradiology, MD Anderson Cancer Center, Houston, TX, USA
- Bryan Lanzman
- Department of Radiology, Neuroradiology Division, Stanford University, Stanford, CA, USA
- Dylan Wolman
- Department of Neuroimaging and Neurointervention, Stanford University, Stanford, CA, USA
- Sara Shams
- Department of Radiology, Neuroradiology Division, Stanford University, Stanford, CA, USA
- Department of Radiology, Karolinska University Hospital, Stockholm, Sweden
- Institution for Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
- Austin Trinh
- Department of Neuroimaging and Neurointervention, Stanford University, Stanford, CA, USA
- Ying Li
- Department of Radiology, Neuroradiology Division, Stanford University, Stanford, CA, USA
- Alexander Khalaf
- Department of Neuroimaging and Neurointervention, Stanford University, Stanford, CA, USA
- Jonathon J Parker
- Device-Based Neuroelectronics Laboratory, Mayo Clinic, Phoenix, AZ, USA
- Department of Neurological Surgery, Mayo Clinic, Phoenix, AZ, USA
- Casey H Halpern
- Department of Neurosurgery, University of Pennsylvania School of Medicine, Philadelphia, PA, USA
- Department of Surgery, Corporal Michael J. Crescenz Veterans Affairs Medical Center, Philadelphia, PA, USA
- Max Wintermark
- Department of Neuroradiology, MD Anderson Cancer Center, Houston, TX, USA
2
Bennett A, Garner R, Morris MD, La Rocca M, Barisano G, Cua R, Loon J, Alba C, Carbone P, Gao S, Pantoja A, Khan A, Nouaili N, Vespa P, Toga AW, Duncan D. Manual lesion segmentations for traumatic brain injury characterization. Frontiers in Neuroimaging 2023; 2:1068591. [PMID: 37554636] [PMCID: PMC10406209] [DOI: 10.3389/fnimg.2023.1068591]
Abstract
Traumatic brain injury (TBI) often results in heterogeneous lesions that can be visualized through various neuroimaging techniques, such as magnetic resonance imaging (MRI). However, injury burden varies greatly between patients, and structural deformations often limit the usability of available analytic algorithms. It is therefore difficult to segment lesions automatically and accurately in TBI cohorts, and mislabeled lesions ultimately lead to inaccurate findings regarding imaging biomarkers. Manual segmentation is currently considered the gold standard because it produces more accurate masks than existing automated algorithms. These masks can provide important lesion phenotype data, including location, volume, and intensity, among others. There has been a recent push to investigate the correlation between these characteristics and the onset of post-traumatic epilepsy (PTE), a disabling consequence of TBI. One motivation of the Epilepsy Bioinformatics Study for Antiepileptogenic Therapy (EpiBioS4Rx) is to identify reliable imaging biomarkers of PTE. Here, we report the protocol and importance of our manual segmentation process in patients with moderate-severe TBI enrolled in EpiBioS4Rx. Through these methods, we have generated a dataset of 127 validated lesion segmentation masks for TBI patients. These ground truths can be used for robust PTE biomarker analyses, including optimization of multimodal MRI analysis via inclusion of lesioned tissue labels. Moreover, our protocol allows for analysis of the refinement process. Though tedious, the methods reported in this work are necessary to create reliable data for effective training of future machine-learning-based lesion segmentation methods in TBI patients and subsequent PTE analyses.
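One of the lesion phenotype features mentioned here, lesion volume, can be derived directly from a validated binary segmentation mask. The sketch below is a generic nibabel/NumPy illustration under the assumption of a 0/1 NIfTI mask; the file name is hypothetical and the EpiBioS4Rx pipeline itself is not reproduced.

```python
# Minimal sketch: deriving one lesion phenotype feature (volume in mL) from a
# binary segmentation mask stored as NIfTI. File name and 0/1-mask assumption
# are illustrative.
import nibabel as nib
import numpy as np

mask_img = nib.load("lesion_mask.nii.gz")           # hypothetical manual segmentation mask
mask = mask_img.get_fdata() > 0                      # treat any nonzero voxel as lesion
voxel_volume_mm3 = np.prod(mask_img.header.get_zooms()[:3])
lesion_volume_ml = mask.sum() * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3
print(f"lesion volume: {lesion_volume_ml:.1f} mL")
```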
Affiliation(s)
- Alexis Bennett
- USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Rachael Garner
- USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Michael D. Morris
- David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Marianna La Rocca
- USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Dipartimento Interateneo di Fisica “M. Merlin”, Università degli studi di Bari “A. Moro”, Bari, Italy
- Giuseppe Barisano
- USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Ruskin Cua
- USC Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Jordan Loon
- USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Celina Alba
- USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Patrick Carbone
- USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Shawn Gao
- USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Asenat Pantoja
- USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Azrin Khan
- USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Noor Nouaili
- USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Paul Vespa
- David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Arthur W. Toga
- USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Dominique Duncan
- USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
3
Hibi A, Jaberipour M, Cusimano MD, Bilbily A, Krishnan RG, Aviv RI, Tyrrell PN. Automated identification and quantification of traumatic brain injury from CT scans: Are we there yet? Medicine (Baltimore) 2022; 101:e31848. [PMID: 36451512] [PMCID: PMC9704869] [DOI: 10.1097/md.0000000000031848]
Abstract
BACKGROUND The purpose of this study was to conduct a systematic review for understanding the availability and limitations of artificial intelligence (AI) approaches that could automatically identify and quantify computed tomography (CT) findings in traumatic brain injury (TBI). METHODS A systematic review, in accordance with the PRISMA 2020 and SPIRIT-AI extension guidelines, was performed with a search of four databases (Medline, Embase, IEEE Xplore, and Web of Science) to find AI studies that automated the clinical tasks of identifying and quantifying CT findings of TBI-related abnormalities. RESULTS A total of 531 unique publications were reviewed, resulting in 66 articles that met our inclusion criteria. The following components of TBI identification and quantification were covered and automated by existing AI studies: identification of TBI-related abnormalities; classification of intracranial hemorrhage types; slice-, pixel-, and voxel-level localization of hemorrhage; measurement of midline shift; and measurement of hematoma volume. Automated identification of obliterated basal cisterns was not investigated in the existing AI studies. Most of the AI algorithms were based on deep neural networks trained on 2- or 3-dimensional CT imaging datasets. CONCLUSION We identified several important TBI-related CT findings that can be automatically identified and quantified with AI. A combination of these techniques may provide useful tools to enhance the reproducibility of TBI identification and quantification by supporting radiologists and clinicians in their TBI assessments and reducing subjective human factors.
Affiliation(s)
- Atsuhiro Hibi
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Majid Jaberipour
- Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- Michael D. Cusimano
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Division of Neurosurgery, St Michael’s Hospital, University of Toronto, Toronto, Canada
- Alexander Bilbily
- Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- Sunnybrook Health Sciences Centre, Toronto, Canada
- Rahul G. Krishnan
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Department of Laboratory Medicine & Pathobiology, University of Toronto, Toronto, Ontario, Canada
- Richard I. Aviv
- Department of Radiology, Radiation Oncology and Medical Physics, University of Ottawa, Ottawa, Ontario, Canada
- Pascal N. Tyrrell
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- Department of Statistical Sciences, University of Toronto, Toronto, Ontario, Canada
4
Tabata K, Hashimoto M, Takahashi H, Wang Z, Nagaoka N, Hara T, Kamioka H. A morphometric analysis of the osteocyte canaliculus using applied automatic semantic segmentation by machine learning. J Bone Miner Metab 2022; 40:571-580. [PMID: 35338405] [DOI: 10.1007/s00774-022-01321-x]
Abstract
INTRODUCTION Osteocytes play a role as mechanosensory cells by sensing flow-induced mechanical stimuli applied to their cell processes. High-resolution imaging of osteocyte processes and the canalicular wall is necessary for the analysis of this mechanosensing mechanism. Focused ion beam-scanning electron microscopy (FIB-SEM) enabled visualization of the structure at the nanometer scale with thousands of serial-section SEM images. We applied machine learning for the automatic semantic segmentation of osteocyte processes and the canalicular wall and performed a morphometric analysis using three-dimensionally reconstructed images. MATERIALS AND METHODS Femurs from six-week-old mice were used. Osteocyte processes and canaliculi were observed at a resolution of 2 nm/voxel in a 4 × 4 μm region with 2000 serial-section SEM images. Machine learning was used for automatic semantic segmentation of the osteocyte processes and canaliculi from the serial-section SEM images. The results of the semantic segmentation were evaluated using the Dice similarity coefficient (DSC). The segmented data were reconstructed to create three-dimensional images, and a morphological analysis was performed. RESULTS The DSC was > 83%. Using the segmented data, a three-dimensional image of approximately 3.5 μm in length was reconstructed. The morphometric analysis revealed that the median osteocyte process diameter was 73.8 ± 18.0 nm, and the median pericellular fluid space around the osteocyte process was 40.0 ± 17.5 nm. CONCLUSION We used machine learning for the semantic segmentation of osteocyte processes and the canalicular wall for the first time and performed a morphological analysis using three-dimensionally reconstructed images.
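The segmentation quality metric used here, the Dice similarity coefficient, compares a predicted mask against a reference mask. The sketch below is a generic NumPy illustration with made-up arrays, not the paper's evaluation code.

```python
# Minimal sketch of the Dice similarity coefficient (DSC) used to score a
# predicted segmentation against a reference mask. Arrays are illustrative.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks; 1.0 when both are empty."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom

pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True   # hypothetical prediction
ref = np.zeros((4, 4), dtype=bool);  ref[1:3, 1:4] = True    # hypothetical reference
print(f"DSC = {dice(pred, ref):.2f}")
```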
Affiliation(s)
- Kaori Tabata
- Department of Orthodontics, Okayama University Hospital, Okayama, Japan
- Mana Hashimoto
- Department of Orthodontics, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata, Kita-ku, Okayama, Okayama, 700-8558, Japan
- Haruka Takahashi
- Department of Orthodontics, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata, Kita-ku, Okayama, Okayama, 700-8558, Japan
- Ziyi Wang
- Department of Orthodontics, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata, Kita-ku, Okayama, Okayama, 700-8558, Japan
- Noriyuki Nagaoka
- Advanced Research Center for Oral and Craniofacial Sciences, Okayama University Dental School, Okayama, Japan
- Toru Hara
- Research Center for Structural Materials, National Institute for Materials Science, Tsukuba, Japan
- Hiroshi Kamioka
- Department of Orthodontics, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata, Kita-ku, Okayama, Okayama, 700-8558, Japan
5
Sargolzaei S. Can Deep Learning Hit a Moving Target? A Scoping Review of Its Role to Study Neurological Disorders in Children. Front Comput Neurosci 2021; 15:670489. [PMID: 34025380] [PMCID: PMC8131543] [DOI: 10.3389/fncom.2021.670489]
Abstract
Neurological disorders dramatically impact patients of all ages, their families, and societies. Pediatric patients are among the vulnerable age populations who experience the devastating consequences of neurological conditions, such as attention-deficit hyperactivity disorder (ADHD), autism spectrum disorder (ASD), cerebral palsy, concussion, and epilepsy, differently. System-level understanding of these neurological disorders, particularly from the perspective of dynamic brain networks, has become a significant trend in recent scientific investigations. While a dramatic maturation in the network science application domain is evident, leading to a better understanding of neurological disorders, such rapid utilization for studying pediatric neurological disorders falls behind that for the adult population. Aside from the specific technological needs and constraints of studying neurological disorders in children, the concept of development introduces uncertainty and further complexity on top of the existing neurologically driven processes caused by the disorders. To unravel these complexities, and aided by the availability of high-dimensional data and computing capabilities, approaches based on machine learning have rapidly emerged as a new trend for better understanding pathways, accurately diagnosing, and better managing the disorders. Deep learning has recently gained an ever-increasing role in health and medical investigations. Thanks to its relatively limited dependency on feature exploration and engineering, deep learning may overcome the challenges mentioned earlier in studying neurological disorders in children. The current scoping review aims to explore the challenges of studying pediatric brain development under the constraints of neurological disorders and to offer insight into the potential role of deep learning methodology in such a task of varying and uncertain nature. Along with pinpointing recent advancements, possible research directions are highlighted where deep learning approaches can assist in computationally targeting neurological disorder-related processes and translating them into windows of opportunity for interventions in the diagnosis, treatment, and management of neurological disorders in children.
Affiliation(s)
- Saman Sargolzaei
- Department of Engineering, College of Engineering and Natural Sciences, University of Tennessee at Martin, Martin, TN, United States
6
Kirienko M, Sollini M, Ninatti G, Loiacono D, Giacomello E, Gozzi N, Amigoni F, Mainardi L, Lanzi PL, Chiti A. Distributed learning: a reliable privacy-preserving strategy to change multicenter collaborations using AI. Eur J Nucl Med Mol Imaging 2021; 48:3791-3804. [PMID: 33847779] [PMCID: PMC8041944] [DOI: 10.1007/s00259-021-05339-7]
Abstract
Purpose The present scoping review aims to assess the non-inferiority of distributed learning compared with centrally and locally trained machine learning (ML) models in medical applications. Methods We performed a literature search using the term “distributed learning” OR “federated learning” in the PubMed/MEDLINE and EMBASE databases. No start date limit was used, and the search was extended until July 21, 2020. We excluded articles outside the field of interest; guidelines or expert opinion, review articles and meta-analyses, editorials, letters or commentaries, and conference abstracts; articles not in the English language; and studies not using medical data. Selected studies were classified and analysed according to their aim(s). Results We included 26 papers aimed at predicting one or more outcomes, namely risk, diagnosis, prognosis, and treatment side effect/adverse drug reaction. Distributed learning was compared to centralized or localized training in 21/26 and 14/26 of the selected papers, respectively. Regardless of the aim, the type of input, the method, and the classifier, distributed learning performed close to centralized training, except in two experiments focused on diagnosis. In all but 2 cases, distributed learning outperformed locally trained models. Conclusion Distributed learning proved to be a reliable strategy for model development; indeed, it performed comparably to models trained on centralized datasets. Sensitive data are preserved, since they are not shared for model development. Distributed learning constitutes a promising solution for ML-based research and practice, since large, diverse datasets are crucial for success.
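The core idea compared in this review, training a shared model across sites without pooling patient data, can be illustrated with a minimal federated-averaging sketch in plain NumPy. The toy per-site data, the logistic-regression model, and the hyperparameters are illustrative assumptions, not the setup of any reviewed study.

```python
# Minimal federated-averaging sketch: each site updates a shared model on its own
# data and only parameters leave the site. Toy data, model, and hyperparameters
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(3)]  # (X, y) per site

def local_update(w, X, y, lr=0.1, epochs=5):
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)         # gradient step on local data only
    return w

w_global = np.zeros(3)
for rnd in range(10):                               # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    w_global = np.average(local_ws, axis=0, weights=sizes)  # weighted average of site models
print("federated model weights:", np.round(w_global, 3))
```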
Affiliation(s)
- Margarita Kirienko
- Fondazione IRCCS Istituto Nazionale dei Tumori, Milan, Italy
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy
- Martina Sollini
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy
- IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Gaia Ninatti
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy
- Noemi Gozzi
- IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Arturo Chiti
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy
- IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
7
Enthoven D, Al-Ars Z. An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies. Federated Learning Systems 2021. [DOI: 10.1007/978-3-030-70604-3_8]
8
Sollini M, Bartoli F, Marciano A, Zanca R, Slart RHJA, Erba PA. Artificial intelligence and hybrid imaging: the best match for personalized medicine in oncology. Eur J Hybrid Imaging 2020; 4:24. [PMID: 34191197] [PMCID: PMC8218106] [DOI: 10.1186/s41824-020-00094-8]
Abstract
Artificial intelligence (AI) refers to a field of computer science aimed at performing tasks that typically require human intelligence. Currently, AI is recognized on the broader technology radar as one of the five key technologies that stand out for their wide-ranging applications and impact on communities, companies, businesses, and value chains alike. However, AI in medical imaging is at an early phase of development, and there are still hurdles to overcome related to reliability, user confidence, and adoption. The present narrative review aimed to provide an overview of AI-based approaches (distributed learning, statistical learning, computer-aided diagnosis and detection systems, fully automated image analysis tools, natural language processing) in oncological hybrid medical imaging with respect to clinical tasks (detection, contouring and segmentation, prediction of histology and tumor stage, prediction of mutational status and molecular therapy targets, prediction of treatment response, and outcome). AI-based approaches are briefly described according to their purpose, and lung cancer, one of the malignancies most extensively studied with hybrid medical imaging, is used as an illustrative scenario. Finally, we discuss clinical challenges and open issues, including ethics, validation strategies, effective data-sharing methods, regulatory hurdles, educational resources, and strategies to facilitate interaction among different stakeholders. Some of the major changes in medical imaging will come from the application of AI to workflows and protocols, eventually resulting in improved patient management and quality of life. Overall, several time-consuming tasks could be automated. Machine learning algorithms and neural networks will permit sophisticated analyses, resulting not only in major improvements in disease characterization through imaging but also in the integration of multi-omics data (i.e., derived from pathology, genomics, proteomics, and demographics) for multi-dimensional disease characterization. Nevertheless, to accelerate the transition from theory to practice, a sustainable development plan is necessary, one that considers the multi-dimensional interactions between professionals, technology, industry, markets, policy, culture, and civil society, directed by a mindset that allows talent to thrive.
Affiliation(s)
- Martina Sollini
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele (Milan), Italy
- Humanitas Clinical and Research Center, Rozzano (Milan), Italy
- Francesco Bartoli
- Regional Center of Nuclear Medicine, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Andrea Marciano
- Regional Center of Nuclear Medicine, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Roberta Zanca
- Regional Center of Nuclear Medicine, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Riemer H J A Slart
- University Medical Center Groningen, Medical Imaging Center, University of Groningen, Groningen, The Netherlands
- Faculty of Science and Technology, Biomedical Photonic Imaging, University of Twente, Enschede, The Netherlands
- Paola A Erba
- Regional Center of Nuclear Medicine, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- University Medical Center Groningen, Medical Imaging Center, University of Groningen, Groningen, The Netherlands
9
Segato A, Marzullo A, Calimeri F, De Momi E. Artificial intelligence for brain diseases: A systematic review. APL Bioeng 2020; 4:041503. [PMID: 33094213] [PMCID: PMC7556883] [DOI: 10.1063/5.0011697]
Abstract
Artificial intelligence (AI) is a major branch of computer science that is fruitfully used for analyzing complex medical data and extracting meaningful relationships in datasets for several clinical aims. Specifically, in the brain care domain, several innovative approaches have achieved remarkable results and opened new perspectives in terms of diagnosis, planning, and outcome prediction. In this work, we present an overview of different artificial intelligence techniques used in the brain care domain, along with a review of important clinical applications. A systematic and careful literature search in major databases such as PubMed, Scopus, and Web of Science was carried out using "artificial intelligence" and "brain" as main keywords. Further references were integrated by cross-referencing from key articles. Out of 2696 studies, 155 were identified that actually made use of AI algorithms for different purposes (diagnosis, surgical treatment, intra-operative assistance, and postoperative assessment). Artificial neural networks have risen to prominent positions among the most widely used analytical tools. Classic machine learning approaches such as support vector machines and random forests are still widely used. Task-specific algorithms are designed to solve specific problems. Brain images are one of the most used data types. AI has the potential to improve clinicians' decision-making ability in neuroscience applications. However, major issues still need to be addressed for a better practical use of AI in the brain care domain. To this aim, it is important to both gather comprehensive data and build explainable AI algorithms.
Affiliation(s)
- Alice Segato
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan 20133, Italy
- Aldo Marzullo
- Department of Mathematics and Computer Science, University of Calabria, Rende 87036, Italy
- Francesco Calimeri
- Department of Mathematics and Computer Science, University of Calabria, Rende 87036, Italy
- Elena De Momi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan 20133, Italy
10
Remedios SW, Wu Z, Bermudez C, Kerley CI, Roy S, Patel MB, Butman JA, Landman BA, Pham DL. Extracting 2D weak labels from volume labels using multiple instance learning in CT hemorrhage detection. Proceedings of SPIE--The International Society for Optical Engineering 2020; 11313. [PMID: 34040275] [PMCID: PMC8148053] [DOI: 10.1117/12.2549356]
Abstract
Multiple instance learning (MIL) is a supervised learning methodology that aims to allow models to learn instance class labels from bag class labels, where a bag is defined to contain multiple instances. MIL is gaining traction for learning from weak labels but has not been widely applied to 3D medical imaging. MIL is well-suited to clinical CT acquisitions since (1) the highly anisotropic voxels hinder application of traditional 3D networks and (2) patch-based networks have limited ability to learn whole-volume labels. In this work, we apply MIL with a deep convolutional neural network to identify whether clinical CT head image volumes possess one or more large hemorrhages (> 20 cm³), resulting in a learned 2D model without the need for 2D slice annotations. Individual image volumes are considered separate bags, and the slices in each volume are instances. Such a framework sets the stage for incorporating information obtained in clinical reports to help train a 2D segmentation approach. Within this context, we evaluate the data requirements for enabling generalization of MIL by varying the amount of training data. Our results show that a training size of at least 400 patient image volumes was needed to achieve accurate per-slice hemorrhage detection. Over a five-fold cross-validation, the leading model, which made use of the maximum number of training volumes, had an average true positive rate of 98.10%, an average true negative rate of 99.36%, and an average precision of 0.9698. The models have been made available along with the source code to enable continued exploration and adaptation of MIL in CT neuroimaging.
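The bag/instance framing described here (each CT volume is a bag, each slice an instance, and only the volume-level label supervises training) can be sketched with a max-pooling MIL head. The tiny CNN, tensor sizes, and training details below are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of the MIL idea above: a 2D network scores each CT slice
# (instance), and the per-slice scores are max-pooled into one volume-level (bag)
# prediction, so only volume labels are needed for training.
import torch
import torch.nn as nn

class SliceScorer(nn.Module):
    """Scores a single 2D slice; applied independently to every slice in a volume."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
        )

    def forward(self, x):                 # x: (num_slices, 1, H, W)
        return self.net(x).squeeze(-1)    # (num_slices,) per-slice logits

def bag_logit(scorer, volume):
    """MIL bag prediction: a volume is positive if any slice looks hemorrhagic."""
    slice_logits = scorer(volume)         # per-instance (per-slice) logits
    return slice_logits.max()             # max-pooling over instances -> bag logit

scorer = SliceScorer()
volume = torch.randn(32, 1, 64, 64)       # hypothetical 32-slice head CT volume
label = torch.tensor(1.0)                 # volume-level label only (hemorrhage present)
loss = nn.functional.binary_cross_entropy_with_logits(bag_logit(scorer, volume), label)
loss.backward()                           # gradients flow through the max-scoring slice
```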
Affiliation(s)
- Samuel W Remedios
- Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health
- Department of Computer Science, Middle Tennessee State University
- Department of Electrical Engineering, Vanderbilt University
- Zihao Wu
- Department of Electrical Engineering, Vanderbilt University
- Snehashis Roy
- Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation
- Mayur B Patel
- Departments of Surgery, Neurosurgery, Hearing & Speech Sciences; Center for Health Services Research, Vanderbilt Brain Institute; Critical Illness, Brain Dysfunction, and Survivorship Center, Vanderbilt University Medical Center; VA Tennessee Valley Healthcare System, Department of Veterans Affairs Medical Center
- John A Butman
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health
- Bennett A Landman
- Department of Electrical Engineering, Vanderbilt University
- Department of Biomedical Engineering, Vanderbilt University
- Department of Computer Science, Vanderbilt University
- Dzung L Pham
- Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health
11
Remedios SW, Roy S, Bermudez C, Patel MB, Butman JA, Landman BA, Pham DL. Distributed deep learning across multisite datasets for generalized CT hemorrhage segmentation. Med Phys 2020; 47:89-98. [PMID: 31660621] [PMCID: PMC6983946] [DOI: 10.1002/mp.13880]
Abstract
PURPOSE As deep neural networks achieve more success in the wide field of computer vision, greater emphasis is being placed on the generalization of these models for production deployment. With sufficiently large training datasets, models can typically avoid overfitting their data; however, for medical imaging it is often difficult to obtain enough data from a single site. Sharing data between institutions is also frequently nonviable or prohibited due to security measures and research compliance constraints, enforced to guard protected health information (PHI) and patient anonymity. METHODS In this paper, we implement cyclic weight transfer with independent datasets from multiple geographically disparate sites without compromising PHI. We compare results between single-site learning (SSL) and multisite learning (MSL) models on testing data drawn from each of the training sites as well as from two other institutions. RESULTS The MSL model attains an average Dice similarity coefficient (DSC) of 0.690 on the holdout institution datasets and a volume correlation of 0.914, corresponding to statistically significant improvements of 7% and 5%, respectively, over the average of both SSL models, which attained an average DSC of 0.646 and an average correlation of 0.871. CONCLUSIONS We show that a neural network can be efficiently trained on data from two physically remote sites without consolidating patient data in a single location. The resulting network improves model generalization and achieves higher average DSCs on external datasets than neural networks trained on data from a single source.
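Cyclic weight transfer, as described here, hands the current model weights from site to site, with each site training locally so that imaging data never leaves its institution. The sketch below is a toy PyTorch loop under those assumptions; the stand-in model, data, and schedule do not reproduce the authors' segmentation network or protocol.

```python
# Minimal sketch of cyclic weight transfer: the model's weights are handed from
# site to site and trained on each site's local data in turn, so patient images
# never leave their institution. Toy model, data, and schedule are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))  # stand-in for a segmentation net

# Hypothetical per-site data that stays local; only model.state_dict() would travel.
site_data = [(torch.randn(64, 10), torch.rand(64, 1)) for _ in range(2)]

def train_locally(model, X, y, epochs=3, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy_with_logits(model(X), y)
        loss.backward()
        opt.step()
    return model.state_dict()             # weights to pass to the next site

weights = model.state_dict()
for cycle in range(5):                     # repeat the site-to-site loop several times
    for X, y in site_data:                 # visit each site in turn
        model.load_state_dict(weights)
        weights = train_locally(model, X, y)
```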
Affiliation(s)
- Samuel W. Remedios
- Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health
- Department of Computer Science, Middle Tennessee State University
- Department of Electrical Engineering, Vanderbilt University
- Snehashis Roy
- Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation
- Mayur B. Patel
- Departments of Surgery, Neurosurgery, Hearing & Speech Sciences; Center for Health Services Research, Vanderbilt Brain Institute; Critical Illness, Brain Dysfunction, and Survivorship Center, Vanderbilt University Medical Center; VA Tennessee Valley Healthcare System, Department of Veterans Affairs Medical Center
- John A. Butman
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health
- Bennett A. Landman
- Department of Electrical Engineering, Vanderbilt University
- Department of Biomedical Engineering, Vanderbilt University
- Department of Computer Science, Vanderbilt University
- Dzung L. Pham
- Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health