1
Fathi Kazerooni A, Akbari H, Hu X, Bommineni V, Grigoriadis D, Toorens E, Sako C, Mamourian E, Ballinger D, Sussman R, Singh A, Verginadis II, Dahmane N, Koumenis C, Binder ZA, Bagley SJ, Mohan S, Hatzigeorgiou A, O'Rourke DM, Ganguly T, De S, Bakas S, Nasrallah MP, Davatzikos C. The radiogenomic and spatiogenomic landscapes of glioblastoma and their relationship to oncogenic drivers. Communications Medicine 2025; 5:55. PMID: 40025245; PMCID: PMC11873127; DOI: 10.1038/s43856-025-00767-0. Received 09/14/2023; accepted 02/12/2025. Open access.
Abstract
BACKGROUND: Glioblastoma is a highly heterogeneous brain tumor, posing challenges for precision therapies and patient stratification in clinical trials. Understanding how genetic mutations influence tumor imaging may improve patient management and treatment outcomes. This study investigates the relationship between imaging features, spatial patterns of tumor location, and genetic alterations in IDH-wildtype glioblastoma, as well as the likely sequence of mutational events.
METHODS: We conducted a retrospective analysis of 357 IDH-wildtype glioblastomas with pre-operative multiparametric MRI and targeted genetic sequencing data. Radiogenomic signatures and spatial distribution maps were generated for key mutations in genes such as EGFR, PTEN, TP53, and NF1 and their corresponding pathways. Machine and deep learning models were used to identify imaging biomarkers and stratify tumors based on their genetic profiles and molecular heterogeneity.
RESULTS: Here, we show that glioblastoma mutations produce distinctive imaging signatures, which are more pronounced in tumors with less molecular heterogeneity. These signatures provide insights into how mutations affect tumor characteristics such as neovascularization, cell density, invasion, and vascular leakage. We also found that tumor location and spatial distribution correlate with genetic profiles, revealing associations between tumor regions and specific oncogenic drivers. Additionally, imaging features reflect the cross-sectionally inferred evolutionary trajectories of glioblastomas.
CONCLUSIONS: This study establishes clinically accessible imaging biomarkers that capture the molecular composition and oncogenic drivers of glioblastoma. These findings have potential implications for noninvasive tumor profiling, personalized therapies, and improved patient stratification in clinical trials.
Affiliation(s)
- Anahita Fathi Kazerooni
  - AI2D Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, PA, USA
  - Center for Data-Driven Discovery in Biomedicine (D3b), Division of Neurosurgery, Children's Hospital of Philadelphia, Philadelphia, PA, USA
  - Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Hamed Akbari
  - Department of Bioengineering, School of Engineering, Santa Clara University, Santa Clara, CA, USA
- Xiaoju Hu
  - Rutgers Cancer Institute of New Jersey, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA
- Vikas Bommineni
  - AI2D Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, PA, USA
- Dimitris Grigoriadis
  - Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
- Erik Toorens
  - Penn Genomic Analysis Core, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Chiharu Sako
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Elizabeth Mamourian
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Dominique Ballinger
  - Department of Pathology & Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Robyn Sussman
  - Department of Pathology & Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ashish Singh
  - AI2D Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ioannis I Verginadis
  - Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Nadia Dahmane
  - Department of Neurological Surgery, Weill Cornell Medicine, New York, NY, USA
- Constantinos Koumenis
  - Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Zev A Binder
  - Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  - Glioblastoma Translational Center of Excellence, Abramson Cancer Center, University of Pennsylvania, Philadelphia, PA, USA
- Stephen J Bagley
  - Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  - Abramson Cancer Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Suyash Mohan
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Artemis Hatzigeorgiou
  - Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
  - Hellenic Pasteur Institute, Athens, Greece
- Donald M O'Rourke
  - Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  - Glioblastoma Translational Center of Excellence, Abramson Cancer Center, University of Pennsylvania, Philadelphia, PA, USA
- Tapan Ganguly
  - Penn Genomic Analysis Core, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Subhajyoti De
  - Rutgers Cancer Institute of New Jersey, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA
- Spyridon Bakas
  - Department of Pathology & Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, USA
  - Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA
- MacLean P Nasrallah
  - AI2D Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Pathology & Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Christos Davatzikos
  - AI2D Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
2
Boelders SM, De Baene W, Postma E, Gehring K, Ong LL. Predicting Cognitive Functioning for Patients with a High-Grade Glioma: Evaluating Different Representations of Tumor Location in a Common Space. Neuroinformatics 2024; 22:329-352. PMID: 38900230; PMCID: PMC11329426; DOI: 10.1007/s12021-024-09671-9. Accepted 05/31/2024.
Abstract
Cognitive functioning is increasingly considered when making treatment decisions for patients with a brain tumor, in view of a personalized onco-functional balance. Ideally, cognitive functioning of individual patients could be predicted to inform treatment decisions that take this balance into account. Accurate predictions require an informative representation of tumor location, yet comparisons of such representations are lacking. This study therefore compares brain atlases and principal component analysis (PCA) for representing voxel-wise tumor location. Preoperative cognitive functioning was predicted for 246 patients with a high-grade glioma across eight cognitive tests, using different representations of voxel-wise tumor location as predictors: 13 frequently used population-average atlases, 13 randomly generated atlases, and 13 PCA-based representations. ElasticNet predictions were compared between representations and against a model using tumor volume alone. Preoperative cognitive functioning could only partly be predicted from tumor location, and performance was largely similar across representations. Population-average atlases did not yield better predictions than random atlases. PCA-based representations did not clearly outperform the others, although summary metrics indicated they performed somewhat better in our sample. Representations with more regions or components resulted in less accurate predictions. Population-average atlases possibly cannot distinguish between functionally distinct areas when applied to patients with a glioma; this stresses the need to develop and validate methods for individual parcellations in the presence of lesions. Future studies may test whether the observed small advantage of PCA-based representations generalizes to other data.
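The PCA-based representation of voxel-wise tumor location described in this abstract can be sketched as follows. This is a minimal illustration on synthetic binary masks using NumPy's SVD, not the authors' pipeline; the mask grid, patient count, and component count are all arbitrary assumptions, and the downstream ElasticNet regression is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for voxel-wise tumor masks: each row is one
# patient's flattened binary mask in a common space (here 10x10x10).
n_patients, n_voxels = 50, 1000
masks = (rng.random((n_patients, n_voxels)) < 0.1).astype(float)

# PCA via SVD on mean-centered data: each patient is then represented
# by a handful of component scores instead of thousands of voxels.
mean_mask = masks.mean(axis=0)
centered = masks - mean_mask
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

n_components = 13  # illustrative; the study varied the number of components
scores = centered @ Vt[:n_components].T  # shape: (n_patients, n_components)

# These low-dimensional scores would then serve as predictors of
# cognitive test performance (e.g., in an ElasticNet model).
print(scores.shape)
```

The point of the low-dimensional representation is that a regularized linear model sees 13 predictors per patient instead of one per voxel, which is what makes prediction from a few hundred patients feasible at all.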
Affiliation(s)
- S M Boelders
  - Department of Neurosurgery, Elisabeth-TweeSteden Hospital, Tilburg, The Netherlands
  - Department of Cognitive Sciences and AI, Tilburg University, Tilburg, The Netherlands
- W De Baene
  - Department of Cognitive Neuropsychology, Tilburg University, Warandelaan 2, P.O. Box 90153, Tilburg, 5000 LE, The Netherlands
- E Postma
  - Department of Cognitive Sciences and AI, Tilburg University, Tilburg, The Netherlands
- K Gehring
  - Department of Neurosurgery, Elisabeth-TweeSteden Hospital, Tilburg, The Netherlands
  - Department of Cognitive Neuropsychology, Tilburg University, Warandelaan 2, P.O. Box 90153, Tilburg, 5000 LE, The Netherlands
- L L Ong
  - Department of Cognitive Sciences and AI, Tilburg University, Tilburg, The Netherlands
3
Abd-Ellah MK, Awad AI, Khalaf AAM, Ibraheem AM. Automatic brain-tumor diagnosis using cascaded deep convolutional neural networks with symmetric U-Net and asymmetric residual-blocks. Sci Rep 2024; 14:9501. PMID: 38664436; PMCID: PMC11045751; DOI: 10.1038/s41598-024-59566-7. Received 09/09/2023; accepted 04/12/2024. Open access.
Abstract
The use of various magnetic resonance imaging (MRI) techniques for examining brain tissue has increased significantly in recent years, and manual review of each resulting image can be time-consuming. This paper presents an automatic brain-tumor diagnosis system that uses cascaded convolutional neural networks (CNNs) for detection, classification, and segmentation of glioblastomas; the final stage segments the tumor within glioma MRI images. The developed multi-unit system consists of two stages. The first stage performs tumor detection and classification by categorizing brain MRI images into normal, high-grade glioma (glioblastoma), and low-grade glioma. The uniqueness of the proposed network lies in its use of features at different levels, including local and global paths. The second stage performs tumor segmentation, using skip connections and residual units. On 1800 images extracted from the BraTS 2017 dataset, the detection and classification stage achieved a maximum accuracy of 99%. The segmentation stage was then evaluated using the Dice score, specificity, and sensitivity, and the results showed that the proposed deep-learning-based system ranks highest among a variety of strategies reported in the literature.
Affiliation(s)
- Ali Ismail Awad
  - College of Information Technology, United Arab Emirates University, P.O. Box 15551, Al Ain, United Arab Emirates
  - Faculty of Engineering, Al-Azhar University, P.O. Box 83513, Qena, Egypt
- Ashraf A M Khalaf
  - Department of Electrical Engineering, Faculty of Engineering, Minia University, Minia, 61519, Egypt
- Amira Mofreh Ibraheem
  - Faculty of Artificial Intelligence, Egyptian Russian University, Cairo, 11829, Egypt
4
Zaman A, Hassan H, Zeng X, Khan R, Lu J, Yang H, Miao X, Cao A, Yang Y, Huang B, Guo Y, Kang Y. Adaptive Feature Medical Segmentation Network: an adaptable deep learning paradigm for high-performance 3D brain lesion segmentation in medical imaging. Front Neurosci 2024; 18:1363930. PMID: 38680446; PMCID: PMC11047127; DOI: 10.3389/fnins.2024.1363930. Received 12/31/2023; accepted 03/04/2024. Open access.
Abstract
Introduction: In neurological diagnostics, accurate detection and segmentation of brain lesions are crucial. Identifying these lesions is challenging due to their complex morphology, especially with traditional methods, which are either computationally demanding for only marginal gains or sacrifice fine detail for computational efficiency. Balancing performance and precision in compute-intensive medical imaging therefore remains an active research topic.
Methods: We introduce a novel encoder-decoder network architecture named the Adaptive Feature Medical Segmentation Network (AFMS-Net), with two encoder variants: the Single Adaptive Encoder Block (SAEB) and the Dual Adaptive Encoder Block (DAEB). A squeeze-and-excite mechanism is employed in the SAEB to identify significant features while disregarding peripheral details; this approach is best suited for scenarios requiring quick and efficient segmentation, with an emphasis on identifying key lesion areas. In contrast, the DAEB uses an advanced channel-spatial attention strategy for fine-grained delineation and multi-class classification. Additionally, both variants incorporate a Segmentation Path (SegPath) module between the encoder and decoder, refining segmentation, enhancing feature extraction, and improving model performance and stability.
Results: AFMS-Net demonstrates strong performance across several notable datasets, including BraTS 2021, ATLAS 2021, and ISLES 2022. Its design aims to construct a lightweight architecture capable of handling complex segmentation challenges with high precision.
Discussion: The proposed AFMS-Net addresses the critical balance between performance and computational efficiency in the segmentation of brain lesions. By introducing two tailored encoder variants, the network adapts to varying requirements for speed and precision. This approach not only advances the state of the art in lesion segmentation but also provides a scalable framework for future research in medical image processing.
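The squeeze-and-excite mechanism mentioned for the SAEB follows a standard pattern: global average pooling per channel ("squeeze"), a small bottleneck MLP with a sigmoid ("excite"), and per-channel rescaling. The framework-agnostic NumPy sketch below illustrates that pattern with made-up random weights and tensor sizes; it is not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feat, w1, w2):
    """Channel attention on a (C, H, W) feature map.

    Squeeze: global average pool each channel to a scalar.
    Excite: bottleneck MLP + sigmoid yields per-channel gates in (0, 1),
    which then rescale the original channels.
    """
    squeezed = feat.mean(axis=(1, 2))                     # (C,)
    gates = sigmoid(w2 @ np.maximum(0.0, w1 @ squeezed))  # (C,)
    return feat * gates[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                      # r = channel reduction ratio
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1  # squeeze -> bottleneck
w2 = rng.standard_normal((C, C // r)) * 0.1  # bottleneck -> gates
out = squeeze_excite(feat, w1, w2)
print(out.shape)
```

Because every gate lies strictly between 0 and 1, the block can only attenuate channels, never amplify them; uninformative channels are suppressed while salient ones pass through nearly unchanged.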
Affiliation(s)
- Asim Zaman
  - School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
  - College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
  - School of Applied Technology, Shenzhen University, Shenzhen, China
  - Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Haseeb Hassan
  - College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- Xueqiang Zeng
  - College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
  - School of Applied Technology, Shenzhen University, Shenzhen, China
- Rashid Khan
  - School of Applied Technology, Shenzhen University, Shenzhen, China
  - Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
  - College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Jiaxi Lu
  - College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
  - School of Applied Technology, Shenzhen University, Shenzhen, China
- Huihui Yang
  - College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
  - School of Applied Technology, Shenzhen University, Shenzhen, China
- Xiaoqiang Miao
  - College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Anbo Cao
  - College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
  - School of Applied Technology, Shenzhen University, Shenzhen, China
- Yingjian Yang
  - Shenzhen Lanmage Medical Technology Co., Ltd, Shenzhen, China
- Bingding Huang
  - College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Yingwei Guo
  - College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
  - School of Electrical and Information Engineering, Northeast Petroleum University, Daqing, China
- Yan Kang
  - School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
  - College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
  - School of Applied Technology, Shenzhen University, Shenzhen, China
  - Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
5
Park KY, Shimony JS, Chakrabarty S, Tanenbaum AB, Hacker CD, Donovan KM, Luckett PH, Milchenko M, Sotiras A, Marcus DS, Leuthardt EC, Snyder AZ. Optimal approaches to analyzing functional MRI data in glioma patients. J Neurosci Methods 2024; 402:110011. PMID: 37981126; PMCID: PMC10926951; DOI: 10.1016/j.jneumeth.2023.110011. Received 08/17/2023; revised 09/18/2023; accepted 11/09/2023.
Abstract
BACKGROUND: Resting-state fMRI is increasingly used to study the effects of gliomas on the functional organization of the brain. A variety of preprocessing techniques and functional connectivity analyses are represented in the literature, but there has so far been no systematic comparison of how alternative methods impact observed results.
NEW METHOD: We first surveyed the current literature and identified alternative analytical approaches commonly used in the field. We then systematically compared alternative approaches to atlas registration, parcellation scheme, and choice of graph-theoretical measure with respect to differentiating glioma patients (N = 59) from age-matched reference subjects (N = 163).
RESULTS: Our results suggest that non-linear, as opposed to affine, registration improves structural match to an atlas, as well as measures of functional connectivity. Functionally derived, as opposed to anatomically derived, parcellation schemes maximized the contrast between glioma patients and reference subjects. We also demonstrate that graph-theoretic measures strongly depend on parcellation granularity, parcellation scheme, and graph density.
COMPARISON WITH EXISTING METHODS AND CONCLUSIONS: Our work primarily focuses on technical optimization of rs-fMRI analysis in glioma patients and is therefore fundamentally different from the bulk of papers discussing glioma-induced functional network changes. We report that the evaluation of glioma-induced alterations in the functional connectome strongly depends on analytical choices, including atlas registration, parcellation scheme, and graph-theoretical measures.
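The reported dependence of graph-theoretic measures on graph density can be demonstrated directly: binarizing the same connectivity matrix at different edge densities yields different degree statistics. The NumPy sketch below uses random "time series" as a toy stand-in for rs-fMRI parcel signals; the parcel count, time points, and density values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy functional connectivity: correlation matrix over 20 parcels,
# built from random time series standing in for rs-fMRI signals.
ts = rng.standard_normal((100, 20))   # 100 time points, 20 parcels
fc = np.corrcoef(ts.T)
np.fill_diagonal(fc, 0.0)

def binarize_at_density(fc, density):
    """Keep the strongest edges so the given fraction of pairs survives."""
    n = fc.shape[0]
    upper = fc[np.triu_indices(n, k=1)]
    thresh = np.quantile(upper, 1.0 - density)
    adj = np.triu((fc >= thresh).astype(int), k=1)
    return adj + adj.T  # symmetric binary adjacency matrix

# The same connectivity matrix yields very different mean degrees at
# different densities, which is why graph measures should be compared
# at matched densities.
for density in (0.1, 0.3):
    adj = binarize_at_density(fc, density)
    print(density, adj.sum(axis=0).mean())
```

Degree is the simplest case; clustering coefficient, path length, and most other graph measures inherit the same density dependence, so any group comparison must hold density fixed.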
Affiliation(s)
- Ki Yun Park
  - Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO 63110, USA; Medical Scientist Training Program, Washington University School of Medicine, St. Louis, MO, USA; Division of Neurotechnology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Joshua S Shimony
  - Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Satrajit Chakrabarty
  - Department of Electrical and Systems Engineering, Washington University, St. Louis, MO 63130, USA
- Aaron B Tanenbaum
  - Department of Neurology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Carl D Hacker
  - Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO 63110, USA
- Kara M Donovan
  - Department of Biomedical Engineering, Washington University, St. Louis, MO 63130, USA; Division of Neurotechnology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Patrick H Luckett
  - Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO 63110, USA; Division of Neurotechnology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Mikhail Milchenko
  - Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Aristeidis Sotiras
  - Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA; Institute for Informatics, Data Science & Biostatistics, Washington University School of Medicine, St. Louis, MO 63110, USA
- Daniel S Marcus
  - Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Eric C Leuthardt
  - Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO 63110, USA; Department of Biomedical Engineering, Washington University, St. Louis, MO 63130, USA; Department of Mechanical Engineering and Materials Science, Washington University, St. Louis, MO 63130, USA; Center for Innovation in Neuroscience and Technology, Washington University School of Medicine, St. Louis, MO 63110, USA; Brain Laser Center, Washington University School of Medicine, St. Louis, MO 63110, USA; Division of Neurotechnology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Abraham Z Snyder
  - Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA; Department of Neurology, Washington University School of Medicine, St. Louis, MO 63110, USA
6
Bianconi A, Rossi LF, Bonada M, Zeppa P, Nico E, De Marco R, Lacroce P, Cofano F, Bruno F, Morana G, Melcarne A, Ruda R, Mainardi L, Fiaschi P, Garbossa D, Morra L. Deep learning-based algorithm for postoperative glioblastoma MRI segmentation: a promising new tool for tumor burden assessment. Brain Inform 2023; 10:26. PMID: 37801128; PMCID: PMC10558414; DOI: 10.1186/s40708-023-00207-6. Received 03/31/2023; accepted 09/16/2023. Open access.
Abstract
OBJECTIVE: Clinical and surgical decisions for glioblastoma patients depend on a tumor imaging-based evaluation. Artificial intelligence (AI) can be applied to magnetic resonance imaging (MRI) assessment to support clinical practice, surgery planning, and prognostic predictions. In a real-world context, the current obstacles for AI are low-quality imaging and postoperative reliability. The aim of this study is to train an automatic algorithm for glioblastoma segmentation on a clinical MRI dataset and to obtain reliable results both pre- and post-operatively.
METHODS: The dataset comprises 237 MRIs (71 preoperative and 166 postoperative) from 71 patients with a histologically confirmed grade IV glioma. The implemented U-Net architecture was trained by transfer learning to perform the segmentation task on postoperative MRIs, after initial training on the BraTS2021 dataset for preoperative segmentation. Performance was evaluated using the Dice score (DS) and the 95th-percentile Hausdorff distance (H95).
RESULTS: In the preoperative scenario, overall DS is 91.09 (± 0.60) and H95 is 8.35 (± 1.12), considering tumor core, enhancing tumor (ET), and whole tumor (ET and edema). In the postoperative context, overall DS is 72.31 (± 2.88) and H95 is 23.43 (± 7.24), considering resection cavity (RC), gross tumor volume (GTV), and whole tumor (WT). Notably, RC segmentation achieved a mean DS of 63.52 (± 8.90) on postoperative MRIs.
CONCLUSIONS: The performance achieved by the algorithm is consistent with previous literature for both preoperative and postoperative glioblastoma MRI evaluation. The proposed algorithm reduces the impact of low-quality images and missing sequences.
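The Dice score used throughout this abstract is the standard overlap metric 2|A∩B| / (|A| + |B|) between a predicted and a reference binary mask. A minimal NumPy check on made-up toy masks (the masks and their sizes are invented for illustration):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary masks, in [0, 1]."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

truth = np.zeros((10, 10), dtype=int)
truth[2:6, 2:6] = 1    # 16 "tumor" pixels
pred = np.zeros((10, 10), dtype=int)
pred[3:7, 3:7] = 1     # 16 predicted pixels, overlapping on a 3x3 patch

print(dice_score(pred, truth))  # 2*9 / (16+16) = 0.5625
```

Published scores like the 91.09 above are this quantity expressed as a percentage and averaged over cases; the companion H95 metric instead measures boundary distance, which is why the two can disagree on the same segmentation.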
Affiliation(s)
- Andrea Bianconi
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Marta Bonada
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Pietro Zeppa
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Elsa Nico
  - Department of Neurosurgery, Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, AZ, USA
- Raffaele De Marco
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Fabio Cofano
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Francesco Bruno
  - Neurooncology, Department of Neuroscience, University of Turin, Turin, Italy
- Giovanni Morana
  - Neuroradiology, Department of Neuroscience, University of Turin, Turin, Italy
- Antonio Melcarne
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Roberta Ruda
  - Neurooncology, Department of Neuroscience, University of Turin, Turin, Italy
- Luca Mainardi
  - Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Pietro Fiaschi
  - IRCCS Ospedale Policlinico S. Martino, Genoa, Italy
  - Dipartimento di Neuroscienze, Riabilitazione, Oftalmologia, Genetica e Scienze Materno-Infantili, University of Genoa, Genoa, Italy
- Diego Garbossa
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Lia Morra
  - Dipartimento di Automatica e Informatica, Politecnico di Torino, Turin, Italy
7
Murmu A, Kumar P. A novel Gateaux derivatives with efficient DCNN-Resunet method for segmenting multi-class brain tumor. Med Biol Eng Comput 2023. PMID: 37338739; DOI: 10.1007/s11517-023-02824-z. Received 10/03/2022; accepted 03/14/2023.
Abstract
In hospitals and pathology, observing the features and locations of brain tumors in magnetic resonance images (MRI) is a crucial task for assisting medical professionals in both treatment and diagnosis. Multi-class information about a brain tumor is often obtained from the patient's MRI dataset; however, this information may present in different shapes and sizes across tumor types, making their locations in the brain difficult to detect. To resolve these issues, a novel customized deep convolutional neural network (DCNN)-based Residual-Unet (ResUnet) model with transfer learning (TL) is proposed for predicting the locations of brain tumors in an MRI dataset. The DCNN model extracts features from input images and selects the region of interest (ROI), with the TL technique used to speed up training. Furthermore, a min-max normalization approach is used to enhance intensity values at particular ROI boundary edges in the brain tumor images. Specifically, the boundary edges of the brain tumors are detected using the Gateaux derivative (GD) method to identify multi-class brain tumors precisely. The proposed scheme has been validated on two datasets, namely a brain tumor dataset and the Figshare MRI dataset, for multi-class brain tumor segmentation (BTS). The experimental results were analyzed using the evaluation metrics accuracy (99.78 and 99.03), Jaccard coefficient (93.04 and 94.95), Dice factor coefficient (DFC) (92.37 and 91.94), mean absolute error (MAE) (0.0019 and 0.0013), and mean squared error (MSE) (0.0085 and 0.0012). The proposed system outperforms state-of-the-art segmentation models on the MRI brain tumor dataset.
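The min-max normalization step mentioned above rescales intensities linearly to [0, 1] via (x − min) / (max − min). A short sketch, applied to a hypothetical ROI window cut from a toy image (the image values and ROI coordinates are invented):

```python
import numpy as np

def min_max_normalize(img):
    """Linearly rescale intensities to the [0, 1] range."""
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros_like(img, dtype=float)  # flat region: avoid 0/0
    return (img - lo) / (hi - lo)

rng = np.random.default_rng(0)
mri = rng.integers(0, 4096, size=(8, 8)).astype(float)  # toy 12-bit slice
roi = mri[2:6, 2:6]                  # hypothetical ROI around a lesion
roi_norm = min_max_normalize(roi)    # contrast stretched to full range
print(roi_norm.min(), roi_norm.max())
```

Normalizing within the ROI rather than over the whole slice is what stretches local contrast at the tumor boundary, which is the stated purpose of the step in the abstract.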
Affiliation(s)
- Anita Murmu
  - Computer Science and Engineering, National Institute of Technology Patna, Ashok Rajpath, Patna, 800005, Bihar, India
- Piyush Kumar
  - Computer Science and Engineering, National Institute of Technology Patna, Ashok Rajpath, Patna, 800005, Bihar, India
8
Subramanian S, Ghafouri A, Scheufele KM, Himthani N, Davatzikos C, Biros G. Ensemble Inversion for Brain Tumor Growth Models With Mass Effect. IEEE Transactions on Medical Imaging 2023; 42:982-995. PMID: 36378796; PMCID: PMC10201550; DOI: 10.1109/tmi.2022.3221913.
Abstract
We propose a method for extracting physics-based biomarkers from a single multiparametric magnetic resonance imaging (mpMRI) scan bearing a glioma. We account for mass effect, the deformation of brain parenchyma due to the growing tumor, which on its own is an important radiographic feature but whose automatic quantification remains an open problem. In particular, we calibrate a partial differential equation (PDE) tumor growth model that captures mass effect (parameterized by a single scalar parameter), tumor proliferation, and migration, while also localizing the tumor initiation site. The single-scan calibration problem is severely ill-posed because the precancerous, healthy brain anatomy is unknown. To address the ill-posedness, we introduce an ensemble inversion scheme that uses a number of normal-subject brain templates as proxies for the healthy precancer anatomy. We verify our solver on a synthetic dataset and perform a retrospective analysis on a clinical dataset of 216 glioblastoma (GBM) patients. We analyze the reconstructions using our calibrated biophysical model and demonstrate that our solver provides both global and local quantitative measures of tumor biophysics and mass effect. We further highlight the improved model calibration achieved by including mass effect in tumor growth models: doing so leads to a 10% increase in average Dice coefficients for patients with significant mass effect. Finally, we evaluate our model by introducing novel biophysics-based features and using them for survival analysis. Our preliminary analysis suggests that including such features can improve patient stratification and survival prediction.
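The tumor-growth PDE family referenced here combines migration (diffusion) and proliferation (logistic reaction); a minimal 1D Fisher-KPP forward simulation illustrates the forward model being calibrated. The mass-effect coupling that is the paper's actual focus is omitted, and all coefficients, grid sizes, and the seed location are arbitrary assumptions for illustration.

```python
import numpy as np

# Fisher-KPP: dc/dt = D * d2c/dx2 + rho * c * (1 - c)
# D (migration) spreads the tumor; rho (proliferation) grows it toward
# the normalized carrying capacity c = 1.
D, rho = 0.05, 1.0
nx, dx, dt, steps = 100, 0.1, 0.01, 500  # dt*D/dx^2 = 0.05, stable

c = np.zeros(nx)
c[45:55] = 0.1  # small initial seed at the "initiation site"

for _ in range(steps):
    # Second-order central difference for the Laplacian
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    c = c + dt * (D * lap + rho * c * (1 - c))

# Proliferation saturates the core near c = 1 while diffusion advances
# the invasion front outward from the seed.
print(round(c.max(), 3), round(c.sum() * dx, 3))
```

The inverse problem the paper solves runs this model backwards: given a single observed tumor distribution, infer D, rho, the mass-effect parameter, and the seed location, which is what makes the unknown precancerous anatomy such an obstacle.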
9
Mahesh Kumar G, Parthasarathy E. Development of an enhanced U-Net model for brain tumor segmentation with optimized architecture. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104427.
10
Radiomics-based evaluation and possible characterization of dynamic contrast enhanced (DCE) perfusion derived different sub-regions of Glioblastoma. Eur J Radiol 2023; 159:110655. PMID: 36577183; DOI: 10.1016/j.ejrad.2022.110655.
Abstract
BACKGROUND Glioblastoma (GB) is among the most devastating brain tumors and usually comprises sub-regions such as enhancing tumor (ET), non-enhancing tumor (NET), edema (ED), and necrosis (NEC) as described on MRI. Semi-automated algorithms to extract these tumor sub-part volumes and boundaries have been demonstrated using dynamic contrast-enhanced (DCE) perfusion imaging. We aim to characterize these sub-regions derived from DCE perfusion MRI using routine 3D post-contrast T1 (T1GD) and FLAIR images with the aid of radiomics analysis. We also explored the possibility of separating edema from the tumor sub-regions by extracting the most influential radiomics features. METHODS A total of 89 patients with histopathologically confirmed IDH-wildtype GB who underwent MR imaging with DCE perfusion MRI were considered. Perfusion and kinetic indices were computed and used to segment the tumor sub-regions. Radiomics features were extracted from FLAIR and T1GD images with the PyRadiomics tool. Statistical analysis was carried out, and machine learning (ML) models were constructed, separately for two approaches: i) within the different tumor sub-regions, and ii) with ED as one category and the remaining sub-regions combined as another. ML-based predictive feature maps were also constructed. RESULTS Seven features were found to be statistically significant for differentiating tumor sub-regions in FLAIR and T1GD images, with p-values < 0.05 and AUC values in the range of 0.72 to 0.93. The edema features stood out in the analysis. In the second approach, the ML model was able to separate ED from the rest of the tumor sub-regions in FLAIR and T1GD images with AUCs of 0.95 and 0.89, respectively. CONCLUSION Radiomics-based feature values and maps help characterize the different tumor sub-regions. The GLDM_DependenceNonUniformity feature appears to be the most specific for separating edema from the remaining tumor sub-regions using conventional FLAIR images, which may be of value for segmenting edema from tumor on conventional MRI in the future.
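For reference, AUC values like those quoted above can be computed directly as a rank statistic, without constructing an ROC curve. A minimal pure-Python sketch (an illustration, not the paper's actual PyRadiomics/ML pipeline):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative one (ties count half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 1.0 means a feature separates the two classes perfectly; 0.5 means it carries no discriminative information.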
11
Meaney C, Das S, Colak E, Kohandel M. Deep learning characterization of brain tumours with diffusion weighted imaging. J Theor Biol 2023; 557:111342. PMID: 36368560; DOI: 10.1016/j.jtbi.2022.111342.
Abstract
Glioblastoma multiforme (GBM) is one of the deadliest forms of cancer. Methods of characterizing these tumours are valuable for improving predictions of their progression and response to treatment. A mathematical model called the proliferation-invasion (PI) model has been used extensively in the literature to model the growth of these tumours, though it relies on known values of two key parameters: the tumour cell diffusivity and the proliferation rate. Unfortunately, these parameters are difficult to estimate in a patient-specific manner, making personalized tumour forecasting challenging. In this paper, we develop and apply a deep learning model capable of making accurate estimates of these key GBM-characterizing parameters while simultaneously producing a full prediction of the tumour progression curve. Our method uses two sets of multi-sequence MRI to produce its estimates and relies on a preprocessing pipeline that includes brain tumour segmentation and conversion to tumour cellularity. We first apply our deep learning model to synthetic tumours to showcase its capabilities and identify situations where prediction errors are likely to occur. We then apply our model to a clinical dataset of five patients diagnosed with GBM. For all patients, we derive evidence-based estimates for each of the PI model parameters and predictions for the future progression of the tumour, along with estimates of the parameter uncertainties. Our work provides a new, easily generalizable method for estimating patient-specific tumour parameters, which can be built upon to aid physicians in designing personalized treatments.
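The PI model referenced above is the Fisher-KPP reaction-diffusion equation, ∂u/∂t = D ∇²u + ρu(1 − u). A minimal 1D forward solver (parameter values and discretization here are illustrative assumptions, not taken from the paper) shows how the two key parameters D (diffusivity) and ρ (proliferation rate) drive the progression curve:

```python
def pi_model_step(u, D, rho, dx, dt):
    """One explicit (forward Euler) step of the 1D PI model,
    du/dt = D d2u/dx2 + rho*u*(1-u), with no-flux boundaries.
    Stable for dt <= dx**2 / (2*D)."""
    nxt = u[:]
    for i in range(len(u)):
        left = u[i - 1] if i > 0 else u[i]              # reflect at edges
        right = u[i + 1] if i < len(u) - 1 else u[i]
        lap = (left - 2 * u[i] + right) / dx ** 2        # discrete Laplacian
        nxt[i] = u[i] + dt * (D * lap + rho * u[i] * (1 - u[i]))
    return nxt

def simulate(u0, D, rho, dx, dt, steps):
    """Evolve an initial normalized cell-density profile u0 in time."""
    u = list(u0)
    for _ in range(steps):
        u = pi_model_step(u, D, rho, dx, dt)
    return u
```

Larger D spreads the density profile faster (invasion), while larger ρ saturates it toward the carrying capacity sooner (proliferation); the paper's network infers both from imaging rather than simulation.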
Affiliation(s)
- Cameron Meaney: Department of Applied Mathematics, University of Waterloo, Waterloo, Canada
- Sunit Das: Division of Neurosurgery, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Canada; Faculty of Medicine, University of Toronto, Toronto, Canada
- Errol Colak: Faculty of Medicine, University of Toronto, Toronto, Canada; Department of Medical Imaging and Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Canada; Odette Professorship in Artificial Intelligence for Medical Imaging, St. Michael's Hospital, Toronto, Canada
- Mohammad Kohandel: Department of Applied Mathematics, University of Waterloo, Waterloo, Canada
12
Koteswara Rao Chinnam S, Sistla V, Krishna Kishore Kolli V. Multimodal attention-gated cascaded U-Net model for automatic brain tumor detection and segmentation. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103907.
13
Mazumdar I, Mukherjee J. Fully automatic MRI brain tumor segmentation using efficient spatial attention convolutional networks with composite loss. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.05.050.
14
Zhang L, Wang Y, Peng Z, Weng Y, Fang Z, Xiao F, Zhang C, Fan Z, Huang K, Zhu Y, Jiang W, Shen J, Zhan R. The progress of multimodal imaging combination and subregion based radiomics research of cancers. Int J Biol Sci 2022; 18:3458-3469. PMID: 35637947; PMCID: PMC9134904; DOI: 10.7150/ijbs.71046.
Abstract
In recent years, with the standardization of radiomics methods, the development of tools, and the popularization of the concept, radiomics has been widely used in all aspects of tumor diagnosis, treatment, and prognosis. As the study of radiomics in cancer has advanced, the currently used methods have revealed their shortcomings. The performance of cancer radiomics based on single-modality medical images, which by their imaging principles reflect only part of the tumor information, is necessarily compromised. Using the whole tumor as a region of interest to extract radiomic features inevitably leads to the loss of intra-tumoral heterogeneity, which also affects the performance of radiomics. Radiomics of multimodal images extracts various aspects of information from images of each modality and then integrates them for model construction, thus avoiding missing information. Subregional segmentation based on multimodal medical image combinations allows radiomics features acquired from subregions to retain tumor heterogeneity, further improving the performance of radiomics. In this review, we provide a detailed summary of current research on the radiomics of multimodal cancer images and tumor subregion-based radiomics, then raise some research problems and provide a thorough discussion of these issues.
Affiliation(s)
- Luyuan Zhang: Department of Neurosurgery, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Yumin Wang: Department of Otolaryngology Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha, Hunan, China
- Zhouying Peng: Department of Otolaryngology Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha, Hunan, China
- Yuxiang Weng: Department of Neurosurgery, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Zebin Fang: Department of Neurosurgery, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Feng Xiao: Department of Neurosurgery, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Chao Zhang: Department of Neurosurgery, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Zuoxu Fan: Department of Neurosurgery, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Kaiyuan Huang: Department of Neurosurgery, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Yu Zhu: Department of Neurosurgery, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Weihong Jiang: Department of Otolaryngology Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha, Hunan, China
- Jian Shen: Department of Neurosurgery, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Renya Zhan: Department of Neurosurgery, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
15
Huang B, Ye Y, Xu Z, Cai Z, He Y, Zhong Z, Liu L, Chen X, Chen H, Huang B. 3D Lightweight Network for Simultaneous Registration and Segmentation of Organs-at-Risk in CT Images of Head and Neck Cancer. IEEE Trans Med Imaging 2022; 41:951-964. PMID: 34784272; DOI: 10.1109/tmi.2021.3128408.
Abstract
Image-guided radiation therapy (IGRT) is the most effective treatment for head and neck cancer. Its successful implementation requires accurate delineation of organs-at-risk (OARs) in computed tomography (CT) images. In routine clinical practice, OARs are manually segmented by oncologists, which is time-consuming, laborious, and subjective. To assist oncologists in OAR contouring, we propose a three-dimensional (3D) lightweight framework for simultaneous OAR registration and segmentation. The registration network aligns a selected OAR template to a new image volume for OAR localization. A region of interest (ROI) selection layer then generates ROIs of OARs from the registration results, which are fed into a multiview segmentation network for accurate OAR segmentation. To improve the performance of both networks, a centre-distance loss was designed for the registration network, an ROI classification branch was employed for the segmentation network, and context information was incorporated to iteratively promote both networks' performance. The segmentation results were further refined with shape information for final delineation. We evaluated the registration and segmentation performance of the proposed framework on three datasets. On the internal dataset, the Dice similarity coefficients (DSC) of registration and segmentation were 69.7% and 79.6%, respectively. The framework was also evaluated on two external datasets and achieved satisfactory performance. These results show that the 3D lightweight framework achieves fast, accurate, and robust registration and segmentation of OARs in head and neck cancer and has the potential to assist oncologists in OAR delineation.
16
Andresen J, Kepp T, Ehrhardt J, Burchard CVD, Roider J, Handels H. Deep learning-based simultaneous registration and unsupervised non-correspondence segmentation of medical images with pathologies. Int J Comput Assist Radiol Surg 2022; 17:699-710. PMID: 35239133; PMCID: PMC8948150; DOI: 10.1007/s11548-022-02577-4.
Abstract
Purpose The registration of medical images often suffers from missing correspondences due to inter-patient variations and to pathologies and their progression, leading to implausible deformations that cause misregistrations and may eliminate valuable information. Detecting non-corresponding regions simultaneously with the registration process helps generate better deformations and has been investigated thoroughly with classical iterative frameworks but rarely with deep learning-based methods. Methods We present the joint non-correspondence segmentation and image registration network (NCR-Net), a convolutional neural network (CNN) trained on a Mumford–Shah-like functional, transferring the classical approach to the field of deep learning. NCR-Net consists of one encoding and two decoding parts, allowing the network to simultaneously generate diffeomorphic deformations and segment non-correspondences. The loss function is composed of a masked image distance measure and regularization of the deformation field and segmentation output. Additionally, anatomical labels are used for weak supervision of the registration task; no manual segmentations of non-correspondences are required. Results The proposed network is evaluated on the publicly available LPBA40 dataset with artificially added stroke lesions and on a longitudinal optical coherence tomography (OCT) dataset of patients with age-related macular degeneration. The LPBA40 data are used to quantitatively assess the segmentation performance of the network, and it is shown qualitatively that NCR-Net can be used for the unsupervised segmentation of pathologies in OCT images. Furthermore, NCR-Net is compared to a registration-only network and to state-of-the-art registration algorithms, achieving competitive performance and superior robustness to non-correspondences. Conclusion NCR-Net, a CNN for simultaneous image registration and unsupervised non-correspondence segmentation, is presented. Experimental results show the network's ability to segment non-correspondence regions in an unsupervised manner and its robust registration performance even in the presence of large pathologies.
Affiliation(s)
- Julia Andresen: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Timo Kepp: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Jan Ehrhardt: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany; German Research Center for Artificial Intelligence, Lübeck, Germany
- Johann Roider: Department of Ophthalmology, Christian-Albrechts-University of Kiel, Kiel, Germany
- Heinz Handels: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany; German Research Center for Artificial Intelligence, Lübeck, Germany
17
Lapuyade-Lahorgue J, Ruan S. Segmentation of multicorrelated images with copula models and conditionally random fields. J Med Imaging (Bellingham) 2022; 9:014001. PMID: 35024379; PMCID: PMC8741411; DOI: 10.1117/1.jmi.9.1.014001.
Abstract
Purpose: Multisource images are of interest in medical imaging because they enable the use of complementary information from different sources, such as the T1 and T2 modalities in MRI. However, multisource data can also be subject to redundancy and correlation, raising the question of how to efficiently fuse the multisource information without reinforcing the redundancy. We propose a method for segmenting multisource images that are statistically correlated. Approach: The method is a continuation of prior work in which we introduced the copula model into hidden Markov fields (HMF). To achieve the multisource segmentations, we use a functional measure of dependency called a "copula," incorporated here into conditional random fields (CRF). Contrary to HMF, where prior knowledge on the hidden states is modeled by a Markov field, in CRF there is no prior information and only the distribution of the hidden states conditional on the observations can be known. This conditional distribution depends on the data and can be modeled by an energy function composed of two terms: the first groups voxels with similar intensities into the same class, and the second encourages a pair of voxels to be in the same class if the difference between their intensities is not too large. Results: A comparison between HMF and CRF is performed both theoretically and experimentally, using simulated data and real data from BRATS 2013. Moreover, our method is compared with different state-of-the-art methods, including supervised (convolutional neural networks) and unsupervised (hierarchical MRF) approaches. Our unsupervised method gives results similar to those of decision trees on synthetic images and of convolutional neural networks on real images, both of which are supervised methods. Conclusions: We compare two statistical methods using the copula, HMF and CRF, to deal with multicorrelated images and demonstrate the value of the copula: in both models, it considerably improves the results compared with individual segmentations.
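A copula is a function that couples marginal distributions into a joint distribution while modeling their dependency. As a concrete illustration (a generic sketch; the abstract does not specify which copula family the authors use), here is the bivariate Gaussian copula density, which reduces to the independence copula (density 1) when the correlation is zero:

```python
from math import exp, sqrt
from statistics import NormalDist

_PHI_INV = NormalDist().inv_cdf  # standard normal quantile function

def gaussian_copula_density(u, v, rho):
    """Density c(u, v) of the bivariate Gaussian copula with
    correlation rho, for u, v in (0, 1).  Values above 1 indicate
    that the pair (u, v) is more likely under the dependency model
    than under independence."""
    x, y = _PHI_INV(u), _PHI_INV(v)
    r2 = rho * rho
    return (1.0 / sqrt(1.0 - r2)) * exp(
        -(r2 * (x * x + y * y) - 2.0 * rho * x * y) / (2.0 * (1.0 - r2))
    )
```

In a segmentation model, such a density would weight the joint likelihood of two correlated channels (e.g., T1 and T2 intensities) without assuming them independent.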
Affiliation(s)
- Jérôme Lapuyade-Lahorgue: University of Rouen, LITIS, Eq. Quantif, Rouen, France
- Su Ruan: University of Rouen, LITIS, Eq. Quantif, Rouen, France
18
Machine Learning-Based Radiomics in Neuro-Oncology. Acta Neurochir Suppl 2021; 134:139-151. PMID: 34862538; DOI: 10.1007/978-3-030-85292-4_18.
Abstract
In recent decades, modern medicine has evolved into a data-centered discipline, generating massive amounts of granular, high-dimensional data that exceed human comprehension. With improved computational methods, machine learning and artificial intelligence (AI) are becoming increasingly important tools for data processing and analysis. At the forefront of neuro-oncology and AI research, the field of radiomics has emerged. Non-invasive assessments of quantitative radiological biomarkers, mined from complex imaging characteristics across various applications, are used to predict survival, discriminate between primary and secondary tumors, and distinguish progression from pseudo-progression. In particular, the application of molecular phenotyping, envisioned in the field of radiogenomics, has gained popularity for both primary and secondary brain tumors. Although promising results have been obtained thus far, the lack of workflow standardization and the limited availability of multicenter data remain challenging. The objective of this review is to provide an overview of novel applications of machine learning- and deep learning-based radiomics in primary and secondary brain tumors and their implications for future research in the field.
19
Huang Z, Zhao Y, Liu Y, Song G. GCAUNet: A group cross-channel attention residual UNet for slice based brain tumor segmentation. Biomed Signal Process Control 2021. DOI: 10.1016/j.bspc.2021.102958.
20
Huang D, Wang M, Zhang L, Li H, Ye M, Li A. Learning rich features with hybrid loss for brain tumor segmentation. BMC Med Inform Decis Mak 2021; 21:63. PMID: 34330265; PMCID: PMC8323198; DOI: 10.1186/s12911-021-01431-y.
Abstract
Background Accurately segmenting the tumor region in MRI images is important for brain tumor diagnosis and radiotherapy planning. At present, manual segmentation is widely adopted in clinical practice, and there is a strong need for an automatic and objective system to alleviate the workload of radiologists. Methods We propose a parallel multi-scale feature fusing architecture to generate rich feature representations for accurate brain tumor segmentation. It comprises two parts: (1) a Feature Extraction Network (FEN) for brain tumor feature extraction at different levels, and (2) a Multi-scale Feature Fusing Network (MSFFN) to merge all the different-scale features in a parallel manner. In addition, we use two hybrid loss functions to optimize the proposed network for the class imbalance issue. Results We validate our method on BRATS 2015, achieving Dice scores of 0.86, 0.73, and 0.61 for the three tumor regions (complete, core, and enhancing), with a model parameter size of only 6.3 MB. Without any post-processing operations, our method still outperforms published state-of-the-art methods on the segmentation of complete tumor regions and obtains competitive performance on the other two regions. Conclusions The proposed parallel structure can effectively fuse multi-level features to generate rich feature representations for high-resolution results. Moreover, the hybrid loss functions can alleviate the class imbalance issue and guide the training process. The proposed method can be used in other medical segmentation tasks.
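The abstract does not spell out its two hybrid loss functions; a common pattern for the class-imbalance issue it mentions is a weighted combination of soft Dice loss and binary cross-entropy, sketched here in pure Python on flattened masks (an illustration of the general technique, not the paper's exact losses):

```python
from math import log

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - soft Dice overlap between predicted probabilities and a
    binary target mask (both flat sequences of equal length)."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def bce_loss(pred, target, eps=1e-7):
    """Mean binary cross-entropy, with probabilities clamped away
    from 0 to keep the logarithm finite."""
    return -sum(
        t * log(max(p, eps)) + (1 - t) * log(max(1 - p, eps))
        for p, t in zip(pred, target)
    ) / len(pred)

def hybrid_loss(pred, target, alpha=0.5):
    """Weighted sum of the two: Dice counters class imbalance
    (tumor voxels are rare), BCE supplies smooth per-voxel gradients."""
    return alpha * soft_dice_loss(pred, target) + (1 - alpha) * bce_loss(pred, target)
```

Because the Dice term is computed over the whole mask rather than per voxel, it is insensitive to the overwhelming majority of background voxels, which is why such combinations are popular for tumor segmentation.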
Affiliation(s)
- Daobin Huang: School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China; School of Medical Information, Wannan Medical College, Wuhu, 241002, China; Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu, 241002, China
- Minghui Wang: School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China
- Ling Zhang: Department of Biochemistry, Wannan Medical College, Wuhu, 241002, China
- Haichun Li: School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China
- Minquan Ye: School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China; Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu, 241002, China
- Ao Li: School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China
Collapse
|
21
|
Samani ZR, Parker D, Wolf R, Hodges W, Brem S, Verma R. Distinct tumor signatures using deep learning-based characterization of the peritumoral microenvironment in glioblastomas and brain metastases. Sci Rep 2021; 11:14469. PMID: 34262079; PMCID: PMC8280204; DOI: 10.1038/s41598-021-93804-6.
Abstract
Tumor types are classically distinguished based on biopsies of the tumor itself, as well as a radiological interpretation using diverse MRI modalities. In the current study, the overarching goal is to demonstrate that primary (glioblastoma) and secondary (brain metastasis) malignancies can be differentiated based on the microstructure of the peritumoral region. This is achieved by exploiting the extracellular water differences between vasogenic edema and infiltrative tissue and training a convolutional neural network (CNN) on the Diffusion Tensor Imaging (DTI)-derived free water volume fraction. We obtained 85% accuracy in discriminating extracellular water differences between local patches in the peritumoral area of 66 glioblastoma and 40 metastasis patients in a cross-validation setting. On an independent test cohort consisting of 20 glioblastomas and 10 metastases, we achieved 93% accuracy in discriminating metastases from glioblastomas using majority voting on patches. This level of accuracy surpasses that of CNNs trained on other conventional DTI-based measures, such as fractional anisotropy (FA) and mean diffusivity (MD), that have been used in other studies. Additionally, the CNN captures the peritumoral heterogeneity better than conventional texture features, including Gabor and radiomic features. Our results demonstrate that the extracellular water content of the peritumoral tissue, as captured by the free water volume fraction, is best able to characterize the differences between infiltrative and vasogenic peritumoral regions, paving the way for its use in classifying and benchmarking peritumoral tissue with varying degrees of infiltration.
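The patient-level decision described above, majority voting over per-patch CNN predictions, can be sketched in a few lines (the label strings and tie handling here are illustrative assumptions, not the paper's implementation):

```python
from collections import Counter

def majority_vote(patch_labels):
    """Patient-level call from per-patch predictions: the class
    predicted for the most patches wins.  Counter.most_common breaks
    ties by first-seen order, so an odd number of patches gives a
    strict vote in the binary case."""
    return Counter(patch_labels).most_common(1)[0][0]
```

Aggregating many weak per-patch calls this way is what lifts the reported accuracy from 85% at the patch level to 93% at the patient level.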
Affiliation(s)
- Zahra Riahi Samani: Diffusion and Connectomics in Precision Healthcare Research Lab (DiCIPHR), Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Drew Parker: Diffusion and Connectomics in Precision Healthcare Research Lab (DiCIPHR), Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Ronald Wolf: Department of Radiology, Department of Neurosurgery, University of Pennsylvania, Philadelphia, PA, USA
- Wes Hodges: Founder at Synaptive Medical, Toronto, ON, Canada
- Steven Brem: Department of Radiology, Department of Neurosurgery, University of Pennsylvania, Philadelphia, PA, USA
- Ragini Verma: Diffusion and Connectomics in Precision Healthcare Research Lab (DiCIPHR), Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
22
Kommers I, Bouget D, Pedersen A, Eijgelaar RS, Ardon H, Barkhof F, Bello L, Berger MS, Conti Nibali M, Furtner J, Fyllingen EH, Hervey-Jumper S, Idema AJS, Kiesel B, Kloet A, Mandonnet E, Müller DMJ, Robe PA, Rossi M, Sagberg LM, Sciortino T, van den Brink WA, Wagemakers M, Widhalm G, Witte MG, Zwinderman AH, Reinertsen I, Solheim O, De Witt Hamer PC. Glioblastoma Surgery Imaging-Reporting and Data System: Standardized Reporting of Tumor Volume, Location, and Resectability Based on Automated Segmentations. Cancers (Basel) 2021; 13:2854. PMID: 34201021; PMCID: PMC8229389; DOI: 10.3390/cancers13122854.
Abstract
Treatment decisions for patients with presumed glioblastoma are based on tumor characteristics available from a preoperative MR scan. Tumor characteristics, including volume, location, and resectability, are often estimated or manually delineated. This process is time-consuming and subjective, so comparisons across cohorts, trials, or registries are subject to assessment bias. In this study, we propose a standardized Glioblastoma Surgery Imaging Reporting and Data System (GSI-RADS) based on an automated method of tumor segmentation that provides standard reports on tumor features that are potentially relevant for glioblastoma surgery. As clinical validation, we determine the agreement in extracted tumor features between the automated method and the current standard of manual segmentation on routine clinical MR scans before treatment. In an observational consecutive cohort of 1596 adult patients undergoing first-time surgery for glioblastoma at 13 institutions, we segmented gadolinium-enhancing tumor parts both by a human rater and by an automated algorithm. Tumor features were extracted from the segmentations of both methods and compared to assess differences, concordance, and equivalence. The laterality, contralateral infiltration, and laterality indices were in excellent agreement. The native and normalized tumor volumes had excellent agreement, consistency, and equivalence. Multifocality, but not the number of foci, had good agreement and equivalence. The location profiles of cortical and subcortical structures were in excellent agreement. The expected residual tumor volumes and resectability indices had excellent agreement, consistency, and equivalence. Tumor probability maps were in good agreement. In conclusion, automated segmentations are in excellent agreement with manual segmentations and practically equivalent with regard to tumor features that are potentially relevant for neurosurgical purposes. Standard GSI-RADS reports can be generated with open-access software.
Affiliation(s)
- Ivar Kommers: Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV Amsterdam, The Netherlands; Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV Amsterdam, The Netherlands
- David Bouget: Department of Health Research, SINTEF Digital, NO-7465 Trondheim, Norway
- André Pedersen: Department of Health Research, SINTEF Digital, NO-7465 Trondheim, Norway
- Roelant S. Eijgelaar: Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV Amsterdam, The Netherlands; Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV Amsterdam, The Netherlands
- Hilko Ardon: Department of Neurosurgery, Twee Steden Hospital, 5042 AD Tilburg, The Netherlands
- Frederik Barkhof: Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV Amsterdam, The Netherlands; Institutes of Neurology and Healthcare Engineering, University College London, London WC1E 6BT, UK
- Lorenzo Bello: Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università Degli Studi di Milano, 20122 Milano, Italy
- Mitchel S. Berger: Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, USA
- Marco Conti Nibali: Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università Degli Studi di Milano, 20122 Milano, Italy
- Julia Furtner: Department of Biomedical Imaging and Image-Guided Therapy, Medical University Vienna, 1090 Wien, Austria
- Even H. Fyllingen: Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olav’s Hospital, Trondheim University Hospital, NO-7030 Trondheim, Norway
- Shawn Hervey-Jumper: Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, USA
- Albert J. S. Idema: Department of Neurosurgery, Northwest Clinics, 1815 JD Alkmaar, The Netherlands
- Barbara Kiesel: Department of Neurosurgery, Medical University Vienna, 1090 Wien, Austria
- Alfred Kloet: Department of Neurosurgery, Haaglanden Medical Center, 2512 VA The Hague, The Netherlands
- Emmanuel Mandonnet: Department of Neurological Surgery, Hôpital Lariboisière, 75010 Paris, France
- Domenique M. J. Müller: Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV Amsterdam, The Netherlands; Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV Amsterdam, The Netherlands
- Pierre A. Robe: Department of Neurology and Neurosurgery, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
- Marco Rossi: Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università Degli Studi di Milano, 20122 Milano, Italy
- Lisa M. Sagberg: Department of Neurosurgery, St. Olav’s Hospital, Trondheim University Hospital, NO-7030 Trondheim, Norway
- Tommaso Sciortino: Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università Degli Studi di Milano, 20122 Milano, Italy
- Michiel Wagemakers: Department of Neurosurgery, University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, The Netherlands
- Georg Widhalm: Department of Neurosurgery, Medical University Vienna, 1090 Wien, Austria
- Marnix G. Witte: Department of Radiation Oncology, The Netherlands Cancer Institute, 1066 CX Amsterdam, The Netherlands
- Aeilko H. Zwinderman: Department of Clinical Epidemiology and Biostatistics, Amsterdam University Medical Centers, University of Amsterdam, 1105 AZ Amsterdam, The Netherlands
- Ingerid Reinertsen: Department of Health Research, SINTEF Digital, NO-7465 Trondheim, Norway; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway
- Ole Solheim: Department of Neurosurgery, St. Olav’s Hospital, Trondheim University Hospital, NO-7030 Trondheim, Norway; Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway
- Philip C. De Witt Hamer: Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV Amsterdam, The Netherlands; Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV Amsterdam, The Netherlands
23
Le NQK, Hung TNK, Do DT, Lam LHT, Dang LH, Huynh TT. Radiomics-based machine learning model for efficiently classifying transcriptome subtypes in glioblastoma patients from MRI. Comput Biol Med 2021; 132:104320. [PMID: 33735760 DOI: 10.1016/j.compbiomed.2021.104320] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Revised: 03/05/2021] [Accepted: 03/05/2021] [Indexed: 12/13/2022]
Abstract
BACKGROUND In glioma, transcriptome subtypes have been considered an important diagnostic and prognostic biomarker that may help improve treatment efficacy. However, existing methods for identifying transcriptome subtypes are limited by the relatively long detection period, the unattainability of tumor specimens via biopsy or surgery, and the fleeting nature of intralesional heterogeneity. In search of a superior model, this study evaluated the efficiency of an eXtreme Gradient Boosting (XGBoost)-based radiomics model for classifying transcriptome subtypes in glioblastoma patients. METHODS This retrospective study retrieved patients with pathologically diagnosed glioblastoma from the TCGA-GBM and IvyGAP cohorts and separated them into transcriptome subtype groups. Each tumor was then segmented into three regions on MRI: the enhancing tumor core (ET), the non-enhancing portion of the tumor core (NET), and peritumoral edema (ED). We subsequently extracted handcrafted radiomics features (n = 704) from multimodality MRI and applied two-level feature selection (Spearman correlation and F-score tests) to identify relevant features. RESULTS The feature selection approach identified the 13 most informative radiomics features. With these features, our XGBoost model achieved predictive accuracies of 70.9%, 73.3%, 88.4%, and 88.4% for the classical, mesenchymal, neural, and proneural subtypes, respectively, outperforming other models as well as previous works on the same dataset. CONCLUSION The combination of XGBoost and two-level feature selection (Spearman correlation and F-score) is a promising approach for classifying transcriptome subtypes with high performance and may encourage further research on radiomics-based GBM models.
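The second level of the feature selection described above, the F-score test, ranks each feature by between-class versus within-class spread. A small numpy sketch of the Chen-and-Lin-style F-score for binary labels on synthetic data (the four-subtype setting would apply it, e.g., one-vs-rest); the data and variable names are illustrative:

```python
import numpy as np

def f_score(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """F-score per feature for binary labels in {0, 1}.
    Numerator: squared distance of each class mean from the overall mean.
    Denominator: sum of within-class variances. Higher = more discriminative."""
    pos, neg = X[y == 1], X[y == 0]
    num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return num / den

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 4))   # 4 toy "radiomics" features
X[y == 1, 0] += 3.0             # only feature 0 separates the classes
scores = f_score(X, y)
ranking = np.argsort(scores)[::-1]   # best feature first
```

The top-k features by this ranking would then be passed to the classifier.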
Affiliation(s)
- Nguyen Quoc Khanh Le: Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei, 106, Taiwan; Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei, 106, Taiwan; Translational Imaging Research Center, Taipei Medical University Hospital, Taipei, 110, Taiwan
- Truong Nguyen Khanh Hung: International Master/Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 110, Taiwan; Orthopedic and Trauma Department, Cho Ray Hospital, Ho Chi Minh City, 70000, Viet Nam
- Duyen Thi Do: Graduate Institute of Biomedical Informatics, Taipei Medical University, Taipei, 106, Taiwan
- Luu Ho Thanh Lam: International Master/Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 110, Taiwan; Children's Hospital 2, Ho Chi Minh City, 70000, Viet Nam
- Luong Huu Dang: Department of Otolaryngology, University of Medicine and Pharmacy at Ho Chi Minh City, Ho Chi Minh City, 70000, Viet Nam
- Tuan-Tu Huynh: Department of Electrical Engineering, Yuan Ze University, No. 135, Yuandong Road, Zhongli, 320, Taoyuan, Taiwan; Department of Electrical Electronic and Mechanical Engineering, Lac Hong University, No. 10, Huynh Van Nghe Road, Bien Hoa, Dong Nai, 76120, Viet Nam
24
Zhong X, Amrehn M, Ravikumar N, Chen S, Strobel N, Birkhold A, Kowarschik M, Fahrig R, Maier A. Deep action learning enables robust 3D segmentation of body organs in various CT and MRI images. Sci Rep 2021; 11:3311. [PMID: 33558570 PMCID: PMC7870874 DOI: 10.1038/s41598-021-82370-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2020] [Accepted: 01/14/2021] [Indexed: 11/09/2022] Open
Abstract
In this study, we propose a novel point cloud based 3D registration and segmentation framework using reinforcement learning. An artificial agent, implemented as distinct actor and value networks, is trained to predict the optimal piece-wise linear transformation of a point cloud for the joint tasks of registration and segmentation. The actor network estimates a set of plausible actions and the value network aims to select the optimal action for the current observation. Point-wise features that comprise spatial positions (and surface normal vectors in the case of structured meshes), and their corresponding image features, are used to encode the observation and represent the underlying 3D volume. The actor and value networks are applied iteratively to estimate a sequence of transformations that enable accurate delineation of object boundaries. The proposed approach was extensively evaluated in both segmentation and registration tasks using a variety of challenging clinical datasets. Our method has fewer trainable parameters and lower computational complexity compared to the 3D U-Net, and it is independent of the volume resolution. We show that the proposed method is applicable to mono- and multi-modal segmentation tasks, achieving significant improvements over the state-of-the-art for the latter. The flexibility of the proposed framework is further demonstrated for a multi-modal registration application. As we learn to predict actions rather than a target, the proposed method is more robust compared to the 3D U-Net when dealing with previously unseen datasets acquired using different protocols or modalities. As a result, the proposed method provides a promising multi-purpose segmentation and registration framework, particularly in the context of image-guided interventions.
Affiliation(s)
- Xia Zhong: Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
- Mario Amrehn: Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
- Nishant Ravikumar: Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
- Shuqing Chen: Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
- Norbert Strobel: Institute of Medical Engineering, University of Applied Sciences, Würzburg-Schweinfurt, Germany
- Andreas Maier: Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
25
Gryska E, Schneiderman J, Björkman-Burtscher I, Heckemann RA. Automatic brain lesion segmentation on standard magnetic resonance images: a scoping review. BMJ Open 2021; 11:e042660. [PMID: 33514580 PMCID: PMC7849889 DOI: 10.1136/bmjopen-2020-042660] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/11/2020] [Revised: 01/09/2021] [Accepted: 01/12/2021] [Indexed: 12/11/2022] Open
Abstract
OBJECTIVES Medical image analysis practices face challenges that can potentially be addressed with algorithm-based segmentation tools. In this study, we map the field of automatic MR brain lesion segmentation to understand the clinical applicability of prevalent methods and study designs, as well as challenges and limitations in the field. DESIGN Scoping review. SETTING Three databases (PubMed, IEEE Xplore and Scopus) were searched with tailored queries. Studies were included based on predefined criteria. Emerging themes during consecutive title, abstract, methods and whole-text screening were identified. The full-text analysis focused on materials, preprocessing, performance evaluation and comparison. RESULTS Out of 2990 unique articles identified through the search, 441 articles met the eligibility criteria, with an estimated growth rate of 10% per year. We present a general overview and trends in the field with regard to publication sources, segmentation principles used and types of lesions. Algorithms are predominantly evaluated by measuring the agreement of segmentation results with a trusted reference. Few articles describe measures of clinical validity. CONCLUSIONS The observed reporting practices leave room for improvement with respect to replication, method comparison, and clinical applicability. To promote this improvement, we propose a list of recommendations for future studies in the field.
Affiliation(s)
- Emilia Gryska: Medical Radiation Sciences, Goteborgs universitet Institutionen for kliniska vetenskaper, Goteborg, Sweden
- Justin Schneiderman: Sektionen för klinisk neurovetenskap, Goteborgs Universitet Institutionen for Neurovetenskap och fysiologi, Goteborg, Sweden
- Rolf A Heckemann: Medical Radiation Sciences, Goteborgs universitet Institutionen for kliniska vetenskaper, Goteborg, Sweden
26
Isselmou AEK, Xu G, Shuai Z, Saminu S, Javaid I, Ahmad IS. Brain Tumor identification by Convolution Neural Network with Fuzzy C-mean Model Using MR Brain Images. INTERNATIONAL JOURNAL OF CIRCUITS, SYSTEMS AND SIGNAL PROCESSING 2021; 14:1096-1102. [DOI: 10.46300/9106.2020.14.137] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
Abstract
Medical image computing techniques are essential in helping doctors support their decisions in patient diagnosis. Given the complexity of brain structure, we chose MR brain images for their quality and high resolution. The objective of this article is to detect brain tumors using a convolutional neural network with a fuzzy c-means model; the advantage of the proposed model is its ability to achieve better accuracy, sensitivity, specificity, overall Dice, and recall values than previously published models. In addition, the novel model can identify brain tumors in different types of MR images. The proposed model obtained an accuracy of 98%.
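Fuzzy c-means, the clustering half of the hybrid model above, assigns each voxel a soft membership to every cluster rather than a hard label. A generic numpy sketch of the standard algorithm (not the paper's CNN-coupled variant), shown on toy 1-D intensity values:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means with fuzzifier m.
    Returns cluster centers (c x d) and membership matrix U (n x c),
    where each row of U sums to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        # Centers: membership-weighted means of the samples.
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Memberships: inverse-distance weighting, normalized per sample.
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated 1-D "intensity" clusters:
X = np.array([[0.10], [0.20], [0.15], [5.0], [5.2], [4.9]])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)   # hard labels from the fuzzy memberships
```

On image data, `X` would be the voxel feature vectors (e.g., multimodal intensities), and the memberships give soft tumor/background assignments.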
Affiliation(s)
- Abd El Kader Isselmou: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Guizhi Xu: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Zhang Shuai: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Sani Saminu: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Imran Javaid: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Isah Salim Ahmad: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
27
Scheufele K, Subramanian S, Biros G. Fully Automatic Calibration of Tumor-Growth Models Using a Single mpMRI Scan. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:193-204. [PMID: 32931431 PMCID: PMC8565678 DOI: 10.1109/tmi.2020.3024264] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Our objective is the calibration of mathematical tumor growth models from a single multiparametric scan. The target problem is the analysis of preoperative Glioblastoma (GBM) scans. To this end, we present a fully automatic tumor-growth calibration methodology that integrates a single-species reaction-diffusion partial differential equation (PDE) model for tumor progression with multiparametric Magnetic Resonance Imaging (mpMRI) scans to robustly extract patient-specific biomarkers, i.e., estimates for (i) the tumor cell proliferation rate, (ii) the tumor cell migration rate, and (iii) the original, localized site(s) of tumor initiation. Our method is based on a sparse reconstruction algorithm for the tumor initial location (TIL). This problem is particularly challenging due to nonlinearity, ill-posedness, and ill-conditioning. We propose a coarse-to-fine multi-resolution continuation scheme with parameter decomposition to stabilize the inversion. We demonstrate the robustness and practicality of our method by applying it to clinical data of 206 GBM patients. We analyze the extracted biomarkers and relate tumor origin with patient overall survival by mapping the former into a common atlas space. We present preliminary results that suggest improved accuracy for prediction of patient overall survival when a set of imaging features is augmented with estimated biophysical parameters. All extracted features, tumor initial positions, and biophysical growth parameters are made publicly available for further analysis. To our knowledge, this is the first fully automatic scheme that can handle multifocal tumors and can localize the TIL to a few millimeters.
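The single-species reaction-diffusion model referred to above is commonly the Fisher-KPP equation, ∂c/∂t = κΔc + ρc(1 − c), with κ the migration (diffusion) rate, ρ the proliferation rate, and c the normalized tumor cell density. A 1-D forward-Euler sketch of the forward model only (the paper solves the much harder inverse problem of calibrating κ, ρ, and the initial condition from a single scan):

```python
import numpy as np

def fisher_kpp_step(c, kappa, rho, dx, dt):
    """One explicit Euler step of dc/dt = kappa*c_xx + rho*c*(1-c) in 1-D,
    with reflecting (no-flux) boundaries via ghost points.
    Stable for dt <= dx**2 / (2*kappa)."""
    lap = (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    lap[0] = 2.0 * (c[1] - c[0]) / dx**2      # ghost point c[-1] = c[1]
    lap[-1] = 2.0 * (c[-2] - c[-1]) / dx**2   # ghost point beyond right edge
    return c + dt * (kappa * lap + rho * c * (1.0 - c))

x = np.linspace(0.0, 1.0, 101)
c = np.exp(-((x - 0.5) ** 2) / 0.001)   # localized tumor "seed" (the TIL)
c0 = c.copy()
for _ in range(200):
    c = fisher_kpp_step(c, kappa=1e-4, rho=5.0, dx=x[1] - x[0], dt=1e-3)
```

Diffusion spreads the seed outward while the logistic reaction term grows the density toward the carrying capacity c = 1, so total tumor burden increases over time.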
28
Wang G, Ma C. Application and prospect of radiomics in spinal cord and spine system diseases: A narrative review. GLIOMA 2021. [DOI: 10.4103/glioma.glioma_14_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
29
Abstract
Segmenting brain tumors accurately and reliably is an essential part of cancer diagnosis and treatment planning. Brain tumor segmentation in glioma patients is challenging because of the wide variety of tumor sizes, shapes, positions, scanning modalities, and acquisition protocols. Many convolutional neural network (CNN) based methods have been proposed to solve the problem of brain tumor segmentation and have achieved great success. However, most previous studies do not fully account for multiscale tumors and often fail to segment small tumors, which may have a significant impact on finding early-stage cancers. This paper deals with brain tumor segmentation at any size, but especially focuses on accurately identifying small tumors, thereby increasing segmentation performance across all sizes. Instead of using heavyweight networks with multiple resolutions or multiple kernel sizes, we propose a novel approach for better segmentation of small tumors using dilated convolution and multi-task learning. Dilated convolution is used for multiscale feature extraction, but it does not work well for very small tumors. To deal with small tumors, we employ multi-task learning, where an auxiliary task of feature reconstruction is used to retain the features of small tumors. Experiments show the effectiveness of the proposed method in segmenting small tumors. This paper contributes to the detection and segmentation of small tumors, which have seldom been considered before, and to the development of hierarchical analysis using multi-task learning.
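Dilated convolution, the multiscale ingredient above, spaces the kernel taps apart so the receptive field grows without adding weights. A 1-D numpy sketch of the operation (deep learning frameworks expose this as a `dilation` argument on their convolution layers):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """'Valid' 1-D cross-correlation with a dilated kernel: the k taps
    are spaced `dilation` samples apart, so the effective receptive
    field is (k-1)*dilation + 1 with only k weights."""
    k = len(w)
    span = (k - 1) * dilation + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = np.dot(x[i : i + span : dilation], w)
    return out

x = np.arange(10, dtype=float)
w = np.array([1.0, 0.0, -1.0])          # a simple difference kernel
y1 = dilated_conv1d(x, w, dilation=1)   # receptive field of 3 samples
y2 = dilated_conv1d(x, w, dilation=2)   # same 3 weights, field of 5 samples
```

Stacking layers with dilations 1, 2, 4, ... grows the receptive field exponentially while keeping the parameter count linear, which is why it suits multiscale tumor features.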
30
Machine Learning Based on a Multiparametric and Multiregional Radiomics Signature Predicts Radiotherapeutic Response in Patients with Glioblastoma. Behav Neurol 2020; 2020:1712604. [PMID: 33163122 PMCID: PMC7604589 DOI: 10.1155/2020/1712604] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2020] [Revised: 09/15/2020] [Accepted: 09/23/2020] [Indexed: 12/12/2022] Open
Abstract
Methods The MRI images, genetic data, and clinical data of 152 patients with GBM were analyzed: 122 patients from the TCIA dataset were used for training (n = 82) and validation (n = 40), and 30 patients from local hospitals served as an independent test dataset. Radiomics features were extracted from multiple regions of multiparametric MRI. Kaplan-Meier survival analysis was used to verify the ability of the imaging signature to predict the response of GBM patients to radiotherapy before an operation. Multivariate Cox regression including the radiomics signature and preoperative clinical risk factors was used to further improve the prediction of overall survival (OS) for individual GBM patients, presented in the form of a nomogram. Results The radiomics signature was built from eight selected features. Its C-index in the TCIA and independent test cohorts was 0.703 (P < 0.001) and 0.757 (P = 0.001), respectively. Multivariate Cox regression analysis confirmed that the radiomics signature (HR: 0.290, P < 0.001), age (HR: 1.023, P = 0.01), and KPS (HR: 0.968, P < 0.001) were independent preoperative risk factors for OS in GBM patients. Combining the radiomics signature with preoperative clinical risk factors further improved the performance of OS prediction in individual patients (C-index = 0.764 and 0.758 in the TCIA and test cohorts, respectively). Conclusion This study developed a radiomics signature that can predict the response of individual GBM patients to radiotherapy and may be a new supplement for precise GBM radiotherapy.
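The C-index reported above is Harrell's concordance index: over all comparable patient pairs (where the earlier follow-up time ends in an observed event), the fraction in which the higher predicted risk corresponds to the earlier event. A brute-force sketch on toy data (O(n²), which is fine at these cohort sizes):

```python
import numpy as np

def c_index(risk, time, event):
    """Harrell's concordance index. `event[i]` is 1 for an observed
    death, 0 for censoring; ties in predicted risk count as 0.5."""
    num = den = 0.0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            # Pair is comparable only if i's earlier time is an event.
            if time[i] < time[j] and event[i] == 1:
                den += 1.0
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

time = np.array([5.0, 10.0, 12.0, 20.0])    # months of follow-up
event = np.array([1, 1, 0, 1])              # patient 3 is censored
risk = np.array([0.9, 0.7, 0.6, 0.2])       # higher risk dies earlier
ci = c_index(risk, time, event)             # perfectly concordant here
```

A C-index of 0.5 corresponds to random prediction and 1.0 to perfect risk ordering, so the reported 0.703 to 0.764 values indicate moderate discrimination.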
31
Multiatlas Calibration of Biophysical Brain Tumor Growth Models with Mass Effect. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2020; 12262:551-560. [PMID: 34704089 DOI: 10.1007/978-3-030-59713-9_53] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
We present a 3D fully-automatic method for the calibration of partial differential equation (PDE) models of glioblastoma (GBM) growth with "mass effect", the deformation of brain tissue due to the tumor. We quantify the mass effect, tumor proliferation, tumor migration, and the localized tumor initial condition from a single multiparametric Magnetic Resonance Imaging (mpMRI) patient scan. The PDE is a reaction-advection-diffusion partial differential equation coupled with linear elasticity equations to capture mass effect. The single-scan calibration problem is notoriously difficult because the precancerous (healthy) brain anatomy is unknown. To solve this inherently ill-posed and ill-conditioned optimization problem, we introduce a novel inversion scheme that uses multiple brain atlases as proxies for the healthy precancerous patient brain, resulting in robust and reliable parameter estimation. We apply our method to both synthetic and clinical datasets representative of the heterogeneous spatial landscape typically observed in glioblastomas to demonstrate the validity and performance of our methods. In the synthetic data, we report calibration errors (due to the ill-posedness and our solution scheme) in the 10%-20% range. In the clinical data, we report good quantitative agreement with the observed tumor and qualitative agreement with the mass effect (for which we do not have a ground truth). Our method uses a minimal set of parameters and provides both global and local quantitative measures of tumor infiltration and mass effect.
32
Le NQK, Do DT, Chiu FY, Yapp EKY, Yeh HY, Chen CY. XGBoost Improves Classification of MGMT Promoter Methylation Status in IDH1 Wildtype Glioblastoma. J Pers Med 2020; 10:jpm10030128. [PMID: 32942564 PMCID: PMC7563334 DOI: 10.3390/jpm10030128] [Citation(s) in RCA: 61] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2020] [Revised: 09/03/2020] [Accepted: 09/09/2020] [Indexed: 02/07/2023] Open
Abstract
Approximately 96% of patients with glioblastomas (GBM) have IDH1 wildtype GBMs, characterized by extremely poor prognosis, partly due to resistance to standard temozolomide treatment. O6-Methylguanine-DNA methyltransferase (MGMT) promoter methylation status is a crucial prognostic biomarker for alkylating chemotherapy resistance in patients with GBM. However, MGMT methylation status identification methods, in which the tumor tissue is often undersampled, are time-consuming and expensive. Currently, presurgical noninvasive imaging methods are used to identify biomarkers to predict MGMT methylation status. We evaluated a novel radiomics-based eXtreme Gradient Boosting (XGBoost) model to identify MGMT promoter methylation status in patients with IDH1 wildtype GBM. This retrospective study enrolled 53 patients with pathologically proven GBM and tested MGMT methylation and IDH1 status. Radiomics features were extracted from multimodality MRI and tested by F-score analysis to identify important features to improve our model. We identified nine radiomics features that reached an area under the curve of 0.896, which outperformed other classifiers reported previously. These features could be important biomarkers for identifying MGMT methylation status in IDH1 wildtype GBM. The combination of radiomics feature extraction and F-score feature selection significantly improved the performance of the XGBoost model, which may have implications for patient stratification and therapeutic strategy in GBM.
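The reported area under the curve can be computed without tracing the ROC curve, via its Mann-Whitney interpretation: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A numpy sketch on toy scores:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: fraction of
    (positive, negative) pairs where the positive scores higher,
    with ties counting 0.5."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([0, 0, 0, 1, 1, 1])           # 1 = methylated (toy)
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.2])
auc = roc_auc(scores, labels)                    # 7 of 9 pairs ordered correctly
```

On this toy example 7 of the 9 positive/negative pairs are correctly ordered, giving AUC ≈ 0.778; an AUC of 0.896 as reported above means roughly 9 of 10 such pairs are ordered correctly.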
Affiliation(s)
- Nguyen Quoc Khanh Le: Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei City 106, Taiwan; Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei City 106, Taiwan
- Duyen Thi Do: Faculty of Applied Sciences, Ton Duc Thang University, Ho Chi Minh City 70000, Vietnam
- Fang-Ying Chiu: Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei City 106, Taiwan
- Edward Kien Yee Yapp: Singapore Institute of Manufacturing Technology, 2 Fusionopolis Way, #08-04, Innovis, Singapore 138634, Singapore
- Hui-Yuan Yeh: Medical Humanities Research Cluster, School of Humanities, Nanyang Technological University, 48 Nanyang Ave, Singapore 639798, Singapore
- Cheng-Yu Chen: Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei City 106, Taiwan; Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei City 106, Taiwan; Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei 11031, Taiwan; Department of Medical Imaging, Taipei Medical University Hospital, Taipei 11031, Taiwan
- Correspondence: N.Q.K.L. and C.-Y.C.; Tel.: +886-266-382-736 (ext. 1992); Fax: +886-2-2732-1956
33
Zhang L, Zhang J, Shen P, Zhu G, Li P, Lu X, Zhang H, Shah SA, Bennamoun M. Block Level Skip Connections Across Cascaded V-Net for Multi-Organ Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2782-2793. [PMID: 32091995 DOI: 10.1109/tmi.2020.2975347] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Multi-organ segmentation is a challenging task due to the label imbalance and structural differences between different organs. In this work, we propose an efficient cascaded V-Net model to improve the performance of multi-organ segmentation by establishing dense Block Level Skip Connections (BLSC) across cascaded V-Net. Our model can take full advantage of features from the first stage network and make the cascaded structure more efficient. We also combine stacked small and large kernels with an inception-like structure to help our model learn more patterns, which produces superior results for multi-organ segmentation. In addition, some small organs are commonly occluded by large organs and have unclear boundaries with other surrounding tissues, which makes them hard to segment. We therefore first locate the small organs through a multi-class network and crop them randomly with the surrounding region, then segment them with a single-class network. We evaluated our model on the SegTHOR 2019 challenge unseen testing set and the Multi-Atlas Labeling Beyond the Cranial Vault challenge validation set. Our model achieved average Dice score gains of 1.62 and 3.90 percentage points over traditional cascaded networks on these two datasets, respectively. For hard-to-segment small organs, such as the esophagus in the SegTHOR 2019 challenge, our technique achieved a gain of 5.63 percentage points in Dice score, and four organs in the Multi-Atlas Labeling Beyond the Cranial Vault challenge achieved an average Dice score gain of 5.27 percentage points.
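The Dice score used throughout these comparisons measures overlap between a predicted and a reference mask: 2|A∩B| / (|A| + |B|). A minimal numpy sketch on two overlapping toy masks:

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks.
    1.0 = perfect overlap, 0.0 = disjoint; eps guards empty masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8), dtype=bool)
a[2:6, 2:6] = True          # 16-pixel square
b = np.zeros((8, 8), dtype=bool)
b[3:7, 3:7] = True          # 16-pixel square shifted by one, 9 pixels overlap
d = dice(a, b)              # 2*9 / (16+16) = 0.5625
```

Because Dice is a ratio, "percentage point" gains like the 1.62 and 3.90 above are absolute increases in this score expressed on a 0-100 scale.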
34
An Intelligent Diagnosis Method of Brain MRI Tumor Segmentation Using Deep Convolutional Neural Network and SVM Algorithm. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:6789306. [PMID: 32733596 PMCID: PMC7376410 DOI: 10.1155/2020/6789306] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Accepted: 07/01/2020] [Indexed: 12/30/2022]
Abstract
Among currently proposed brain tumor segmentation methods, those based on traditional image processing and machine learning are not ideal, so deep learning-based methods are widely used. Among deep learning approaches, convolutional network models achieve good segmentation results, but deep convolutional network models suffer from a large number of parameters and substantial information loss during encoding and decoding. This paper proposes a deep convolutional neural network fused with a support vector machine (DCNN-F-SVM). The proposed brain tumor segmentation model comprises three stages. In the first stage, a deep convolutional neural network is trained to learn the mapping from image space to tumor-marker space. In the second stage, the predicted labels obtained from training the deep convolutional neural network are input into an integrated support vector machine classifier together with the test images. In the third stage, the deep convolutional neural network and the integrated support vector machine are connected in series to train a deep classifier. Each model was run on the BraTS dataset and a self-made dataset to segment brain tumors. The segmentation results show that the proposed model performs significantly better than either the deep convolutional neural network or the integrated SVM classifier alone.
35
Multimodal MRI Brain Tumor Image Segmentation Using Sparse Subspace Clustering Algorithm. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:8620403. [PMID: 32714431 PMCID: PMC7355351 DOI: 10.1155/2020/8620403] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Revised: 05/24/2020] [Accepted: 06/08/2020] [Indexed: 11/17/2022]
Abstract
Brain tumors are among the deadliest diseases, with a high mortality rate. The shape and size of a tumor vary randomly as it grows. Brain tumor segmentation is a computer-assisted diagnosis technique that separates tumor structures, such as edema, active tumor, and necrotic tissue, from normal brain tissue. Magnetic resonance imaging (MRI) exposes the patient to no ionizing radiation, images structural tissue well, and can produce tomographic images in any orientation, so clinicians commonly analyze brain tumors on MRI. In these images the tumor structure is characterized only by grayscale changes, and images acquired on different equipment or under different conditions may differ, which makes it difficult for traditional segmentation methods to handle brain tumor images well. Moreover, single-modality MRI contains incomplete tumor information, so segmenting single-modality images rarely meets clinical needs. This paper introduces a sparse subspace clustering (SSC) algorithm for the diagnosis of multimodal MRI brain tumor images. In the absence of added noise, the proposed algorithm outperforms traditional methods; compared with the top 15 entries of the BraTS 2015 competition, its accuracy is not much different, placing roughly between 10th and 15th. To verify noise resistance, 5%, 10%, 15%, and 20% Gaussian noise was added to the test images; the experimental results show that the proposed algorithm tolerates noise better than the comparison algorithm.
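The core of SSC, expressing each sample as a sparse combination of the other samples and clustering the resulting affinity graph, can be sketched as follows. This is a toy illustration on synthetic two-subspace data (standing in for multimodal voxel features), not the paper's implementation; the sample counts, dimensions, and Lasso penalty are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)

# Toy stand-in for multimodal voxel features: two groups of samples drawn
# from different low-dimensional subspaces (one shifted), 8 "modalities" each.
A = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 8))
B = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 8)) + 5.0
X = np.vstack([A, B])
truth = np.array([0] * 60 + [1] * 60)

# Self-representation step of SSC: express each sample as a sparse linear
# combination of all the others (self-weight excluded).
n = X.shape[0]
C = np.zeros((n, n))
for i in range(n):
    mask = np.arange(n) != i
    lasso = Lasso(alpha=0.05, max_iter=5000)
    lasso.fit(X[mask].T, X[i])
    C[i, mask] = lasso.coef_

# Symmetric affinity from the sparse coefficients, then spectral clustering.
W = np.abs(C) + np.abs(C).T
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)

# Agreement with ground truth, up to label permutation.
acc = max((labels == truth).mean(), (labels != truth).mean())
```

Samples drawn from the same subspace tend to select each other in the sparse representation, so the affinity graph decomposes into near-disconnected blocks that spectral clustering recovers.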
|
36
|
Steed TC, Treiber JM, Taha B, Engin HB, Carter H, Patel KS, Dale AM, Carter BS, Chen CC. Glioblastomas located in proximity to the subventricular zone (SVZ) exhibited enrichment of gene expression profiles associated with the cancer stem cell state. J Neurooncol 2020; 148:455-462. [PMID: 32556864 DOI: 10.1007/s11060-020-03550-4] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2020] [Accepted: 05/29/2020] [Indexed: 02/07/2023]
Abstract
INTRODUCTION Conflicting results have been reported on the association between glioblastoma proximity to the subventricular zone (SVZ) and enrichment of cancer stem cell properties. Here, we examined this hypothesis using magnetic resonance (MR) images from 217 glioblastoma subjects in The Cancer Imaging Archive (TCIA). METHODS Pre-operative MR images were segmented automatically into contrast-enhancing (CE) tumor volumes using Iterative Probabilistic Voxel Labeling (IPVL). Distances were calculated from the centroid of the CE tumor volume to the SVZ and correlated with the gene expression profiles of the corresponding glioblastomas. Correlative analyses were performed between SVZ distance, gene expression patterns, and clinical survival. RESULTS Glioblastomas located in proximity to the SVZ showed increased mRNA expression patterns associated with the cancer stem-cell state, including CD133 (P = 0.006). Consistent with previous observations that glioblastoma stem cells exhibit increased DNA repair capacity, glioblastomas in proximity to the SVZ also showed increased expression of DNA repair genes, including MGMT (P = 0.018). Reflecting this enhanced DNA repair capacity, the genomes of glioblastomas in SVZ proximity harbored fewer single nucleotide polymorphisms than those located distant from the SVZ (P = 0.003). Concordant with the notion that glioblastoma stem cells are more aggressive and refractory to therapy, patients with glioblastoma in proximity to the SVZ exhibited poorer progression-free and overall survival (P < 0.01). CONCLUSION An unbiased analysis of TCIA data suggests that glioblastomas located in proximity to the SVZ exhibit mRNA expression profiles associated with stem cell properties and increased DNA repair capacity, and are associated with poor clinical survival.
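The correlative step, relating centroid-to-SVZ distance to marker expression, reduces to a simple computation. Below is a hedged sketch on simulated values: the coordinates, the single SVZ reference point, and the distance-expression relationship are all invented for illustration and do not reproduce the study's segmentation-based distances.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical values (invented for illustration): per-patient tumor centroid
# coordinates in mm, a single SVZ reference point, and a stemness marker level.
n = 50
centroids = rng.uniform(0.0, 100.0, size=(n, 3))
svz_point = np.array([50.0, 50.0, 50.0])

# Distance from each contrast-enhancing tumor centroid to the SVZ.
dist = np.linalg.norm(centroids - svz_point, axis=1)

# Simulate expression that decreases with SVZ distance, mirroring the
# reported direction of the CD133 association.
expression = -0.05 * dist + rng.normal(scale=0.5, size=n)

# Rank correlation between SVZ distance and expression.
rho, p_value = stats.spearmanr(dist, expression)
```

A rank correlation is a natural choice here because expression levels are rarely normally distributed across patients.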
Affiliation(s)
- Tyler C Steed
- Department of Neurosurgery, Emory School of Surgery, Atlanta, GA, USA
| | - Jeffrey M Treiber
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
| | - Birra Taha
- Department of Neurosurgery, University of Minnesota, D429 Mayo Memorial Building, 420 Delaware St. S. E., MMC96, Minneapolis, MN, 55455, USA
| | - H Billur Engin
- Division of Medical Genetics, Department of Medicine, University of California, La Jolla, San Diego, CA, USA
| | - Hannah Carter
- Division of Medical Genetics, Department of Medicine, University of California, La Jolla, San Diego, CA, USA
| | - Kunal S Patel
- Department of Neurosurgery, University of California Los Angeles, Los Angeles, CA, USA
| | - Anders M Dale
- Multimodal Imaging Laboratory, University of California San Diego, La Jolla, San Diego, CA, USA
- Department of Radiology, University of California San Diego, La Jolla, San Diego, CA, USA
| | - Bob S Carter
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, USA
| | - Clark C Chen
- Department of Neurosurgery, University of Minnesota, D429 Mayo Memorial Building, 420 Delaware St. S. E., MMC96, Minneapolis, MN, 55455, USA.
| |
|
37
|
Mang A, Bakas S, Subramanian S, Davatzikos C, Biros G. Integrated Biophysical Modeling and Image Analysis: Application to Neuro-Oncology. Annu Rev Biomed Eng 2020; 22:309-341. [PMID: 32501772 PMCID: PMC7520881 DOI: 10.1146/annurev-bioeng-062117-121105] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
Central nervous system (CNS) tumors come with vastly heterogeneous histologic, molecular, and radiographic landscapes, rendering their precise characterization challenging. The rapidly growing fields of biophysical modeling and radiomics have shown promise in better characterizing the molecular, spatial, and temporal heterogeneity of tumors. Integrative analysis of CNS tumors, including clinically acquired multi-parametric magnetic resonance imaging (mpMRI) and the inverse problem of calibrating biophysical models to mpMRI data, assists in identifying macroscopic quantifiable tumor patterns of invasion and proliferation, potentially leading to improved (a) detection/segmentation of tumor subregions and (b) computer-aided diagnostic/prognostic/predictive modeling. This article presents a summary of (a) biophysical growth modeling and simulation, (b) inverse problems for model calibration, (c) these models' integration with imaging workflows, and (d) their application to clinically relevant studies. We anticipate that such quantitative integrative analysis may even be beneficial in a future revision of the World Health Organization (WHO) classification for CNS tumors, ultimately improving patient survival prospects.
Affiliation(s)
- Andreas Mang
- Department of Mathematics, University of Houston, Houston, Texas 77204, USA;
| | - Spyridon Bakas
- Department of Mathematics, University of Houston, Houston, Texas 77204, USA;
| | - Shashank Subramanian
- Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas 78712, USA; ,
| | - Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA); Department of Radiology; and Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA; ,
| | - George Biros
- Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas 78712, USA; ,
| |
|
38
|
Parker D, Ould Ismail AA, Wolf R, Brem S, Alexander S, Hodges W, Pasternak O, Caruyer E, Verma R. Freewater estimatoR using iNtErpolated iniTialization (FERNET): Characterizing peritumoral edema using clinically feasible diffusion MRI data. PLoS One 2020; 15:e0233645. [PMID: 32469944 PMCID: PMC7259683 DOI: 10.1371/journal.pone.0233645] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2019] [Accepted: 05/10/2020] [Indexed: 12/19/2022] Open
Abstract
Characterization of healthy versus pathological tissue in the peritumoral area is confounded by the presence of edema, making free water estimation the key concern in modeling tissue microstructure. Most methods that model tissue microstructure are either based on advanced acquisition schemes not readily available in the clinic or are not designed to address the challenge of edema. This underscores the need for a robust free water elimination (FWE) method that estimates free water in pathological tissue but can be used with clinically prevalent single-shell diffusion tensor imaging data. FWE in single-shell data requires the fitting of a bi-compartment model, which is an ill-posed problem. Its solution requires optimization, which relies on an initialization step. We propose a novel initialization approach for FWE, FERNET, which improves the estimation of free water in edematous and infiltrated peritumoral regions, using single-shell diffusion MRI data. The method has been extensively investigated on simulated data and a healthy dataset. Additionally, it has been applied to clinically acquired data from brain tumor patients to characterize the peritumoral region and improve tractography within it.
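The bi-compartment model underlying FWE can be made concrete. The sketch below is not FERNET itself: it shows the standard two-compartment signal equation and why single-shell data make the fit ill-posed — with one b-value, the free-water fraction f is only identifiable once the tissue diffusivity is pinned by an initialization, which is exactly the role the initialization step plays. The voxel values are simulated.

```python
import numpy as np

# Standard two-compartment (free water elimination) signal model:
#   S(b) / S0 = (1 - f) * exp(-b * d_tissue) + f * exp(-b * d_water)
# where f is the free-water volume fraction and d_water is fixed at the
# diffusivity of free water at body temperature.
D_WATER = 3.0e-3  # mm^2/s

def freewater_fraction(s_ratio, b, d_tissue):
    """Closed-form f for one b-value once d_tissue is pinned by an
    initialization; with a single shell, f and d_tissue trade off,
    which is the ill-posedness a good initialization must resolve."""
    tissue = np.exp(-b * d_tissue)
    water = np.exp(-b * D_WATER)
    f = (s_ratio - tissue) / (water - tissue)
    return float(np.clip(f, 0.0, 1.0))

# Simulated edematous voxel: 40% free water, tissue diffusivity 0.7e-3 mm^2/s,
# observed at the clinically common b = 1000 s/mm^2.
b = 1000.0
f_true, d_true = 0.4, 0.7e-3
s_ratio = (1 - f_true) * np.exp(-b * d_true) + f_true * np.exp(-b * D_WATER)

f_est = freewater_fraction(s_ratio, b, d_tissue=d_true)
```

With the correct d_tissue the recovery is exact; with a biased d_tissue the same signal maps to a different f, which is why the choice of initialization matters so much in edematous tissue.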
Affiliation(s)
- Drew Parker
- DiCIPHR (Diffusion and Connectomics in Precision Healthcare Research) Lab, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
| | - Abdol Aziz Ould Ismail
- DiCIPHR (Diffusion and Connectomics in Precision Healthcare Research) Lab, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
| | - Ronald Wolf
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
| | - Steven Brem
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
| | | | - Wes Hodges
- Synaptive Medical Inc., Toronto, ON, Canada
| | - Ofer Pasternak
- Departments of Psychiatry & Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States of America
| | | | - Ragini Verma
- DiCIPHR (Diffusion and Connectomics in Precision Healthcare Research) Lab, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- * E-mail: ,
| |
|
39
|
Pati S, Singh A, Rathore S, Gastounioti A, Bergman M, Ngo P, Ha SM, Bounias D, Minock J, Murphy G, Li H, Bhattarai A, Wolf A, Sridaran P, Kalarot R, Akbari H, Sotiras A, Thakur SP, Verma R, Shinohara RT, Yushkevich P, Fan Y, Kontos D, Davatzikos C, Bakas S. The Cancer Imaging Phenomics Toolkit (CaPTk): Technical Overview. BRAINLESION : GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES. BRAINLES (WORKSHOP) 2020; 11993:380-394. [PMID: 32754723 PMCID: PMC7402244 DOI: 10.1007/978-3-030-46643-5_38] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
The purpose of this manuscript is to provide an overview of the technical specifications and architecture of the Cancer Imaging Phenomics Toolkit (CaPTk; www.cbica.upenn.edu/captk), a cross-platform, open-source, easy-to-use, and extensible software platform for analyzing 2D and 3D images, currently focusing on radiographic scans of brain, breast, and lung cancer. The primary aim of this platform is to enable swift and efficient translation of cutting-edge academic research into clinically useful tools relating to clinical quantification, analysis, predictive modeling, decision-making, and reporting workflow. CaPTk builds upon established open-source software toolkits, such as the Insight Toolkit (ITK) and OpenCV, to bring together advanced computational functionality. This functionality describes specialized, as well as general-purpose, image analysis algorithms developed during active multi-disciplinary collaborative research studies to address real clinical requirements. The target audience of CaPTk consists of both computational scientists and clinical experts. For the former it provides (i) an efficient image viewer offering the ability to integrate new algorithms, and (ii) a library of readily available clinically relevant algorithms, allowing batch-processing of multiple subjects. For the latter it facilitates the use of complex algorithms for clinically relevant studies through a user-friendly interface, eliminating the prerequisite of a substantial computational background. CaPTk's long-term goal is to provide widely used technology that makes use of advanced quantitative imaging analytics in cancer prediction, diagnosis, and prognosis, leading toward a better understanding of the biological mechanisms of cancer development.
Affiliation(s)
- Sarthak Pati
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| | - Ashish Singh
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| | - Saima Rathore
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Aimilia Gastounioti
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Mark Bergman
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| | - Phuc Ngo
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Sung Min Ha
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Dimitrios Bounias
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| | - James Minock
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| | - Grayson Murphy
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| | - Hongming Li
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Amit Bhattarai
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| | - Adam Wolf
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| | - Patmaa Sridaran
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| | - Ratheesh Kalarot
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| | - Hamed Akbari
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Aristeidis Sotiras
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology and Institute for Informatics, School of Medicine, Washington University in St. Louis, Saint Louis, MO, USA
| | - Siddhesh P Thakur
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| | - Ragini Verma
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Russell T Shinohara
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), University of Pennsylvania, Philadelphia, PA, USA
| | - Paul Yushkevich
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Penn Image Computing and Science Lab., University of Pennsylvania (PICSL), Philadelphia, PA, USA
| | - Yong Fan
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Despina Kontos
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| |
|
40
|
Scheufele K, Subramanian S, Mang A, Biros G, Mehl M. IMAGE-DRIVEN BIOPHYSICAL TUMOR GROWTH MODEL CALIBRATION. SIAM JOURNAL ON SCIENTIFIC COMPUTING : A PUBLICATION OF THE SOCIETY FOR INDUSTRIAL AND APPLIED MATHEMATICS 2020; 42:B549-B580. [PMID: 33071533 PMCID: PMC7561052 DOI: 10.1137/19m1275280] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
We present a novel formulation for the calibration of a biophysical tumor growth model from a single-time-snapshot, multiparametric magnetic resonance imaging (MRI) scan of a glioblastoma patient. Tumor growth models are typically nonlinear parabolic partial differential equations (PDEs), so calibrating them requires observations at more than one time point; we therefore have to generate a second snapshot to extract significant information from the single patient snapshot. We create this two-snapshot scenario as follows. We use an atlas (an average of several scans of healthy individuals) as a substitute for an earlier, pretumor MRI scan of the patient. Then, using the patient scan and the atlas, we combine image-registration and parameter-estimation algorithms to achieve a better estimate of the healthy patient scan and of the tumor growth parameters that are consistent with the data. Our scheme is based on our recent work (Scheufele et al., Comput. Methods Appl. Mech. Engrg., to appear), but we apply a different, novel scheme in which, in contrast to the previous work, the tumor growth simulation is executed in the patient brain domain rather than the atlas domain, yielding more meaningful patient-specific results. As a basis, we use a PDE-constrained optimization framework. We derive a modified Picard-iteration-type solution strategy in which we alternate between registration and tumor parameter estimation in a new way. In addition, we consider an ℓ1 sparsity constraint on the initial condition for the tumor and integrate it with the new joint inversion scheme. We solve the subproblems with a reduced-space, inexact Gauss-Newton-Krylov/quasi-Newton method. We present results using real brain data with synthetic tumor data showing that the new scheme reconstructs the tumor parameters more accurately and reliably than our earlier scheme.
Affiliation(s)
- Klaudius Scheufele
- Institute for Parallel and Distributed Systems, Universität Stuttgart, Universitätsstraße 38, 70569 Stuttgart, Germany
| | - Shashank Subramanian
- Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, 201 E. 24th Street, Austin, TX 78712-1229
| | - Andreas Mang
- Department of Mathematics, University of Houston, 3551 Cullen Blvd., Houston, TX 77204-3008
| | - George Biros
- Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, 201 E. 24th Street, Austin, TX 78712-1229
| | - Miriam Mehl
- Institute for Parallel and Distributed Systems, Universität Stuttgart, Universitätsstraße 38, 70569 Stuttgart, Germany
| |
|
41
|
Lin P, Peng YT, Gao RZ, Wei Y, Li XJ, Huang SN, Fang YY, Wei ZX, Huang ZG, Yang H, Chen G. Radiomic profiles in diffuse glioma reveal distinct subtypes with prognostic value. J Cancer Res Clin Oncol 2020; 146:1253-1262. [PMID: 32065261 DOI: 10.1007/s00432-020-03153-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2019] [Accepted: 02/10/2020] [Indexed: 01/22/2023]
Abstract
PURPOSE To evaluate a radiomic approach for stratifying diffuse gliomas into subtypes with distinct prognoses and to provide additional resolution of their clinicopathological and molecular characteristics. METHODS For this retrospective study, a total of 704 radiomic features were extracted from the multi-channel MRI data of 166 diffuse gliomas. Survival-associated radiomic features were identified and used to distinguish glioma subtypes via consensus clustering. Multi-layered molecular data were used to examine the clinical and molecular differences between radiomic subtypes. The relative infiltration profiles of an array of immune cell types were measured with a gene set variation analysis approach to explore differences in the tumor immune microenvironment. RESULTS In total, 318 radiomic features across 6 categories were significantly correlated with the overall survival of glioma patients. Consensus clustering of the survival-associated radiomic features separated two subgroups with distinct prognoses. Histological stage and molecular factors, including IDH status and MGMT promoter methylation status, differed significantly between the two subtypes. Furthermore, gene functional enrichment analysis and immune infiltration pattern analysis suggested that the poorer-prognosis subtype may respond better to immunotherapy. CONCLUSION A radiomic model derived from multi-parametric MRI of gliomas successfully risk-stratified diffuse glioma patients. These data suggest that radiomics provides an alternative approach to survival estimation and may improve clinical decision-making.
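The screen-then-cluster pipeline can be sketched as below. This is a simplified stand-in on synthetic data, not the paper's analysis: Spearman screening replaces the survival analysis used for feature selection, k-means replaces consensus clustering, and the two-group structure in the features is an assumption planted for illustration.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Synthetic stand-in: 100 patients x 30 radiomic features, with a latent
# two-group structure carried by the first 5 features (assumed for
# illustration) and survival times that depend on the group.
n, p = 100, 30
group = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, p))
X[:, :5] += 2.0 * group[:, None]
survival = np.exp(-group.astype(float)) * rng.lognormal(sigma=0.3, size=n)

# Step 1: screen for survival-associated features (Spearman correlation
# here is a simple proxy for a formal survival analysis).
keep = [j for j in range(p)
        if stats.spearmanr(X[:, j], survival).pvalue < 0.05]

# Step 2: cluster patients on the selected features into two subtypes
# (k-means stands in for consensus clustering).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[:, keep])

# Step 3: compare the subtypes' survival and check subtype recovery.
separation = abs(np.median(survival[labels == 0]) -
                 np.median(survival[labels == 1]))
agreement = max((labels == group).mean(), (labels != group).mean())
```

Screening before clustering keeps the subtype definition anchored to prognosis, rather than letting prognostically irrelevant feature variance dominate the clusters.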
Affiliation(s)
- Peng Lin
- Department of Ultrasound, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, People's Republic of China
| | - Yu-Ting Peng
- Department of Ultrasound, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, People's Republic of China
| | - Rui-Zhi Gao
- Department of Ultrasound, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, People's Republic of China
| | - Yan Wei
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, People's Republic of China
| | - Xiao-Jiao Li
- Department of PET-CT, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, People's Republic of China
| | - Su-Ning Huang
- Department of Radiotherapy, Guangxi Medical University Cancer Hospital, 71 Hedi Road, Nanning, Guangxi Zhuang Autonomous Region, People's Republic of China
| | - Ye-Ying Fang
- Department of Radiotherapy, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, People's Republic of China
| | - Zhu-Xin Wei
- Department of Radiotherapy, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, People's Republic of China
| | - Zhi-Guang Huang
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, People's Republic of China
| | - Hong Yang
- Department of Ultrasound, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, People's Republic of China
| | - Gang Chen
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, People's Republic of China.
| |
|
42
|
Subramanian S, Scheufele K, Mehl M, Biros G. WHERE DID THE TUMOR START? AN INVERSE SOLVER WITH SPARSE LOCALIZATION FOR TUMOR GROWTH MODELS. INVERSE PROBLEMS 2020; 36:045006. [PMID: 33746330 PMCID: PMC7971430 DOI: 10.1088/1361-6420/ab649c] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
We present a numerical scheme for solving an inverse problem of parameter estimation in tumor growth models for glioblastoma, a form of aggressive primary brain tumor. The growth model is a reaction-diffusion partial differential equation (PDE) for the tumor concentration. We use a PDE-constrained optimization formulation for the inverse problem. The unknown parameters are the reaction coefficient (proliferation), the diffusion coefficient (infiltration), and the initial condition field for the tumor PDE. Segmentations of magnetic resonance imaging (MRI) scans drive the inverse problem, with segmented tumor regions serving as partial observations of the tumor concentration. As in most clinical scenarios, we use data from a single time snapshot. Moreover, the precise time relative to the initiation of the tumor is unknown, which poses an additional difficulty for inversion. We perform a frozen-coefficient spectral analysis and show that the inverse problem is severely ill-posed. We introduce a biophysically motivated regularization on the structure and magnitude of the tumor initial condition. In particular, we assume that the tumor starts at a few locations (enforced with a sparsity constraint on the initial condition) and that the initial condition magnitude in the maximum norm is equal to one. We solve the resulting optimization problem using an inexact quasi-Newton method combined with a compressive sampling algorithm for the sparsity constraint. Our implementation uses the PETSc and AccFFT libraries. We conduct numerical experiments on synthetic and clinical images to highlight the improved performance of our solver over a previously existing solver that uses standard two-norm regularization for the calibration parameters. The existing solver is unable to localize the initial condition, whereas our new solver can localize the initial condition and recover infiltration and proliferation. On clinical datasets (for which the ground truth is unknown), our solver yields qualitatively different solutions compared to the two-norm-regularized solver.
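The forward model being inverted is a reaction-diffusion PDE for the tumor concentration c, with a diffusion coefficient D (infiltration) and a reaction coefficient rho (proliferation). A minimal 1-D explicit finite-difference sketch follows; the actual solver is far more sophisticated (3-D, spectral discretization, PDE-constrained optimization), so this only illustrates the forward dynamics and the sparse "few seed locations" structure of the initial condition.

```python
import numpy as np

# Forward reaction-diffusion growth model for the tumor concentration c:
#   dc/dt = D * Laplacian(c) + rho * c * (1 - c)
# with D the diffusion (infiltration) and rho the reaction (proliferation)
# coefficient. 1-D explicit finite differences with periodic boundaries
# (a simplification; np.roll implements the wrap-around Laplacian stencil).
def grow(c0, D, rho, dx, dt, steps):
    c = c0.copy()
    for _ in range(steps):
        lap = (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx**2
        c = c + dt * (D * lap + rho * c * (1.0 - c))
    return np.clip(c, 0.0, 1.0)

# Sparse initial condition: tumor seeded at a single grid point, the kind
# of structure the sparsity constraint on the inversion is designed to recover.
nx = 200
c0 = np.zeros(nx)
c0[100] = 1.0

c_final = grow(c0, D=0.05, rho=0.5, dx=1.0, dt=0.1, steps=500)
tumor_extent = int((c_final > 0.5).sum())
```

The simulated profile saturates toward c = 1 at the seed and spreads outward as a traveling front, which is why a single late snapshot constrains D and rho only jointly, the ill-posedness the abstract describes.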
Affiliation(s)
- Shashank Subramanian
- Oden Institute for Computational Engineering and Sciences, University of Texas at Austin, 201 E. 24th Street, Austin, Texas, USA
| | - Klaudius Scheufele
- Institute for Parallel and Distributed Systems, Universität Stuttgart, Universitätsstraße 38, Stuttgart, Germany
| | - Miriam Mehl
- Institute for Parallel and Distributed Systems, Universität Stuttgart, Universitätsstraße 38, Stuttgart, Germany
| | - George Biros
- Oden Institute for Computational Engineering and Sciences, University of Texas at Austin, 201 E. 24th Street, Austin, Texas, USA
| |
|
43
|
Estienne T, Lerousseau M, Vakalopoulou M, Alvarez Andres E, Battistella E, Carré A, Chandra S, Christodoulidis S, Sahasrabudhe M, Sun R, Robert C, Talbot H, Paragios N, Deutsch E. Deep Learning-Based Concurrent Brain Registration and Tumor Segmentation. Front Comput Neurosci 2020; 14:17. [PMID: 32265680 PMCID: PMC7100603 DOI: 10.3389/fncom.2020.00017] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Accepted: 02/11/2020] [Indexed: 01/30/2023] Open
Abstract
Image registration and segmentation are the two most studied problems in medical image analysis. Deep learning algorithms have recently gained a lot of attention due to their success and state-of-the-art results in a variety of problems and communities. In this paper, we propose a novel, efficient, multi-task algorithm that addresses image registration and brain tumor segmentation jointly. Our method exploits the dependencies between these tasks through a natural coupling of their interdependencies during inference. In particular, the similarity constraints are relaxed within the tumor regions using an efficient and relatively simple formulation. We evaluated the performance of our formulation both quantitatively and qualitatively for the registration and segmentation problems on two publicly available datasets (BraTS 2018 and OASIS 3), reporting competitive results against other recent state-of-the-art methods. Moreover, our proposed framework achieves a significant improvement (p < 0.005) in registration performance inside the tumor regions, providing a generic method that does not need any predefined conditions (e.g., absence of abnormalities) about the volumes to be registered. Our implementation is publicly available online at https://github.com/TheoEst/joint_registration_tumor_segmentation.
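The relaxed-similarity idea, down-weighting the matching term inside tumor regions so pathological appearance does not drive the deformation, can be sketched with a weighted sum-of-squared-differences term. This illustrates the principle only, not the paper's actual loss; the weight value and the toy images are arbitrary.

```python
import numpy as np

# Weighted sum-of-squared-differences similarity with the constraint relaxed
# inside the tumor: voxels under the tumor mask are down-weighted so that
# pathological appearance does not drive the estimated deformation.
def masked_ssd(fixed, warped, tumor_mask, tumor_weight=0.1):
    w = np.where(tumor_mask, tumor_weight, 1.0)
    return float(np.sum(w * (fixed - warped) ** 2))

rng = np.random.default_rng(4)
fixed = rng.normal(size=(32, 32))
warped = fixed.copy()

# Corrupt a "tumor" patch in the warped image only (10 x 10 voxels, +2.0).
mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 10:20] = True
warped[mask] += 2.0

loss_strict = masked_ssd(fixed, warped, mask, tumor_weight=1.0)
loss_relaxed = masked_ssd(fixed, warped, mask, tumor_weight=0.1)
```

With the strict weight, the intensity mismatch in the tumor patch dominates the loss and would pull the deformation toward "explaining" the pathology; relaxing the weight leaves the healthy tissue in control of the alignment.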
Affiliation(s)
- Théo Estienne
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, Gif-sur-Yvette, France
- Marvin Lerousseau
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
- Maria Vakalopoulou
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, Gif-sur-Yvette, France
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
- Emilie Alvarez Andres
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Enzo Battistella
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, Gif-sur-Yvette, France
- Alexandre Carré
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Siddhartha Chandra
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
- Stergios Christodoulidis
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Predictive Biomarkers and Novel Therapeutic Strategies in Oncology, Villejuif, France
- Mihir Sahasrabudhe
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
- Roger Sun
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
- Charlotte Robert
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Hugues Talbot
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
- Nikos Paragios
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Eric Deutsch
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
44. Banerjee S, Mitra S. Novel Volumetric Sub-region Segmentation in Brain Tumors. Front Comput Neurosci 2020; 14:3. [PMID: 32038216; PMCID: PMC6993215; DOI: 10.3389/fncom.2020.00003]
Abstract
A novel deep learning-based model called Multi-Planar Spatial Convolutional Neural Network (MPS-CNN) is proposed for effective, automated segmentation of the different sub-regions, viz. peritumoral edema (ED), necrotic core (NCR), and enhancing and non-enhancing tumor core (ET/NET), from multi-modal MR images of the brain. An encoder-decoder type CNN model is designed for pixel-wise segmentation of the tumor along the three anatomical planes (axial, sagittal, and coronal) at the slice level. These are then combined, by incorporating a consensus fusion strategy with a fully connected Conditional Random Field (CRF)-based post-refinement, to produce the final volumetric segmentation of the tumor and its constituent sub-regions. Concepts such as spatial pooling and unpooling are used to preserve the spatial locations of edge pixels, reducing segmentation error around the boundaries. A new aggregated loss function is also developed for effectively handling data imbalance. The MPS-CNN is trained and validated on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2018 dataset. The Dice scores obtained on the validation set for whole tumor (WT: NCR/NET + ET + ED), tumor core (TC: NCR/NET + ET), and enhancing tumor (ET) are 0.90216, 0.87247, and 0.82445, respectively. The proposed MPS-CNN is found to perform best (based on leaderboard scores) for the ET and TC segmentation tasks, in terms of both quantitative measures (Dice and Hausdorff distance). For WT segmentation it achieved the second-highest accuracy, with a score only 1% below that of the best-performing method.
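Several entries in this list report results as Dice scores. For reference, the Dice similarity coefficient between a predicted and a ground-truth binary mask can be computed as below. This is a generic sketch, not code from any cited paper, and the convention of returning 1.0 for two empty masks is an assumption.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|).
    Returns 1.0 when both masks are empty (an assumed convention)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# one of two predicted voxels overlaps the single true voxel: DSC = 2/3
d = dice(np.array([1, 1, 0, 0]), np.array([1, 0, 0, 0]))
```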
Affiliation(s)
- Subhashis Banerjee
- Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India
- Department of CSE, University of Calcutta, Kolkata, India
- Sushmita Mitra
- Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India
45. Wang H, Zhang Y, Fan X. Gray image segmentation algorithm based on one-dimensional image complexity. J Intell Fuzzy Syst 2020. [DOI: 10.3233/jifs-179615]
Affiliation(s)
- Haifeng Wang
- Information Center, Jiangsu University of Technology, Jiangsu, China
- Yi Zhang
- Jiangsu University of Technology, Jiangsu, China
- Xin Fan
- Jiangsu University of Technology, Jiangsu, China
46. Hajishamsaei M, Pishevar A, Bavi O, Soltani M. A novel in silico platform for a fully automatic personalized brain tumor growth. Magn Reson Imaging 2020; 68:121-126. [PMID: 31911200; DOI: 10.1016/j.mri.2019.12.012]
Abstract
Glioblastoma Multiforme is the most common and most aggressive type of brain tumor. Although accurate prediction of glioblastoma borders and shape is essential for neurosurgeons, few in silico platforms can make such predictions. In the current study, an automatic patient-specific simulation of glioblastoma growth is described. A finite element approach is used to analyze magnetic resonance images from patients in the early stages of their tumors. For tumor segmentation, the Support Vector Machine (SVM) method, an automatic segmentation algorithm, is used. Using in situ and in vivo data, the main parameters of tumor growth and prediction in the proliferation-invasion partial differential equation are estimated with high precision using the genetic algorithm optimization method. The results show that for a C57BL mouse, the differences between the area and perimeter of the in vivo test data and the simulation prediction, used as the objective function, are 3.7% and 17.4%, respectively.
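The proliferation-invasion model referenced in this abstract is the reaction-diffusion (Fisher-KPP) equation ∂c/∂t = D∇²c + ρc(1 − c), where c is normalized cell density, D the diffusion (invasion) coefficient, and ρ the proliferation rate. As an illustration only — the study uses finite elements with genetic-algorithm parameter estimation, whereas this sketch takes a 1D explicit finite-difference step with made-up parameter values:

```python
import numpy as np

def grow_tumor_1d(c0, D=0.1, rho=0.05, dx=1.0, dt=0.1, steps=100):
    """Explicit finite-difference integration of the proliferation-invasion
    (Fisher-KPP) model dc/dt = D * d2c/dx2 + rho * c * (1 - c), with
    zero-flux (Neumann) boundaries. D, rho, dx, dt are illustrative values
    chosen to satisfy the stability condition D * dt / dx**2 <= 0.5."""
    c = c0.astype(float).copy()
    for _ in range(steps):
        cp = np.pad(c, 1, mode="edge")            # mirror edges: zero flux
        lap = (cp[2:] - 2 * c + cp[:-2]) / dx**2  # discrete Laplacian
        c = c + dt * (D * lap + rho * c * (1 - c))
    return c

# seed a small lesion in the middle of a 1D domain and grow it
c0 = np.zeros(101)
c0[50] = 0.1
c = grow_tumor_1d(c0)
```

In the patient-specific setting, D and ρ are exactly the quantities the genetic algorithm would be fitted to reproduce from serial images.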
Affiliation(s)
- Mojtaba Hajishamsaei
- Department of Mechanical Engineering, Isfahan University of Technology, Isfahan, Iran
- Ahmadreza Pishevar
- Department of Mechanical Engineering, Isfahan University of Technology, Isfahan, Iran
- Omid Bavi
- Department of Mechanical and Aerospace Engineering, Shiraz University of Technology, Shiraz, Iran
- Madjid Soltani
- Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Advanced Bioengineering Initiative Center, Computational Medicine Center, K. N. Toosi University of Technology, Tehran, Iran
- Cancer Biology Research Center, Cancer Institute of Iran, Tehran University of Medical Sciences, Tehran, Iran
- Department of Electrical and Computer Engineering, University of Waterloo, ON, Canada
- Centre for Biotechnology and Bioengineering (CBB), University of Waterloo, Waterloo, Ontario, Canada
47. Longitudinal brain tumor segmentation prediction in MRI using feature and label fusion. Biomed Signal Process Control 2020; 55:101648. [PMID: 34354762; PMCID: PMC8336640; DOI: 10.1016/j.bspc.2019.101648]
Abstract
This work proposes a novel framework for brain tumor segmentation prediction in longitudinal multi-modal MRI scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with a tumor cell density feature to obtain tumor segmentation predictions at follow-up timepoints using data from the baseline pre-operative timepoint. The cell density feature is obtained by solving the 3D reaction-diffusion equation for biophysical tumor growth modelling using the Lattice-Boltzmann method. The second method uses JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method; and (ii) another state-of-the-art tumor growth and segmentation method, known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). We quantitatively evaluate both proposed methods using the Dice Similarity Coefficient (DSC) in longitudinal scans of 9 patients from the public BraTS 2015 multi-institutional dataset. The evaluation results for the feature-based fusion method show improved tumor segmentation prediction for the whole tumor (DSC WT = 0.314, p = 0.1502), tumor core (DSC TC = 0.332, p = 0.0002), and enhancing tumor (DSC ET = 0.448, p = 0.0002) regions. The feature-based fusion shows some improvement in prediction for longitudinal brain tumor tracking, whereas the JLF offers statistically significant improvement in the actual segmentation of WT and ET (DSC WT = 0.85 ± 0.055, DSC ET = 0.837 ± 0.074), and also improves the results of GB. The novelty of this work is two-fold: (a) exploiting tumor cell density as a feature to predict brain tumor segmentation, using a stochastic multi-resolution RF-based method, and (b) improving the performance of another successful tumor segmentation method, GB, by fusing it with the RF-based segmentation labels.
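Joint label fusion weights each candidate segmentation by local intensity similarity; a much simpler per-voxel majority vote conveys the basic idea of combining several label maps into one. The sketch below is a simplified stand-in, not the JLF algorithm used in the cited work:

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse several candidate integer label maps by per-voxel majority
    vote -- a simplified stand-in for joint label fusion (JLF), which
    additionally weights each candidate by local intensity similarity."""
    stack = np.stack(label_maps)                  # (n_candidates, ...) labels
    n_labels = int(stack.max()) + 1
    # count votes for each label at every voxel, then take the winner
    counts = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return counts.argmax(axis=0)

# three candidate segmentations of a 3-voxel image
maps = [np.array([0, 1, 1]), np.array([0, 1, 0]), np.array([1, 1, 1])]
fused = majority_vote_fusion(maps)
```

JLF's advantage over this plain vote is precisely that candidates which look locally dissimilar to the target image are down-weighted rather than counted equally.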
48. Rudie JD, Weiss DA, Saluja R, Rauschecker AM, Wang J, Sugrue L, Bakas S, Colby JB. Multi-Disease Segmentation of Gliomas and White Matter Hyperintensities in the BraTS Data Using a 3D Convolutional Neural Network. Front Comput Neurosci 2019; 13:84. [PMID: 31920609; PMCID: PMC6933520; DOI: 10.3389/fncom.2019.00084]
Abstract
An important challenge in segmenting real-world biomedical imaging data is the presence of multiple disease processes within individual subjects. Most adults above age 60 exhibit a variable degree of small vessel ischemic disease, as well as chronic infarcts, which manifest as white matter hyperintensities (WMH) on brain MRIs. Subjects diagnosed with gliomas will also typically exhibit some degree of abnormal T2 signal due to WMH, rather than just due to tumor. We sought to develop a fully automated algorithm to distinguish and quantify these distinct disease processes within individual subjects' brain MRIs. To address this multi-disease problem, we trained a 3D U-Net to distinguish between abnormal signal arising from tumors vs. WMH in the 3D multi-parametric MRI (mpMRI, i.e., native T1-weighted, T1-post-contrast, T2, T2-FLAIR) scans of the International Brain Tumor Segmentation (BraTS) 2018 dataset (n = 285 training, n = 66 validation). A trained neuroradiologist manually annotated WMH on the BraTS training subjects, finding that 69% of subjects had WMH. Our 3D U-Net model had a 4-channel 3D input patch (80 × 80 × 80) from mpMRI, four encoding and decoding layers, and an output of either four classes [background, active tumor (AT), necrotic core (NCR), peritumoral edematous/infiltrated tissue (ED)] or five classes (adding WMH as the fifth class). For both the four- and five-class output models, the median Dice for whole tumor (WT) extent (i.e., the union of AT, ED, and NCR) was 0.92 in both training and validation sets. Notably, the five-class model achieved significantly (p = 0.002) lower/better Hausdorff distances for WT extent in the training subjects. There was strong positive correlation between manually segmented and predicted volumes for WT (r = 0.96) and WMH (r = 0.89). Larger lesion volumes were positively correlated with higher/better Dice scores for WT (r = 0.33), WMH (r = 0.34), and across all lesions (r = 0.89) on a log10-transformed scale. While the median Dice for WMH was 0.42 across training subjects with WMH, the median Dice was 0.62 for those with at least 5 cm³ of WMH. We anticipate that the development of computational algorithms able to model multiple diseases within a single subject will be a critical step toward translating and integrating artificial intelligence systems into the heterogeneous real-world clinical workflow.
Affiliation(s)
- Jeffrey D Rudie
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- David A Weiss
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
- Rachit Saluja
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, United States
- Andreas M Rauschecker
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- Jiancong Wang
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Leo Sugrue
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- Spyridon Bakas
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- John B Colby
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
49. Amin J, Sharif M, Raza M, Saba T, Sial R, Shad SA. Brain tumor detection: a long short-term memory (LSTM)-based learning model. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04650-7]
50. A deep learning model integrating SK-TPCNN and random forests for brain tumor segmentation in MRI. Biocybern Biomed Eng 2019. [DOI: 10.1016/j.bbe.2019.06.003]