1
Wald T, Hamm B, Holzschuh JC, El Shafie R, Kudak A, Kovacs B, Pflüger I, von Nettelbladt B, Ulrich C, Baumgartner MA, Vollmuth P, Debus J, Maier-Hein KH, Welzel T. Enhancing deep learning methods for brain metastasis detection through cross-technique annotations on SPACE MRI. Eur Radiol Exp 2025; 9:15. PMID: 39913077; PMCID: PMC11802942; DOI: 10.1186/s41747-025-00554-5.
Abstract
BACKGROUND The gadolinium-enhanced "sampling perfection with application-optimized contrasts using different flip angle evolution" (SPACE) sequence allows better visualization of brain metastases (BMs) than "magnetization-prepared rapid acquisition gradient echo" (MPRAGE). We hypothesize that this better conspicuity leads to high-quality annotation (HAQ), enhancing deep learning (DL) detection of BMs on MPRAGE images.
METHODS Retrospective contrast-enhanced (gadobutrol 0.1 mmol/kg) SPACE and MPRAGE data of 157 patients with BMs were used, annotated either on MPRAGE, yielding normal annotation quality (NAQ), or on coregistered SPACE, yielding HAQ. Multiple DL methods were developed with NAQ or HAQ using either SPACE or MPRAGE images and evaluated for detection performance, using positive predictive value (PPV), sensitivity, and F1 score, and for delineation performance, using volumetric Dice similarity coefficient, PPV, and sensitivity, on one internal and four additional test datasets (660 patients).
RESULTS The SPACE-HAQ model reached 0.978 PPV, 0.882 sensitivity, and 0.916 F1 score. The MPRAGE-HAQ model reached 0.867, 0.839, and 0.840, and the MPRAGE-NAQ model 0.964, 0.667, and 0.798, respectively (p ≥ 0.157). Relative to MPRAGE-NAQ, the MPRAGE-HAQ detection F1 score increased on all additional test datasets by 2.5-9.6 points (p < 0.016), and sensitivity improved on three datasets by 4.6-8.5 points (p < 0.001). Moreover, volumetric instance sensitivity improved by 3.6-7.6 points (p < 0.001).
CONCLUSION HAQ improves DL methods without requiring specialized imaging at application time. HAQ alone achieves about 40% of the performance improvements seen with SPACE images as input, allowing for fast and accurate, fully automated detection of small (< 1 cm) BMs.
RELEVANCE STATEMENT Training with higher-quality annotations, created using the SPACE sequence, improves the detection and delineation sensitivity of DL methods for brain metastases (BMs) on MPRAGE images. This MRI cross-technique transfer learning is a promising way to increase diagnostic performance.
KEY POINTS
- Delineating small BMs on the SPACE MRI sequence results in higher-quality annotations than on the MPRAGE sequence, owing to enhanced conspicuity.
- Leveraging cross-technique ground-truth annotations during training improved the accuracy of DL models in detecting and segmenting BMs.
- Cross-technique annotation may enhance DL models by integrating the benefits of specialized, time-intensive MRI sequences while not relying on them at inference.
- Further validation in prospective studies is needed.
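The detection metrics reported in this entry (PPV, sensitivity, F1 score) follow directly from lesion-level counts. A minimal sketch, with made-up counts purely for illustration (not taken from the study):

```python
def detection_metrics(tp, fp, fn):
    """Lesion-level PPV (precision), sensitivity (recall), and F1 score."""
    ppv = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)  # harmonic mean of PPV and sensitivity
    return ppv, sensitivity, f1

# Hypothetical example: 90 correctly detected lesions, 2 false alarms, 12 missed
ppv, sens, f1 = detection_metrics(tp=90, fp=2, fn=12)
```

Note that the F1 score can equivalently be written as 2·TP / (2·TP + FP + FN), which is why it rewards balanced precision and recall.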
Affiliation(s)
- Tassilo Wald
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Benjamin Hamm
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Medical Faculty Heidelberg, University of Heidelberg, Heidelberg, Germany
- Julius C Holzschuh
- Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Rami El Shafie
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Department of Radiation Oncology, University Hospital Göttingen, Göttingen, Germany
- Andreas Kudak
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Heidelberg Institute of Radiation Oncology (HIRO), Heidelberg, Germany
- Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Balint Kovacs
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Medical Faculty Heidelberg, University of Heidelberg, Heidelberg, Germany
- Irada Pflüger
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany
- Bastian von Nettelbladt
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Heidelberg Institute of Radiation Oncology (HIRO), Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and Heidelberg University Medical Center, Heidelberg, Germany
- German Cancer Consortium (DKTK), partner site Heidelberg, Heidelberg, Germany
- Heidelberg Ion-Beam Therapy Center (HIT), Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Constantin Ulrich
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Medical Faculty Heidelberg, University of Heidelberg, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and Heidelberg University Medical Center, Heidelberg, Germany
- Michael Anton Baumgartner
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Philipp Vollmuth
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany
- Division for Computational Radiology Clinical AI (CCIBonn.ai), Clinic for Neuroradiology, University Hospital Bonn, Bonn, Germany
- Medical Faculty Bonn, University of Bonn, Bonn, Germany
- Jürgen Debus
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Heidelberg Institute of Radiation Oncology (HIRO), Heidelberg, Germany
- Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and Heidelberg University Medical Center, Heidelberg, Germany
- German Cancer Consortium (DKTK), partner site Heidelberg, Heidelberg, Germany
- Heidelberg Ion-Beam Therapy Center (HIT), Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Klaus H Maier-Hein
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and Heidelberg University Medical Center, Heidelberg, Germany
- Pattern Analysis and Learning Group, Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), Member of the German Center for Lung Research (DZL), Heidelberg, Germany
- Thomas Welzel
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Heidelberg Institute of Radiation Oncology (HIRO), Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and Heidelberg University Medical Center, Heidelberg, Germany
- German Cancer Consortium (DKTK), partner site Heidelberg, Heidelberg, Germany
- Heidelberg Ion-Beam Therapy Center (HIT), Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
2
Sarria GR, Fleckenstein J, Eckl M, Stieler F, Ruder A, Bendszus M, Schmeel LC, Koch D, Feisst A, Essig M, Wenz F, Giordano FA. Impact of the Novel MRI Contrast Agent Gadopiclenol on Radiotherapy Decision Making in Patients With Brain Metastases. Invest Radiol 2025; 60:138-144. PMID: 39159365; DOI: 10.1097/rli.0000000000001115.
Abstract
PURPOSE The aim of this study was to assess the effect of gadopiclenol versus gadobenate dimeglumine contrast-enhanced magnetic resonance imaging (MRI) on the decision between whole-brain radiotherapy (WBRT) and stereotactic radiosurgery (SRS) for treatment of brain metastases (BMs).
METHODS Patients with BMs underwent 2 separate MRI examinations in a double-blind crossover phase IIb study comparing the MRI contrast agents gadopiclenol and gadobenate dimeglumine, both administered at 0.1 mmol/kg. Imaging data from a single site using identical MRI scanners and protocols were included in this post hoc analysis. Patients with 1 or more BMs on either MRI underwent target volume delineation for treatment planning. Two radiation oncologists contoured all visible lesions and decided upon SRS or WBRT according to the number of metastases. For each patient, SRS or WBRT treatment plans were calculated for both MRIs, taking the gross target volume (GTV) as the contrast-enhancing aspects of the tumor. Mean GTVs, the volume of healthy brain exposed to 12 Gy (V12), and Dice similarity coefficient scores were obtained. The Spearman rank (ρ) correlation was additionally calculated to assess linear differences. Three different expert radiation oncologists blindly rated the contrast enhancement for contouring purposes.
RESULTS Thirteen adult patients were included. Gadopiclenol depicted additional BMs compared with gadobenate dimeglumine in 7 patients (54%). Of a total of 63 metastatic lesions identified across both MRI sets, 3 subgroups could be defined: A, 48 GTVs (24 pairs) visible in both modalities; B, 13 GTVs visible only in the gadopiclenol set (mean ± SD, 0.16 ± 0.37 cm³); and C, 2 GTVs visible only in the gadobenate dimeglumine set (mean ± SD, 0.01 ± 0.01 cm³). Treatment indication changed for 2 patients (15%): 1 from no treatment to SRS and 1 from SRS to WBRT.
The mean GTVs and brain V12 were comparable between both agents (P = 0.694, P = 0.974). The mean Dice similarity coefficient was 0.70 ± 0.14 (ρ = 0.82). According to the readers, target volume definition was improved in 63.9% of cases (23 of 36 evaluations) with gadopiclenol and in 22.2% (8 of 36) with gadobenate dimeglumine, whereas equivalence was obtained in 13.9% (5 of 36).
CONCLUSIONS Gadopiclenol-enhanced MRI improved BM detection and characterization, with a direct impact on the radiotherapy decision between WBRT and SRS. Additionally, more exact target delineation and planning could be performed with gadopiclenol. A prospective evaluation in a larger cohort of patients is required to confirm these findings.
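The Dice similarity coefficient used above to compare GTV contours between the two contrast agents measures volumetric overlap between two binary masks. A minimal sketch on toy 3D volumes (the masks below are illustrative, not study data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary volumes: 2|A∩B| / (|A|+|B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0  # two empty masks agree perfectly

# Two overlapping "GTV" masks on a small 3D grid
a = np.zeros((4, 4, 4), dtype=bool); a[:2] = True   # 32 voxels (slices 0-1)
b = np.zeros((4, 4, 4), dtype=bool); b[1:3] = True  # 32 voxels (slices 1-2), 16 shared
score = dice(a, b)
```

With 16 shared voxels out of 32 + 32, the score is 2·16/64 = 0.5; identical masks score 1.0.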
Affiliation(s)
- Gustavo R Sarria
- From the Department of Radiation Oncology, University Hospital Bonn, University of Bonn, Bonn, Germany (G.R.S., L.C.S., D.K., A.F.); Department of Radiation Oncology, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany (J.F., M. Eckl, F.S., A.R., F.A.G.); Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany (M.B.); Department of Radiology, University of Manitoba, Winnipeg, Manitoba, Canada (M. Essig); and University Medical Center Freiburg, Freiburg University, Freiburg, Germany (F.W.)
3
Ottesen JA, Tong E, Emblem KE, Latysheva A, Zaharchuk G, Bjørnerud A, Grøvik E. Semi-Supervised Learning Allows for Improved Segmentation With Reduced Annotations of Brain Metastases Using Multicenter MRI Data. J Magn Reson Imaging 2025. PMID: 39792624; DOI: 10.1002/jmri.29686.
Abstract
BACKGROUND Deep learning-based segmentation of brain metastases relies on large amounts of data fully annotated by domain experts. Semi-supervised learning offers potentially efficient methods to improve model performance without an excessive annotation burden.
PURPOSE This work tests the viability of semi-supervision for brain metastases segmentation.
STUDY TYPE Retrospective.
SUBJECTS There were 156, 65, 324, and 200 labeled scans from four institutions and 519 unlabeled scans from a single institution. All subjects included in the study had been diagnosed with brain metastases.
FIELD STRENGTH/SEQUENCES 1.5 T and 3 T; 2D and 3D T1-weighted pre- and post-contrast, and fluid-attenuated inversion recovery (FLAIR).
ASSESSMENT Three semi-supervision methods (mean teacher, cross-pseudo supervision, and interpolation consistency training) were adapted with the U-Net architecture. The three semi-supervised methods were compared to their respective supervised baselines on the full and half-sized training sets.
STATISTICAL TESTS Evaluation was performed on a multinational test set from four different institutions using 5-fold cross-validation. Method performance was evaluated by the number of false-positive predictions, the number of true-positive predictions, the 95th-percentile Hausdorff distance, and the Dice similarity coefficient (DSC). Significance was tested using a paired-samples t-test for a single fold and across all folds within a given cohort.
RESULTS Semi-supervision outperformed the supervised baseline for all sites, with the best-performing semi-supervised method achieving average DSC improvements of 6.3% ± 1.6%, 8.2% ± 3.8%, 8.6% ± 2.6%, and 15.4% ± 1.4% on the four test cohorts when trained on half the dataset, and of 3.6% ± 0.7%, 2.0% ± 1.5%, 1.8% ± 5.7%, and 4.7% ± 1.7% when trained on the full dataset, compared to the supervised baseline. In addition, in three of four datasets, semi-supervised training produced results equal to or better than supervised models trained on twice the labeled data.
DATA CONCLUSION Semi-supervised learning allows for improved segmentation performance over the supervised baseline; the improvement was particularly notable on independent external test sets when training on small amounts of labeled data.
PLAIN LANGUAGE SUMMARY Artificial intelligence requires extensive datasets with large amounts of data annotated by medical experts, which can be difficult to acquire due to the workload involved. To compensate, large amounts of un-annotated clinical data can be used in addition to annotated data. However, this approach has not been widely tested for the most common intracranial brain tumor, brain metastases. This study shows that the approach allows for data-efficient deep learning models across multiple institutions with different clinical protocols and scanners.
LEVEL OF EVIDENCE 3
TECHNICAL EFFICACY Stage 2.
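Of the three semi-supervision methods compared, the mean-teacher scheme is the simplest to sketch: a teacher copy of the network tracks the student via an exponential moving average (EMA) of the weights, and a consistency loss on unlabeled scans penalizes disagreement between the two. A schematic with plain NumPy arrays standing in for network weights and predictions (all values and the smoothing factor are illustrative, not from the study):

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean-teacher EMA step: teacher weights slowly track the student's."""
    return alpha * teacher_w + (1 - alpha) * student_w

def consistency_loss(student_pred, teacher_pred):
    """Mean squared difference between predictions on the same unlabeled input."""
    return float(np.mean((student_pred - teacher_pred) ** 2))

student = np.array([0.2, 0.8])   # stand-in for student weights
teacher = np.array([0.5, 0.5])   # stand-in for teacher weights
teacher = ema_update(teacher, student, alpha=0.9)
loss = consistency_loss(np.array([0.3, 0.7]), teacher)
```

In training, the consistency term is added to the supervised segmentation loss, so unlabeled scans contribute gradient signal through the teacher's predictions.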
Affiliation(s)
- Jon André Ottesen
- Computational Radiology and Artificial Intelligence (CRAI) Research Group, Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Department of Physics, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway
- Elizabeth Tong
- Department of Radiology, Stanford University, Stanford, California, USA
- Kyrre Eeg Emblem
- Department of Physics and Computational Radiology, Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway
- Anna Latysheva
- Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, California, USA
- Atle Bjørnerud
- Computational Radiology and Artificial Intelligence (CRAI) Research Group, Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Department of Physics, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway
- Endre Grøvik
- Department of Radiology, Ålesund Hospital, Møre og Romsdal Hospital Trust, Ålesund, Norway
- Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
4
Yoo Y, Gibson E, Zhao G, Re TJ, Parmar H, Das J, Wang H, Kim MM, Shen C, Lee Y, Kondziolka D, Ibrahim M, Lian J, Jain R, Zhu T, Comaniciu D, Balter JM, Cao Y. Extended nnU-Net for Brain Metastasis Detection and Segmentation in Contrast-Enhanced Magnetic Resonance Imaging With a Large Multi-Institutional Data Set. Int J Radiat Oncol Biol Phys 2025; 121:241-249. PMID: 39059508; DOI: 10.1016/j.ijrobp.2024.07.2318.
Abstract
PURPOSE The purpose of this study was to investigate an extended self-adapting nnU-Net framework for detecting and segmenting brain metastases (BM) on magnetic resonance imaging (MRI).
METHODS AND MATERIALS Six different nnU-Net systems with adaptive data sampling, adaptive Dice loss, or different patch/batch sizes were trained and tested for detecting and segmenting intraparenchymal BM with a size ≥2 mm on 3-dimensional (3D) post-Gd T1-weighted MRI volumes using 2092 patients from 7 institutions (1712, 195, and 185 patients for training, validation, and testing, respectively). Gross tumor volumes of BM delineated by physicians for stereotactic radiosurgery were collected retrospectively and curated at each institution. Additional centralized data curation was carried out by 2 radiologists to create gross tumor volumes of uncontoured BM and improve the accuracy of the ground truth. The training dataset was augmented with synthetic BMs in 1025 MRI volumes using a 3D generative pipeline. BM detection was evaluated by lesion-level sensitivity and false-positive (FP) rate. BM segmentation was assessed by lesion-level Dice similarity coefficient, 95th-percentile Hausdorff distance, and average Hausdorff distance (HD). Performance was assessed across different BM sizes. Additional testing was performed using a second dataset of 206 patients.
RESULTS Of the 6 nnU-Net systems, the nnU-Net with adaptive Dice loss achieved the best detection and segmentation performance on the first testing dataset. At an FP rate of 0.65 ± 1.17, overall sensitivity was 0.904 for all sizes of BM, 0.966 for BM ≥0.1 cm³, and 0.824 for BM <0.1 cm³. Mean values of the Dice similarity coefficient, 95th-percentile Hausdorff distance, and average HD of all detected BMs were 0.758, 1.45 mm, and 0.23 mm, respectively. On the second testing dataset, sensitivity was 0.907 at an FP rate of 0.57 ± 0.85 for all BM sizes, with an average HD of 0.33 mm for all detected BM.
CONCLUSIONS Our proposed extension of the self-configuring nnU-Net framework substantially improved small-BM detection sensitivity while maintaining a controlled FP rate. The clinical utility of the extended nnU-Net model for assisting early BM detection and stereotactic radiosurgery planning will be investigated.
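The 95th-percentile Hausdorff distance reported above softens the classic Hausdorff distance by taking the 95th percentile of surface-to-surface distances rather than the maximum, making the metric robust to single outlier voxels. A small NumPy sketch over 2D point sets standing in for lesion surfaces (the point clouds are illustrative, not study data):

```python
import numpy as np

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # all pairwise distances
    d_ab = np.percentile(d.min(axis=1), 95)  # each a-point to its nearest b-point
    d_ba = np.percentile(d.min(axis=0), 95)  # each b-point to its nearest a-point
    return max(d_ab, d_ba)

# Two small "surface" point clouds, one shifted 1 unit along x
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + np.array([1.0, 0.0])
distance = hd95(a, b)
```

Replacing `np.percentile(..., 95)` with `.max()` recovers the ordinary Hausdorff distance; in practice the point sets are the boundary voxels of the predicted and ground-truth masks.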
Affiliation(s)
- Youngjin Yoo
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey
- Eli Gibson
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey
- Gengyan Zhao
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey
- Thomas J Re
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey
- Hemant Parmar
- Department of Radiology, University of Michigan, Ann Arbor, Michigan
- Jyotipriya Das
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey
- Hesheng Wang
- Department of Radiation Oncology, New York University, New York, New York
- Michelle M Kim
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan
- Colette Shen
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina
- Yueh Lee
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina
- Douglas Kondziolka
- Center for Advanced Radiosurgery, New York University, New York, New York
- Mohannad Ibrahim
- Department of Radiology, University of Michigan, Ann Arbor, Michigan
- Jun Lian
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina
- Rajan Jain
- Department of Radiology, New York University, New York, New York
- Tong Zhu
- Department of Radiation Oncology, Washington University, St. Louis, Missouri
- Dorin Comaniciu
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey
- James M Balter
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan
- Yue Cao
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan
5
Huang Y, Khodabakhshi Z, Gomaa A, Schmidt M, Fietkau R, Guckenberger M, Andratschke N, Bert C, Tanadini-Lang S, Putz F. Multicenter privacy-preserving model training for deep learning brain metastases autosegmentation. Radiother Oncol 2024; 198:110419. PMID: 38969106; DOI: 10.1016/j.radonc.2024.110419.
Abstract
OBJECTIVES This work aims to explore the impact of multicenter data heterogeneity on deep learning brain metastases (BM) autosegmentation performance, and to assess the efficacy of an incremental transfer learning technique, namely learning without forgetting (LWF), in improving model generalizability without sharing raw data.
MATERIALS AND METHODS A total of six BM datasets from University Hospital Erlangen (UKER), University Hospital Zurich (USZ), Stanford, UCSF, New York University (NYU), and the BraTS Challenge 2023 were used. First, the performance of the DeepMedic network for BM autosegmentation was established for exclusive single-center training and mixed multicenter training, respectively. Subsequently, privacy-preserving bilateral collaboration was evaluated, where a pretrained model is shared with another center for further training using transfer learning (TL) either with or without LWF.
RESULTS For single-center training, average F1 scores for BM detection ranged from 0.625 (NYU) to 0.876 (UKER) on the respective single-center test data. Mixed multicenter training notably improved F1 scores at Stanford and NYU, with negligible improvement at the other centers. When the UKER-pretrained model was applied to USZ, LWF achieved a higher average F1 score (0.839) than naive TL (0.570) and single-center training (0.688) on combined UKER and USZ test data. Naive TL improved sensitivity and contouring accuracy but compromised precision, whereas LWF demonstrated commendable sensitivity, precision, and contouring accuracy. Similar performance was observed when the approach was applied to Stanford.
CONCLUSION Data heterogeneity (e.g., variations in metastasis density, spatial distribution, and image spatial resolution across centers) results in varying BM autosegmentation performance, posing challenges to model generalizability. LWF is a promising approach to peer-to-peer privacy-preserving model training.
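Learning without forgetting augments the new-center training loss with a distillation term that keeps the adapted model's outputs close to those of the pretrained model, so performance at the original center is preserved without exchanging raw images. A schematic of the objective for a single sample, with NumPy standing in for the network (the temperature, weighting, and all values are illustrative; the study's actual models are segmentation networks, not classifiers):

```python
import numpy as np

def softmax(z, t=1.0):
    e = np.exp(z / t - np.max(z / t))  # temperature-scaled, numerically stable
    return e / e.sum()

def lwf_loss(logits, labels_onehot, old_probs, lam=1.0, t=2.0):
    """New-task cross-entropy plus a distillation penalty that keeps the
    adapted model close to the pretrained ("old") model's softened outputs."""
    ce_new = -np.sum(labels_onehot * np.log(softmax(logits) + 1e-12))
    ce_old = -np.sum(old_probs * np.log(softmax(logits, t) + 1e-12))  # distillation term
    return ce_new + lam * ce_old

logits = np.array([2.0, 0.0])
labels = np.array([1.0, 0.0])
old_probs = np.array([0.5, 0.5])  # pretrained model's prediction on the same input
loss_plain = lwf_loss(logits, labels, old_probs, lam=0.0)  # naive TL: no penalty
loss_lwf = lwf_loss(logits, labels, old_probs, lam=1.0)
```

Setting `lam=0` recovers naive transfer learning, which is exactly the configuration the abstract reports as losing precision on the original center.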
Affiliation(s)
- Yixing Huang
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany; Bavarian Cancer Research Center (BZKF), Erlangen, Germany
- Zahra Khodabakhshi
- Department of Radiation Oncology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Ahmed Gomaa
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany; Bavarian Cancer Research Center (BZKF), Erlangen, Germany
- Manuel Schmidt
- Department of Neuroradiology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Rainer Fietkau
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany; Bavarian Cancer Research Center (BZKF), Erlangen, Germany
- Matthias Guckenberger
- Department of Radiation Oncology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Nicolaus Andratschke
- Department of Radiation Oncology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Christoph Bert
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany; Bavarian Cancer Research Center (BZKF), Erlangen, Germany
- Stephanie Tanadini-Lang
- Department of Radiation Oncology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Florian Putz
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany; Bavarian Cancer Research Center (BZKF), Erlangen, Germany
6
Machura B, Kucharski D, Bozek O, Eksner B, Kokoszka B, Pekala T, Radom M, Strzelczak M, Zarudzki L, Gutiérrez-Becker B, Krason A, Tessier J, Nalepa J. Deep learning ensembles for detecting brain metastases in longitudinal multi-modal MRI studies. Comput Med Imaging Graph 2024; 116:102401. PMID: 38795690; DOI: 10.1016/j.compmedimag.2024.102401.
Abstract
Metastatic brain cancer is a condition characterized by the migration of cancer cells to the brain from extracranial sites. Notably, metastatic brain tumors surpass primary brain tumors in prevalence by a significant factor; they exhibit aggressive growth potential and can spread across diverse cerebral locations simultaneously. Magnetic resonance imaging (MRI) scans of individuals afflicted with metastatic brain tumors reveal a wide spectrum of characteristics. These lesions vary in size and quantity, spanning from tiny nodules to substantial masses, and patients may present with a limited number of lesions or an extensive burden of hundreds of them. Moreover, longitudinal studies may depict surgical resection cavities, as well as areas of necrosis or edema. Manual analysis of such MRI scans is therefore difficult, user-dependent, and cost-inefficient, and, importantly, it lacks reproducibility. We address these challenges and propose a pipeline for detecting and analyzing brain metastases in longitudinal studies, which benefits from an ensemble of deep learning architectures originally designed for different downstream tasks (detection and segmentation). Experiments over 275 multi-modal MRI scans of 87 patients acquired at 53 sites, coupled with rigorously validated manual annotations, revealed that our pipeline, built upon open-source tools to ensure reproducibility, offers high-quality detection and allows the disease progression to be tracked precisely. To objectively quantify the generalizability of models, we introduce a new data stratification approach that accommodates the heterogeneity of the dataset and is used to build training-test splits in a data-robust manner, alongside a new set of quality metrics for objectively assessing algorithms.
Our system provides a fully automatic and quantitative approach that may support physicians in the laborious process of tracking disease progression and evaluating treatment efficacy.
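The pipeline above combines multiple architectures into an ensemble. As a generic illustration of the simplest fusion rule for such ensembles, voxel-wise averaging of per-model probability maps followed by thresholding (a textbook scheme, not necessarily the authors' exact fusion strategy; all values are made up):

```python
import numpy as np

def ensemble_detection(prob_maps, threshold=0.5):
    """Average per-model probability maps and threshold the consensus map."""
    consensus = np.mean(prob_maps, axis=0)  # one probability per voxel
    return consensus >= threshold

# Three models' probability outputs over the same three voxels
maps = np.array([
    [0.9, 0.2, 0.6],
    [0.8, 0.1, 0.4],
    [0.7, 0.3, 0.2],
])
detected = ensemble_detection(maps)
```

Averaging tends to suppress false positives that only a single model produces, which is one motivation for ensembling heterogeneous detection and segmentation networks.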
Affiliation(s)
- Damian Kucharski
- Graylight Imaging, Gliwice, Poland; Silesian University of Technology, Gliwice, Poland
- Oskar Bozek
- Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland
- Bartosz Eksner
- Department of Radiology and Nuclear Medicine, ZSM Chorzów, Chorzów, Poland
- Bartosz Kokoszka
- Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland
- Tomasz Pekala
- Department of Radiodiagnostics, Interventional Radiology and Nuclear Medicine, University Clinical Centre, Katowice, Poland
- Mateusz Radom
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Marek Strzelczak
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Lukasz Zarudzki
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Benjamín Gutiérrez-Becker
- Roche Pharma Research and Early Development, Informatics, Roche Innovation Center Basel, Basel, Switzerland
- Agata Krason
- Roche Pharma Research and Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
- Jean Tessier
- Roche Pharma Research and Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
- Jakub Nalepa
- Graylight Imaging, Gliwice, Poland; Silesian University of Technology, Gliwice, Poland
7
Kim M, Wang JY, Lu W, Jiang H, Stojadinovic S, Wardak Z, Dan T, Timmerman R, Wang L, Chuang C, Szalkowski G, Liu L, Pollom E, Rahimy E, Soltys S, Chen M, Gu X. Where Does Auto-Segmentation for Brain Metastases Radiosurgery Stand Today? Bioengineering (Basel) 2024; 11:454. PMID: 38790322; PMCID: PMC11117895; DOI: 10.3390/bioengineering11050454.
Abstract
Detection and segmentation of brain metastases (BMs) play a pivotal role in diagnosis, treatment planning, and follow-up evaluations for effective BM management. Given the rising prevalence of BM cases and their predominantly multiple presentation, automated segmentation is becoming necessary in stereotactic radiosurgery. It not only alleviates the clinician's manual workload and improves clinical workflow efficiency but also ensures treatment safety, ultimately improving patient care. Recent strides in machine learning, particularly in deep learning (DL), have revolutionized medical image segmentation, achieving state-of-the-art results. This review aims to analyze auto-segmentation strategies, characterize the data used, and assess the performance of cutting-edge BM segmentation methodologies. Additionally, we delve into the challenges confronting BM segmentation and share insights gleaned from our algorithmic and clinical implementation experiences.
Affiliation(s)
- Matthew Kim
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Jen-Yeu Wang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Weiguo Lu
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Hao Jiang
- NeuralRad LLC, Madison, WI 53717, USA
- Zabi Wardak
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Tu Dan
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Robert Timmerman
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Lei Wang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Cynthia Chuang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Gregory Szalkowski
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Lianli Liu
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Erqi Pollom
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Elham Rahimy
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Scott Soltys
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Mingli Chen
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA

8
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024; 25:e14155. [PMID: 37712893] [PMCID: PMC10860468] [DOI: 10.1002/acm2.14155]
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies published on or before December 31, 2022, and categorize the studies into the areas of image segmentation, image synthesis, radiomics, and real-time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and current challenges in facilitating small tumor segmentation, accurate x-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight recent trends in deep learning, such as the emergence of multi-modal, vision transformer, and diffusion models.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L. J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA

9
Bibault JE, Giraud P. Deep learning for automated segmentation in radiotherapy: a narrative review. Br J Radiol 2024; 97:13-20. [PMID: 38263838] [PMCID: PMC11027240] [DOI: 10.1093/bjr/tqad018]
Abstract
The segmentation of organs and structures is a critical component of radiation therapy planning, and manual segmentation is a laborious and time-consuming task. Interobserver variability can also impact the outcomes of radiation therapy. Deep neural networks have recently gained attention for their ability to automate segmentation tasks, with convolutional neural networks (CNNs) being a popular approach. This article provides a descriptive review of the literature on deep learning (DL) techniques for segmentation in radiation therapy planning. The review focuses on five clinical sub-sites and finds that U-Net is the most commonly used CNN architecture. Studies using DL for image segmentation were included for brain, head and neck, lung, abdominal, and pelvic cancers. The majority of DL segmentation articles in radiation therapy planning have concentrated on normal tissue structures. N-fold cross-validation was commonly employed, without external validation. This research area is expanding quickly, and standardization of metrics and independent validation are critical to benchmarking and comparing proposed methods.
Affiliation(s)
- Jean-Emmanuel Bibault
- Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique—Hôpitaux de Paris, Université de Paris Cité, Paris, 75015, France
- INSERM UMR 1138, Centre de Recherche des Cordeliers, Paris, 75006, France
- Paul Giraud
- INSERM UMR 1138, Centre de Recherche des Cordeliers, Paris, 75006, France
- Radiation Oncology Department, Pitié Salpêtrière Hospital, Assistance Publique—Hôpitaux de Paris, Paris Sorbonne Universités, Paris, 75013, France

10
Wang TW, Hsu MS, Lee WK, Pan HC, Yang HC, Lee CC, Wu YT. Brain metastasis tumor segmentation and detection using deep learning algorithms: A systematic review and meta-analysis. Radiother Oncol 2024; 190:110007. [PMID: 37967585] [DOI: 10.1016/j.radonc.2023.110007]
Abstract
BACKGROUND Manual detection of brain metastases is both laborious and inconsistent, driving the need for more efficient solutions. Accordingly, our systematic review and meta-analysis assessed the efficacy of deep learning algorithms in detecting and segmenting brain metastases from various primary origins in MRI images. METHODS We conducted a comprehensive search of PubMed, Embase, and Web of Science up to May 24, 2023, which yielded 42 relevant studies for our analysis. We assessed the quality of these studies using the QUADAS-2 and CLAIM tools. Using a random-effects model, we calculated the pooled lesion-wise Dice score as well as patient-wise and lesion-wise sensitivity. We performed subgroup analyses to investigate the influence of factors such as publication year, study design, training center of the model, validation methods, slice thickness, model input dimensions, MRI sequences fed to the model, and the specific deep learning algorithms employed. Additionally, meta-regression analyses were carried out considering the number of patients in the studies, count of MRI manufacturers, count of MRI models, training sample size, and lesion number. RESULTS Our analysis highlighted that deep learning models, particularly the U-Net and its variants, demonstrated superior segmentation accuracy. Enhanced detection sensitivity was observed with an increased diversity in MRI hardware, in terms of both manufacturer and model variety. Furthermore, slice thickness was identified as a significant factor influencing lesion-wise detection sensitivity. Overall, the pooled results indicated a lesion-wise Dice score of 79%, with patient-wise and lesion-wise sensitivities at 86% and 87%, respectively. CONCLUSIONS The study underscores the potential of deep learning in improving brain metastasis diagnostics and treatment planning. Still, more extensive cohorts and larger meta-analyses are needed to yield more practical and generalizable algorithms. Future research should prioritize these areas to advance the field. This study was funded by the Gen. & Mrs. M.C. Peng Fellowship and registered under PROSPERO (CRD42023427776).
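The pooled estimates described above come from a random-effects model; the standard choice for such pooling is the DerSimonian-Laird estimator. A minimal plain-Python sketch is given below; the study values are illustrative, not data from this meta-analysis.

```python
def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling.

    effects:   per-study effect sizes (e.g., lesion-wise sensitivity)
    variances: per-study within-study variances
    Returns (pooled_effect, tau_squared).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                   # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                 # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Illustrative lesion-wise sensitivities from three hypothetical studies
pooled, tau2 = pool_random_effects([0.82, 0.87, 0.91], [0.004, 0.002, 0.003])
```

When the studies are heterogeneous, tau2 grows and the weights flatten toward equality, which is why random-effects pooling is preferred over fixed-effect pooling across diverse MRI hardware and protocols.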
Affiliation(s)
- Ting-Wei Wang
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ming-Sheng Hsu
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Kai Lee
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan
- Hung-Chuan Pan
- Department of Neurosurgery, Taichung Veterans General Hospital, Taichung, Taiwan; Department of Medical Research, Taichung Veterans General Hospital, Taichung, Taiwan
- Huai-Che Yang
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Cheng-Chia Lee
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan; National Yang Ming Chiao Tung University, Brain Research Center, Taiwan; National Yang Ming Chiao Tung University, College Medical Device Innovation and Translation Center, Taiwan.

11
Prezelski K, Hsu DG, del Balzo L, Heller E, Ma J, Pike LRG, Ballangrud Å, Aristophanous M. Artificial-intelligence-driven measurements of brain metastases' response to SRS compare favorably with current manual standards of assessment. Neurooncol Adv 2024; 6:vdae015. [PMID: 38464949] [PMCID: PMC10924534] [DOI: 10.1093/noajnl/vdae015]
Abstract
Background Evaluation of treatment response for brain metastases (BMs) following stereotactic radiosurgery (SRS) becomes complex as the number of treated BMs increases. This study uses artificial intelligence (AI) to track BMs after SRS and validates its output compared with manual measurements. Methods Patients with BMs who received at least one course of SRS and followed up with MRI scans were retrospectively identified. A tool for automated detection, segmentation, and tracking of intracranial metastases on longitudinal imaging, MEtastasis Tracking with Repeated Observations (METRO), was applied to the dataset. The longest three-dimensional (3D) diameter identified with METRO was compared with manual measurements of maximum axial BM diameter, and their correlation was analyzed. Change in size of the measured BM identified with METRO after SRS treatment was used to classify BMs as responding, or not responding, to treatment, and its accuracy was determined relative to manual measurements. Results From 71 patients, 176 BMs were identified and measured with METRO and manual methods. Based on a one-to-one correlation analysis, the correlation coefficient was R2 = 0.76 (P = .0001). Using modified BM response classifications based on change in size, the longest 3D diameter data identified with METRO had a sensitivity of 0.72 and a specificity of 0.95 in identifying lesions that responded to SRS, when using manual axial diameter measurements as the ground truth. Conclusions Using AI to automatically measure and track BM volumes following SRS treatment, this study showed a strong correlation between AI-driven measurements and the current clinically used method: manual axial diameter measurements.
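The sensitivity and specificity quoted above follow from a simple confusion-matrix computation, with the manual measurements playing the role of ground truth. A sketch on hypothetical response labels (not the study's data):

```python
def sensitivity_specificity(predicted, truth):
    """Sensitivity and specificity of a binary response classifier.

    predicted, truth: equal-length iterables of bools,
    True = lesion classified as responding to SRS.
    """
    tp = sum(p and t for p, t in zip(predicted, truth))
    tn = sum((not p) and (not t) for p, t in zip(predicted, truth))
    fp = sum(p and (not t) for p, t in zip(predicted, truth))
    fn = sum((not p) and t for p, t in zip(predicted, truth))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 4 responders and 4 non-responders per manual measurement
truth = [True, True, True, True, False, False, False, False]
pred  = [True, True, True, False, False, False, False, True]
sens, spec = sensitivity_specificity(pred, truth)  # 0.75, 0.75
```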
Affiliation(s)
- Kayla Prezelski
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Saint Louis University School of Medicine, St. Louis, Missouri, USA
- Dylan G Hsu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Luke del Balzo
- Medical College of Georgia, Athens, Georgia, USA
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Erica Heller
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Jennifer Ma
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Luke R G Pike
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Biomarker Development Program, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Åse Ballangrud
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Michalis Aristophanous
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA

12
Qu J, Zhang W, Shu X, Wang Y, Wang L, Xu M, Yao L, Hu N, Tang B, Zhang L, Lui S. Construction and evaluation of a gated high-resolution neural network for automatic brain metastasis detection and segmentation. Eur Radiol 2023; 33:6648-6658. [PMID: 37186214] [DOI: 10.1007/s00330-023-09648-3]
Abstract
OBJECTIVES To construct and evaluate a gated high-resolution convolutional neural network for detecting and segmenting brain metastasis (BM). METHODS This retrospective study included craniocerebral MRI scans of 1392 patients with 14,542 BMs and 200 patients with no BM between January 2012 and April 2022. A primary dataset including 1000 cases with 11,686 BMs was employed to construct the model, while an independent dataset including 100 cases with 1069 BMs from other hospitals was used to examine the generalizability. The potential of the model for clinical use was also evaluated by comparing its performance in BM detection and segmentation to that of radiologists, and by comparing radiologists' lesion-detection performance with and without model assistance. RESULTS Our model yielded a recall of 0.88, a Dice similarity coefficient (DSC) of 0.90, a positive predictive value (PPV) of 0.93, and a false-positive count per patient (FP) of 1.01 in the test set, and a recall of 0.85, a DSC of 0.89, a PPV of 0.93, and an FP of 1.07 in the dataset from other hospitals. With the model's assistance, the BM detection rates of four radiologists improved significantly, by 5.2-15.1% (all p < 0.001), as did their detection of small BMs with diameter ≤ 5 mm (by 7.2-27.0%, all p < 0.001). CONCLUSIONS The proposed model enables accurate BM detection and segmentation with higher sensitivity and in less time, showing the potential to augment radiologists' performance in detecting BM. CLINICAL RELEVANCE STATEMENT This study offers a promising computer-aided tool to assist brain metastasis detection and segmentation in routine clinical practice for cancer patients. KEY POINTS • The GHR-CNN could accurately detect and segment BM on contrast-enhanced 3D-T1W images. • The GHR-CNN improved the BM detection rate of radiologists, including the detection of small lesions. • The GHR-CNN enabled automated segmentation of BM in a very short time.
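The recall, PPV, and false positives per patient reported above are simple functions of the lesion-level counts. A sketch with illustrative counts (the abstract does not publish the raw numbers):

```python
def detection_metrics(tp, fp, fn, n_patients):
    """Lesion-wise detection metrics from raw counts.

    tp: detections matched to a ground-truth BM
    fp: detections with no matching lesion
    fn: ground-truth BMs that were missed
    """
    recall = tp / (tp + fn)             # a.k.a. sensitivity
    ppv = tp / (tp + fp)                # positive predictive value (precision)
    f1 = 2 * ppv * recall / (ppv + recall)
    fp_per_patient = fp / n_patients
    return recall, ppv, f1, fp_per_patient

# Illustrative counts of roughly the magnitude implied by the external set
recall, ppv, f1, fppp = detection_metrics(tp=940, fp=70, fn=129, n_patients=100)
```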
Affiliation(s)
- Jiao Qu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Wenjing Zhang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Xin Shu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Ying Wang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Department of Nuclear Medicine, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Lituan Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Mengyuan Xu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Li Yao
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Na Hu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Biqiu Tang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Su Lui
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China.

13
Bouget D, Alsinan D, Gaitan V, Helland RH, Pedersen A, Solheim O, Reinertsen I. Raidionics: an open software for pre- and postoperative central nervous system tumor segmentation and standardized reporting. Sci Rep 2023; 13:15570. [PMID: 37730820] [PMCID: PMC10511510] [DOI: 10.1038/s41598-023-42048-7]
Abstract
For patients suffering from central nervous system tumors, prognosis estimation, treatment decisions, and postoperative assessments are made from the analysis of a set of magnetic resonance (MR) scans. Currently, the lack of open tools for standardized, automatic tumor segmentation and for generation of clinical reports incorporating relevant tumor characteristics leads to potential risks arising from the inherent subjectivity of these decisions. To tackle this problem, the open-source software Raidionics has been developed, offering both a user-friendly graphical user interface and a stable processing backend. The software includes preoperative segmentation models for each of the most common tumor types (i.e., glioblastomas, lower-grade gliomas, meningiomas, and metastases), together with one early postoperative glioblastoma segmentation model. Preoperative segmentation performance was quite homogeneous across the four brain tumor types, with an average Dice around 85% and patient-wise recall and precision around 95%. Postoperatively, performance was lower, with an average Dice of 41%. Overall, generating a standardized clinical report, including tumor segmentation and feature computation, requires about ten minutes on a regular laptop. Raidionics is the first open solution enabling easy use of state-of-the-art segmentation models for all major tumor types, including standardized preoperative and postsurgical reports.
Affiliation(s)
- David Bouget
- Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Demah Alsinan
- Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Valeria Gaitan
- Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Ragnhild Holden Helland
- Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), 7491, Trondheim, Norway
- André Pedersen
- Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Ole Solheim
- Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, 7491, Trondheim, Norway
- Norwegian University of Science and Technology (NTNU), Department of Neuromedicine and Movement Science, 7491, Trondheim, Norway
- Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway.
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), 7491, Trondheim, Norway.

14
Anderson M, Sadiq S, Nahaboo Solim M, Barker H, Steel DH, Habib M, Obara B. Biomedical Data Annotation: An OCT Imaging Case Study. J Ophthalmol 2023; 2023:5747010. [PMID: 37650051] [PMCID: PMC10465257] [DOI: 10.1155/2023/5747010]
Abstract
In ophthalmology, optical coherence tomography (OCT) is a widely used imaging modality, allowing visualisation of the structures of the eye with objective, quantitative, cross-sectional three-dimensional (3D) volumetric scans. Due to the quantity of data generated by OCT scans and the time taken for an ophthalmologist to inspect them for various disease pathology features, automated image analysis in the form of deep neural networks has seen success in the classification and segmentation of OCT layers and the quantification of features. However, existing high-performance deep learning approaches rely on huge training datasets with high-quality annotations, which are challenging to obtain in many clinical applications. Collecting annotations from less experienced clinicians could alleviate time constraints on more senior clinicians, allowing faster collection of medical image annotations; however, with less experience comes the possibility of reduced annotation quality. In this study, we evaluate the quality of diabetic macular edema (DME) intraretinal fluid (IRF) biomarker annotations on OCT B-scans from five clinicians with a range of experience. We also assess the effectiveness of annotating across multiple sessions following a training session led by an expert clinician. Our investigation shows notable variance in annotation performance that correlates with the clinician's experience in OCT image interpretation of DME, and that multiple annotation sessions have a limited effect on annotation quality.
Affiliation(s)
- Matthew Anderson
- School of Computing, Newcastle University, Urban Sciences Building, Newcastle upon Tyne NE4 5TG, UK
- Salman Sadiq
- Sunderland Eye Infirmary, Queen Alexandra Rd, Sunderland NE4 5TG, UK
- Hannah Barker
- Sunderland Eye Infirmary, Queen Alexandra Rd, Sunderland NE4 5TG, UK
- David H. Steel
- Sunderland Eye Infirmary, Queen Alexandra Rd, Sunderland NE4 5TG, UK
- Bioscience Institute, Newcastle University, Catherine Cookson Building, Newcastle upon Tyne NE2 4HH, UK
- Maged Habib
- Sunderland Eye Infirmary, Queen Alexandra Rd, Sunderland NE4 5TG, UK
- Bioscience Institute, Newcastle University, Catherine Cookson Building, Newcastle upon Tyne NE2 4HH, UK
- Boguslaw Obara
- School of Computing, Newcastle University, Urban Sciences Building, Newcastle upon Tyne NE4 5TG, UK
- Bioscience Institute, Newcastle University, Catherine Cookson Building, Newcastle upon Tyne NE2 4HH, UK

15
Hsu DG, Ballangrud Å, Prezelski K, Swinburne NC, Young R, Beal K, Deasy JO, Cerviño L, Aristophanous M. Automatically tracking brain metastases after stereotactic radiosurgery. Phys Imaging Radiat Oncol 2023; 27:100452. [PMID: 37720463] [PMCID: PMC10500025] [DOI: 10.1016/j.phro.2023.100452]
Abstract
Background and purpose Patients with brain metastases (BMs) are surviving longer and returning for multiple courses of stereotactic radiosurgery. BMs are monitored after radiation with follow-up magnetic resonance (MR) imaging every 2-3 months. This study investigated whether it is possible to automatically track BMs on longitudinal imaging and quantify the tumor response after radiotherapy. Methods The METRO process (MEtastasis Tracking with Repeated Observations) was developed to automatically process patient data and track BMs. A longitudinal intrapatient registration method for post-Gd T1 MR was conceived and validated on 20 patients. Detections and volumetric measurements of BMs were obtained from a deep learning model. BM tracking was validated on 32 separate patients by comparing results with manual measurements of BM response and radiologists' assessments of new BMs. Linear regression and residual analysis were used to assess accuracy in determining tumor response and size change. Results A total of 123 irradiated BMs and 38 new BMs were successfully tracked. Of these, 66 irradiated BMs were visible on follow-up imaging 3-9 months after radiotherapy. Comparing their longest diameter changes measured manually vs. METRO, the Pearson correlation coefficient was 0.88 (p < 0.001); the mean residual error was -8 ± 17%. The mean registration error was 1.5 ± 0.2 mm. Conclusions Automatic, longitudinal tracking of BMs using deep learning methods is feasible. In particular, the software system METRO fulfills a need to automatically track and quantify volumetric changes of BMs prior to, and in response to, radiation therapy.
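The agreement analysis above rests on two quantities: the Pearson correlation between automatic and manual diameter changes, and the mean residual (automatic minus manual). A plain-Python sketch with hypothetical percent changes for five lesions:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mean_residual(auto, manual):
    """Mean of (automatic - manual) percent size changes."""
    residuals = [a - m for a, m in zip(auto, manual)]
    return sum(residuals) / len(residuals)

# Hypothetical percent diameter changes for five lesions
manual = [-40.0, -25.0, -5.0, 10.0, 35.0]
metro  = [-45.0, -20.0, -8.0, 12.0, 30.0]
r = pearson_r(metro, manual)
bias = mean_residual(metro, manual)  # negative bias = automatic underestimates
```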
Affiliation(s)
- Dylan G. Hsu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Åse Ballangrud
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Kayla Prezelski
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Nathaniel C. Swinburne
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Robert Young
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Kathryn Beal
- Department of Radiation Oncology, Weill Cornell Medicine, New York, NY 10065, United States
- Joseph O. Deasy
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Laura Cerviño
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Michalis Aristophanous
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States

16
Sha X, Wang H, Sha H, Xie L, Zhou Q, Zhang W, Yin Y. Clinical target volume and organs at risk segmentation for rectal cancer radiotherapy using the Flex U-Net network. Front Oncol 2023; 13:1172424. [PMID: 37324028] [PMCID: PMC10266488] [DOI: 10.3389/fonc.2023.1172424]
Abstract
Purpose/Objectives The aim of this study was to improve the accuracy of clinical target volume (CTV) and organs at risk (OARs) segmentation for rectal cancer preoperative radiotherapy. Materials/Methods Computed tomography (CT) scans from 265 rectal cancer patients treated at our institution were collected to train and validate automatic contouring models. The regions of the CTV and OARs were delineated by experienced radiologists as the ground truth. We improved the conventional U-Net and proposed Flex U-Net, which uses a registration model to correct the noise caused by manual annotation, thus refining the performance of the automatic segmentation model. We then compared its performance with that of U-Net and V-Net. The Dice similarity coefficient (DSC), Hausdorff distance (HD), and average symmetric surface distance (ASSD) were calculated for quantitative evaluation. With a Wilcoxon signed-rank test, we found that the differences between our method and the baseline were statistically significant (P < 0.05). Results Our proposed framework achieved DSC values of 0.817 ± 0.071, 0.930 ± 0.076, 0.927 ± 0.03, and 0.925 ± 0.03 for the CTV, bladder, left femoral head, and right femoral head, respectively, versus baseline results of 0.803 ± 0.082, 0.917 ± 0.105, 0.923 ± 0.03, and 0.917 ± 0.03. Conclusion In conclusion, our proposed Flex U-Net enables satisfactory CTV and OAR segmentation for rectal cancer and yields superior performance compared with conventional methods. This method provides an automatic, fast, and consistent solution for CTV and OAR segmentation and exhibits potential to be widely applied for radiation therapy planning for a variety of cancers.
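The DSC used above for quantitative evaluation has the standard definition DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch on toy 2D binary masks (the masks are illustrative, not from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    pred, truth: same-shape 2D nested lists of 0/1 pixels.
    Returns 1.0 when both masks are empty (a common convention).
    """
    flat_p = [v for row in pred for v in row]
    flat_t = [v for row in truth for v in row]
    inter = sum(p and t for p, t in zip(flat_p, flat_t))  # |A ∩ B|
    total = sum(flat_p) + sum(flat_t)                     # |A| + |B|
    return 2 * inter / total if total else 1.0

# Toy 4x4 masks overlapping on 3 of the 4 foreground pixels each
pred  = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
truth = [[0, 1, 0, 0], [1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
dsc = dice_coefficient(pred, truth)  # 2 * 3 / (4 + 4) = 0.75
```

HD and ASSD are boundary-distance metrics and need a distance transform; in practice they are usually computed with a library rather than by hand.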
Affiliation(s)
- Xue Sha
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Hui Wang
- Department of Radiation Oncology, Qingdao Central Hospital, Qingdao, Shandong, China
- Hui Sha
- Hunan Cancer Hospital, Xiangya School of Medicine, Central South University, Changsha, Hunan, China
- Lu Xie
- Manteia Technologies Co., Ltd, Xiamen, Fujian, China
- Qichao Zhou
- Manteia Technologies Co., Ltd, Xiamen, Fujian, China
- Wei Zhang
- Manteia Technologies Co., Ltd, Xiamen, Fujian, China
- Yong Yin
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China

17
Ocaña-Tienda B, Pérez-Beteta J, Villanueva-García JD, Romero-Rosales JA, Molina-García D, Suter Y, Asenjo B, Albillo D, Ortiz de Mendivil A, Pérez-Romasanta LA, González-Del Portillo E, Llorente M, Carballo N, Nagib-Raya F, Vidal-Denis M, Luque B, Reyes M, Arana E, Pérez-García VM. A comprehensive dataset of annotated brain metastasis MR images with clinical and radiomic data. Sci Data 2023; 10:208. [PMID: 37059722] [PMCID: PMC10104872] [DOI: 10.1038/s41597-023-02123-0]
Abstract
Brain metastasis (BM) is one of the main complications of many cancers and the most frequent malignancy of the central nervous system. Imaging studies of BMs are routinely used for diagnosis, treatment planning, and follow-up. Artificial intelligence (AI) has great potential to provide automated tools to assist in the management of the disease. However, AI methods require large datasets for training and validation, and to date there has been just one publicly available imaging dataset, comprising 156 BMs. This paper publishes 637 high-resolution imaging studies of 75 patients harboring 260 BM lesions, together with their respective clinical data. It also includes semi-automatic segmentations of 593 BMs, including pre- and post-treatment T1-weighted cases, and a set of morphological and radiomic features for the segmented cases. This data-sharing initiative is expected to enable research into, and performance evaluation of, automatic BM detection, lesion segmentation, disease status evaluation, and treatment planning methods for BMs, as well as the development and validation of predictive and prognostic tools with clinical applicability.
Affiliation(s)
- Beatriz Ocaña-Tienda
- Mathematical Oncology Laboratory (MOLAB), University of Castilla-La Mancha, Ciudad Real, Spain
- Julián Pérez-Beteta
- Mathematical Oncology Laboratory (MOLAB), University of Castilla-La Mancha, Ciudad Real, Spain
- José A Romero-Rosales
- Mathematical Oncology Laboratory (MOLAB), University of Castilla-La Mancha, Ciudad Real, Spain
- David Molina-García
- Mathematical Oncology Laboratory (MOLAB), University of Castilla-La Mancha, Ciudad Real, Spain
- Yannick Suter
- Medical Image Analysis Group, ARTORG Research Center, Bern, Switzerland
- Beatriz Asenjo
- Radiology Department, Hospital Regional Universitario de Málaga, Málaga, Spain
- David Albillo
- Radiology Department, MD Anderson Cancer Center, Madrid, Spain
- Manuel Llorente
- Radiology Department, MD Anderson Cancer Center, Madrid, Spain
- Fátima Nagib-Raya
- Radiology Department, Hospital Regional Universitario de Málaga, Málaga, Spain
- Maria Vidal-Denis
- Radiology Department, Hospital Regional Universitario de Málaga, Málaga, Spain
- Belén Luque
- Mathematical Oncology Laboratory (MOLAB), University of Castilla-La Mancha, Ciudad Real, Spain
- Mauricio Reyes
- Medical Image Analysis Group, ARTORG Research Center, Bern, Switzerland
- Estanislao Arana
- Radiology Department, Fundación Instituto Valenciano de Oncología, Valencia, Spain
- Víctor M Pérez-García
- Mathematical Oncology Laboratory (MOLAB), University of Castilla-La Mancha, Ciudad Real, Spain
18
Dikici E, Nguyen XV, Takacs N, Prevedello LM. Prediction of model generalizability for unseen data: Methodology and case study in brain metastases detection in T1-Weighted contrast-enhanced 3D MRI. Comput Biol Med 2023; 159:106901. [PMID: 37068317 DOI: 10.1016/j.compbiomed.2023.106901] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 03/08/2023] [Accepted: 04/09/2023] [Indexed: 04/19/2023]
Abstract
BACKGROUND AND PURPOSE A medical AI system's generalizability describes the continuity of its performance acquired from varying geographic, historical, and methodologic settings. Previous literature on this topic has mostly focused on "how" to achieve high generalizability (e.g., via larger datasets, transfer learning, data augmentation, model regularization schemes), with limited success. Instead, we aim to understand "when" the generalizability is achieved: Our study presents a medical AI system that could estimate its generalizability status for unseen data on-the-fly. MATERIALS AND METHODS We introduce a latent space mapping (LSM) approach utilizing Fréchet distance loss to force the underlying training data distribution into a multivariate normal distribution. During the deployment, a given test data's LSM distribution is processed to detect its deviation from the forced distribution; hence, the AI system could predict its generalizability status for any previously unseen data set. If low model generalizability is detected, then the user is informed by a warning message integrated into a sample deployment workflow. While the approach is applicable for most classification deep neural networks (DNNs), we demonstrate its application to a brain metastases (BM) detector for T1-weighted contrast-enhanced (T1c) 3D MRI. The BM detection model was trained using 175 T1c studies acquired internally (from the authors' institution) and tested using (1) 42 internally acquired exams and (2) 72 externally acquired exams from the publicly distributed Brain Mets dataset provided by the Stanford University School of Medicine. Generalizability scores, false positive (FP) rates, and sensitivities of the BM detector were computed for the test datasets. 
RESULTS AND CONCLUSION The model predicted its generalizability to be low for 31% of the testing data (i.e., two of the internally and 33 of the externally acquired exams), where it produced (1) ∼13.5 false positives (FPs) at 76.1% BM detection sensitivity for the low-generalizability group and (2) ∼10.5 FPs at 89.2% BM detection sensitivity for the high-generalizability group. These results suggest that the proposed formulation enables a model to predict its generalizability for unseen data.
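The Fréchet distance underlying such an LSM formulation has a closed form between two multivariate normal distributions: d² = ‖μ₁ − μ₂‖² + Tr(Σ₁ + Σ₂ − 2(Σ₂^{1/2} Σ₁ Σ₂^{1/2})^{1/2}). A minimal numpy-only sketch of that quantity follows; it is not the authors' implementation, and the helper names are illustrative:

```python
import numpy as np

def _sqrtm_psd(a):
    # Matrix square root of a symmetric positive semi-definite matrix,
    # via eigendecomposition (eigenvalues clipped at zero for stability).
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)
    return (v * np.sqrt(w)) @ v.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2)."""
    diff = np.asarray(mu1) - np.asarray(mu2)
    s2h = _sqrtm_psd(np.asarray(sigma2))
    covmean = _sqrtm_psd(s2h @ np.asarray(sigma1) @ s2h)
    d2 = diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
    return float(np.sqrt(max(d2, 0.0)))
```

For identical distributions the distance is zero; for two unit-covariance Gaussians it reduces to the Euclidean distance between the means.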
Affiliation(s)
- Engin Dikici
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA
- Xuan V Nguyen
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA
- Noah Takacs
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA
- Luciano M Prevedello
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA
19
Wang JY, Qu V, Hui C, Sandhu N, Mendoza MG, Panjwani N, Chang YC, Liang CH, Lu JT, Wang L, Kovalchuk N, Gensheimer MF, Soltys SG, Pollom EL. Stratified assessment of an FDA-cleared deep learning algorithm for automated detection and contouring of metastatic brain tumors in stereotactic radiosurgery. Radiat Oncol 2023; 18:61. [PMID: 37016416 PMCID: PMC10074777 DOI: 10.1186/s13014-023-02246-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Accepted: 03/14/2023] [Indexed: 04/06/2023] Open
Abstract
PURPOSE Artificial intelligence-based tools can be leveraged to improve detection and segmentation of brain metastases for stereotactic radiosurgery (SRS). VBrain by Vysioneer Inc. is a deep learning algorithm with recent FDA clearance to assist in brain tumor contouring. We aimed to assess the performance of this tool across various demographic and clinical characteristics among patients with brain metastases treated with SRS. MATERIALS AND METHODS We randomly selected 100 patients with brain metastases who underwent initial SRS on the CyberKnife from 2017 to 2020 at a single institution. Cases with resection cavities were excluded from the analysis. Computed tomography (CT) and axial T1-weighted post-contrast magnetic resonance (MR) image data were extracted for each patient and uploaded to VBrain. A brain metastasis was considered "detected" when the VBrain-predicted contours overlapped with the corresponding physician contours ("ground-truth" contours). We evaluated the performance of VBrain against ground-truth contours using the following metrics: lesion-wise Dice similarity coefficient (DSC), lesion-wise average Hausdorff distance (AVD), false positive count (FP), and lesion-wise sensitivity (%). Kruskal-Wallis tests were performed to assess the relationships between patient characteristics, including sex, race, primary histology, age, and size and number of brain metastases, and performance metrics such as DSC, AVD, FP, and sensitivity. RESULTS We analyzed 100 patients with 435 intact brain metastases treated with SRS. Our cohort consisted of patients with a median number of 2 brain metastases (range: 1 to 52), a median age of 69 (range: 19 to 91), and 50% male and 50% female patients.
The primary site breakdown was 56% lung, 10% melanoma, 9% breast, 8% gynecological, 5% renal, 4% gastrointestinal, 2% sarcoma, and 6% other, while the race breakdown was 60% White, 18% Asian, 3% Black/African American, 2% Native Hawaiian or other Pacific Islander, and 17% other/unknown/not reported. The median tumor size was 0.112 c.c. (range: 0.010-26.475 c.c.). We found the mean lesion-wise DSC to be 0.723, the mean lesion-wise AVD to be 7.34% of lesion size (0.704 mm), the mean FP count to be 0.72 tumors per case, and the lesion-wise sensitivity to be 89.30% for all lesions. Moreover, mean sensitivity was found to be 99.07%, 97.59%, and 96.23% for lesions with diameter equal to or greater than 10 mm, 7.5 mm, and 5 mm, respectively. No other significant differences in performance metrics were observed across demographic or clinical characteristic groups. CONCLUSION In this study, a commercial deep learning algorithm showed promising results in segmenting brain metastases, with 96.23% sensitivity for metastases with diameters of 5 mm or larger. As the software is an assistive AI, future integration of VBrain into the clinical workflow can provide further clinical and research insights.
Affiliation(s)
- Jen-Yeu Wang
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
| | - Vera Qu
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
| | - Caressa Hui
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
| | - Navjot Sandhu
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
| | - Maria G Mendoza
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
| | - Neil Panjwani
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
| | | | | | | | - Lei Wang
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
| | - Nataliya Kovalchuk
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
| | - Michael F Gensheimer
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
| | - Scott G Soltys
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
| | - Erqi L Pollom
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA.
| |
20
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. ARXIV 2023:arXiv:2303.11378v2. [PMID: 36994167 PMCID: PMC10055493] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 03/31/2023]
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise and adaptive approach to treatment planning. Deep learning applications which augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on underlying methods. Studies are further categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
| | - Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
| | - Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
| | - Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
| | - Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
| | - Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
| | - Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
| |
21
[Robotics and computer-assisted procedures in cranial neurosurgery]. Chirurgie (Heidelberg, Germany) 2023; 94:299-306. [PMID: 36629923 DOI: 10.1007/s00104-022-01783-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Accepted: 11/21/2022] [Indexed: 01/12/2023]
Abstract
BACKGROUND Medical technical innovations over the last decade have made operations in highly sensitive regions of the brain much safer. OBJECTIVE Presentation of how far computer assistance and robotics have become incorporated into clinical neurosurgery. MATERIAL AND METHOD Evaluation of the scientific literature and analysis of the certification status of the corresponding medical devices. RESULTS The rapid development of computer technology and the switch to digital imaging have led to the widespread introduction of neurosurgical planning software and intraoperative neuronavigation. In the field of robotics, penetration into clinical neurosurgery is currently still largely limited to the automatic setting of trajectories. CONCLUSION The digitalization of imaging has fundamentally transformed neurosurgery. In cranial neurosurgery, computer-assisted procedures can now be distinguished from non-computer-assisted procedures in only a handful of cases. In the coming years, important innovations in clinical implementation can be expected in the field of robotics.
22
Yu H, Zhang Z, Xia W, Liu Y, Liu L, Luo W, Zhou J, Zhang Y. DeSeg: auto detector-based segmentation for brain metastases. Phys Med Biol 2023; 68. [PMID: 36535028 DOI: 10.1088/1361-6560/acace7] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
Delineation of brain metastases (BMs) is a paramount step in stereotactic radiosurgery treatment. Clinical practice has specific expectations of BM auto-delineation: the method is supposed to avoid missing small lesions and to yield accurate contours for large lesions. In this study, we propose a novel coarse-to-fine framework, named detector-based segmentation (DeSeg), to incorporate object-level detection into pixel-wise segmentation so as to meet this clinical demand. DeSeg consists of three components: a center-point-guided single-shot detector to localize potential lesion regions, a multi-head U-Net segmentation model to refine contours, and a data cascade unit to connect both tasks smoothly. Performance on tiny lesions is measured by object-based sensitivity and positive predictive value (PPV), while that on large lesions is quantified by the Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95% Hausdorff distance (HD95). Besides, computational complexity is also considered to study the potential of the method for real-time processing. This study retrospectively collected 240 BM patients with gadolinium-injected contrast-enhanced T1-weighted magnetic resonance imaging (T1c-MRI), randomly split into training, validation, and testing datasets (192, 24, and 24 scans, respectively). The lesions in the testing dataset were further divided into two groups based on volume (small S: ≤ 1.5 cc, N = 88; large L: > 1.5 cc, N = 15). On average, DeSeg yielded a sensitivity of 0.91 and a PPV of 0.77 on the S group, and a DSC of 0.86, an ASSD of 0.76 mm, and an HD95 of 2.31 mm on the L group. The results indicated that DeSeg achieved leading sensitivity and PPV for tiny lesions as well as segmentation metrics for large ones. In our clinical validation, DeSeg showed competitive segmentation performance while keeping a faster processing speed compared with existing 3D models.
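The object-level and voxel-level metrics named in the abstract can be sketched as follows. The exact rule DeSeg uses to match predicted lesions to ground-truth lesions is not given in the abstract, so the minimum-overlap criterion and the function names below are illustrative assumptions:

```python
import numpy as np

def dice(pred, gt):
    """Voxel-wise Dice similarity coefficient of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def object_sensitivity_ppv(pred_lesions, gt_lesions, min_overlap=1):
    """Object-level sensitivity and PPV given lists of binary masks,
    one mask per predicted / ground-truth lesion. A lesion counts as
    detected when a prediction overlaps it in >= min_overlap voxels
    (an assumed matching rule, not necessarily DeSeg's)."""
    tp_gt = sum(any(np.logical_and(g, p).sum() >= min_overlap
                    for p in pred_lesions) for g in gt_lesions)
    tp_pred = sum(any(np.logical_and(p, g).sum() >= min_overlap
                      for g in gt_lesions) for p in pred_lesions)
    sens = tp_gt / len(gt_lesions) if gt_lesions else 1.0
    ppv = tp_pred / len(pred_lesions) if pred_lesions else 1.0
    return sens, ppv
```

A spurious predicted lesion lowers object-level PPV without affecting sensitivity, which is why both are reported for the small-lesion group.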
Affiliation(s)
- Hui Yu
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
| | - Zhongzhou Zhang
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
| | - Wenjun Xia
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
| | - Yan Liu
- College of Electrical Engineering, Sichuan University, Chengdu, 610065, People's Republic of China
| | - Lunxin Liu
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, 610044, People's Republic of China
| | - Wuman Luo
- School of Applied Sciences, Macao Polytechnic University, Macao, 999078, People's Republic of China
| | - Jiliu Zhou
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
| | - Yi Zhang
- School of Cyber Science and Engineering, Sichuan University, Chengdu, 610065, People's Republic of China
| |
23
Zhou Z. Editorial for "Automated Segmentation of Brain Metastases on T1-Weighted MRI Using Convolutional Neural Network: Impact of Using Volume Aware Loss and Sampling Strategy". J Magn Reson Imaging 2022; 56:1899-1900. [PMID: 35678418 DOI: 10.1002/jmri.28272] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Accepted: 04/14/2022] [Indexed: 01/05/2023] Open
Affiliation(s)
- Zijian Zhou
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
24
You S, Reyes M. Influence of contrast and texture based image modifications on the performance and attention shift of U-Net models for brain tissue segmentation. Front Neuroimaging 2022; 1:1012639. [PMID: 37555149 PMCID: PMC10406260 DOI: 10.3389/fnimg.2022.1012639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 10/12/2022] [Indexed: 08/10/2023]
Abstract
Contrast and texture modifications applied during training or test time have recently shown promising results for enhancing the generalization performance of deep learning segmentation methods in medical image analysis. However, a deeper understanding of this phenomenon has been lacking. In this study, we investigated this phenomenon in a controlled experimental setting, using datasets from the Human Connectome Project and a large set of simulated MR protocols, in order to mitigate data confounders and investigate possible explanations as to why model performance changes when different levels of contrast- and texture-based modifications are applied. Our experiments confirm previous findings regarding the improved performance of models subjected to contrast and texture modifications employed during training and/or testing time, but further show the interplay when these operations are combined, as well as the regimes of model improvement/worsening across scanning parameters. Furthermore, our findings demonstrate a spatial attention shift phenomenon of trained models, occurring at different levels of model performance and varying in relation to the type of applied image modification.
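As a minimal sketch of the kind of contrast modification studied here: a random gamma transform applied at training or test time. The specific transform and parameter range used in the paper are not given in this abstract, so the gamma range and function name below are arbitrary illustrative choices:

```python
import numpy as np

def random_gamma_contrast(img, rng, gamma_range=(0.7, 1.5)):
    """Contrast modification for augmentation: rescale intensities to
    [0, 1], apply a random gamma, then restore the original range.
    gamma_range is an illustrative choice, not taken from the paper."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:  # constant image: nothing to modify
        return img.copy()
    norm = (img - lo) / (hi - lo)
    gamma = rng.uniform(*gamma_range)
    return norm ** gamma * (hi - lo) + lo
```

Because the transform is monotonic and range-preserving, it changes mid-range contrast while leaving the image's minimum and maximum intensities fixed.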
Affiliation(s)
- Suhang You
- Medical Image Analysis Group, ARTORG, Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
25
Lefevre E, Bouilhol E, Chauvière A, Souleyreau W, Derieppe MA, Trotier AJ, Miraux S, Bikfalvi A, Ribot EJ, Nikolski M. Deep learning model for automatic segmentation of lungs and pulmonary metastasis in small animal MR images. Front Bioinform 2022; 2:999700. [PMID: 36304332 PMCID: PMC9580845 DOI: 10.3389/fbinf.2022.999700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Accepted: 09/26/2022] [Indexed: 12/03/2022] Open
Abstract
Lungs are the most frequent site of metastasis growth. The number and size of pulmonary metastases obtained from MR imaging data are important criteria for assessing the efficacy of new drugs in preclinical models. While efficient solutions for both MR imaging and the downstream automatic segmentation have been proposed for human patients, both MRI lung imaging and segmentation in preclinical animal models remain challenging, due to physiological motion (respiratory and cardiac movements), to the low amount of protons in this organ, and to the particular challenge of precise segmentation of metastases. As a consequence, post-mortem analysis is currently required to obtain information on metastatic volume. In this work, we have developed a complete methodological pipeline for automated analysis of lungs and metastases in mice, consisting of an MR sequence for image acquisition and a deep learning method for automatic segmentation of both lungs and metastases. On the one hand, we optimized an MR sequence for mouse lung imaging with high contrast for high detection sensitivity. On the other hand, we developed DeepMeta, a multiclass U-Net 3+ deep learning model, to automatically segment the images. To assess whether the proposed deep learning pipeline is able to provide an accurate segmentation of both lungs and pulmonary metastases, we longitudinally imaged mice with fast- and slow-growing metastases. Fifty-five BALB/c mice were injected with two different derivatives of renal carcinoma cells. Mice were imaged with an SG-bSSFP (self-gated balanced steady-state free precession) sequence at different time points after the injection of cancer cells. Both lung and metastasis segmentations were manually performed by experts. DeepMeta was trained to perform lung and metastasis segmentation based on the resulting ground-truth annotations.
Volumes of lungs and of pulmonary metastases, as well as the number of metastases per mouse, were measured on a separate test dataset of MR images. Thanks to the SG method, the 3D bSSFP images of lungs were artifact-free, enabling the downstream detection and serial follow-up of metastases. Moreover, both lung and metastasis segmentation was accurately performed by DeepMeta as soon as lesions reached a volume of ∼0.02 mm³. Thus, we were able to distinguish two groups of mice in terms of number and volume of pulmonary metastases, as well as in terms of slow versus fast patterns of metastasis growth. We have shown that our methodology, combining SG-bSSFP with deep learning, enables processing of the whole animal lungs and is thus a viable alternative to histology alone.
Affiliation(s)
- Edgar Lefevre
- Bordeaux Bioinformatics Center, University of Bordeaux, Bordeaux, France,*Correspondence: Edgar Lefevre, ; Macha Nikolski,
| | - Emmanuel Bouilhol
- Bordeaux Bioinformatics Center, University of Bordeaux, Bordeaux, France,IBGC, CNRS, University of Bordeaux, Bordeaux, France
| | - Antoine Chauvière
- Bordeaux Bioinformatics Center, University of Bordeaux, Bordeaux, France
| | | | | | - Aurélien J. Trotier
- Centre de Résonance Magnétique des Systèmes Biologiques, CNRS, University of Bordeaux, Bordeaux, France
| | - Sylvain Miraux
- Centre de Résonance Magnétique des Systèmes Biologiques, CNRS, University of Bordeaux, Bordeaux, France
| | | | - Emeline J. Ribot
- Centre de Résonance Magnétique des Systèmes Biologiques, CNRS, University of Bordeaux, Bordeaux, France
| | - Macha Nikolski
- Bordeaux Bioinformatics Center, University of Bordeaux, Bordeaux, France,IBGC, CNRS, University of Bordeaux, Bordeaux, France,*Correspondence: Edgar Lefevre, ; Macha Nikolski,
| |
26
Dikici E, Nguyen XV, Bigelow M, Ryu JL, Prevedello LM. Advancing Brain Metastases Detection in T1-Weighted Contrast-Enhanced 3D MRI Using Noisy Student-Based Training. Diagnostics (Basel) 2022; 12:2023. [PMID: 36010373 PMCID: PMC9407228 DOI: 10.3390/diagnostics12082023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2022] [Revised: 08/17/2022] [Accepted: 08/19/2022] [Indexed: 11/17/2022] Open
Abstract
The detection of brain metastases (BM) in their early stages could have a positive impact on the outcome of cancer patients. The authors previously developed a framework for detecting small BM (with diameters of <15 mm) in T1-weighted contrast-enhanced 3D magnetic resonance images (T1c). This study aimed to advance the framework with a noisy-student-based self-training strategy to use a large corpus of unlabeled T1c data. Accordingly, a sensitivity-based noisy-student learning approach was formulated to provide high BM detection sensitivity with a reduced count of false positives. This paper (1) proposes student/teacher convolutional neural network architectures, (2) presents data and model noising mechanisms, and (3) introduces a novel pseudo-labeling strategy factoring in the sensitivity constraint. The evaluation was performed using 217 labeled and 1247 unlabeled exams via two-fold cross-validation. The framework utilizing only the labeled exams produced 9.23 false positives for 90% BM detection sensitivity, whereas the one using the introduced learning strategy led to ~9% reduction in false detections (i.e., 8.44). Significant reductions in false positives (>10%) were also observed in reduced labeled data scenarios (using 50% and 75% of labeled data). The results suggest that the introduced strategy could be utilized in existing medical detection applications with access to unlabeled datasets to elevate their performances.
Affiliation(s)
- Engin Dikici
- Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
| | - Xuan V. Nguyen
- Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
| | - Matthew Bigelow
- Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
| | | | - Luciano M. Prevedello
- Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
| |
27
Bouget D, Pedersen A, Jakola AS, Kavouridis V, Emblem KE, Eijgelaar RS, Kommers I, Ardon H, Barkhof F, Bello L, Berger MS, Conti Nibali M, Furtner J, Hervey-Jumper S, Idema AJS, Kiesel B, Kloet A, Mandonnet E, Müller DMJ, Robe PA, Rossi M, Sciortino T, Van den Brink WA, Wagemakers M, Widhalm G, Witte MG, Zwinderman AH, De Witt Hamer PC, Solheim O, Reinertsen I. Preoperative Brain Tumor Imaging: Models and Software for Segmentation and Standardized Reporting. Front Neurol 2022; 13:932219. [PMID: 35968292 PMCID: PMC9364874 DOI: 10.3389/fneur.2022.932219] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 06/23/2022] [Indexed: 11/23/2022] Open
Abstract
For patients suffering from brain tumors, prognosis estimation and treatment decisions are made by a multidisciplinary team based on a set of preoperative MR scans. Currently, the lack of standardized and automatic methods for tumor detection and for the generation of clinical reports incorporating a wide range of tumor characteristics represents a major hurdle. In this study, we investigate the most frequently occurring brain tumor types: glioblastomas, lower-grade gliomas, meningiomas, and metastases, through four cohorts of up to 4,000 patients. Tumor segmentation models were trained using the AGU-Net architecture with different preprocessing steps and protocols. Segmentation performance was assessed in depth using a wide range of voxel-wise and patient-wise metrics covering volume, distance, and probabilistic aspects. Finally, two software solutions have been developed, enabling easy use of the trained models and standardized generation of clinical reports: Raidionics and Raidionics-Slicer. Segmentation performance was quite homogeneous across the four brain tumor types, with an average true-positive Dice ranging between 80 and 90%, patient-wise recall between 88 and 98%, and patient-wise precision around 95%. In conjunction with Dice, the most relevant other metrics identified were the relative absolute volume difference, the variation of information, and the Hausdorff, Mahalanobis, and object average symmetric surface distances. With our Raidionics software, running on a desktop computer with CPU support, tumor segmentation can be performed in 16-54 s depending on the dimensions of the MRI volume. For the generation of a standardized clinical report, including the tumor segmentation and feature computation, 5-15 min are necessary. All trained models have been made open-access, together with the source code for both software solutions and for validation metrics computation.
In the future, a method to convert results from a set of metrics into a final single score would be highly desirable for easier ranking across trained models. In addition, an automatic classification of the brain tumor type would be necessary to replace manual user input. Finally, the inclusion of post-operative segmentation in both software solutions will be key for generating complete post-operative standardized clinical reports.
Affiliation(s)
- David Bouget
- Department of Health Research, SINTEF Digital, Trondheim, Norway
| | - André Pedersen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology, Trondheim, Norway
- Clinic of Surgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
| | - Asgeir S. Jakola
- Department of Neurosurgery, Sahlgrenska University Hospital, Gothenburg, Sweden
- Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
| | - Vasileios Kavouridis
- Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
| | - Kyrre E. Emblem
- Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway
| | - Roelant S. Eijgelaar
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
| | - Ivar Kommers
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
| | - Hilko Ardon
- Department of Neurosurgery, Twee Steden Hospital, Tilburg, Netherlands
| | - Frederik Barkhof
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Institutes of Neurology and Healthcare Engineering, University College London, London, United Kingdom
| | - Lorenzo Bello
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
| | - Mitchel S. Berger
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States
| | - Marco Conti Nibali
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
| | - Julia Furtner
- Department of Biomedical Imaging and Image-Guided Therapy, Medical University Vienna, Wien, Austria
| | - Shawn Hervey-Jumper
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States
| | | | - Barbara Kiesel
- Department of Neurosurgery, Medical University Vienna, Wien, Austria
| | - Alfred Kloet
- Department of Neurosurgery, Haaglanden Medical Center, The Hague, Netherlands
| | | | - Domenique M. J. Müller
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
| | - Pierre A. Robe
- Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, Netherlands
| | - Marco Rossi
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
| | - Tommaso Sciortino
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
| | | | - Michiel Wagemakers
- Department of Neurosurgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| | - Georg Widhalm
- Department of Neurosurgery, Medical University Vienna, Wien, Austria
| | - Marnix G. Witte
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
| | - Aeilko H. Zwinderman
- Department of Clinical Epidemiology and Biostatistics, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, Netherlands
| | - Philip C. De Witt Hamer
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
| | - Ole Solheim
- Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway
| | - Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
| |
28. Huang Y, Bert C, Sommer P, Frey B, Gaipl U, Distel LV, Weissmann T, Uder M, Schmidt MA, Dörfler A, Maier A, Fietkau R, Putz F. Deep learning for brain metastasis detection and segmentation in longitudinal MRI data. Med Phys 2022; 49:5773-5786. PMID: 35833351. DOI: 10.1002/mp.15863.
Abstract
PURPOSE Brain metastases occur frequently in patients with metastatic cancer. Early and accurate detection of brain metastases is essential for treatment planning and prognosis in radiation therapy. Because of their small size and relatively low contrast, small brain metastases are very difficult to detect manually. With the recent development of deep learning technologies, several researchers have reported promising results in automated brain metastasis detection. However, detection sensitivity is still not high enough for tiny brain metastases, and integration into clinical practice is challenging with regard to differentiating true metastases from false positives. METHODS The DeepMedic network with the binary cross-entropy (BCE) loss is used as the baseline method. To improve brain metastasis detection performance, a custom detection loss called volume-level sensitivity-specificity (VSS) is proposed, which rates metastasis detection sensitivity and specificity at a (sub-)volume level. As sensitivity and precision are always a trade-off, either high sensitivity or high precision can be achieved for brain metastasis detection by adjusting the weights in the VSS loss, without a decline in the Dice similarity coefficient for segmented metastases. To reduce metastasis-like structures being detected as false positives, a temporal prior volume is proposed as an additional input to DeepMedic; the modified network is called DeepMedic+ for distinction. By combining a high-sensitivity VSS loss and a high-specificity loss for DeepMedic+, the majority of true positive metastases are confirmed with high specificity, while additional metastasis candidates in each patient are marked with high sensitivity for detailed expert evaluation. RESULTS The proposed VSS loss improves the sensitivity of brain metastasis detection from 85.3% for DeepMedic with BCE to 97.5% for DeepMedic with VSS. Alternatively, the precision is improved from 69.1% for DeepMedic with BCE to 98.7% for DeepMedic with VSS. Comparing DeepMedic+ with DeepMedic under the same VSS loss, false positive metastases are reduced by 44.4% in the high-sensitivity model, and the precision reaches 99.6% for the high-specificity model. The mean Dice coefficient for all metastases is about 0.81. With the ensemble of the high-sensitivity and high-specificity models, on average only 1.5 false positive metastases per patient need further checking, while the majority of true positive metastases are confirmed. CONCLUSIONS The proposed VSS loss and temporal prior improve brain metastasis detection sensitivity and precision. Ensemble learning is able to distinguish high-confidence true positive metastases from metastasis candidates that require special expert review or further follow-up, fitting the requirements of expert support in real clinical practice. This facilitates metastasis detection and segmentation for neuroradiologists in diagnostic applications and for radiation oncologists in therapeutic applications.
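The core idea of the VSS loss — rating sensitivity and specificity at the (sub-)volume level and trading them off via a weight — can be made concrete. Below is a minimal NumPy sketch assuming a soft (probability-weighted) formulation; the function name `vss_loss`, the linear weighting, and the default weight are illustrative, not the paper's exact implementation.

```python
import numpy as np

def vss_loss(pred, target, w_sens=0.95, eps=1e-7):
    """Sketch of a volume-level sensitivity-specificity (VSS) style loss.

    pred   : predicted foreground probabilities for one (sub-)volume, in [0, 1]
    target : binary ground-truth mask of the same shape
    w_sens : weight on the sensitivity term; raising it favours recall
             (fewer missed metastases), lowering it favours specificity.
    """
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()

    fn = np.sum((1.0 - pred) * target)       # soft false negatives (missed lesion voxels)
    fp = np.sum(pred * (1.0 - target))       # soft false positives
    sens_err = fn / (np.sum(target) + eps)           # 1 - soft sensitivity
    spec_err = fp / (np.sum(1.0 - target) + eps)     # 1 - soft specificity

    # Weighted trade-off: w_sens near 1 punishes missed lesions,
    # w_sens near 0 punishes spurious detections.
    return w_sens * sens_err + (1.0 - w_sens) * spec_err
```

With `w_sens` near 1 the loss punishes missed lesions almost exclusively (the high-sensitivity setting); with `w_sens` near 0 it punishes false detections instead (the high-specificity setting), mirroring the two models the authors combine in their ensemble.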
Affiliation(s)
- Yixing Huang
  - Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Christoph Bert
  - Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Philipp Sommer
  - Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Benjamin Frey
  - Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Udo Gaipl
  - Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Luitpold V Distel
  - Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Thomas Weissmann
  - Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Michael Uder
  - Institute of Radiology, Universitätsklinikum Erlangen, FAU, Erlangen, Germany
- Manuel A Schmidt
  - Department of Neuroradiology, Universitätsklinikum Erlangen, FAU, Erlangen, Germany
- Arnd Dörfler
  - Department of Neuroradiology, Universitätsklinikum Erlangen, FAU, Erlangen, Germany
- Rainer Fietkau
  - Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Florian Putz
  - Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
29. A Survey on Deep Learning for Precision Oncology. Diagnostics (Basel) 2022; 12:diagnostics12061489. PMID: 35741298. PMCID: PMC9222056. DOI: 10.3390/diagnostics12061489.
Abstract
Precision oncology, which ensures optimized cancer treatment tailored to the unique biology of a patient’s disease, has developed rapidly and is of great clinical importance. Deep learning has become the main method for precision oncology. This paper summarizes recent deep-learning approaches relevant to precision oncology, reviewing over 150 articles from the last six years. First, we survey the deep-learning approaches categorized by precision oncology task, including the estimation of dose distribution for treatment planning, survival analysis and risk estimation after treatment, prediction of treatment response, and patient selection for treatment planning. Second, we provide an overview of the studies per anatomical area, including the brain, bladder, breast, bone, cervix, esophagus, stomach, head and neck, kidneys, liver, lung, pancreas, pelvis, prostate, and rectum. Finally, we highlight the challenges and discuss potential solutions for future research directions.
30. Kouli O, Hassane A, Badran D, Kouli T, Hossain-Ibrahim K, Steele JD. Automated brain tumour identification using magnetic resonance imaging: a systematic review and meta-analysis. Neurooncol Adv 2022; 4:vdac081. PMID: 35769411. PMCID: PMC9234754. DOI: 10.1093/noajnl/vdac081.
Abstract
Background Automated brain tumor identification facilitates diagnosis and treatment planning. We evaluate the performance of traditional machine learning (TML) and deep learning (DL) in brain tumor detection and segmentation using MRI. Methods A systematic literature search from January 2000 to May 8, 2021 was conducted. Study quality was assessed using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Detection meta-analysis was performed using a unified hierarchical model. Segmentation studies were evaluated using a random-effects model. Sensitivity analysis was performed for externally validated studies. Results Of 224 studies included in the systematic review, 46 segmentation and 38 detection studies were eligible for meta-analysis. In detection, DL achieved a lower false positive rate than TML: 0.018 (95% CI, 0.011 to 0.028) versus 0.048 (0.032 to 0.072) (P < .001). In segmentation, DL had a higher Dice similarity coefficient (DSC), particularly for tumor core (TC): 0.80 (0.77 to 0.83) versus 0.63 (0.56 to 0.71) (P < .001), persisting on sensitivity analysis. Both manual and automated whole tumor (WT) segmentation had “good” (DSC ≥ 0.70) performance. Manual TC segmentation was superior to automated segmentation: 0.78 (0.69 to 0.86) versus 0.64 (0.53 to 0.74) (P = .014). Only 30% of studies reported external validation. Conclusions The comparable performance of automated and manual WT segmentation supports its integration into clinical practice. However, the superiority of manual sub-compartmental segmentation highlights the need for further development of automated methods in this area. Compared to TML, DL provided superior performance for detection and sub-compartmental segmentation. Improvements in the quality and design of studies, including external validation, are required for the interpretability and generalizability of automated models.
Affiliation(s)
- Omar Kouli
  - School of Medicine, University of Dundee, Dundee, UK
  - NHS Greater Glasgow and Clyde, Dundee, UK
- Tasnim Kouli
  - School of Medicine, University of Dundee, Dundee, UK
- J Douglas Steele
  - Division of Imaging Science and Technology, School of Medicine, University of Dundee, UK
31. Park JE. Artificial Intelligence in Neuro-Oncologic Imaging: A Brief Review for Clinical Use Cases and Future Perspectives. Brain Tumor Res Treat 2022; 10:69-75. PMID: 35545825. PMCID: PMC9098975. DOI: 10.14791/btrt.2021.0031.
Abstract
Artificial intelligence (AI) techniques, both deep learning end-to-end approaches and radiomics with machine learning, have been developed for various imaging-based tasks in neuro-oncology. In this brief review, use cases of AI in neuro-oncologic imaging are summarized: image quality improvement, metastasis detection, radiogenomics, and treatment response monitoring. We then give a brief overview of generative adversarial networks and the potential utility of synthetic images as new input data for deep learning algorithms in imaging-based and image translation tasks. Lastly, we highlight the importance of cohorts and clinical trials as true validation of the clinical utility of AI in neuro-oncologic imaging.
Affiliation(s)
- Ji Eun Park
  - Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
32. Das S, Nayak GK, Saba L, Kalra M, Suri JS, Saxena S. An artificial intelligence framework and its bias for brain tumor segmentation: A narrative review. Comput Biol Med 2022; 143:105273. PMID: 35228172. DOI: 10.1016/j.compbiomed.2022.105273.
Abstract
BACKGROUND Artificial intelligence (AI) has become a prominent technique for medical diagnosis and plays an essential role in detecting brain tumors. Although AI-based models are widely used in brain lesion segmentation (BLS), understanding their effectiveness is challenging due to their complexity and diversity. Several reviews on brain tumor segmentation are available, but none of them describe a link between the threats posed by risk-of-bias (RoB) in AI and its architectures. In our review, we focus on linking RoB and the different AI-based architectural clusters in popular DL frameworks. Further, given the variance in these designs and input data types in medical imaging, a narrative review considering all facets of BLS is necessary. APPROACH The proposed study uses a PRISMA strategy based on 75 relevant studies found by searching PubMed, Scopus, and Google Scholar. Based on architectural evolution, DL studies were categorized into four classes: convolutional neural network (CNN)-based, encoder-decoder (ED)-based, transfer learning (TL)-based, and hybrid DL (HDL)-based architectures. These studies were then analyzed considering 32 AI attributes, with clusters including AI architecture, imaging modalities, hyper-parameters, performance evaluation metrics, and clinical evaluation. After the studies were scored on all attributes, a composite score was computed, normalized, and ranked. A bias cutoff (AP(ai)Bias 1.0, AtheroPoint, Roseville, CA, USA) was then established to detect low-, moderate-, and high-bias studies. CONCLUSION The four classes of architectures, from best- to worst-performing, are TL > ED > CNN > HDL. ED-based models had the lowest AI bias for BLS. This study presents a set of three primary and six secondary recommendations for lowering the RoB.
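The scoring pipeline described above (score 32 attributes, compute a normalized composite, apply a bias cutoff) can be illustrated with a small sketch. The equal weighting of attributes and the cutoff values here are hypothetical, since the abstract does not disclose the exact AP(ai)Bias 1.0 formula.

```python
def bias_grade(attr_scores, max_per_attr=1.0, cuts=(0.5, 0.75)):
    """Sketch of composite risk-of-bias grading: sum the per-attribute
    scores, normalize by the maximum attainable total, then bin into
    high / moderate / low bias (higher composite = lower bias).
    Cutoff values and weighting are hypothetical."""
    norm = sum(attr_scores) / (max_per_attr * len(attr_scores))
    if norm >= cuts[1]:
        return "low"
    if norm >= cuts[0]:
        return "moderate"
    return "high"
```

A study scoring well on most of the 32 attributes would land in the low-bias bin, while one scoring poorly overall would be flagged as high-bias for the recommendations discussed in the conclusion.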
Affiliation(s)
- Suchismita Das
  - CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
  - CSE Department, KIIT Deemed to be University, Bhubaneswar, Odisha, India
- G K Nayak
  - CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
- Luca Saba
  - Department of Radiology, AOU, University of Cagliari, Cagliari, Italy
- Mannudeep Kalra
  - Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA, USA
- Jasjit S Suri
  - Stroke Diagnostic and Monitoring Division, AtheroPoint™ LLC, Roseville, CA, USA
- Sanjay Saxena
  - CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
33. Vobugari N, Raja V, Sethi U, Gandhi K, Raja K, Surani SR. Advancements in Oncology with Artificial Intelligence-A Review Article. Cancers (Basel) 2022; 14:1349. PMID: 35267657. PMCID: PMC8909088. DOI: 10.3390/cancers14051349.
Abstract
Well-trained machine learning (ML) and artificial intelligence (AI) systems can provide clinicians with therapeutic assistance, potentially increasing efficiency and improving efficacy. ML has demonstrated high accuracy in oncology-related diagnostic imaging, including screening mammography interpretation, colon polyp detection, and glioma classification and grading. By utilizing ML techniques, the manual steps of detecting and segmenting lesions are greatly reduced. ML-based tumor imaging analysis is independent of the experience level of the evaluating physician, and the results are expected to be more standardized and accurate. One of the biggest challenges is generalizability worldwide. The current detection and screening methods for colon polyps and breast cancer have vast amounts of data, so they are ideal areas for studying the global standardization of artificial intelligence. Central nervous system cancers are rare and have poor prognoses under current management standards. ML offers the prospect of unraveling undiscovered features from routinely acquired neuroimaging to improve treatment planning, prognostication, monitoring, and response assessment of CNS tumors such as gliomas. By studying AI in such rare cancer types, standard management methods may be improved through personalized/precision medicine. This review aims to provide clinicians and medical researchers with a basic understanding of how ML works and of its role in oncology, especially in breast cancer, colorectal cancer, and primary and metastatic brain cancer. Understanding AI basics, current achievements, and future challenges is crucial to advancing the use of AI in oncology.
Affiliation(s)
- Nikitha Vobugari
  - Department of Internal Medicine, Medstar Washington Hospital Center, Washington, DC 20010, USA
- Vikranth Raja
  - Department of Medicine, P.S.G Institute of Medical Sciences and Research, Coimbatore 641004, Tamil Nadu, India
- Udhav Sethi
  - School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Kejal Gandhi
  - Department of Internal Medicine, Medstar Washington Hospital Center, Washington, DC 20010, USA
- Kishore Raja
  - Department of Pediatric Cardiology, University of Minnesota, Minneapolis, MN 55454, USA
- Salim R. Surani
  - Department of Pulmonary and Critical Care, Texas A&M University, College Station, TX 77843, USA
34. Shirokikh B, Dalechina A, Shevtsov A, Krivov E, Kostjuchenko V, Durgaryan A, Galkin M, Golanov A, Belyaev M. Systematic Clinical Evaluation of A Deep Learning Method for Medical Image Segmentation: Radiosurgery Application. IEEE J Biomed Health Inform 2022; 26:3037-3046. PMID: 35213318. DOI: 10.1109/jbhi.2022.3153394.
Abstract
We systematically evaluate a deep learning model on a 3D medical image segmentation task. With our model, we address the flaws of manual segmentation: high inter-rater contouring variability and the time consumption of the contouring process. The main extension over existing evaluations is the careful and detailed analysis, which could be generalized to other medical image segmentation tasks. First, we analyze changes in inter-rater detection agreement and show that the model reduces the number of detection disagreements by 48% (p < 0.05). Second, we show that the model improves inter-rater contouring agreement from 0.845 to 0.871 surface Dice score (p < 0.05). Third, we show that the model accelerates the delineation process by a factor of 1.6 to 2.0 (p < 0.05). Finally, we design the setup of the clinical experiment to either exclude or estimate the evaluation biases, thus preserving the significance of the results. Besides the clinical evaluation, we also share intuitions and practical ideas for building an efficient DL-based model for 3D medical image segmentation.
35. Yang Z, Chen M, Kazemimoghadam M, Ma L, Stojadinovic S, Timmerman R, Dan T, Wardak Z, Lu W, Gu X. Deep-learning and radiomics ensemble classifier for false positive reduction in brain metastases segmentation. Phys Med Biol 2022; 67. PMID: 34952535. PMCID: PMC8858586. DOI: 10.1088/1361-6560/ac4667.
Abstract
Stereotactic radiosurgery (SRS) is now the standard of care for patients with brain metastases (BMs). The SRS treatment planning process requires precise target delineation, which in the clinical workflow for patients with multiple (>4) BMs (mBMs) can become a pronounced time bottleneck. Our group has developed an automated BMs segmentation platform to assist in this process. The accuracy of the auto-segmentation, however, is influenced by the presence of false-positive segmentations, mainly caused by the injected contrast during MRI acquisition. To address this problem and further improve segmentation performance, a deep-learning and radiomics ensemble classifier was developed to reduce the false-positive rate of the segmentations. The proposed model consists of a Siamese network and a radiomics-based support vector machine (SVM) classifier. The 2D Siamese network contains a pair of parallel feature extractors with shared weights followed by a single classifier; this architecture is designed to identify the inter-class difference. The SVM model, in turn, takes the radiomic features extracted from 3D segmentation volumes as input for binary classification: either a false-positive segmentation or a true BM. Lastly, the outputs of both models form an ensemble that generates the final label. On the segmented mBMs testing dataset, the proposed model reached an accuracy (ACC), sensitivity (SEN), specificity (SPE), and area under the curve of 0.91, 0.96, 0.90, and 0.93, respectively. After integrating the proposed model into the original segmentation platform, the average segmentation false negative rate (FNR) and false positive over the union (FPoU) were 0.13 and 0.09, respectively, which preserved the initial FNR (0.07) and significantly improved the FPoU (0.55). The proposed method effectively reduced the false-positive rate in the raw BMs segmentations, indicating that integrating the proposed ensemble classifier into the BMs segmentation platform provides a beneficial tool for mBMs SRS management.
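The final fusion step — combining the Siamese network's score with the radiomics SVM's score into one label per candidate — might look like the sketch below. Simple probability averaging with a 0.5 threshold is an assumption; the abstract does not specify the exact ensembling rule.

```python
def ensemble_label(siamese_prob, svm_prob, threshold=0.5):
    """Fuse the two per-candidate scores (sketch, assumed averaging rule).

    siamese_prob : probability of 'true BM' from the 2D Siamese network
    svm_prob     : probability of 'true BM' from the radiomics SVM

    Returns True to keep the candidate as a true brain metastasis,
    False to reject it as a false-positive segmentation.
    """
    return (siamese_prob + svm_prob) / 2.0 >= threshold
```

Averaging requires both classifiers to agree, at least weakly, before a candidate survives, which is one common way such a false-positive filter can preserve sensitivity while raising specificity.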
Affiliation(s)
- Zi Yang
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mingli Chen
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mahdieh Kazemimoghadam
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Lin Ma
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Strahinja Stojadinovic
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Robert Timmerman
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Tu Dan
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Zabi Wardak
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Weiguo Lu
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
  - Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
36. Pflüger I, Wald T, Isensee F, Schell M, Meredig H, Schlamp K, Bernhardt D, Brugnara G, Heußel CP, Debus J, Wick W, Bendszus M, Maier-Hein KH, Vollmuth P. Automated detection and quantification of brain metastases on clinical MRI data using artificial neural networks. Neurooncol Adv 2022; 4:vdac138. PMID: 36105388. PMCID: PMC9466273. DOI: 10.1093/noajnl/vdac138.
Abstract
Background
Reliable detection and precise volumetric quantification of brain metastases (BM) on MRI are essential for guiding treatment decisions. Here we evaluate the potential of artificial neural networks (ANN) for automated detection and quantification of BM.
Methods
A consecutive series of 308 patients with BM was used for developing an ANN (with a 4:1 split for training/testing) for automated volumetric assessment of contrast-enhancing tumors (CE) and non-enhancing FLAIR signal abnormality including edema (NEE). An independent consecutive series of 30 patients was used for external testing. Performance was assessed case-wise for CE and NEE and lesion-wise for CE using the case-wise/lesion-wise DICE-coefficient (C/L-DICE), positive predictive value (L-PPV) and sensitivity (C/L-Sensitivity).
Results
The performance of detecting CE lesions on the validation dataset was not significantly affected when evaluating different volumetric thresholds (0.001–0.2 cm3; P = .2028). The median L-DICE and median C-DICE for CE lesions were 0.78 (IQR = 0.6–0.91) and 0.90 (IQR = 0.85–0.94) in the institutional as well as 0.79 (IQR = 0.67–0.82) and 0.84 (IQR = 0.76–0.89) in the external test dataset. The corresponding median L-Sensitivity and median L-PPV were 0.81 (IQR = 0.63–0.92) and 0.79 (IQR = 0.63–0.93) in the institutional test dataset, as compared to 0.85 (IQR = 0.76–0.94) and 0.76 (IQR = 0.68–0.88) in the external test dataset. The median C-DICE for NEE was 0.96 (IQR = 0.92–0.97) in the institutional test dataset as compared to 0.85 (IQR = 0.72–0.91) in the external test dataset.
Conclusion
The developed ANN-based algorithm (publicly available at www.github.com/NeuroAI-HD/HD-BM) allows reliable detection and precise volumetric quantification of CE and NEE compartments in patients with BM.
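The case-wise volumetric DICE (C-DICE) reported above is the standard Dice similarity coefficient between binary masks; a minimal sketch follows. Lesion-wise evaluation (L-DICE) additionally matches predicted and reference lesions as connected components, which this sketch omits.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Volumetric Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

Applied per patient over all voxels it yields the case-wise score; applied per matched lesion it would yield the lesion-wise score.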
Affiliation(s)
- Irada Pflüger
  - Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Tassilo Wald
  - Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Fabian Isensee
  - Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Marianne Schell
  - Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Hagen Meredig
  - Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Kai Schlamp
  - Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Clinic for Thoracic Diseases (Thoraxklinik), Heidelberg University Hospital, Heidelberg, Germany
- Denise Bernhardt
  - Department of Radiation Oncology, Klinikum rechts der Isar, Technical University Munich, Munich, Germany
- Gianluca Brugnara
  - Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Claus Peter Heußel
  - Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Clinic for Thoracic Diseases (Thoraxklinik), Heidelberg University Hospital, Heidelberg, Germany
  - Member of the German Center for Lung Research (DZL), Translational Lung Research Center (TLRC), Heidelberg, Germany
- Juergen Debus
  - Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
  - Heidelberg Institute for Radiation Oncology (HIRO), Heidelberg University Hospital, Heidelberg, Germany
  - German Cancer Consortium (DKTK), National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
  - Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Wolfgang Wick
  - Neurology Clinic, Heidelberg University Hospital, Heidelberg, Germany
  - Clinical Cooperation Unit Neurooncology, German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Bendszus
  - Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Klaus H Maier-Hein
  - Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Philipp Vollmuth
  - Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
37. Machine Learning-Based Radiomics in Neuro-Oncology. Acta Neurochir Suppl 2021; 134:139-151. PMID: 34862538. DOI: 10.1007/978-3-030-85292-4_18.
Abstract
In recent decades, modern medicine has evolved into a data-centered discipline, generating massive amounts of granular, high-dimensional data that exceed human comprehension. With improved computational methods, machine learning and artificial intelligence (AI) are becoming increasingly important as tools for data processing and analysis. At the forefront of neuro-oncology and AI research, the field of radiomics has emerged. Non-invasive assessments of quantitative radiological biomarkers, mined from complex imaging characteristics, are used across various applications to predict survival, discriminate between primary and secondary tumors, and distinguish progression from pseudo-progression. In particular, the application of molecular phenotyping, envisioned in the field of radiogenomics, has gained popularity for both primary and secondary brain tumors. Although promising results have been obtained thus far, the lack of workflow standardization and of available multicenter data remains challenging. The objective of this review is to provide an overview of novel applications of machine learning- and deep learning-based radiomics in primary and secondary brain tumors and their implications for future research in the field.
38
Fayaz M, Torokeldiev N, Turdumamatov S, Qureshi MS, Qureshi MB, Gwak J. An Efficient Methodology for Brain MRI Classification Based on DWT and Convolutional Neural Network. Sensors (Basel) 2021; 21:7480. [PMID: 34833556 PMCID: PMC8619601 DOI: 10.3390/s21227480] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 10/17/2021] [Revised: 11/01/2021] [Accepted: 11/08/2021] [Indexed: 12/21/2022]
Abstract
In this paper, a model based on the discrete wavelet transform and a convolutional neural network for brain MR image classification is proposed. The model comprises three main stages: preprocessing, feature extraction, and classification. In preprocessing, a median filter is applied to remove salt-and-pepper noise from the brain MRI images. For feature extraction, a 3-level discrete Haar wavelet decomposition is applied to the images to remove low-level detail and reduce their size. A convolutional neural network, a prevalent classification method widely used across domains, then classifies the brain MR images into normal and abnormal. The methodology was applied to a standard dataset and assessed with several performance evaluation measures. The results indicate that the proposed method performs well, with 99% accuracy, and outperforms several state-of-the-art counterpart algorithms in comparison. The model has been developed with practical applications in mind.
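The 3-level Haar decomposition keeps only pairwise averages at each level, halving the data size each time. A minimal 1-D sketch in pure Python (the unnormalized averaging variant; function names and the toy signal are ours, not the paper's):

```python
def haar_step_1d(x):
    """One level of the (unnormalized) Haar transform: pairwise averages
    form the approximation band, pairwise half-differences the detail band.
    The signal length is assumed even."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_decompose(x, levels):
    """Multi-level decomposition keeping only the approximation band,
    halving the signal length at every level (the paper uses 3 levels
    to shrink the images before the CNN); len(x) must be divisible
    by 2**levels."""
    for _ in range(levels):
        x, _ = haar_step_1d(x)
    return x

# An 8-sample signal collapses to a single coefficient after 3 levels.
print(haar_decompose([1, 3, 5, 7, 9, 11, 13, 15], 3))  # [8.0]
```

The paper applies the 2-D version to whole MR slices; the 2-D transform is simply this step applied along rows and then columns.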
Affiliation(s)
- Muhammad Fayaz
- Department of Computer Science, University of Central Asia, 310 Lenin Street, Naryn 722918, Kyrgyzstan
- Nurlan Torokeldiev
- Department of Mathematics and Natural Sciences, University of Central Asia, Khorog 736, Tajikistan
- Samat Turdumamatov
- Department of Mathematics and Natural Sciences, University of Central Asia, 310 Lenin Street, Naryn 722918, Kyrgyzstan
- Muhammad Shuaib Qureshi
- Department of Computer Science, University of Central Asia, 310 Lenin Street, Naryn 722918, Kyrgyzstan
- Muhammad Bilal Qureshi
- Department of Computer Science and IT, University of Lakki Marwat, Lakki Marwat 28420, KPK, Pakistan
- Jeonghwan Gwak
- Department of Software, Korea National University of Transportation, Chungju 27469, Korea
- Department of Biomedical Engineering, Korea National University of Transportation, Chungju 27469, Korea
- Department of AI Robotics Engineering, Korea National University of Transportation, Chungju 27469, Korea
- Department of IT & Energy Convergence (BK21 FOUR), Korea National University of Transportation, Chungju 27469, Korea
- Correspondence: Tel. +82-43-841-5852
39
Cho J, Kim YJ, Sunwoo L, Lee GP, Nguyen TQ, Cho SJ, Baik SH, Bae YJ, Choi BS, Jung C, Sohn CH, Han JH, Kim CY, Kim KG, Kim JH. Deep Learning-Based Computer-Aided Detection System for Automated Treatment Response Assessment of Brain Metastases on 3D MRI. Front Oncol 2021; 11:739639. [PMID: 34778056 PMCID: PMC8579083 DOI: 10.3389/fonc.2021.739639] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Received: 07/11/2021] [Accepted: 09/30/2021] [Indexed: 11/13/2022]
Abstract
BACKGROUND Although accurate treatment response assessment for brain metastases (BMs) is crucial, it is highly labor intensive. This retrospective study aimed to develop a computer-aided detection (CAD) system for automated BM detection and treatment response evaluation using deep learning. METHODS We included 214 consecutive MRI examinations of 147 patients with BM obtained between January 2015 and August 2016. These were divided into the training (174 MR images from 127 patients) and test datasets according to temporal separation (temporal test set #1; 40 MR images from 20 patients). For external validation, 24 patients with BM and 11 patients without BM from other institutions were included (geographic test set). In addition, we included 12 MRIs from BM patients obtained between August 2017 and March 2020 (temporal test set #2). Detection sensitivity, dice similarity coefficient (DSC) for segmentation, and agreements in one-dimensional and volumetric Response Assessment in Neuro-Oncology Brain Metastases (RANO-BM) criteria between CAD and radiologists were assessed. RESULTS In the temporal test set #1, the sensitivity was 75.1% (95% confidence interval [CI]: 69.6%, 79.9%), mean DSC was 0.69 ± 0.22, and false-positive (FP) rate per scan was 0.8 for BM ≥ 5 mm. Agreements in the RANO-BM criteria were moderate (κ, 0.52) and substantial (κ, 0.68) for one-dimensional and volumetric, respectively. In the geographic test set, sensitivity was 87.7% (95% CI: 77.2%, 94.5%), mean DSC was 0.68 ± 0.20, and FP rate per scan was 1.9 for BM ≥ 5 mm. In the temporal test set #2, sensitivity was 94.7% (95% CI: 74.0%, 99.9%), mean DSC was 0.82 ± 0.20, and FP per scan was 0.5 (6/12) for BM ≥ 5 mm. CONCLUSIONS Our CAD showed potential for automated treatment response assessment of BM ≥ 5 mm.
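The Dice similarity coefficient used above to score segmentations compares two voxel sets directly; a minimal sketch (the set representation and names are our illustration, not the study's code):

```python
def dice_coefficient(a, b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two
    sets of voxel coordinates (1.0 = perfect overlap)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty segmentations agree trivially
    return 2 * len(a & b) / (len(a) + len(b))

pred = {(0, 0, 0), (0, 0, 1), (0, 1, 1)}
truth = {(0, 0, 1), (0, 1, 1), (1, 1, 1)}
print(dice_coefficient(pred, truth))  # 2*2/6 ≈ 0.667
```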
Affiliation(s)
- Jungheum Cho
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Young Jae Kim
- Department of Biomedical Engineering, Gachon University Gil Medical Center, Incheon, South Korea
- Leonard Sunwoo
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Center for Artificial Intelligence in Healthcare, Seoul National University Bundang Hospital, Seongnam, South Korea
- Gi Pyo Lee
- Department of Biomedical Engineering, Gachon University Gil Medical Center, Incheon, South Korea
- Toan Quang Nguyen
- Department of Radiology, Vietnam National Cancer Hospital, Hanoi, Vietnam
- Se Jin Cho
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Sung Hyun Baik
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Yun Jung Bae
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Byung Se Choi
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Cheolkyu Jung
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Chul-Ho Sohn
- Department of Radiology, Seoul National University Hospital, Seoul, South Korea
- Jung-Ho Han
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, South Korea
- Chae-Yong Kim
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, South Korea
- Kwang Gi Kim
- Department of Biomedical Engineering, Gachon University Gil Medical Center, Incheon, South Korea
- Jae Hyoung Kim
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
40
Nomura Y, Hanaoka S, Takenaga T, Nakao T, Shibata H, Miki S, Yoshikawa T, Watadani T, Hayashi N, Abe O. Preliminary study of generalized semiautomatic segmentation for 3D voxel labeling of lesions based on deep learning. Int J Comput Assist Radiol Surg 2021; 16:1901-1913. [PMID: 34652606 DOI: 10.1007/s11548-021-02504-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/08/2021] [Accepted: 09/17/2021] [Indexed: 11/28/2022]
Abstract
PURPOSE The three-dimensional (3D) voxel labeling of lesions requires significant effort from radiologists in the development of computer-aided detection software. To reduce the time required for 3D voxel labeling, we aimed to develop a generalized semiautomatic segmentation method based on deep learning via a data augmentation-based domain generalization framework. In this study, we investigated whether a generalized semiautomatic segmentation model trained using two types of lesion can segment previously unseen types of lesion. METHODS We targeted lung nodules in chest CT images, liver lesions in hepatobiliary-phase images of Gd-EOB-DTPA-enhanced MR imaging, and brain metastases in contrast-enhanced MR images. For each lesion, the 32 × 32 × 32 isotropic volume of interest (VOI) around the center of gravity of the lesion was extracted. The VOI was input into a 3D U-Net model to define the label of the lesion. For each type of target lesion, we compared five types of data augmentation and two types of input data. RESULTS For all target lesions, the highest Dice coefficients among the training patterns were obtained when combining the existing data augmentation-based domain generalization framework with random monochrome inversion and when using the resized VOI as the input image. The Dice coefficients were 0.639 ± 0.124 for the lung nodules, 0.660 ± 0.137 for the liver lesions, and 0.727 ± 0.115 for the brain metastases. CONCLUSIONS Our generalized semiautomatic segmentation model could label three previously unseen types of lesion with different contrasts from their surroundings. In addition, using the resized VOI as the input image enables adaptation to various lesion sizes even when the size distribution differs between the training and test sets.
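The 32 × 32 × 32 VOI extraction around a lesion's center of gravity can be sketched as follows; the border-clamping behavior is our assumption, as the abstract does not specify it:

```python
def center_of_gravity(voxels):
    """Mean position of the lesion's voxel coordinates, rounded to the
    nearest voxel index."""
    n = len(voxels)
    return tuple(round(sum(v[d] for v in voxels) / n) for d in range(3))

def voi_bounds(center, shape, size=32):
    """(start, stop) index pairs of a size**3 volume of interest centred
    on `center`, shifted where necessary so the cube stays inside an
    image of the given shape (assumed behavior at borders)."""
    bounds = []
    for c, s in zip(center, shape):
        start = max(0, min(c - size // 2, s - size))
        bounds.append((start, start + size))
    return bounds

# Lesion near one face of a 64 x 128 x 256 volume:
print(voi_bounds((10, 60, 100), (64, 128, 256)))
# → [(0, 32), (44, 76), (84, 116)]
```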
Affiliation(s)
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, 263-8522, Japan
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Tomomi Takenaga
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Hisaichi Shibata
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Soichiro Miki
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takeyuki Watadani
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
41
Deep Learning-Based Segmentation of Various Brain Lesions for Radiosurgery. Appl Sci (Basel) 2021. [DOI: 10.3390/app11199180] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Indexed: 12/13/2022]
Abstract
Semantic segmentation of medical images with deep learning models is rapidly being developed. In this study, we benchmarked state-of-the-art deep learning segmentation algorithms on our clinical stereotactic radiosurgery dataset. The dataset consists of 1688 patients with various brain lesions (pituitary tumors, meningioma, schwannoma, brain metastases, arteriovenous malformation, and trigeminal neuralgia), and we divided the dataset into a training set (1557 patients) and test set (131 patients). This study demonstrates the strengths and weaknesses of deep-learning algorithms in a fairly practical scenario. We compared the model performances concerning their sampling method, model architecture, and the choice of loss functions, identifying suitable settings for their applications and shedding light on the possible improvements. Evidence from this study led us to conclude that deep learning could be promising in assisting the segmentation of brain lesions even if the training dataset was of high heterogeneity in lesion types and sizes.
42
Sun YC, Hsieh AT, Fang ST, Wu HM, Kao LW, Chung WY, Chen HH, Liou KD, Lin YS, Guo WY, Lu HHS. Can 3D artificial intelligence models outshine 2D ones in the detection of intracranial metastatic tumors on magnetic resonance images? J Chin Med Assoc 2021; 84:956-962. [PMID: 34613943 DOI: 10.1097/jcma.0000000000000614] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 11/26/2022]
Abstract
BACKGROUND This study aimed to compare the prediction performance of two-dimensional (2D) and three-dimensional (3D) semantic segmentation models for intracranial metastatic tumors with a volume ≥ 0.3 mL. METHODS We used postcontrast T1 whole-brain magnetic resonance (MR) images collected from Taipei Veterans General Hospital (TVGH); the study was approved by the institutional review board of TVGH. A 2D image segmentation model does not fully use the spatial information between neighboring slices, whereas a 3D segmentation model does. We used the U-Net as the basic model for both the 2D and 3D architectures. RESULTS For the prediction of intracranial metastatic tumors, the area under the curve (AUC) of the 3D model was 87.6% and that of the 2D model was 81.5%. CONCLUSION Building a semantic segmentation model based on 3D deep convolutional neural networks might be crucial to achieving a high detection rate in clinical applications for intracranial metastatic tumors.
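The AUC compared between the 2D and 3D models is equivalent to the probability that a randomly chosen positive scores above a randomly chosen negative; a small empirical sketch (not the authors' implementation, scores are illustrative):

```python
def auc(pos_scores, neg_scores):
    """Empirical AUC: fraction of (positive, negative) pairs where the
    positive scores higher; ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

print(auc([0.9, 0.8, 0.4], [0.5, 0.3]))  # 5 of 6 pairs ≈ 0.833
```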
Affiliation(s)
- Ying-Chou Sun
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Ang-Ting Hsieh
- Institute of Data Science and Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Ssu-Ting Fang
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Hsiu-Mei Wu
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Liang-Wei Kao
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Wen-Yuh Chung
- Division of Functional Neurosurgery, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- Institute of Neurological, Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Hung-Hsun Chen
- Center of Teaching and Learning Development, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Kang-Du Liou
- Division of Functional Neurosurgery, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- Institute of Neurological, Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Yu-Shiou Lin
- Institute of Data Science and Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Wan-Yuo Guo
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Henry Horng-Shing Lu
- Institute of Data Science and Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
43
Eckardt JN, Wendt K, Bornhäuser M, Middeke JM. Reinforcement Learning for Precision Oncology. Cancers (Basel) 2021; 13:4624. [PMID: 34572853 PMCID: PMC8472712 DOI: 10.3390/cancers13184624] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Received: 08/20/2021] [Revised: 09/13/2021] [Accepted: 09/13/2021] [Indexed: 01/19/2023]
Abstract
Precision oncology is grounded in the increasing understanding of the genetic and molecular mechanisms that underlie malignant disease and offer different treatment pathways for the individual patient. The growing complexity of medical data has led to the implementation of machine learning techniques, which are widely applied for risk assessment and outcome prediction using either supervised or unsupervised learning. Still largely overlooked is reinforcement learning (RL), which addresses sequential tasks by exploring the underlying dynamics of an environment and shaping it by taking actions in order to maximize cumulative rewards over time, thereby achieving optimal long-term outcomes. Recent breakthroughs in RL have demonstrated remarkable results in gameplay and autonomous driving, often achieving human-like or even superhuman performance. While this type of machine learning holds the potential to become a helpful decision support tool, it comes with a set of distinctive challenges that need to be addressed to ensure applicability, validity and safety. In this review, we highlight recent advances of RL focusing on studies in oncology and point out current challenges and pitfalls that need to be accounted for in future studies in order to successfully develop RL-based decision support systems for precision oncology.
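The "cumulative rewards over time" an RL agent maximizes are usually discounted returns; a minimal sketch (the reward list and discount factor are illustrative, not from the review):

```python
def discounted_return(rewards, gamma=0.9):
    """Discounted return G = r_0 + gamma*r_1 + gamma^2*r_2 + ...,
    accumulated backwards for numerical simplicity."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Three-step episode with gamma = 0.5: 1 + 0.5*0 + 0.25*2 = 1.5
print(discounted_return([1, 0, 2], gamma=0.5))  # 1.5
```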
Affiliation(s)
- Jan-Niklas Eckardt
- Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Germany
- Karsten Wendt
- Institute of Software and Multimedia Technology, Technical University Dresden, 01069 Dresden, Germany
- Martin Bornhäuser
- Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Germany
- German Consortium for Translational Cancer Research, 69120 Heidelberg, Germany
- National Center for Tumor Diseases, 01307 Dresden, Germany
- Jan Moritz Middeke
- Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Germany
44
Kazemimoghadam M, Chi W, Rahimi A, Kim N, Alluri P, Nwachukwu C, Lu W, Gu X. Saliency-guided deep learning network for automatic tumor bed volume delineation in post-operative breast irradiation. Phys Med Biol 2021; 66. [PMID: 34298539 PMCID: PMC8639319 DOI: 10.1088/1361-6560/ac176d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/22/2021] [Accepted: 07/23/2021] [Indexed: 11/12/2022]
Abstract
Efficient, reliable and reproducible target volume delineation is a key step in the effective planning of breast radiotherapy. However, post-operative breast target delineation is challenging because the contrast between the tumor bed volume (TBV) and normal breast tissue is relatively low in CT images. In this study, we propose to mimic the marker-guidance procedure used in manual target delineation. We developed a saliency-based deep learning segmentation (SDL-Seg) algorithm for accurate TBV segmentation in post-operative breast irradiation. The SDL-Seg algorithm incorporates saliency information, in the form of marker-location cues, into a U-Net model. This design forces the model to encode location-related features, underscoring regions with high saliency levels and suppressing low-saliency regions. The saliency maps were generated by identifying markers on CT images; marker locations were then converted to probability maps using a distance transformation coupled with a Gaussian filter. Subsequently, the CT images and the corresponding saliency maps formed a multi-channel input to the SDL-Seg network. Our in-house dataset comprised 145 prone CT images from 29 post-operative breast cancer patients who received a 5-fraction partial breast irradiation (PBI) regimen on GammaPod. The 29 patients were randomly split into training (19), validation (5) and test (5) sets. The performance of the proposed method was compared against a basic U-Net. On the test set, our model achieved a mean (standard deviation) Dice similarity coefficient of 76.4 (±2.7)%, a 95th-percentile Hausdorff distance of 6.76 (±1.83) mm, and an average symmetric surface distance of 1.9 (±0.66) mm, with a computation time below 11 s per CT volume. SDL-Seg showed superior performance relative to the basic U-Net on all evaluation metrics while preserving a low computation cost.
The findings demonstrate that SDL-Seg is a promising approach for improving the efficiency and accuracy of the on-line treatment planning procedure of PBI, such as GammaPod based PBI.
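The distance-transform-plus-Gaussian step that turns marker locations into a saliency map can be illustrated in 1-D (σ, the grid length, and the marker positions here are assumptions, not the paper's values):

```python
import math

def saliency_map(marker_positions, length, sigma=2.0):
    """Distance to the nearest marker passed through a Gaussian, so
    saliency peaks at marker locations and decays with distance (a 1-D
    analogue of the paper's distance transform + Gaussian filter)."""
    out = []
    for i in range(length):
        d = min(abs(i - m) for m in marker_positions)
        out.append(math.exp(-d * d / (2 * sigma * sigma)))
    return out

m = saliency_map([3, 8], 12)
print([round(v, 2) for v in m])
# [0.32, 0.61, 0.88, 1.0, 0.88, 0.61, 0.61, 0.88, 1.0, 0.88, 0.61, 0.32]
```

In the paper this map is stacked with the CT image as a multi-channel network input.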
Affiliation(s)
- Mahdieh Kazemimoghadam
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Weicheng Chi
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, People's Republic of China
- Asal Rahimi
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Nathan Kim
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Prasanna Alluri
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Chika Nwachukwu
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Xuejun Gu
- Stanford University, Palo Alto, CA, United States of America
45
Hsu DG, Ballangrud Å, Shamseddine A, Deasy JO, Veeraraghavan H, Cervino L, Beal K, Aristophanous M. Automatic segmentation of brain metastases using T1 magnetic resonance and computed tomography images. Phys Med Biol 2021; 66. [PMID: 34315148 DOI: 10.1088/1361-6560/ac1835] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Received: 04/28/2021] [Accepted: 07/27/2021] [Indexed: 12/26/2022]
Abstract
An increasing number of patients with multiple brain metastases are being treated with stereotactic radiosurgery (SRS). Manually identifying and contouring all metastatic lesions is difficult and time-consuming, and a potential source of variability. Hence, we developed a 3D deep learning approach for segmenting brain metastases on MR and CT images. Five-hundred eleven patients treated with SRS were retrospectively identified for this study. Prior to radiotherapy, the patients were imaged with 3D T1 spoiled-gradient MR post-Gd (T1 + C) and contrast-enhanced CT (CECT), which were co-registered by a treatment planner. The gross tumor volume contours, authored by the attending radiation oncologist, were taken as the ground truth. There were 3 ± 4 metastases per patient, with volume up to 57 ml. We produced a multi-stage model that automatically performs brain extraction, followed by detection and segmentation of brain metastases using co-registered T1 + C and CECT. Augmented data from 80% of these patients were used to train modified 3D V-Net convolutional neural networks for this task. We combined a normalized boundary loss function with soft Dice loss to improve the model optimization, and employed gradient accumulation to stabilize the training. The average Dice similarity coefficient (DSC) for brain extraction was 0.975 ± 0.002 (95% CI). The detection sensitivity per metastasis was 90% (329/367), with moderate dependence on metastasis size. Averaged across 102 test patients, our approach had metastasis detection sensitivity 95 ± 3%, 2.4 ± 0.5 false positives, DSC of 0.76 ± 0.03, and 95th-percentile Hausdorff distance of 2.5 ± 0.3 mm (95% CIs). The volumes of automatic and manual segmentations were strongly correlated for metastases of volume up to 20 ml (r=0.97,p<0.001). This work expounds a fully 3D deep learning approach capable of automatically detecting and segmenting brain metastases using co-registered T1 + C and CECT.
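The soft Dice loss the authors combine with a normalized boundary loss operates on predicted probabilities rather than thresholded masks; a minimal sketch (the smoothing constant `eps` is our assumption, and real pipelines work on whole tensors rather than flat lists):

```python
def soft_dice_loss(pred, target, eps=1.0):
    """Soft Dice loss 1 - (2*intersection + eps) / (|pred| + |target| + eps)
    on flattened probability maps; eps smooths the empty-mask case."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

# Perfect prediction gives zero loss (eps=0 shows the raw Dice form).
print(soft_dice_loss([1.0, 0.0], [1.0, 0.0], eps=0.0))  # 0.0
```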
Affiliation(s)
- Dylan G Hsu
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Åse Ballangrud
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Achraf Shamseddine
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Joseph O Deasy
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Laura Cervino
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Kathryn Beal
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Michalis Aristophanous
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
46
Amemiya S, Takao H, Kato S, Yamashita H, Sakamoto N, Abe O. Feature-fusion improves MRI single-shot deep learning detection of small brain metastases. J Neuroimaging 2021; 32:111-119. [PMID: 34388855 DOI: 10.1111/jon.12916] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Received: 05/08/2021] [Revised: 07/25/2021] [Accepted: 07/26/2021] [Indexed: 12/01/2022]
Abstract
BACKGROUND AND PURPOSE To examine whether a feature-fusion (FF) method improves a single-shot detector's (SSD) detection of small brain metastases on contrast-enhanced (CE) T1-weighted MRI. METHODS The study included 234 MRI scans from 234 patients (64.3 years ± 12.0; 126 men). The ground-truth annotation was performed semiautomatically. SSDs with and without an FF module were developed and trained using 178 scans. The detection performance was evaluated at the SSDs' 50% confidence threshold using sensitivity, positive-predictive value (PPV), and false positives (FP) per scan on the remaining 56 scans. RESULTS FF-SSD achieved an overall sensitivity of 86.0% (95% confidence interval [CI]: [83.0%, 85.6%]; 196/228) and 46.8% PPV (95% CI: [42.0%, 46.3%]; 196/434), with 4.3 FP (95% CI: [4.3, 4.9]). Lesions smaller than 3 mm had 45.8% sensitivity (95% CI: [36.1%, 45.5%]; 22/48) with 2.0 FP (95% CI: [1.9, 2.1]). Lesions measuring 3-6 mm had 92.3% sensitivity (95% CI: [86.5%, 92.0%]; 48/52) with 1.8 FP (95% CI: [1.7, 2.2]). Lesions larger than 6 mm had 98.4% sensitivity (95% CI: [97.8%, 99.4%]; 126/128) with 0.5 FP (95% CI: [0.5, 0.8]) per scan. FF-SSD had a significantly higher sensitivity for lesions < 3 mm (p = 0.008, t = 3.53) than the baseline SSD, while the overall PPV was similar (p = 0.06, t = -2.16). A similar trend was observed even when the detector's confidence threshold was lowered to 0.2, at which FF-SSD's sensitivity was 91.2% with 9.5 FP. CONCLUSIONS The FF-SSD algorithm identified brain metastases on CE T1-weighted MRI with high accuracy.
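Sensitivity, PPV, and FP per scan follow directly from lesion-level counts; a sketch with made-up counts (not the study's data):

```python
def detection_metrics(tp, fp, fn, n_scans):
    """Lesion-level detection scores: sensitivity = TP/(TP+FN),
    PPV = TP/(TP+FP), and false positives per scan."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return sensitivity, ppv, fp / n_scans

# 45 detected lesions, 5 missed, 5 spurious detections over 10 scans:
print(detection_metrics(tp=45, fp=5, fn=5, n_scans=10))  # (0.9, 0.9, 0.5)
```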
Affiliation(s)
- Shiori Amemiya
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hidemasa Takao
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Shimpei Kato
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hiroshi Yamashita
- Department of Radiology, Teikyo University Hospital, Mizonokuchi, Kanagawa, Japan
- Naoya Sakamoto
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
47
Abstract
The central role of MRI in neuro-oncology is undisputed. The technique is used, both in clinical practice and in clinical trials, to diagnose and monitor disease activity, support treatment decision-making, guide the use of focused treatments and determine response to treatment. Despite recent substantial advances in imaging technology and image analysis techniques, clinical MRI is still primarily used for the qualitative subjective interpretation of macrostructural features, as opposed to quantitative analyses that take into consideration multiple pathophysiological features. However, the field of quantitative imaging and imaging biomarker development is maturing. The European Imaging Biomarkers Alliance (EIBALL) and Quantitative Imaging Biomarkers Alliance (QIBA) are setting standards for biomarker development, validation and implementation, as well as promoting the use of quantitative imaging and imaging biomarkers by demonstrating their clinical value. In parallel, advanced imaging techniques are reaching the clinical arena, providing quantitative, commonly physiological imaging parameters that are driving the discovery, validation and implementation of quantitative imaging and imaging biomarkers in the clinical routine. Additionally, computational analysis techniques are increasingly being used in the research setting to convert medical images into objective high-dimensional data and define radiomic signatures of disease states. Here, I review the definition and current state of MRI biomarkers in neuro-oncology, and discuss the clinical potential of quantitative image analysis techniques.
48
Presentation of Novel Hybrid Algorithm for Detection and Classification of Breast Cancer Using Growth Region Method and Probabilistic Neural Network. Comput Intell Neurosci 2021; 2021:5863496. [PMID: 34239550 PMCID: PMC8238608 DOI: 10.1155/2021/5863496] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 04/09/2021] [Accepted: 06/10/2021] [Indexed: 11/17/2022]
Abstract
Mammography is a significant screening test for early detection of breast cancer, which increases the patient's chances of complete recovery. In this paper, a clustering method is presented for detecting breast cancer tumor locations and areas. The clustering is implemented with the growth-region approach, which groups nearby pixels of similar intensity. To find the best initial point for growth, it is essential to remove human interaction from the clustering; therefore, the FCM-GA algorithm is used to select the best starting point, and its results are compared with manual selection and a Gaussian Mixture Model for verification. Classification is then performed to diagnose the breast cancer type in two primary datasets, MIAS and BI-RADS, using GLCM features and a probabilistic neural network (PNN). The clustering results show that the presented FCM-GA method outperforms the other methods, reaching 94% accuracy, the best of the approaches used in this paper. Furthermore, the results show that the PNN method achieves high accuracy and sensitivity on the MIAS dataset.
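The growth-region step amounts to an intensity-tolerance flood fill from the seed that FCM-GA selects; a toy sketch (4-connectivity, the grid, and the tolerance are our assumptions, not the paper's):

```python
def region_grow(img, seed, tol):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    intensity lies within `tol` of the seed pixel's intensity."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < h and 0 <= c < w):
            continue
        if abs(img[r][c] - base) <= tol:
            region.add((r, c))
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

# Toy 3 x 3 image: grow from the bright corner with zero tolerance.
img = [[9, 9, 1], [9, 1, 1], [1, 1, 1]]
print(sorted(region_grow(img, (0, 0), 0)))  # [(0, 0), (0, 1), (1, 0)]
```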
|
49
|
Mohammadi R, Shokatian I, Salehi M, Arabi H, Shiri I, Zaidi H. Deep learning-based auto-segmentation of organs at risk in high-dose rate brachytherapy of cervical cancer. Radiother Oncol 2021; 159:231-240. [DOI: 10.1016/j.radonc.2021.03.030] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 03/20/2021] [Accepted: 03/24/2021] [Indexed: 12/11/2022]
|
50
|
Rudie JD, Weiss DA, Colby JB, Rauschecker AM, Laguna B, Braunstein S, Sugrue LP, Hess CP, Villanueva-Meyer JE. Three-dimensional U-Net Convolutional Neural Network for Detection and Segmentation of Intracranial Metastases. Radiol Artif Intell 2021; 3:e200204. [PMID: 34136817 PMCID: PMC8204134 DOI: 10.1148/ryai.2021200204] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 02/05/2021] [Accepted: 02/19/2021] [Indexed: 05/05/2023]
Abstract
PURPOSE To develop and validate a neural network for automated detection and segmentation of intracranial metastases on brain MRI studies obtained for stereotactic radiosurgery treatment planning. MATERIALS AND METHODS In this retrospective study, 413 patients (average age, 61 years ± 12 [standard deviation]; 238 women) with a total of 5202 intracranial metastases (median volume, 0.05 cm3; interquartile range, 0.02-0.18 cm3) undergoing stereotactic radiosurgery at one institution were included (January 2017 to February 2020). A total of 563 MRI examinations were performed among the patients, and studies were split into training (n = 413), validation (n = 50), and test (n = 100) datasets. A three-dimensional (3D) U-Net convolutional network was trained and validated on 413 T1 postcontrast or subtraction scans, and several loss functions were evaluated. After model validation, 100 discrete test patients, who underwent imaging after the training and validation patients, were used for final model evaluation. Performance for detection and segmentation of metastases was evaluated using Dice scores, false discovery rates, and false-negative rates, and a comparison with neuroradiologist interrater reliability was performed. RESULTS The median Dice score for segmenting enhancing metastases in the test set was 0.75 (interquartile range, 0.63-0.84). There were strong correlations between manually segmented and predicted metastasis volumes (r = 0.98, P < .001) and between the number of manually segmented and predicted metastases (R = 0.95, P < .001). Higher Dice scores were strongly correlated with larger metastasis volumes on a logarithmically transformed scale (r = 0.71). Sensitivity across the whole test sample was 70.0% overall and 96.4% for metastases larger than 6 mm. There was an average of 0.46 false-positive results per scan, with the positive predictive value being 91.5%. 
In comparison, the median Dice score between two neuroradiologists was 0.85 (interquartile range, 0.80-0.89), with sensitivity across the test sample being 87.9% overall and 98.4% for metastases larger than 6 mm. CONCLUSION A 3D U-Net-based convolutional neural network was able to segment brain metastases with high accuracy and perform detection at the level of human interrater reliability for metastases larger than 6 mm. Keywords: Adults, Brain/Brain Stem, CNS, Feature Detection, MR-Imaging, Neural Networks, Neuro-Oncology, Quantification, Segmentation. © RSNA, 2021.
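The overlap metrics reported in this abstract (Dice score, sensitivity) are computed directly from binary segmentation masks; a minimal sketch, using hypothetical toy masks rather than study data:

```python
import numpy as np


def dice(pred, truth):
    """Dice similarity coefficient: 2*|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0


def sensitivity(pred, truth):
    """Voxel-wise sensitivity: fraction of ground-truth voxels recovered."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    n_truth = truth.sum()
    return np.logical_and(pred, truth).sum() / n_truth if n_truth else 1.0


# Toy 2D masks: ground truth has 4 voxels, prediction covers them plus 2 extra
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True

print(dice(pred, truth))         # 2*4 / (6+4) = 0.8
print(sensitivity(pred, truth))  # 4/4 = 1.0
```

The study's lesion-wise (instance) metrics additionally require matching each predicted connected component to a ground-truth metastasis before counting, which this voxel-wise sketch does not attempt.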
Affiliation(s)
- Jeffrey D. Rudie, David A. Weiss, John B. Colby, Andreas M. Rauschecker, Benjamin Laguna, Steve Braunstein, Leo P. Sugrue, Christopher P. Hess, Javier E. Villanueva-Meyer
- From the Department of Radiology and Biomedical Imaging, University of California, San Francisco, 513 Parnassus Ave, San Francisco, CA 94143
|