1. Venkataram T, Kashyap S, Harikar MM, Inserra F, Barone F, Travali M, Da Ros V, Umana GE, Ogunbayo OA, Aribisala B. The application of machine learning for treatment selection of unruptured brain arteriovenous malformations: A secondary analysis of the ARUBA trial data. Clin Neurol Neurosurg 2025; 249:108681. PMID: 39673942. DOI: 10.1016/j.clineuro.2024.108681.
Abstract
OBJECTIVE To build a supervised machine learning (ML) model that selects the best first-line treatment strategy for unruptured bAVMs. METHODS Data from A Randomized Trial of Unruptured Brain Arteriovenous Malformations (ARUBA) were obtained from the National Institute of Neurological Disorders and Stroke (NINDS). A team of five clinicians examined the demographic, clinical, and radiological details of each patient at baseline and reached a consensus on the best first-line treatment for bAVMs. Their treatment choice was used to train an automated supervised ML (autoML) model to select treatments for bAVMs on the training dataset. The accuracy and AUC of the algorithm in selecting the treatment strategy were measured on the test dataset, and feature importance scores of the included variables were calculated. RESULTS Among the 100,000 combinations of supervised ML algorithms and their hyperparameters attempted by autoML, a gradient boosting classifier had the best predictive performance, with an overall accuracy of 0.74 and an area under the curve (AUC) of 0.88. The treatment-specific accuracies were 0.96, 0.85, 0.84, and 0.82, and the AUCs were 0.75, 0.95, 0.80, and 0.88 for medical management, surgery, endovascular embolization, and gamma-knife radiosurgery, respectively. The Spetzler-Martin score, followed by eloquent AVM location and AVM size, were the three most important features in determining treatment. CONCLUSION ML could reliably select the best first-line treatment strategy for bAVMs in line with multidisciplinary expert consensus. This study can be replicated for larger population-based AVM registries, with the inclusion of outcome data, thus helping address the bias involved in the management of unruptured bAVMs.
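A minimal sketch of the kind of supervised treatment classifier described above, trained on hypothetical stand-in features (Spetzler-Martin score, eloquence, nidus size) rather than the ARUBA variables; a single gradient boosting classifier here stands in for the full autoML search and is not the authors' pipeline.

```python
# Illustrative sketch only: a gradient-boosting classifier on synthetic,
# hypothetical bAVM features, standing in for the autoML search described above.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: Spetzler-Martin score (1-5), eloquent location (0/1), nidus size (cm)
X = np.column_stack([
    rng.integers(1, 6, n),
    rng.integers(0, 2, n),
    rng.uniform(0.5, 6.0, n),
])
# Hypothetical consensus labels: 0=medical, 1=surgery, 2=embolization, 3=radiosurgery
y = rng.integers(0, 4, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("macro one-vs-rest AUC:", roc_auc_score(y_te, proba, multi_class="ovr"))
print("feature importances:", clf.feature_importances_)
```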
Affiliation(s)
- Tejas Venkataram
- Department of Neurosurgery, St. John's Medical College Hospital, Bengaluru, India
- Mandara M Harikar
- Clinical Trials Programme, Usher Institute of Molecular, Genetic, and Population Health Sciences, The University of Edinburgh, Edinburgh, UK
- Francesco Inserra
- Department of Neurosurgery, Trauma Center, Gamma Knife Center, Cannizzaro Hospital, Catania, Italy
- Fabio Barone
- Department of Neurosurgery, Trauma Center, Gamma Knife Center, Cannizzaro Hospital, Catania, Italy
- Mario Travali
- Department of Diagnostic and Interventional Neuroradiology, Azienda Ospedaliera Cannizzaro, Catania, Italy
- Valerio Da Ros
- Diagnostic Imaging Unit, Department of Biomedicine and Prevention, University of Rome Tor Vergata, Rome, Italy
- Giuseppe E Umana
- Department of Neurosurgery, Trauma Center, Gamma Knife Center, Cannizzaro Hospital, Catania, Italy
- Oluseye A Ogunbayo
- Edinburgh Surgery Online, Clinical Science Teaching Organisation, Clinical Surgery, College of Medicine and Veterinary Medicine, The University of Edinburgh, Edinburgh, UK
- Benjamin Aribisala
- Department of Neuroimaging Sciences, University of Edinburgh, Edinburgh, UK; Lothian Birth Cohort Studies, Department of Psychology, University of Edinburgh, Edinburgh, UK; Department of Computer Science, Lagos State University, Nigeria.
2. Huang P, Yan H, Shang J, Xie X. Prior information guided deep-learning model for tumor bed segmentation in breast cancer radiotherapy. BMC Med Imaging 2024; 24:312. PMID: 39558240. PMCID: PMC11571877. DOI: 10.1186/s12880-024-01469-0.
Abstract
BACKGROUND AND PURPOSE The tumor bed (TB) is the residual cavity left after surgical resection of a tumor. Delineating the TB on CT is crucial for generating the clinical target volume for radiotherapy. Because of multiple surgical effects and low image contrast, segmenting the TB from soft tissue is challenging. In clinical practice, titanium clips are used as markers to guide the search for the TB. However, this information is limited and may cause large errors. To provide more prior location information, the tumor regions on both pre-operative and post-operative CTs are used by the deep learning model in segmenting the TB from surrounding tissues. MATERIALS AND METHODS For breast cancer patients who are to be treated with radiotherapy after surgery, it is important to delineate the target volume for treatment planning. In clinical practice, the target volume is usually generated from the TB by adding a certain margin. Therefore, it is crucial to identify the TB from soft tissue. To facilitate this process, a deep learning model was developed to segment the TB from CT with the guidance of the prior tumor location. Initially, the tumor contour on the pre-operative CT is delineated by the physician for surgical planning purposes. This contour is then transformed to the post-operative CT via deformable image registration between the paired pre-operative and post-operative CTs. The original and transformed tumor regions are both used as inputs for predicting the possible region of the TB by the deep-learning model. RESULTS Compared with the model without prior tumor contour information, the Dice similarity coefficient of the deep-learning model with the prior tumor contour information was improved significantly (0.812 vs. 0.520, P = 0.001). Compared with the traditional gray-level thresholding method, the Dice similarity coefficient of the deep-learning model with the prior tumor contour information was improved significantly (0.812 vs. 0.633, P = 0.0005). CONCLUSIONS The prior tumor contours on both pre-operative and post-operative CTs provide valuable information for locating the TB precisely on post-operative CT. The proposed method provides a feasible way to assist auto-segmentation of the TB in treatment planning of radiotherapy after breast-conserving surgery.
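A minimal sketch of the prior-guidance idea described above, assuming a placeholder shift in place of true deformable registration and stacking the propagated prior mask with the post-operative CT as a two-channel network input; this is not the authors' code.

```python
# Minimal sketch (not the authors' implementation): propagate a prior tumor mask
# to the post-operative CT and feed it to a segmentation model as an extra channel.
import numpy as np
from scipy import ndimage

post_ct = np.random.rand(64, 128, 128).astype(np.float32)   # placeholder post-op CT volume
pre_mask = np.zeros_like(post_ct)                            # placeholder pre-op tumor contour (binary)
pre_mask[30:40, 50:70, 50:70] = 1.0

# Stand-in for deformable registration: here just a rigid shift of the prior mask.
# In practice a DIR tool would supply the pre-op-to-post-op transform.
prior_on_post = ndimage.shift(pre_mask, shift=(0, 3, -2), order=0)

# Two-channel input: image intensities plus the propagated prior location map.
model_input = np.stack([post_ct, prior_on_post], axis=0)     # shape (2, D, H, W)
print(model_input.shape)
```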
Affiliation(s)
- Peng Huang
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Hui Yan
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Jiawen Shang
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Xin Xie
- Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, 350014, China.
3. Dong M, Xiang S, Hong T, Wu C, Yu J, Yang K, Yang W, Li X, Ren J, Jin H, Li Y, Li G, Ye M, Lu J, Zhang H. Artificial intelligence-based automatic nidus segmentation of cerebral arteriovenous malformation on time-of-flight magnetic resonance angiography. Eur J Radiol 2024; 178:111572. PMID: 39002268. DOI: 10.1016/j.ejrad.2024.111572.
Abstract
OBJECTIVE Accurate nidus segmentation and quantification have long been challenging but important tasks in the clinical management of cerebral arteriovenous malformation (CAVM). However, dilemmas remain in nidus segmentation, such as difficulty defining the demarcation of the nidus, observer-dependent variation and time consumption. The aim of this study was to develop an artificial intelligence model to automatically segment the nidus on Time-Of-Flight Magnetic Resonance Angiography (TOF-MRA) images. METHODS A total of 92 patients with CAVM who underwent both TOF-MRA and DSA examinations were enrolled. Two neurosurgeons manually segmented the nidus on TOF-MRA images, and these segmentations were regarded as the ground-truth reference. A U-Net-based AI model was created for automatic nidus detection and segmentation on TOF-MRA images. RESULTS The mean nidus volumes of the AI segmentation model and the ground truth were 5.427 ± 4.996 and 4.824 ± 4.567 mL, respectively. The mean difference in nidus volume between the two groups was 0.603 ± 1.514 mL, which was not statistically significant (P = 0.693). The DSC, precision and recall of the test set were 0.754 ± 0.074, 0.713 ± 0.102 and 0.816 ± 0.098, respectively. The linear correlation coefficient of the nidus volume between these two groups was 0.988 (p < 0.001). CONCLUSION The performance of the AI segmentation model is moderately consistent with that of manual segmentation. This AI model has great potential in clinical settings, such as preoperative planning, treatment efficacy evaluation, risk stratification and follow-up.
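The overlap metrics reported above (DSC, precision, recall) and the volume comparison can be illustrated with a short sketch on placeholder binary masks; this is a generic computation, not the study's evaluation code, and the voxel spacing is assumed.

```python
# Generic overlap metrics between an automatic mask and a manual reference mask.
import numpy as np

def dice_precision_recall(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    dice = 2 * tp / (pred.sum() + ref.sum())
    return dice, tp / pred.sum(), tp / ref.sum()

pred = np.zeros((32, 64, 64), dtype=np.uint8); pred[10:20, 20:40, 20:40] = 1
ref  = np.zeros_like(pred);                    ref[12:22, 22:42, 18:38] = 1
print("DSC, precision, recall:", dice_precision_recall(pred, ref))

voxel_mm3 = 0.5 * 0.5 * 1.0                      # assumed voxel spacing (mm)
print("predicted volume (mL):", pred.sum() * voxel_mm3 / 1000)
```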
Affiliation(s)
- Mengqi Dong
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China; China International Neuroscience Institute, Beijing, China.
- Sishi Xiang
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China; China International Neuroscience Institute, Beijing, China.
- Tao Hong
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China; China International Neuroscience Institute, Beijing, China.
- Chunxue Wu
- Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Magnetic Resonance Imaging and Brain Informatics, Beijing, China.
- Jiaxing Yu
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China; China International Neuroscience Institute, Beijing, China.
- Kun Yang
- The National Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, Beijing, China.
- Wanxin Yang
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China; China International Neuroscience Institute, Beijing, China.
- Xiangyu Li
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China; China International Neuroscience Institute, Beijing, China.
- Jian Ren
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China; China International Neuroscience Institute, Beijing, China.
- Hailan Jin
- Department of R&D, UnionStrong (Beijing) Technology Co., Ltd., Beijing, China.
- Ye Li
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China; China International Neuroscience Institute, Beijing, China.
- Guilin Li
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China; China International Neuroscience Institute, Beijing, China.
- Ming Ye
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China; China International Neuroscience Institute, Beijing, China.
- Jie Lu
- Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Magnetic Resonance Imaging and Brain Informatics, Beijing, China.
- Hongqi Zhang
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China; China International Neuroscience Institute, Beijing, China.
4. Grossen AA, Evans AR, Ernst GL, Behnen CC, Zhao X, Bauer AM. The current landscape of machine learning-based radiomics in arteriovenous malformations: a systematic review and radiomics quality score assessment. Front Neurol 2024; 15:1398876. PMID: 38915798. PMCID: PMC11194423. DOI: 10.3389/fneur.2024.1398876.
Abstract
Background Arteriovenous malformations (AVMs) are rare vascular anomalies involving a disorganization of arteries and veins with no intervening capillaries. In the past 10 years, radiomics and machine learning (ML) models have become increasingly popular for analyzing diagnostic medical images. The goal of this review was to provide a comprehensive summary of current radiomic models being employed for diagnostic, therapeutic, prognostic, and predictive outcomes in AVM management. Methods A systematic literature review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines, in which the PubMed and Embase databases were searched using the following terms: (cerebral OR brain OR intracranial OR central nervous system OR spine OR spinal) AND (AVM OR arteriovenous malformation OR arteriovenous malformations) AND (radiomics OR radiogenomics OR machine learning OR artificial intelligence OR deep learning OR computer-aided detection OR computer-aided prediction OR computer-aided treatment decision). A radiomics quality score (RQS) was calculated for all included studies. Results Thirteen studies were included, all of which were retrospective in nature. Three studies (23%) dealt with AVM diagnosis and grading, 1 study (8%) gauged treatment response, 8 (62%) predicted outcomes, and the last (8%) addressed prognosis. No radiomics model had undergone external validation. The mean RQS was 15.92 (range: 10-18). Conclusion We demonstrated that radiomics is currently being studied in different facets of AVM management. While not ready for clinical use, radiomics is a rapidly emerging field expected to play a significant future role in medical imaging. More prospective studies are warranted to determine the role of radiomics in the diagnosis, prediction of comorbidities, and treatment selection in AVM management.
Affiliation(s)
- Audrey A. Grossen
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Alexander R. Evans
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Griffin L. Ernst
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Connor C. Behnen
- Data Science and Analytics, University of Oklahoma, Norman, OK, United States
- Xiaochun Zhao
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Andrew M. Bauer
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
5. Rong Y, Chen Q, Fu Y, Yang X, Al-Hallaq HA, Wu QJ, Yuan L, Xiao Y, Cai B, Latifi K, Benedict SH, Buchsbaum JC, Qi XS. NRG Oncology Assessment of Artificial Intelligence Deep Learning-Based Auto-segmentation for Radiation Therapy: Current Developments, Clinical Considerations, and Future Directions. Int J Radiat Oncol Biol Phys 2024; 119:261-280. PMID: 37972715. PMCID: PMC11023777. DOI: 10.1016/j.ijrobp.2023.10.033.
Abstract
Deep learning neural networks (DLNN) in artificial intelligence (AI) have been extensively explored for automatic segmentation in radiotherapy (RT). In contrast to traditional model-based methods, data-driven AI-based models for auto-segmentation have shown high accuracy in early studies conducted in research settings and controlled environments (single institutions). Vendor-provided commercial AI models are made available as part of the integrated treatment planning system (TPS) or as stand-alone tools that provide a streamlined workflow interacting with the main TPS. These commercial tools have drawn clinics' attention thanks to their significant benefit in reducing the workload of manual contouring and shortening the duration of treatment planning. However, challenges occur when applying these commercial AI-based segmentation models to diverse clinical scenarios, particularly in uncontrolled environments. Contouring nomenclature and guideline standardization has been a main task undertaken by NRG Oncology. In clinical trials, AI auto-segmentation holds the potential to reduce interobserver variations, nomenclature non-compliance, and contouring guideline deviations, while trial reviewers could use AI tools to verify contour accuracy and the compliance of submitted datasets. Recognizing the growing clinical utilization and potential of commercial AI auto-segmentation tools, NRG Oncology has formed a working group to evaluate them. The group will assess in-house and commercially available AI models, evaluation metrics, clinical challenges, and limitations, as well as future developments in addressing these challenges. General recommendations are made regarding the implementation of these commercial AI models, as well as precautions in recognizing their challenges and limitations.
Affiliation(s)
- Yi Rong
- Mayo Clinic Arizona, Phoenix, AZ
- Quan Chen
- City of Hope Comprehensive Cancer Center, Duarte, CA
- Yabo Fu
- Memorial Sloan Kettering Cancer Center, Commack, NY
- Lulin Yuan
- Virginia Commonwealth University, Richmond, VA
- Ying Xiao
- University of Pennsylvania/Abramson Cancer Center, Philadelphia, PA
- Bin Cai
- The University of Texas Southwestern Medical Center, Dallas, TX
- Stanley H Benedict
- University of California Davis Comprehensive Cancer Center, Sacramento, CA
- X Sharon Qi
- University of California Los Angeles, Los Angeles, CA
6. Li X, Xiang S, Li G. Application of artificial intelligence in brain arteriovenous malformations: Angioarchitectures, clinical symptoms and prognosis prediction. Interv Neuroradiol 2024:15910199241238798. PMID: 38515371. PMCID: PMC11571152. DOI: 10.1177/15910199241238798.
Abstract
BACKGROUND Artificial intelligence (AI) has advanced rapidly in the medical field, leveraging its intelligence and automation for the management of various diseases. Its application to brain arteriovenous malformations (AVMs) is particularly noteworthy, having developed rapidly in recent years and yielded remarkable results. This paper aims to summarize the applications of AI in the management of AVMs. METHODS Literature published in PubMed during 1999-2022 discussing AI applications in AVM management was reviewed. RESULTS AI algorithms, particularly machine learning and deep learning models, have been applied to various aspects of AVM management. Automatic lesion segmentation or delineation is a promising application that can be further developed and verified. Prognosis prediction using machine learning algorithms with radiomics-based analysis is another meaningful application. CONCLUSIONS AI has been widely used in AVM management. This article summarizes the current research progress, limitations and future research directions.
Affiliation(s)
- Xiangyu Li
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Sishi Xiang
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Guilin Li
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
7. Hong JS, You WC, Sun MH, Pan HC, Lin YH, Lu YF, Chen KM, Huang TH, Lee WK, Wu YT. Deep Learning Detection and Segmentation of Brain Arteriovenous Malformation on Magnetic Resonance Angiography. J Magn Reson Imaging 2024; 59:587-598. PMID: 37220191. DOI: 10.1002/jmri.28795.
Abstract
BACKGROUND The delineation of brain arteriovenous malformations (bAVMs) is crucial for subsequent treatment planning. Manual segmentation is time-consuming and labor-intensive. Applying deep learning to automatically detect and segment bAVMs might help to improve the efficiency of clinical practice. PURPOSE To develop an approach for detecting bAVMs and segmenting the nidus on time-of-flight magnetic resonance angiography using deep learning methods. STUDY TYPE Retrospective. SUBJECTS 221 bAVM patients aged 7-79 who underwent radiosurgery from 2003 to 2020, split into 177 training, 22 validation, and 22 test cases. FIELD STRENGTH/SEQUENCE 1.5 T, time-of-flight magnetic resonance angiography based on 3D gradient echo. ASSESSMENT The YOLOv5 and YOLOv8 algorithms were utilized to detect bAVM lesions and the U-Net and U-Net++ models to segment the nidus from the bounding boxes. The mean average precision, F1, precision, and recall were used to assess model performance on bAVM detection. To evaluate performance on nidus segmentation, the Dice coefficient and balanced average Hausdorff distance (rbAHD) were employed. STATISTICAL TESTS The Student's t-test was used to test the cross-validation results (P < 0.05). The Wilcoxon rank test was applied to compare the medians of the reference values and the model inference results (P < 0.05). RESULTS The detection results demonstrated that the model with pretraining and augmentation performed optimally. The U-Net++ with random dilation mechanism resulted in higher Dice and lower rbAHD than the model without that mechanism, across varying dilated bounding box conditions (P < 0.05). When combining detection and segmentation, the Dice and rbAHD were statistically different from the references calculated using the detected bounding boxes (P < 0.05). For the detected lesions in the test dataset, the combined approach achieved the highest Dice of 0.82 and the lowest rbAHD of 5.3%. DATA CONCLUSION This study showed that pretraining and data augmentation improved YOLO detection performance. Properly limiting the lesion range allows for adequate bAVM segmentation. LEVEL OF EVIDENCE 4 TECHNICAL EFFICACY STAGE 1.
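A minimal sketch of the detect-then-segment strategy described above: a detected bounding box (coordinates hypothetical) is dilated by a margin and used to crop the MRA volume before nidus segmentation, with the offset retained to map results back to the full image. The detector and segmentation networks themselves are not shown.

```python
# Sketch of the detect-then-segment idea: dilate a detected bounding box and
# restrict segmentation to that sub-volume. Box coordinates are hypothetical.
import numpy as np

def crop_with_margin(volume, box, margin):
    """box = (z0, z1, y0, y1, x0, x1) from a detector; margin in voxels."""
    z0, z1, y0, y1, x0, x1 = box
    shape = volume.shape
    z0, y0, x0 = max(z0 - margin, 0), max(y0 - margin, 0), max(x0 - margin, 0)
    z1, y1, x1 = min(z1 + margin, shape[0]), min(y1 + margin, shape[1]), min(x1 + margin, shape[2])
    return volume[z0:z1, y0:y1, x0:x1], (z0, y0, x0)

mra = np.random.rand(96, 256, 256)                 # placeholder TOF-MRA volume
roi, offset = crop_with_margin(mra, box=(40, 60, 100, 150, 110, 160), margin=8)
# roi would be passed to the nidus segmentation model; offset maps the result
# back into the full MRA coordinate frame.
print(roi.shape, offset)
```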
Affiliation(s)
- Jia-Sheng Hong
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
- Weir-Chiang You
- Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, 407, Taiwan
- Ming-Hsi Sun
- Department of Neurosurgery, Taichung Veterans General Hospital, Taichung, 407, Taiwan
- Hung-Chuan Pan
- Department of Neurosurgery, Taichung Veterans General Hospital, Taichung, 407, Taiwan
- Yi-Hui Lin
- Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, 407, Taiwan
- Yung-Fa Lu
- Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, 407, Taiwan
- Kuan-Ming Chen
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
- Tzu-Hsuan Huang
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
- Wei-Kai Lee
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
- Brain Research Center, National Yang Ming Chiao Tung University, Taipei City, 112, Taiwan
8. Lee CC, Yang HC, Wu HM, Lin YY, Lu CF, Peng SJ, Wu YT, Sheehan JP, Guo WY. Computational Modeling and AI in Radiation Neuro-Oncology and Radiosurgery. Adv Exp Med Biol 2024; 1462:307-322. PMID: 39523273. DOI: 10.1007/978-3-031-64892-2_18.
Abstract
The chapter explores the extensive integration of artificial intelligence (AI) into healthcare systems, with a specific focus on its application in stereotactic radiosurgery. The rapid evolution of AI technology has led to promising developments in this field, particularly through the utilization of machine learning and deep learning models. Diverse AI algorithms have been developed for various aspects of radiosurgery, including the detection of spontaneous tumors and the automated delineation or segmentation of lesions; these applications show potential for extension to longitudinal treatment follow-up. Additionally, the chapter highlights the established use of machine learning algorithms, particularly those incorporating radiomics-based analysis, in predicting treatment outcomes. The discussion encompasses current achievements, existing limitations, and the need for further investigation at the dynamic intersection of AI and radiosurgery.
Affiliation(s)
- Cheng-Chia Lee
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan.
- Brain Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan.
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan.
- Huai-Che Yang
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Brain Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Hsiu-Mei Wu
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- Yen-Yu Lin
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Chia-Feng Lu
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Syu-Jyun Peng
- In-Service Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Yu-Te Wu
- Brain Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Institute of Biophotonics, National Yang Ming University, Taipei, Taiwan
- Jason P Sheehan
- Department of Neurological Surgery, University of Virginia, Charlottesville, VA, USA
- Wan-Yuo Guo
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
9. Rodríguez Mallma MJ, Vilca-Aguilar M, Zuloaga-Rotta L, Borja-Rosales R, Salas-Ojeda M, Mauricio D. Machine Learning Approach for Analyzing 3-Year Outcomes of Patients with Brain Arteriovenous Malformation (AVM) after Stereotactic Radiosurgery (SRS). Diagnostics (Basel) 2023; 14:22. PMID: 38201331. PMCID: PMC10871108. DOI: 10.3390/diagnostics14010022.
Abstract
A cerebral arteriovenous malformation (AVM) is a tangle of abnormal blood vessels that irregularly connects arteries and veins. Stereotactic radiosurgery (SRS) has been shown to be an effective treatment for AVM patients, but the factors associated with AVM obliteration remain a matter of debate. In this study, we aimed to develop a model that can predict whether patients with AVM will be cured 36 months after intervention by means of SRS and to identify the most important predictors that explain the probability of being cured. A machine learning (ML) approach was applied using decision tree (DT) and logistic regression (LR) techniques on historical data (sociodemographic, clinical, treatment, angioarchitecture, and radiosurgery procedure) of 202 patients with AVM who underwent SRS at the Instituto de Radiocirugía del Perú (IRP) between 2005 and 2018. The LR model obtained the best results for predicting AVM cure, with an accuracy of 0.92, sensitivity of 0.93, specificity of 0.89, and an area under the curve (AUC) of 0.98, which shows that ML models are suitable for predicting the prognosis of medical conditions such as AVM and can be a support tool for medical decision-making. In addition, several factors were identified that best explained whether patients with AVM would be cured at 36 months: the location of the AVM, the occupation of the patient, and the presence of hemorrhage.
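A hedged sketch of the two model families compared above, fitted to synthetic stand-in data rather than the IRP cohort, with accuracy and AUC as the reported metrics; feature encoding and tuning are omitted.

```python
# Sketch only: decision tree vs. logistic regression on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(202, 8))          # placeholder encoded clinical/angioarchitectural variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=202) > 0).astype(int)  # cured at 36 months?

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("decision tree", DecisionTreeClassifier(max_depth=4, random_state=1))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, "accuracy:", accuracy_score(y_te, model.predict(X_te)), "AUC:", round(auc, 2))
```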
Affiliation(s)
- Marcos Vilca-Aguilar
- Instituto de Radiocirugía del Perú, Clínica San Pablo, Lima 15023, Peru
- Servicio de Neurocirugía, Hospital María Auxiliadora, Lima 15828, Peru
- Luis Zuloaga-Rotta
- Facultad de Ingeniería Industrial y de Sistemas, Universidad Nacional de Ingeniería, Lima 15333, Peru
- Rubén Borja-Rosales
- Facultad de Ingeniería Industrial y de Sistemas, Universidad Nacional de Ingeniería, Lima 15333, Peru
- David Mauricio
- Universidad Nacional Mayor de San Marcos, Lima 15081, Peru
10. Li J, Song Y, Wu Y, Liang L, Li G, Bai S. Clinical evaluation on automatic segmentation results of convolutional neural networks in rectal cancer radiotherapy. Front Oncol 2023; 13:1158315. PMID: 37731629. PMCID: PMC10508953. DOI: 10.3389/fonc.2023.1158315.
Abstract
Purpose Image segmentation can be time-consuming and lacks consistency between different oncologists, which is essential in conformal radiotherapy techniques. We aimed to evaluate automatic delineation results generated by convolutional neural networks (CNNs) from geometry and dosimetry perspectives and explore the reliability of these segmentation tools in rectal cancer. Methods Forty-seven rectal cancer cases treated from February 2018 to April 2019 were randomly collected retrospectively in our cancer center. The oncologists delineated regions of interest (ROIs) on planning CT images as the ground truth, including the clinical target volume (CTV), bladder, small intestine, and femoral heads. The corresponding automatic segmentation results were generated by DeepLabv3+ and ResUNet, and we also used Atlas-Based Autosegmentation (ABAS) software for comparison. The geometric evaluation was carried out using the volumetric Dice similarity coefficient (DSC) and surface DSC, and critical dose parameters were assessed based on replanning optimized by clinically approved or automatically generated CTVs and organs at risk (OARs), i.e., the Planref and Plantest. The Pearson test was used to explore the correlation between geometric metrics and dose parameters. Results In the geometric evaluation, DeepLabv3+ performed better in DSC metrics for the CTV (volumetric DSC, mean = 0.96, P < 0.01; surface DSC, mean = 0.78, P < 0.01) and small intestine (volumetric DSC, mean = 0.91, P < 0.01; surface DSC, mean = 0.62, P < 0.01), while ResUNet had advantages in volumetric DSC of the bladder (mean = 0.97, P < 0.05). For the analysis of critical dose parameters between Planref and Plantest, there was a significant difference for target volumes (P < 0.01), and no significant difference was found for the ResUNet-generated small intestine (P > 0.05). For the correlation test, a negative correlation was found between DSC metrics (volumetric, surface DSC) and dosimetric parameters (δD95, δD95, HI, CI) for target volumes (P < 0.05), and no significant correlation was found for most tests of OARs (P > 0.05). Conclusions CNNs show remarkable repeatability and time-saving in automatic segmentation, and their accuracy also has a certain potential in clinical practice. Meanwhile, clinical aspects, such as dose distribution, may need to be considered when comparing the performance of auto-segmentation methods.
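The correlation analysis described above can be illustrated with a small sketch on made-up paired values (per-case DSC versus a dose-parameter deviation); it is not the study's statistical code.

```python
# Sketch of a Pearson correlation between geometric and dosimetric metrics,
# using made-up paired values only.
import numpy as np
from scipy.stats import pearsonr

dsc = np.array([0.96, 0.94, 0.92, 0.90, 0.88, 0.85, 0.83, 0.80])   # hypothetical volumetric DSC per case
delta_d95 = np.array([0.2, 0.4, 0.5, 0.9, 1.1, 1.6, 1.8, 2.3])     # hypothetical |ΔD95| deviation (Gy)

r, p = pearsonr(dsc, delta_d95)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")   # a negative r would mirror the reported trend
```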
Affiliation(s)
- Jing Li
- Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Ying Song
- Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Machine Intelligence Laboratory, College of Computer Science, Chengdu, China
- Yongchang Wu
- Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Lan Liang
- Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Guangjun Li
- Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Sen Bai
- Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
11. Lei Y, Wang T, Roper J, Tian S, Patel P, Bradley JD, Jani AB, Liu T, Yang X. Automatic segmentation of neurovascular bundle on MRI using deep learning based topological modulated network. Med Phys 2023; 50:5479-5488. PMID: 36939189. PMCID: PMC10509305. DOI: 10.1002/mp.16378.
Abstract
PURPOSE Radiation damage to neurovascular bundles (NVBs) may be the cause of sexual dysfunction after radiotherapy for prostate cancer. However, it is challenging to delineate NVBs as organs at risk from planning CTs during radiotherapy. Recently, the integration of MR into radiotherapy has made NVB contour delineation possible. In this study, we aim to develop an MRI-based deep learning method for automatic NVB segmentation. METHODS The proposed method, named topological modulated network, consists of three subnetworks: a focal modulation, a hierarchical block and a topological fully convolutional network (FCN). The focal modulation is used to derive the location and bounds of the left and right NVBs, namely the candidate volumes of interest (VOIs). The hierarchical block aims to highlight NVB boundary information on the derived feature map. The topological FCN then segments the NVBs inside the VOIs by considering the topological consistency inherent in vascular delineation. Based on the location information of the candidate VOIs, the segmentations of the NVBs can then be brought back to the input MRI's coordinate system. RESULTS A five-fold cross-validation study was performed on 60 patient cases to evaluate the performance of the proposed method. The segmented results were compared with manual contours. The Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (HD95) are (left NVB) 0.81 ± 0.10, 1.49 ± 0.88 mm, and (right NVB) 0.80 ± 0.15, 1.54 ± 1.22 mm, respectively. CONCLUSION We proposed a novel deep learning-based segmentation method for NVBs on pelvic MR images. The good agreement of our method with the manually drawn ground-truth contours supports the feasibility of the proposed method, which could potentially be used to spare NVBs during proton and photon radiotherapy and thereby improve the quality of life of prostate cancer patients.
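A generic sketch of the 95th-percentile Hausdorff distance (HD95) used above, computed from surface distance transforms of two placeholder binary masks; this illustrates the metric, not the paper's implementation, and the voxel spacing is assumed.

```python
# Illustrative HD95 between two binary masks via distance transforms.
import numpy as np
from scipy import ndimage

def surface(mask):
    return mask & ~ndimage.binary_erosion(mask)

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    a, b = a.astype(bool), b.astype(bool)
    sa, sb = surface(a), surface(b)
    # distance from every voxel to the nearest surface voxel of the other mask
    dt_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    dt_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    d_ab = dt_b[sa]          # distances from surface of a to surface of b
    d_ba = dt_a[sb]
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

a = np.zeros((32, 64, 64), bool); a[10:20, 20:40, 20:40] = True
b = np.zeros_like(a);             b[11:21, 22:42, 21:41] = True
print("HD95 (mm):", hd95(a, b, spacing=(1.0, 0.8, 0.8)))
```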
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
12. Visibelli A, Roncaglia B, Spiga O, Santucci A. The Impact of Artificial Intelligence in the Odyssey of Rare Diseases. Biomedicines 2023; 11:887. PMID: 36979866. PMCID: PMC10045927. DOI: 10.3390/biomedicines11030887.
Abstract
Emerging machine learning (ML) technologies have the potential to significantly improve the research and treatment of rare diseases, which constitute a vast set of diseases that affect a small proportion of the total population. Artificial Intelligence (AI) algorithms can help to quickly identify patterns and associations that would be difficult or impossible for human analysts to detect. Predictive modeling techniques, such as deep learning, have been used to forecast the progression of rare diseases, enabling the development of more targeted treatments. Moreover, AI has also shown promise in the field of drug development for rare diseases with the identification of subpopulations of patients who may be most likely to respond to a particular drug. This review aims to highlight the achievements of AI algorithms in the study of rare diseases in the past decade and advise researchers on which methods have proven to be most effective. The review will focus on specific rare diseases, as defined by a prevalence rate that does not exceed 1-9/100,000 on Orphanet, and will examine which AI methods have been most successful in their study. We believe this review can guide clinicians and researchers in the successful application of ML in rare diseases.
Affiliation(s)
- Anna Visibelli
- Department of Biotechnology, Chemistry and Pharmacy, University of Siena, 53100 Siena, Italy
- Bianca Roncaglia
- Department of Biotechnology, Chemistry and Pharmacy, University of Siena, 53100 Siena, Italy
- Ottavia Spiga
- Department of Biotechnology, Chemistry and Pharmacy, University of Siena, 53100 Siena, Italy
- Competence Center ARTES 4.0, 53100 Siena, Italy
- SienabioACTIVE—SbA, 53100 Siena, Italy
- Annalisa Santucci
- Department of Biotechnology, Chemistry and Pharmacy, University of Siena, 53100 Siena, Italy
- Competence Center ARTES 4.0, 53100 Siena, Italy
- SienabioACTIVE—SbA, 53100 Siena, Italy
13. Application of artificial intelligence to stereotactic radiosurgery for intracranial lesions: detection, segmentation, and outcome prediction. J Neurooncol 2023; 161:441-450. DOI: 10.1007/s11060-022-04234-x.
Abstract
BACKGROUND The rapid evolution of artificial intelligence (AI) has prompted its wide application in healthcare systems. Stereotactic radiosurgery has served as a good candidate for AI model development and has achieved encouraging results in recent years. This article aims to demonstrate current AI applications in radiosurgery. METHODS Literature published in PubMed during 2010-2022 discussing AI applications in stereotactic radiosurgery was reviewed. RESULTS AI algorithms, especially machine learning/deep learning models, have been applied to different aspects of stereotactic radiosurgery. Spontaneous tumor detection and automated lesion delineation or segmentation were two of the promising applications, which could be further extended to longitudinal treatment follow-up. Outcome prediction using machine learning algorithms with radiomics-based analysis was another well-established application. CONCLUSIONS Stereotactic radiosurgery has taken a lead role in AI development. Current achievements, limitations, and directions for further investigation are summarized in this article.
14. Jiao Y, Zhang JZ, Zhao Q, Liu JQ, Wu ZZ, Li Y, Li H, Fu WL, Weng JC, Huo R, Zhao SZ, Wang S, Cao Y, Zhao JZ. Machine Learning-Enabled Determination of Diffuseness of Brain Arteriovenous Malformations from Magnetic Resonance Angiography. Transl Stroke Res 2022; 13:939-948. PMID: 34383209. DOI: 10.1007/s12975-021-00933-1.
Abstract
The diffuseness of brain arteriovenous malformations (bAVMs) is a significant factor in surgical outcome evaluation and hemorrhagic risk prediction. However, identifying diffuseness remains difficult, owing to inter-observer variability arising from differences in experience and to difficulties in quantification. The purpose of this study was to develop a machine learning (ML) model to automatically identify the diffuseness of bAVM niduses using three-dimensional (3D) time-of-flight magnetic resonance angiography (TOF-MRA) images. A total of 635 patients with bAVMs who underwent TOF-MRA imaging were enrolled. Three experienced neuroradiologists delineated the bAVM lesions and identified the diffuseness on TOF-MRA images, which were considered the ground-truth reference. A U-Net-based segmentation model was trained to segment lesion areas. Eight mainstream ML models were trained on the radiomic features of the segmented lesions to identify diffuseness, based on which an integrated model was built that yielded the best performance. In the test set, the Dice score, F2 score, precision, and recall for the segmentation model were 0.80 [0.72-0.84], 0.80 [0.71-0.86], 0.84 [0.77-0.93], and 0.82 [0.69-0.89], respectively. For diffuseness identification, the ensemble-based model achieved an area under the receiver operating characteristic curve (AUC) of 0.93 (95% CI 0.87-0.99) in the training set. The AUC, accuracy, precision, recall, and F1 score of the diffuseness identification model were 0.95, 0.90, 0.81, 0.84, and 0.83, respectively, in the test set. The ML models showed good performance in automatically detecting bAVM lesions and identifying diffuseness. The method may help to judge the diffuseness of bAVMs objectively, quantitatively, and efficiently.
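A hedged sketch of the classification stage described above: a soft-voting ensemble over a pre-computed (here randomly generated) radiomic feature matrix, standing in for the eight-model comparison and the integrated model; feature extraction itself, typically done with a radiomics toolkit, is omitted.

```python
# Sketch only: soft-voting ensemble over a placeholder radiomic feature matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 40))                 # placeholder radiomic feature matrix
y = rng.integers(0, 2, size=300)               # diffuse (1) vs compact (0) nidus, hypothetical labels

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=2)),
                ("svm", SVC(probability=True, random_state=2))],
    voting="soft",
)
print("cross-validated AUC:", cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc").mean())
```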
Affiliation(s)
- Yuming Jiao
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, 119 South Fourth Ring Road West, Fengtai District, Beijing, People's Republic of China
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, People's Republic of China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, People's Republic of China
- Jun-Ze Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, 119 South Fourth Ring Road West, Fengtai District, Beijing, People's Republic of China
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, People's Republic of China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, People's Republic of China
- Qi Zhao
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Jia-Qi Liu
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Zhen-Zhou Wu
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Yan Li
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Hao Li
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, 119 South Fourth Ring Road West, Fengtai District, Beijing, People's Republic of China
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, People's Republic of China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, People's Republic of China
- Wei-Lun Fu
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, 119 South Fourth Ring Road West, Fengtai District, Beijing, People's Republic of China
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, People's Republic of China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, People's Republic of China
- Jian-Cong Weng
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, 119 South Fourth Ring Road West, Fengtai District, Beijing, People's Republic of China
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, People's Republic of China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, People's Republic of China
- Ran Huo
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, 119 South Fourth Ring Road West, Fengtai District, Beijing, People's Republic of China
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, People's Republic of China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, People's Republic of China
- Shao-Zhi Zhao
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, 119 South Fourth Ring Road West, Fengtai District, Beijing, People's Republic of China
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, People's Republic of China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, People's Republic of China
- Shuo Wang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, 119 South Fourth Ring Road West, Fengtai District, Beijing, People's Republic of China
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, People's Republic of China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, People's Republic of China
- Yong Cao
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, 119 South Fourth Ring Road West, Fengtai District, Beijing, People's Republic of China
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, People's Republic of China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, People's Republic of China
- Ji-Zong Zhao
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, 119 South Fourth Ring Road West, Fengtai District, Beijing, People's Republic of China
- China National Clinical Research Center for Neurological Diseases, Beijing, People's Republic of China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, People's Republic of China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, People's Republic of China
15. Colombo E, Fick T, Esposito G, Germans M, Regli L, van Doormaal T. Segmentation techniques of brain arteriovenous malformations for 3D visualization: a systematic review. Radiol Med 2022; 127:1333-1341. PMID: 36255659. PMCID: PMC9747834. DOI: 10.1007/s11547-022-01567-5.
Abstract
BACKGROUND Visualization, analysis and characterization of the angioarchitecture of a brain arteriovenous malformation (bAVM) are crucial steps for understanding and managing these complex lesions. Three-dimensional (3D) segmentation and 3D visualization of bAVMs play a significant role in this process. We performed a systematic review of currently available 3D segmentation and visualization techniques for bAVMs. METHODS PubMed, Embase and Google Scholar were searched to identify studies reporting 3D segmentation techniques applied to bAVM characterization. The category of input scan, segmentation approach (automatic, semiautomatic, manual), time needed for segmentation and 3D visualization techniques were noted. RESULTS Thirty-three studies were included. Thirteen (39%) used MRI as the baseline imaging modality, 9 used DSA (27%), and 7 used CT (21%). Segmentation through automatic algorithms was used in 20 (61%), semiautomatic segmentation in 6 (18%), and manual segmentation in 7 (21%) studies. Median automatic segmentation time was 10 min (IQR 33), and median semiautomatic time was 25 min (IQR 73). Manual segmentation time was reported in only one study, with a mean of 5-10 min. Thirty-two (97%) studies used screens to visualize the 3D segmentation outcomes and 1 (3%) study utilized a heads-up display (HUD). Integration with mixed reality was used in 4 studies (12%). CONCLUSIONS A gold standard for 3D visualization of bAVMs does not exist. This review describes a tendency over time to base segmentation on algorithms trained with machine learning, with unsupervised fuzzy-based algorithms standing out as a potentially preferred strategy. Continued efforts will be necessary to improve algorithms, integrate complete hemodynamic assessment and find innovative tools for three-dimensional visualization.
Affiliation(s)
- Elisa Colombo
- Department of Neurosurgery, Clinical Neuroscience Center and University of Zürich, University Hospital Zurich, Frauenklinikstrasse 10, 8091, Zürich, ZH, Switzerland.
- Tim Fick
- Prinses Máxima Center, Department of Neurosurgery, Utrecht, CS, The Netherlands
- Giuseppe Esposito
- Department of Neurosurgery and Clinical Neuroscience Center, University Hospital of Zurich, Zürich, ZH, Switzerland
- Menno Germans
- Department of Neurosurgery and Clinical Neuroscience Center, University Hospital of Zurich, Zürich, ZH, Switzerland
- Luca Regli
- Department of Neurosurgery and Clinical Neuroscience Center, University Hospital of Zurich, Zürich, ZH, Switzerland
- Tristan van Doormaal
- Department of Neurosurgery and Clinical Neuroscience Center, University Hospital of Zurich, Zürich, ZH, Switzerland
16. Huang C, Wang J, Wang SH, Zhang YD. Applicable artificial intelligence for brain disease: A survey. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.07.005.
17. Saggi S, Winkler EA, Ammanuel SG, Morshed RA, Garcia JH, Young JS, Semonche A, Fullerton HJ, Kim H, Cooke DL, Hetts SW, Abla A, Lawton MT, Gupta N. Machine learning for predicting hemorrhage in pediatric patients with brain arteriovenous malformation. J Neurosurg Pediatr 2022; 30:203-209. PMID: 35916099. DOI: 10.3171/2022.4.peds21470.
Abstract
OBJECTIVE Ruptured brain arteriovenous malformations (bAVMs) in a child are associated with substantial morbidity and mortality. Prior studies investigating predictors of hemorrhagic presentation of a bAVM during childhood are limited. Machine learning (ML), which has high predictive accuracy when applied to large data sets, can be a useful adjunct for predicting hemorrhagic presentation. The goal of this study was to use ML in conjunction with a traditional regression approach to identify predictors of hemorrhagic presentation in pediatric patients based on a retrospective cohort study design. METHODS Using data obtained from 186 pediatric patients over a 19-year study period, the authors implemented three ML algorithms (random forest models, gradient boosted decision trees, and AdaBoost) to identify features that were most important for predicting hemorrhagic presentation. Additionally, logistic regression analysis was used to ascertain significant predictors of hemorrhagic presentation as a comparison. RESULTS All three ML models were consistent in identifying bAVM size and patient age at presentation as the two most important factors for predicting hemorrhagic presentation. Age at presentation was not identified as a significant predictor of hemorrhagic presentation in multivariable logistic regression. Gradient boosted decision trees/AdaBoost and random forest models identified bAVM location and a concurrent arterial aneurysm as the third most important factors, respectively. Finally, logistic regression identified a left-sided bAVM, small bAVM size, and the presence of a concurrent arterial aneurysm as significant risk factors for hemorrhagic presentation. CONCLUSIONS By using an ML approach, the authors found predictors of hemorrhagic presentation that were not identified using a conventional regression approach.
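A minimal sketch of the comparison described above, fitting tree ensembles and logistic regression to a synthetic stand-in feature matrix and reading off feature importances and coefficients; the feature names are taken from the abstract, the data are not.

```python
# Sketch only: compare ensemble feature importances with logistic-regression
# coefficients on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
features = ["bAVM size", "age at presentation", "location (coded)", "concurrent aneurysm"]
X = rng.normal(size=(186, len(features)))
y = rng.integers(0, 2, size=186)               # hemorrhagic presentation (hypothetical labels)

for model in [RandomForestClassifier(random_state=3),
              GradientBoostingClassifier(random_state=3),
              AdaBoostClassifier(random_state=3)]:
    model.fit(X, y)
    ranking = sorted(zip(model.feature_importances_, features), reverse=True)
    print(type(model).__name__, [name for _, name in ranking])

lr = LogisticRegression(max_iter=1000).fit(X, y)
print("logistic regression coefficients:", dict(zip(features, lr.coef_[0].round(2))))
```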
Affiliation(s)
- Satvir Saggi
- Department of Neurological Surgery, University of California, San Francisco
- Ethan A Winkler
- Department of Neurological Surgery, University of California, San Francisco
- Simon G Ammanuel
- Department of Neurological Surgery, University of California, San Francisco
- Ramin A Morshed
- Department of Neurological Surgery, University of California, San Francisco
- Joseph H Garcia
- Department of Neurological Surgery, University of California, San Francisco
- Jacob S Young
- Department of Neurological Surgery, University of California, San Francisco
- Alexa Semonche
- Department of Neurological Surgery, University of California, San Francisco
- Heather J Fullerton
- Pediatric Stroke and Cerebrovascular Disease Center, Department of Neurology, University of California, San Francisco
- Helen Kim
- Center for Cerebrovascular Research, Department of Anesthesia and Perioperative Care, University of California, San Francisco
- Daniel L Cooke
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, California
- Steven W Hetts
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, California
- Adib Abla
- Department of Neurological Surgery, University of California, San Francisco
- Michael T Lawton
- Department of Neurological Surgery, Barrow Neurological Institute, Phoenix, Arizona
- Nalin Gupta
- Department of Neurological Surgery, University of California, San Francisco; Department of Pediatrics, University of California, San Francisco, California
|
18
|
Chen X, Lei Y, Su J, Yang H, Ni W, Yu J, Gu Y, Mao Y. A Review of Artificial Intelligence in Cerebrovascular Disease Imaging: Applications and Challenges. Curr Neuropharmacol 2022; 20:1359-1382. [PMID: 34749621 PMCID: PMC9881077 DOI: 10.2174/1570159x19666211108141446] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Revised: 09/07/2021] [Accepted: 10/10/2021] [Indexed: 01/10/2023] Open
Abstract
BACKGROUND A variety of emerging medical imaging technologies based on artificial intelligence have been widely applied in many diseases, but they are still limitedly used in the cerebrovascular field even though the diseases can lead to catastrophic consequences. OBJECTIVE This work aims to discuss the current challenges and future directions of artificial intelligence technology in cerebrovascular diseases through reviewing the existing literature related to applications in terms of computer-aided detection, prediction and treatment of cerebrovascular diseases. METHODS Based on artificial intelligence applications in four representative cerebrovascular diseases including intracranial aneurysm, arteriovenous malformation, arteriosclerosis and moyamoya disease, this paper systematically reviews studies published between 2006 and 2021 in five databases: National Center for Biotechnology Information, Elsevier Science Direct, IEEE Xplore Digital Library, Web of Science and Springer Link. And three refinement steps were further conducted after identifying relevant literature from these databases. RESULTS For the popular research topic, most of the included publications involved computer-aided detection and prediction of aneurysms, while studies about arteriovenous malformation, arteriosclerosis and moyamoya disease showed an upward trend in recent years. Both conventional machine learning and deep learning algorithms were utilized in these publications, but machine learning techniques accounted for a larger proportion. CONCLUSION Algorithms related to artificial intelligence, especially deep learning, are promising tools for medical imaging analysis and will enhance the performance of computer-aided detection, prediction and treatment of cerebrovascular diseases.
Collapse
Affiliation(s)
- Xi Chen
- School of Information Science and Technology, Fudan University, Shanghai, China; These authors contributed equally to this work
| | - Yu Lei
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China; These authors contributed equally to this work
| | - Jiabin Su
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
| | - Heng Yang
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
| | - Wei Ni
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
| | - Jinhua Yu
- School of Information Science and Technology, Fudan University, Shanghai, China; Correspondence: School of Information Science and Technology, Fudan University, Shanghai 200433, China (Tel/Fax: +86 021 65643202) and Department of Neurosurgery, Huashan Hospital of Fudan University, Shanghai 200040, China (Tel: +86 021 52889999; Fax: +86 021 62489191)
| | - Yuxiang Gu
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China; Correspondence: School of Information Science and Technology, Fudan University, Shanghai 200433, China (Tel/Fax: +86 021 65643202) and Department of Neurosurgery, Huashan Hospital of Fudan University, Shanghai 200040, China (Tel: +86 021 52889999; Fax: +86 021 62489191)
| | - Ying Mao
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
| |
Collapse
|
19
|
A Survey on Deep Learning for Precision Oncology. Diagnostics (Basel) 2022; 12:diagnostics12061489. [PMID: 35741298 PMCID: PMC9222056 DOI: 10.3390/diagnostics12061489] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Revised: 06/14/2022] [Accepted: 06/14/2022] [Indexed: 12/27/2022] Open
Abstract
Precision oncology, which ensures optimized cancer treatment tailored to the unique biology of a patient’s disease, has rapidly developed and is of great clinical importance. Deep learning has become the main method for precision oncology. This paper summarizes the recent deep-learning approaches relevant to precision oncology and reviews over 150 articles within the last six years. First, we survey the deep-learning approaches categorized by various precision oncology tasks, including the estimation of dose distribution for treatment planning, survival analysis and risk estimation after treatment, prediction of treatment response, and patient selection for treatment planning. Secondly, we provide an overview of the studies per anatomical area, including the brain, bladder, breast, bone, cervix, esophagus, gastric, head and neck, kidneys, liver, lung, pancreas, pelvis, prostate, and rectum. Finally, we highlight the challenges and discuss potential solutions for future research directions.
Collapse
|
20
|
Simon AB, Hurt B, Karunamuni R, Kim GY, Moiseenko V, Olson S, Farid N, Hsiao A, Hattangadi-Gluth JA. Automated segmentation of multiparametric magnetic resonance images for cerebral AVM radiosurgery planning: a deep learning approach. Sci Rep 2022; 12:786. [PMID: 35039538 PMCID: PMC8763944 DOI: 10.1038/s41598-021-04466-3] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2021] [Accepted: 12/13/2021] [Indexed: 11/28/2022] Open
Abstract
Stereotactic radiosurgery planning for cerebral arteriovenous malformations (AVM) is complicated by the variability in appearance of an AVM nidus across different imaging modalities. We developed a deep learning approach to automatically segment cerebrovascular-anatomical maps from multiple high-resolution magnetic resonance imaging/angiography (MRI/MRA) sequences in AVM patients, with the goal of facilitating target delineation. Twenty-three AVM patients who were evaluated for radiosurgery and underwent multi-parametric MRI/MRA were included. A hybrid semi-automated and manual approach was used to label MRI/MRAs with arteries, veins, brain parenchyma, cerebral spinal fluid (CSF), and embolized vessels. Next, these labels were used to train a convolutional neural network to perform this task. Imaging from 17 patients (6362 image slices) was used for training, and 6 patients (1224 slices) for validation. Performance was evaluated by Dice Similarity Coefficient (DSC). Classification performance was good for arteries, veins, brain parenchyma, and CSF, with DSCs of 0.86, 0.91, 0.98, and 0.91, respectively in the validation image set. Performance was lower for embolized vessels, with a DSC of 0.75. This demonstrates the proof of principle that accurate, high-resolution cerebrovascular-anatomical maps can be generated from multiparametric MRI/MRA. Clinical validation of their utility in radiosurgery planning is warranted.
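For readers unfamiliar with the evaluation metric, the Dice similarity coefficient reported above can be computed as in the minimal sketch below (toy 2D masks, not the study's MRI/MRA label maps).
```python
# Dice similarity coefficient (DSC) for binary masks; toy example only.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

# two overlapping squares standing in for a predicted and a reference label
a = np.zeros((64, 64), dtype=bool); a[10:30, 10:30] = True
b = np.zeros((64, 64), dtype=bool); b[15:35, 12:32] = True
print(round(dice(a, b), 3))
```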
Collapse
Affiliation(s)
- Aaron B Simon
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, 3960 Health Sciences Dr, Mail Code 0865, La Jolla, CA, USA; Department of Radiation Oncology, University of California Irvine, Orange, CA, USA
| | - Brian Hurt
- Department of Radiology, University of California San Diego, La Jolla, CA, USA
| | - Roshan Karunamuni
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, 3960 Health Sciences Dr, Mail Code 0865, La Jolla, CA, USA
| | - Gwe-Ya Kim
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, 3960 Health Sciences Dr, Mail Code 0865, La Jolla, CA, USA
| | - Vitali Moiseenko
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, 3960 Health Sciences Dr, Mail Code 0865, La Jolla, CA, USA
| | - Scott Olson
- Division of Neurosurgery, University of California San Diego, La Jolla, CA, USA
| | - Nikdokht Farid
- Department of Radiology, University of California San Diego, La Jolla, CA, USA
| | - Albert Hsiao
- Department of Radiology, University of California San Diego, La Jolla, CA, USA
| | - Jona A Hattangadi-Gluth
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, 3960 Health Sciences Dr, Mail Code 0865, La Jolla, CA, USA.
| |
Collapse
|
21
|
Matkovic LA, Wang T, Lei Y, Akin-Akintayo OO, Ojo OAA, Akintayo AA, Roper J, Bradley JD, Liu T, Schuster DM, Yang X. Prostate and dominant intraprostatic lesion segmentation on PET/CT using cascaded regional-net. Phys Med Biol 2021; 66:10.1088/1361-6560/ac3c13. [PMID: 34808603 PMCID: PMC8725511 DOI: 10.1088/1361-6560/ac3c13] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2021] [Accepted: 11/22/2021] [Indexed: 12/22/2022]
Abstract
Focal boost to dominant intraprostatic lesions (DILs) has recently been proposed for prostate radiation therapy. Accurate and fast delineation of the prostate and DILs is thus required during treatment planning. In this paper, we develop a learning-based method using positron emission tomography (PET)/computed tomography (CT) images to automatically segment the prostate and its DILs. To enable end-to-end segmentation, a deep learning-based method, called cascaded regional-Net, is utilized. The first network, referred to as dual attention network, is used to segment the prostate via extracting comprehensive features from both PET and CT images. A second network, referred to as mask scoring regional convolutional neural network (MSR-CNN), is used to segment the DILs from the PET and CT within the prostate region. Scoring strategy is used to diminish the misclassification of the DILs. For DIL segmentation, the proposed cascaded regional-Net uses two steps to remove normal tissue regions, with the first step cropping images based on prostate segmentation and the second step using MSR-CNN to further locate the DILs. The binary masks of DILs and prostates of testing patients are generated on the PET/CT images by the trained model. For evaluation, we retrospectively investigated 49 prostate cancer patients with PET/CT images acquired. The prostate and DILs of each patient were contoured by radiation oncologists and set as the ground truths and targets. We used five-fold cross-validation and a hold-out test to train and evaluate our method. The mean surface distance and DSC values were 0.666 ± 0.696 mm and 0.932 ± 0.059 for the prostate and 0.814 ± 1.002 mm and 0.801 ± 0.178 for the DILs among all 49 patients. The proposed method has shown promise for facilitating prostate and DIL delineation for DIL focal boost prostate radiation therapy.
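The mean surface distance reported above alongside DSC can be illustrated with a small 2D example; the sketch below assumes unit pixel spacing and toy masks, whereas the study evaluates 3D contours with real voxel spacing.
```python
# Symmetric mean surface distance (MSD) between two binary masks; 2D toy example.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask: np.ndarray) -> np.ndarray:
    return mask & ~binary_erosion(mask)               # boundary pixels of the mask

def mean_surface_distance(a: np.ndarray, b: np.ndarray) -> float:
    dist_to_a = distance_transform_edt(~surface(a))   # distance of each pixel to A's surface
    dist_to_b = distance_transform_edt(~surface(b))
    return 0.5 * (dist_to_b[surface(a)].mean() + dist_to_a[surface(b)].mean())

a = np.zeros((64, 64), dtype=bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), dtype=bool); b[21:41, 22:42] = True
print(round(mean_surface_distance(a, b), 2))
```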
Collapse
Affiliation(s)
- Luke A. Matkovic
- Department of Radiation Oncology, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
| | - Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
| | - Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA
| | | | | | | | - Justin Roper
- Department of Radiation Oncology, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
| | - Jeffery D. Bradley
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
| | - Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
| | - David M. Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
| | - Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
| |
Collapse
|
22
|
Lei Y, Wang T, Dong X, Tian S, Liu Y, Mao H, Curran WJ, Shu HK, Liu T, Yang X. MRI classification using semantic random forest with auto-context model. Quant Imaging Med Surg 2021; 11:4753-4766. [PMID: 34888187 PMCID: PMC8611460 DOI: 10.21037/qims-20-1114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2020] [Accepted: 04/28/2021] [Indexed: 11/06/2022]
Abstract
BACKGROUND It is challenging to differentiate air and bone on MR images of conventional sequences due to their low contrast. We propose to combine semantic feature extraction in an auto-context manner with random forest to improve the reliability of MRI segmentation for MRI-based radiotherapy treatment planning or PET attenuation correction. METHODS We applied a semantic classification random forest (SCRF) method which consists of a training stage and a segmentation stage. In the training stage, patch-based MRI features were extracted from registered MRI-CT training images, and the most informative elements were selected via feature selection to train an initial random forest. The remaining random forests in the sequence were trained on a combination of MRI features and semantic features in an auto-context manner. During segmentation, the MRI patches were first fed into these random forests to derive a patch-based segmentation. By using patch fusion, the final end-to-end segmentation was obtained. RESULTS The Dice similarity coefficients (DSC) for the air, bone and soft tissue classes obtained via the proposed method were 0.976±0.007, 0.819±0.050 and 0.932±0.031, compared to 0.916±0.099, 0.673±0.151 and 0.830±0.083 with random forest (RF), and 0.942±0.086, 0.791±0.046 and 0.917±0.033 with U-Net. SCRF also outperformed the competing methods in sensitivity and specificity for all three structure types. CONCLUSIONS The proposed method accurately segmented bone, air and soft tissue. It is promising for facilitating advanced MR applications in diagnosis and therapy.
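The auto-context idea described here, feeding each stage's class probabilities back in as extra features for the next random forest, can be sketched as below; the data, class labels, and stage count are placeholders, not the SCRF implementation.
```python
# Toy auto-context chain of random forests (a sketch, not the SCRF code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))        # synthetic patch-based MRI features
y = rng.integers(0, 3, size=500)      # 0 = air, 1 = bone, 2 = soft tissue (synthetic labels)

context = np.empty((500, 0))          # no semantic context before the first stage
forests = []
for stage in range(3):
    feats = np.hstack([X, context])
    rf = RandomForestClassifier(n_estimators=50, random_state=stage).fit(feats, y)
    context = rf.predict_proba(feats)  # class probabilities become the next stage's context
    forests.append(rf)

print([rf.n_features_in_ for rf in forests])   # 20, then 23 once context is appended
```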
Collapse
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xue Dong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Hui-Kuo Shu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| |
Collapse
|
23
|
Lin H, Xiao H, Dong L, Teo KBK, Zou W, Cai J, Li T. Deep learning for automatic target volume segmentation in radiation therapy: a review. Quant Imaging Med Surg 2021; 11:4847-4858. [PMID: 34888194 PMCID: PMC8611469 DOI: 10.21037/qims-21-168] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Accepted: 09/16/2021] [Indexed: 12/21/2022]
Abstract
Deep learning, a new branch of machine learning algorithm, has emerged as a fast growing trend in medical imaging and become the state-of-the-art method in various clinical applications such as Radiology, Histo-pathology and Radiation Oncology. Specifically in radiation oncology, deep learning has shown its power in performing automatic segmentation tasks in radiation therapy for Organs-At-Risks (OAR), given its potential in improving the efficiency of OAR contouring and reducing the inter- and intra-observer variabilities. The similar interests were shared for target volume segmentation, an essential step of radiation therapy treatment planning, where the gross tumor volume is defined and microscopic spread is encompassed. The deep learning-based automatic segmentation method has recently been expanded into target volume automatic segmentation. In this paper, the authors summarized the major deep learning architectures of supervised learning fashion related to target volume segmentation, reviewed the mechanism of each infrastructure, surveyed the use of these models in various imaging domains (including Computational Tomography with and without contrast, Magnetic Resonant Imaging and Positron Emission Tomography) and multiple clinical sites, and compared the performance of different models using standard geometric evaluation metrics. The paper concluded with a discussion of open challenges and potential paths of future research in target volume automatic segmentation and how it may benefit the clinical practice.
Collapse
Affiliation(s)
- Hui Lin
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiation Oncology, University of California, San Francisco, CA, USA
| | - Haonan Xiao
- Department of Health Technology & Informatics, The Hong Kong Polytechnic University, Hong Kong, China
| | - Lei Dong
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Kevin Boon-Keng Teo
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Wei Zou
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Jing Cai
- Department of Health Technology & Informatics, The Hong Kong Polytechnic University, Hong Kong, China
| | - Taoran Li
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| |
Collapse
|
24
|
Srinivasan VM, Lawton MT. Commentary: External Validation of the R2eD AVM Score to Predict the Likelihood of Rupture Presentation of Brain Arteriovenous Malformations. Neurosurgery 2021; 89:E162-E164. [PMID: 34161595 DOI: 10.1093/neuros/nyab225] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Accepted: 05/16/2021] [Indexed: 11/14/2022] Open
Affiliation(s)
- Visish M Srinivasan
- Department of Neurosurgery, Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, Arizona, USA
| | - Michael T Lawton
- Department of Neurosurgery, Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, Arizona, USA
| |
Collapse
|
25
|
Wang M, Jiao Y, Zeng C, Zhang C, He Q, Yang Y, Tu W, Qiu H, Shi H, Zhang D, Kang D, Wang S, Liu AL, Jiang W, Cao Y, Zhao J. Chinese Cerebrovascular Neurosurgery Society and Chinese Interventional & Hybrid Operation Society, of Chinese Stroke Association Clinical Practice Guidelines for Management of Brain Arteriovenous Malformations in Eloquent Areas. Front Neurol 2021; 12:651663. [PMID: 34177760 PMCID: PMC8219979 DOI: 10.3389/fneur.2021.651663] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Accepted: 04/20/2021] [Indexed: 11/13/2022] Open
Abstract
Aim: The aim of this guideline is to present current and comprehensive recommendations for the management of brain arteriovenous malformations (bAVMs) located in eloquent areas. Methods: An extended literature search on MEDLINE was performed between Jan 1970 and May 2020. Eloquence-related literature was further screened and interpreted in different subcategories of this guideline. The writing group discussed narrative text and recommendations through group meetings and online video conferences. Recommendations followed the Applying Classification of Recommendations and Level of Evidence proposed by the American Heart Association/American Stroke Association. Prerelease review of the draft guideline was performed by four expert peer reviewers and by the members of Chinese Stroke Association. Results: In total, 809 out of 2,493 publications were identified to be related to eloquent structure or neurological functions of bAVMs. Three-hundred and forty-one publications were comprehensively interpreted and cited by this guideline. Evidence-based guidelines were presented for the clinical evaluation and treatment of bAVMs with eloquence involved. Topics focused on neuroanatomy of activated eloquent structure, functional neuroimaging, neurological assessment, indication, and recommendations of different therapeutic managements. Fifty-nine recommendations were summarized, including 20 in Class I, 30 in Class IIa, 9 in Class IIb, and 2 in Class III. Conclusions: The management of eloquent bAVMs remains challenging. With the evolutionary understanding of eloquent areas, the guideline highlights the assessment of eloquent bAVMs, and a strategy for decision-making in the management of eloquent bAVMs.
Collapse
Affiliation(s)
- Mingze Wang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, China
| | - Yuming Jiao
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, China
| | - Chaofan Zeng
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, China
| | - Chaoqi Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, China
| | - Qiheng He
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, China
| | - Yi Yang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, China
| | - Wenjun Tu
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, China
| | - Hancheng Qiu
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, China
| | - Huaizhang Shi
- Department of Neurosurgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Dong Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, China
| | - Dezhi Kang
- Department of Neurosurgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, China
| | - Shuo Wang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, China
| | - A-li Liu
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, China
- Gamma Knife Center, Beijing Neurosurgical Institute, Beijing, China
| | - Weijian Jiang
- Department of Vascular Neurosurgery, Chinese People's Liberation Army Rocket Army Characteristic Medical Center, Beijing, China
| | - Yong Cao
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, China
| | - Jizong Zhao
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Center of Stroke, Beijing Institute for Brain Disorders, Beijing, China
- Beijing Key Laboratory of Translational Medicine for Cerebrovascular Disease, Beijing, China
- Savaid Medical School, University of Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
26
|
Zhang Y, Li H, Du J, Qin J, Wang T, Chen Y, Liu B, Gao W, Ma G, Lei B. 3D Multi-Attention Guided Multi-Task Learning Network for Automatic Gastric Tumor Segmentation and Lymph Node Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1618-1631. [PMID: 33646948 DOI: 10.1109/tmi.2021.3062902] [Citation(s) in RCA: 56] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Automatic gastric tumor segmentation and lymph node (LN) classification not only can assist radiologists in reading images, but also provide image-guided clinical diagnosis and improve diagnosis accuracy. However, due to the inhomogeneous intensity distribution of gastric tumor and LN in CT scans, the ambiguous/missing boundaries, and highly variable shapes of gastric tumor, it is quite challenging to develop an automatic solution. To comprehensively address these challenges, we propose a novel 3D multi-attention guided multi-task learning network for simultaneous gastric tumor segmentation and LN classification, which makes full use of the complementary information extracted from different dimensions, scales, and tasks. Specifically, we tackle task correlation and heterogeneity with the convolutional neural network consisting of scale-aware attention-guided shared feature learning for refined and universal multi-scale features, and task-aware attention-guided feature learning for task-specific discriminative features. This shared feature learning is equipped with two types of scale-aware attention (visual attention and adaptive spatial attention) and two stage-wise deep supervision paths. The task-aware attention-guided feature learning comprises a segmentation-aware attention module and a classification-aware attention module. The proposed 3D multi-task learning network can balance all tasks by combining segmentation and classification loss functions with weight uncertainty. We evaluate our model on an in-house CT images dataset collected from three medical centers. Experimental results demonstrate that our method outperforms the state-of-the-art algorithms, and obtains promising performance for tumor segmentation and LN classification. Moreover, to explore the generalization for other segmentation tasks, we also extend the proposed network to liver tumor segmentation in CT images of the MICCAI 2017 Liver Tumor Segmentation Challenge. Our implementation is released at https://github.com/infinite-tao/MA-MTLN.
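The abstract mentions balancing the segmentation and classification losses with weight uncertainty; one common form of such uncertainty-based weighting (learnable log-variances per task) is sketched below with placeholder loss values, purely to show the mechanism rather than the paper's exact formulation.
```python
# Uncertainty-weighted multi-task loss (illustrative; not the paper's exact formulation).
import torch

log_var_seg = torch.zeros(1, requires_grad=True)   # learnable log-variance for segmentation
log_var_cls = torch.zeros(1, requires_grad=True)   # learnable log-variance for classification

def combined_loss(seg_loss: torch.Tensor, cls_loss: torch.Tensor) -> torch.Tensor:
    # total = exp(-s_seg) * L_seg + s_seg + exp(-s_cls) * L_cls + s_cls
    return (torch.exp(-log_var_seg) * seg_loss + log_var_seg
            + torch.exp(-log_var_cls) * cls_loss + log_var_cls).squeeze()

total = combined_loss(torch.tensor(0.8), torch.tensor(0.3))   # placeholder per-task losses
total.backward()                                              # gradients also update the weights
print(float(total), float(log_var_seg.grad), float(log_var_cls.grad))
```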
Collapse
|
27
|
Wang T, Lei Y, Roper J, Ghavidel B, Beitler JJ, McDonald M, Curran WJ, Liu T, Yang X. Head and neck multi-organ segmentation on dual-energy CT using dual pyramid convolutional neural networks. Phys Med Biol 2021; 66:10.1088/1361-6560/abfce2. [PMID: 33915524 PMCID: PMC11747937 DOI: 10.1088/1361-6560/abfce2] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 04/29/2021] [Indexed: 11/11/2022]
Abstract
Organ delineation is crucial to diagnosis and therapy, while it is also labor-intensive and observer-dependent. Dual energy CT (DECT) provides additional image contrast compared with conventional single energy CT (SECT), which may facilitate automatic organ segmentation. This work aims to develop an automatic multi-organ segmentation approach using deep learning for the head-and-neck region on DECT. We proposed a mask scoring regional convolutional neural network (R-CNN) where comprehensive features are first learnt from two independent pyramid networks and are then combined via a deep attention strategy to highlight the informative ones extracted from both the low- and high-energy CT channels. To perform multi-organ segmentation and avoid misclassification, a mask scoring subnetwork was integrated into the Mask R-CNN framework to build the correlation between the class of a detected organ's region-of-interest (ROI) and the shape of that organ's segmentation within that ROI. We evaluated our model on DECT images from 127 head-and-neck cancer patients (66 training, 61 testing) with manual contours of 19 organs as training target and ground truth. For large- and mid-sized organs such as brain and parotid, the proposed method successfully achieved average Dice similarity coefficient (DSC) larger than 0.8. For small-sized organs with very low contrast such as chiasm, cochlea, lens and optic nerves, the DSCs ranged between around 0.5 and 0.8. With the proposed method, using DECT images outperforms using SECT in almost all 19 organs with statistical significance in DSC (p<0.05). Meanwhile, by using DECT, the proposed method is also significantly superior to a recently developed FCN-based method in most organs in terms of DSC and the 95th percentile Hausdorff distance. Quantitative results demonstrated the feasibility of the proposed method, the superiority of DECT over SECT, and the advantage of the proposed R-CNN over FCN on the head-and-neck patient study. The proposed method has the potential to facilitate the current head-and-neck cancer radiation therapy workflow in treatment planning.
Collapse
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Beth Ghavidel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Jonathan J Beitler
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Mark McDonald
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| |
Collapse
|
28
|
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003] [Citation(s) in RCA: 89] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 03/12/2021] [Accepted: 05/03/2021] [Indexed: 12/12/2022] Open
Abstract
Deep learning has revolutionized image processing and achieved the-state-of-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications. It is necessary to summarize the current state of development for deep learning in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories which are 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to their network design. For each type, we listed the surveyed works, highlighted important contributions and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings and future potentials of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Collapse
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA.
| |
Collapse
|
29
|
Lei Y, Wang T, Tian S, Fu Y, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Male pelvic CT multi-organ segmentation using synthetic MRI-aided dual pyramid networks. Phys Med Biol 2021; 66:10.1088/1361-6560/abf2f9. [PMID: 33780918 PMCID: PMC11755409 DOI: 10.1088/1361-6560/abf2f9] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Accepted: 03/29/2021] [Indexed: 12/17/2022]
Abstract
The delineation of the prostate and organs-at-risk (OARs) is fundamental to prostate radiation treatment planning, but is currently labor-intensive and observer-dependent. We aimed to develop an automated computed tomography (CT)-based multi-organ (bladder, prostate, rectum, left and right femoral heads (RFHs)) segmentation method for prostate radiation therapy treatment planning. The proposed method uses synthetic MRIs (sMRIs) to offer superior soft-tissue information for male pelvic CT images. Cycle-consistent adversarial networks (CycleGAN) were used to generate CT-based sMRIs. Dual pyramid networks (DPNs) extracted features from both CTs and sMRIs. A deep attention strategy was integrated into the DPNs to select the most relevant features from both CTs and sMRIs to identify organ boundaries. The CT-based sMRI generated from our previously trained CycleGAN and its corresponding CT images were inputted to the proposed DPNs to provide complementary information for pelvic multi-organ segmentation. The proposed method was trained and evaluated using datasets from 140 patients with prostate cancer, and were then compared against state-of-art methods. The Dice similarity coefficients and mean surface distances between our results and ground truth were 0.95 ± 0.05, 1.16 ± 0.70 mm; 0.88 ± 0.08, 1.64 ± 1.26 mm; 0.90 ± 0.04, 1.27 ± 0.48 mm; 0.95 ± 0.04, 1.08 ± 1.29 mm; and 0.95 ± 0.04, 1.11 ± 1.49 mm for bladder, prostate, rectum, left and RFHs, respectively. Mean center of mass distances was within 3 mm for all organs. Our results performed significantly better than those of competing methods in most evaluation metrics. We demonstrated the feasibility of sMRI-aided DPNs for multi-organ segmentation on pelvic CT images, and its superiority over other networks. The proposed method could be used in routine prostate cancer radiotherapy treatment planning to rapidly segment the prostate and standard OARs.
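The CycleGAN used to produce the synthetic MRIs relies on a cycle-consistency constraint; a bare-bones illustration of that term is given below, with tiny stand-in generators and a random image in place of the trained networks and patient CTs.
```python
# Cycle-consistency term of a CycleGAN, shown with minimal stand-in generators.
import torch
import torch.nn as nn
import torch.nn.functional as F

g_ct2mr = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1))
g_mr2ct = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1))

ct = torch.rand(1, 1, 64, 64)                 # random stand-in for a pelvic CT slice
smr = g_ct2mr(ct)                             # CT -> synthetic MRI
cycle_loss = F.l1_loss(g_mr2ct(smr), ct)      # mapping back should reproduce the original CT
cycle_loss.backward()                         # adversarial and identity terms omitted here
print(float(cycle_loss))
```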
Collapse
Affiliation(s)
| | | | - Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| |
Collapse
|
30
|
Liu Y, Lei Y, Fu Y, Wang T, Tang X, Jiang X, Curran WJ, Liu T, Patel P, Yang X. CT-based multi-organ segmentation using a 3D self-attention U-net network for pancreatic radiotherapy. Med Phys 2020; 47:4316-4324. [PMID: 32654153 DOI: 10.1002/mp.14386] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2019] [Revised: 07/05/2020] [Accepted: 07/06/2020] [Indexed: 01/24/2023] Open
Abstract
PURPOSE Segmentation of organs-at-risk (OARs) is a weak link in radiotherapeutic treatment planning process because the manual contouring action is labor-intensive and time-consuming. This work aimed to develop a deep learning-based method for rapid and accurate pancreatic multi-organ segmentation that can expedite the treatment planning process. METHODS We retrospectively investigated one hundred patients with computed tomography (CT) simulation scanned and contours delineated. Eight OARs including large bowel, small bowel, duodenum, left kidney, right kidney, liver, spinal cord and stomach were the target organs to be segmented. The proposed three-dimensional (3D) deep attention U-Net is featured with a deep attention strategy to effectively differentiate multiple organs. Performance of the proposed method was evaluated using six metrics, including Dice similarity coefficient (DSC), sensitivity, specificity, Hausdorff distance 95% (HD95), mean surface distance (MSD) and residual mean square distance (RMSD). RESULTS The contours generated by the proposed method closely resemble the ground-truth manual contours, as evidenced by encouraging quantitative results in terms of DSC, sensitivity, specificity, HD95, MSD and RMSD. For DSC, mean values of 0.91 ± 0.03, 0.89 ± 0.06, 0.86 ± 0.06, 0.95 ± 0.02, 0.95 ± 0.02, 0.96 ± 0.01, 0.87 ± 0.05 and 0.93 ± 0.03 were achieved for large bowel, small bowel, duodenum, left kidney, right kidney, liver, spinal cord and stomach, respectively. CONCLUSIONS The proposed method could significantly expedite the treatment planning process by rapidly segmenting multiple OARs. The method could potentially be used in pancreatic adaptive radiotherapy to increase dose delivery accuracy and minimize gastrointestinal toxicity.
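Among the metrics reported above, HD95 is perhaps the least familiar; the simplified 2D sketch below (unit spacing, toy masks) shows how the 95th-percentile Hausdorff distance between two contours can be computed.
```python
# 95th-percentile Hausdorff distance (HD95) between two binary masks; 2D toy example.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def surface_points(mask: np.ndarray) -> np.ndarray:
    return np.argwhere(mask & ~binary_erosion(mask))   # coordinates of boundary pixels

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    pa, pb = surface_points(a), surface_points(b)
    d = cdist(pa, pb)                                   # pairwise surface-to-surface distances
    return float(np.percentile(np.hstack([d.min(axis=1), d.min(axis=0)]), 95))

a = np.zeros((64, 64), dtype=bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), dtype=bool); b[22:42, 21:41] = True
print(round(hd95(a, b), 2))
```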
Collapse
Affiliation(s)
- Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Xiangyang Tang
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Xiaojun Jiang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| |
Collapse
|
31
|
Wang T, Lei Y, Fu Y, Curran WJ, Liu T, Nye JA, Yang X. Machine learning in quantitative PET: A review of attenuation correction and low-count image reconstruction methods. Phys Med 2020; 76:294-306. [PMID: 32738777 PMCID: PMC7484241 DOI: 10.1016/j.ejmp.2020.07.028] [Citation(s) in RCA: 60] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 07/13/2020] [Accepted: 07/21/2020] [Indexed: 02/08/2023] Open
Abstract
The rapid expansion of machine learning is offering a new wave of opportunities for nuclear medicine. This paper reviews applications of machine learning for the study of attenuation correction (AC) and low-count image reconstruction in quantitative positron emission tomography (PET). Specifically, we present the developments of machine learning methodology, ranging from random forest and dictionary learning to the latest convolutional neural network-based architectures. For application in PET attenuation correction, two general strategies are reviewed: 1) generating synthetic CT from MR or non-AC PET for the purposes of PET AC, and 2) direct conversion from non-AC PET to AC PET. For low-count PET reconstruction, recent deep learning-based studies and the potential advantages over conventional machine learning-based methods are presented and discussed. In each application, the proposed methods, study designs and performance of published studies are listed and compared with a brief discussion. Finally, the overall contributions and remaining challenges are summarized.
Collapse
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
| | - Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
| | - Walter J Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Jonathon A Nye
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA.
| |
Collapse
|
32
|
Dai X, Lei Y, Zhang Y, Qiu RLJ, Wang T, Dresser SA, Curran WJ, Patel P, Liu T, Yang X. Automatic multi-catheter detection using deeply supervised convolutional neural network in MRI-guided HDR prostate brachytherapy. Med Phys 2020; 47:4115-4124. [PMID: 32484573 DOI: 10.1002/mp.14307] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2020] [Revised: 05/19/2020] [Accepted: 05/24/2020] [Indexed: 12/19/2022] Open
Abstract
PURPOSE High-dose-rate (HDR) brachytherapy is an established technique to be used as monotherapy option or focal boost in conjunction with external beam radiation therapy (EBRT) for treating prostate cancer. Radiation source path reconstruction is a critical procedure in HDR treatment planning. Manually identifying the source path is labor intensive and time inefficient. In recent years, magnetic resonance imaging (MRI) has become a valuable imaging modality for image-guided HDR prostate brachytherapy due to its superb soft-tissue contrast for target delineation and normal tissue contouring. The purpose of this study is to investigate a deep-learning-based method to automatically reconstruct multiple catheters in MRI for prostate cancer HDR brachytherapy treatment planning. METHODS Attention gated U-Net incorporated with total variation (TV) regularization model was developed for multi-catheter segmentation in MRI. The attention gates were used to improve the accuracy of identifying small catheter points, while TV regularization was adopted to encode the natural spatial continuity of catheters into the model. The model was trained using the binary catheter annotation images offered by experienced physicists as ground truth paired with original MRI images. After the network was trained, MR images of a new prostate cancer patient receiving HDR brachytherapy were fed into the model to predict the locations and shapes of all the catheters. Quantitative assessments of our proposed method were based on catheter shaft and tip errors compared to the ground truth. RESULTS Our method detected 299 catheters from 20 patients receiving HDR prostate brachytherapy with a catheter tip error of 0.37 ± 1.68 mm and a catheter shaft error of 0.93 ± 0.50 mm. For detection of catheter tips, our method resulted in 87% of the catheter tips within an error of less than ± 2.0 mm, and more than 71% of the tips can be localized within an absolute error of no >1.0 mm. For catheter shaft localization, 97% of catheters were detected with an error of <2.0 mm, while 63% were within 1.0 mm. CONCLUSIONS In this study, we proposed a novel multi-catheter detection method to precisely localize the tips and shafts of catheters in three-dimensional MRI images of HDR prostate brachytherapy. It paves the way for elevating the quality and outcome of MRI-guided HDR prostate brachytherapy.
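The total variation regularization used here to encode the spatial continuity of catheters can be written as a simple penalty on neighbouring differences of the predicted probability map; the sketch below uses a random 2D map, not the network's output.
```python
# Anisotropic total-variation (TV) penalty on a predicted probability map (illustrative).
import torch

def tv_loss(prob: torch.Tensor) -> torch.Tensor:
    """Mean absolute difference between neighbouring pixels of an (H, W) map."""
    dh = (prob[1:, :] - prob[:-1, :]).abs().mean()
    dw = (prob[:, 1:] - prob[:, :-1]).abs().mean()
    return dh + dw

prob_map = torch.rand(128, 128, requires_grad=True)   # random stand-in for a sigmoid output
loss = tv_loss(prob_map)       # in practice weighted and added to the segmentation loss
loss.backward()
print(float(loss))
```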
Collapse
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Yupei Zhang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Sean A Dresser
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| |
Collapse
|
33
|
He X, Guo BJ, Lei Y, Wang T, Fu Y, Curran WJ, Zhang LJ, Liu T, Yang X. Automatic segmentation and quantification of epicardial adipose tissue from coronary computed tomography angiography. Phys Med Biol 2020; 65:095012. [PMID: 32182595 DOI: 10.1088/1361-6560/ab8077] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Abstract
Epicardial adipose tissue (EAT) is a visceral fat deposit known for its association with factors such as obesity, diabetes mellitus, age, and hypertension. Segmentation of the EAT in a fast and reproducible way is important for the interpretation of its role as an independent risk marker. However, EAT has a variable distribution, and various diseases may affect the volume of the EAT, which can increase the complexity of the already time-consuming manual segmentation work. We propose a 3D deep attention U-Net method to automatically segment the EAT from coronary computed tomography angiography (CCTA). Five-fold cross-validation and hold-out experiments were used to evaluate the proposed method through a retrospective investigation of 200 patients. The automatically segmented EAT volume was compared with physician-approved clinical contours. Quantitative metrics used were the Dice similarity coefficient (DSC), sensitivity, specificity, Jaccard index (JAC), Hausdorff distance (HD), mean surface distance (MSD), residual mean square distance (RMSD), and the center of mass distance (CMD). For cross-validation, the median DSC, sensitivity, and specificity were 92.7%, 91.1%, and 95.1%, respectively, while JAC, HD, CMD, MSD, and RMSD were 82.9% ± 8.8%, 3.77 ± 1.86 mm, 1.98 ± 1.50 mm, 0.37 ± 0.24 mm, and 0.65 ± 0.37 mm, respectively. For the hold-out test, the accuracy of the proposed method remained high. We developed a novel deep learning-based approach for the automated segmentation of the EAT on CCTA images. We demonstrated the high accuracy of the proposed learning-based segmentation method through comparison with the ground truth contours of 200 clinical patient cases using 8 quantitative metrics, Pearson correlation, and Bland-Altman analysis. Our automatic EAT segmentation results show the potential of the proposed method to be used in computer-aided diagnosis of coronary artery diseases (CADs) in clinical settings.
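The Bland-Altman analysis mentioned in the conclusions compares automatic and manual volumes through the bias and 95% limits of agreement; a toy version with simulated volumes is sketched below.
```python
# Bland-Altman bias and limits of agreement plus Pearson r, on simulated EAT volumes.
import numpy as np

rng = np.random.default_rng(3)
manual = rng.normal(120.0, 30.0, size=200)            # simulated manual EAT volumes (cm^3)
auto = manual + rng.normal(0.0, 8.0, size=200)        # simulated automatic volumes

diff = auto - manual
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                         # 95% limits of agreement: bias ± 1.96·SD
r = np.corrcoef(manual, auto)[0, 1]                   # Pearson correlation
print(f"bias = {bias:.2f} cm^3, LoA = ({bias - loa:.2f}, {bias + loa:.2f}), r = {r:.3f}")
```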
Collapse
Affiliation(s)
- Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America. Co-first author
| | | | | | | | | | | | | | | | | |
Collapse
|
34
|
Zhang Y, Lei Y, Qiu RLJ, Wang T, Wang H, Jani AB, Curran WJ, Patel P, Liu T, Yang X. Multi-needle Localization with Attention U-Net in US-guided HDR Prostate Brachytherapy. Med Phys 2020; 47:2735-2745. [PMID: 32155666 DOI: 10.1002/mp.14128] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2019] [Revised: 02/17/2020] [Accepted: 03/04/2020] [Indexed: 12/11/2022] Open
Abstract
PURPOSE Ultrasound (US)-guided high dose rate (HDR) prostate brachytherapy requests the clinicians to place HDR needles (catheters) into the prostate gland under transrectal US (TRUS) guidance in the operating room. The quality of the subsequent radiation treatment plan is largely dictated by the needle placements, which varies upon the experience level of the clinicians and the procedure protocols. Real-time plan dose distribution, if available, could be a vital tool to provide more subjective assessment of the needle placements, hence potentially improving the radiation plan quality and the treatment outcome. However, due to low signal-to-noise ratio (SNR) in US imaging, real-time multi-needle segmentation in 3D TRUS, which is the major obstacle for real-time dose mapping, has not been realized to date. In this study, we propose a deep learning-based method that enables accurate and real-time digitization of the multiple needles in the 3D TRUS images of HDR prostate brachytherapy. METHODS A deep learning model based on the U-Net architecture was developed to segment multiple needles in the 3D TRUS images. Attention gates were considered in our model to improve the prediction on the small needle points. Furthermore, the spatial continuity of needles was encoded into our model with total variation (TV) regularization. The combined network was trained on 3D TRUS patches with the deep supervision strategy, where the binary needle annotation images were provided as ground truth. The trained network was then used to localize and segment the HDR needles for a new patient's TRUS images. We evaluated our proposed method based on the needle shaft and tip errors against manually defined ground truth and compared our method with other state-of-art methods (U-Net and deeply supervised attention U-Net). RESULTS Our method detected 96% needles of 339 needles from 23 HDR prostate brachytherapy patients with 0.290 ± 0.236 mm at shaft error and 0.442 ± 0.831 mm at tip error. For shaft localization, our method resulted in 96% localizations with less than 0.8 mm error (needle diameter is 1.67 mm), while for tip localization, our method resulted in 75% needles with 0 mm error and 21% needles with 2 mm error (TRUS image slice thickness is 2 mm). No significant difference is observed (P = 0.83) on tip localization between our results with the ground truth. Compared with U-Net and deeply supervised attention U-Net, the proposed method delivers a significant improvement on both shaft error and tip error (P < 0.05). CONCLUSIONS We proposed a new segmentation method to precisely localize the tips and shafts of multiple needles in 3D TRUS images of HDR prostate brachytherapy. The 3D rendering of the needles could help clinicians to evaluate the needle placements. It paves the way for the development of real-time plan dose assessment tools that can further elevate the quality and outcome of HDR prostate brachytherapy.
Affiliation(s)
- Yupei Zhang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Hesheng Wang
- Department of Radiation Oncology, New York University, New York, NY, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
35
Liu Y, Lei Y, Wang T, Fu Y, Tang X, Curran WJ, Liu T, Patel P, Yang X. CBCT-based synthetic CT generation using deep-attention cycleGAN for pancreatic adaptive radiotherapy. Med Phys 2020; 47:2472-2483. [PMID: 32141618 DOI: 10.1002/mp.14121] [Citation(s) in RCA: 127] [Impact Index Per Article: 25.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2019] [Revised: 02/27/2020] [Accepted: 02/27/2020] [Indexed: 12/18/2022] Open
Abstract
PURPOSE Current clinical application of cone-beam CT (CBCT) is limited to patient setup. Imaging artifacts and Hounsfield unit (HU) inaccuracy make CBCT-based adaptive planning presently impractical. In this study, we developed a deep-learning-based approach to improve CBCT image quality and HU accuracy for potential extended clinical use in CBCT-guided pancreatic adaptive radiotherapy. METHODS Thirty patients previously treated with pancreatic stereotactic body radiotherapy (SBRT) were included. The CBCT acquired prior to the first fraction of treatment was registered to the planning CT for training and generation of synthetic CT (sCT). A self-attention cycle generative adversarial network (cycleGAN) was used to generate the CBCT-based sCT. For the cohort of 30 patients, the CT-based contours and treatment plans were transferred to the first-fraction CBCTs and sCTs for dosimetric comparison. RESULTS Within the abdomen, the mean absolute error (MAE) between CT and sCT was 56.89 ± 13.84 HU, compared with 81.06 ± 15.86 HU between CT and the raw CBCT. No significant differences (P > 0.05) were observed in the PTV and OAR dose-volume histogram (DVH) metrics between the CT- and sCT-based plans, whereas significant differences (P < 0.05) were found between the CT- and CBCT-based plans. CONCLUSIONS The image similarity and the dosimetric agreement between the CT- and sCT-based plans validate the dose calculation accuracy of the sCT. The CBCT-based sCT approach can potentially increase treatment precision and thus minimize gastrointestinal toxicity.
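For readers who want to reproduce the reported image-similarity metric, the snippet below shows one way to compute the mean absolute HU error between a registered planning CT and a synthetic CT inside a body mask. It is a minimal NumPy sketch; the masking convention and data layout are assumptions rather than details taken from the paper.

```python
import numpy as np

def masked_mae_hu(planning_ct, synthetic_ct, body_mask):
    """Mean absolute error in HU between the registered planning CT and the
    synthetic CT, restricted to voxels inside the body contour."""
    diff = np.abs(planning_ct.astype(np.float32) - synthetic_ct.astype(np.float32))
    return float(diff[body_mask].mean())

# Toy example with random volumes; a real evaluation would use the registered
# planning CT, the first-fraction sCT, and a body mask from the planning system.
ct = np.random.randint(-1000, 1500, size=(64, 64, 64)).astype(np.float32)
sct = ct + np.random.normal(0.0, 60.0, size=ct.shape).astype(np.float32)
mask = np.ones(ct.shape, dtype=bool)
print(f"MAE = {masked_mae_hu(ct, sct, mask):.2f} HU")
```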
Affiliation(s)
- Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiangyang Tang
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
36
Jun Guo B, He X, Lei Y, Harms J, Wang T, Curran WJ, Liu T, Jiang Zhang L, Yang X. Automated left ventricular myocardium segmentation using 3D deeply supervised attention U‐net for coronary computed tomography angiography; CT myocardium segmentation. Med Phys 2020; 47:1775-1785. [DOI: 10.1002/mp.14066] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2019] [Revised: 01/22/2020] [Accepted: 01/28/2020] [Indexed: 01/30/2023] Open
Affiliation(s)
- Bang Jun Guo
- Department of Medical Imaging, Jinling Hospital, The First School of Clinical Medicine, Southern Medical University, Nanjing 210002, China
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Joseph Harms
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Long Jiang Zhang
- Department of Medical Imaging, Jinling Hospital, The First School of Clinical Medicine, Southern Medical University, Nanjing 210002, China
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing 210002, China
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
37
Lei Y, Wang T, Tian S, Dong X, Jani AB, Schuster D, Curran WJ, Patel P, Liu T, Yang X. Male pelvic multi-organ segmentation aided by CBCT-based synthetic MRI. Phys Med Biol 2020; 65:035013. [PMID: 31851956 DOI: 10.1088/1361-6560/ab63bb] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
To develop an automated cone-beam computed tomography (CBCT) multi-organ segmentation method for a potential CBCT-guided adaptive radiation therapy workflow. The proposed method combines a deep learning-based image synthesis method, which generates magnetic resonance images (MRIs) with superior soft-tissue contrast from on-board setup CBCT images to aid CBCT segmentation, with a deep attention strategy, which focuses on learning discriminative features for differentiating organ margins. The segmentation method consists of three major steps. First, a cycle-consistent adversarial network (CycleGAN) was used to estimate a synthetic MRI (sMRI) from CBCT images. Second, a deep attention network was trained on sMRI and its corresponding manual contours. Third, the segmented contours for a query patient were obtained by feeding the patient's CBCT images into the trained sMRI estimation and segmentation models. In our retrospective study, we included 100 prostate cancer patients, each of whom had a CBCT acquired with prostate, bladder, and rectum contoured by physicians under MRI guidance as ground truth. We trained and tested our model on separate datasets drawn from these patients, and the resulting segmentations were compared with the physicians' manual contours. The Dice similarity coefficient and mean surface distance between our segmented and the physicians' manual contours were 0.95 ± 0.02 and 0.44 ± 0.22 mm for the bladder, 0.86 ± 0.06 and 0.73 ± 0.37 mm for the prostate, and 0.91 ± 0.04 and 0.72 ± 0.65 mm for the rectum, respectively. We have proposed a novel CBCT-only pelvic multi-organ segmentation strategy using CBCT-based sMRI and validated its accuracy against manual contours. This technique could provide accurate organ volumes for treatment planning without requiring MR image acquisition, greatly facilitating the routine clinical workflow.
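The sketch below implements the two evaluation indices reported above, the Dice similarity coefficient and a symmetric mean surface distance, for a pair of binary organ masks. It is a minimal NumPy/SciPy sketch; the exact surface-extraction and averaging conventions used in the study are not specified, so this implementation is an assumption.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def mean_surface_distance(pred, truth, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface distance (in mm if the voxel spacing is in mm)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    surf_pred = pred & ~binary_erosion(pred)
    surf_truth = truth & ~binary_erosion(truth)
    # Distance from each surface voxel of one mask to the other mask's surface
    dist_to_truth = distance_transform_edt(~surf_truth, sampling=spacing)
    dist_to_pred = distance_transform_edt(~surf_pred, sampling=spacing)
    return 0.5 * (dist_to_truth[surf_pred].mean() + dist_to_pred[surf_truth].mean())
```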
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America. Co-first author
38
Dong X, Lei Y, Tian S, Wang T, Patel P, Curran WJ, Jani AB, Liu T, Yang X. Synthetic MRI-aided multi-organ segmentation on male pelvic CT using cycle consistent deep attention network. Radiother Oncol 2019; 141:192-199. [PMID: 31630868 DOI: 10.1016/j.radonc.2019.09.028] [Citation(s) in RCA: 78] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2019] [Revised: 09/24/2019] [Accepted: 09/29/2019] [Indexed: 11/17/2022]
Abstract
BACKGROUND AND PURPOSE Manual contouring is labor intensive and subject to variations in operator knowledge, experience, and technique. This work aims to develop an automated computed tomography (CT) multi-organ segmentation method for prostate cancer treatment planning. METHODS AND MATERIALS The proposed method exploits the superior soft-tissue information provided by synthetic MRI (sMRI) to aid multi-organ segmentation on pelvic CT images. A cycle generative adversarial network (CycleGAN) was used to estimate sMRIs from CT images. A deep attention U-Net (DAUnet) was trained on the sMRIs and corresponding multi-organ contours for auto-segmentation. The deep attention strategy was introduced to identify the most relevant features for differentiating the organs, and deep supervision was incorporated into the DAUnet to enhance the discriminative ability of the features. The segmented contours of a patient were obtained by feeding the CT images into the trained CycleGAN to generate the sMRI, which was then fed to the trained DAUnet to generate the organ contours. We trained and evaluated our model with 140 datasets from prostate cancer patients. RESULTS The Dice similarity coefficient and mean surface distance between our segmented contours and the manual contours were 0.95 ± 0.03 and 0.52 ± 0.22 mm for the bladder, 0.87 ± 0.04 and 0.93 ± 0.51 mm for the prostate, and 0.89 ± 0.04 and 0.92 ± 1.03 mm for the rectum, respectively. CONCLUSION We proposed an sMRI-aided multi-organ automatic segmentation method for pelvic CT images. By integrating deep attention and deep supervision strategies, the proposed network provides accurate and consistent prostate, bladder, and rectum segmentation and has the potential to facilitate routine prostate-cancer radiotherapy treatment planning.
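To make the two-stage inference pipeline explicit (CT to sMRI, then sMRI to organ labels), the sketch below chains two trained networks at prediction time. It is a minimal PyTorch sketch; `ct_to_smri_generator` and `dau_net` are hypothetical placeholders for the trained CycleGAN generator and deep attention U-Net, and the tensor layout is an assumption.

```python
import torch

@torch.no_grad()
def segment_pelvic_ct(ct_volume, ct_to_smri_generator, dau_net, device="cpu"):
    """Two-stage inference: CT volume -> synthetic MRI -> multi-organ labels
    (background, bladder, prostate, rectum)."""
    x = torch.as_tensor(ct_volume, dtype=torch.float32, device=device)
    x = x.unsqueeze(0).unsqueeze(0)        # (1, 1, D, H, W)
    smri = ct_to_smri_generator(x)         # stage 1: CT -> synthetic MRI
    logits = dau_net(smri)                 # stage 2: sMRI -> per-organ logits
    labels = logits.argmax(dim=1)          # hard label map
    return labels.squeeze(0).cpu().numpy()
```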
Affiliation(s)
- Xue Dong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
39
Wang T, Lei Y, Tian Z, Dong X, Liu Y, Jiang X, Curran WJ, Liu T, Shu HK, Yang X. Deep learning-based image quality improvement for low-dose computed tomography simulation in radiation therapy. J Med Imaging (Bellingham) 2019; 6:043504. [PMID: 31673567 PMCID: PMC6811730 DOI: 10.1117/1.jmi.6.4.043504] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2019] [Accepted: 10/03/2019] [Indexed: 01/02/2023] Open
Abstract
Low-dose computed tomography (CT) is desirable for treatment planning and simulation in radiation therapy. Repeated rescanning and replanning during the treatment course, at a fraction of the dose of a single conventional full-dose CT simulation, is a crucial step in adaptive radiation therapy. We developed a machine learning-based method to improve the image quality of low-dose CT for radiation therapy treatment simulation. We used a residual block concept and a self-attention strategy within a cycle-consistent adversarial network framework. A fully convolutional neural network with residual blocks and attention gates (AGs) was used in the generator to enable end-to-end transformation. We collected CT images from 30 patients treated with frameless brain stereotactic radiosurgery (SRS) for this study. These full-dose images were used to generate projection data, to which noise was added to simulate the low-mAs scanning scenario. Low-dose CT images were reconstructed from the noise-contaminated projection data and were fed into our network, along with the original full-dose CT images, for training. The performance of our network was evaluated by quantitatively comparing the high-quality CT images generated by our method with the original full-dose images. When the mAs is reduced to 0.5% of the original CT scan, the mean square error of the CT images obtained by our method is ∼1.6% with respect to the original full-dose images. The proposed method improved the noise, contrast-to-noise ratio, and nonuniformity level to be close to those of full-dose CT images and outperformed a state-of-the-art iterative reconstruction method. Dosimetric studies show that the average differences in dose-volume histogram metrics are <0.1 Gy (P > 0.05). These quantitative results strongly indicate that the low-dose CT images denoised by our method maintain image accuracy and quality and are accurate enough for dose calculation in current CT simulation of brain SRS treatment. We also demonstrate the great potential of low-dose CT in the simulation and treatment planning process.
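The low-mAs training data were obtained by adding noise to projections of the full-dose images. One common way to do this, sketched below, is to scale the incident photon count by the target dose fraction and draw Poisson-distributed detector counts; the noise model, the nominal photon count `full_dose_i0`, and the omission of electronic noise are assumptions, since the abstract does not spell out the noise-insertion details.

```python
import numpy as np

def simulate_low_mas_projection(line_integrals, full_dose_i0=1.0e5, dose_fraction=0.005):
    """Simulate low-mAs (here 0.5% of full dose) noisy line integrals from
    noiseless ones using a Poisson photon-counting model."""
    i0 = full_dose_i0 * dose_fraction                 # reduced incident photon count
    counts = np.random.poisson(i0 * np.exp(-line_integrals))
    counts = np.maximum(counts, 1)                    # guard against log(0)
    return -np.log(counts / i0)                       # noisy line integrals

# Toy sinogram of line integrals (360 views x 512 detector bins)
clean_projection = np.random.rand(360, 512) * 3.0
noisy_projection = simulate_low_mas_projection(clean_projection)
```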
Affiliation(s)
- Tonghe Wang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Yang Lei
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Zhen Tian
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Xue Dong
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Yingzi Liu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Xiaojun Jiang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Walter J. Curran
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Tian Liu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Hui-Kuo Shu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Xiaofeng Yang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
40
Lei Y, Tian S, He X, Wang T, Wang B, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net. Med Phys 2019; 46:3194-3206. [PMID: 31074513 PMCID: PMC6625925 DOI: 10.1002/mp.13577] [Citation(s) in RCA: 68] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2018] [Revised: 04/14/2019] [Accepted: 05/01/2019] [Indexed: 01/09/2023] Open
Abstract
PURPOSE Transrectal ultrasound (TRUS) is a versatile and real-time imaging modality that is commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method that integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation. METHODS AND MATERIALS We developed a multidirectional deep-learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D supervision mechanism is integrated into the V-Net stages to address the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deeply supervised training. During the segmentation stage, patches extracted from a newly acquired ultrasound image are fed into the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed by patch fusion and further refined through a contour refinement step. RESULTS TRUS images from forty-four patients were used to test our segmentation method, and our segmentation results were compared with the manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 0.92 ± 0.03, 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively. CONCLUSION We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the prostate on TRUS, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
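The stage-wise hybrid loss described above can be written compactly as a weighted sum of BCE and soft Dice terms over the auxiliary V-Net outputs. The sketch below is a minimal PyTorch version under stated assumptions: the stage weights, the upsampling of auxiliary outputs to the label resolution, and the soft Dice formulation are illustrative choices, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    """Batch-based soft Dice loss computed on sigmoid probabilities."""
    prob = torch.sigmoid(logits)
    intersection = (prob * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (prob.sum() + target.sum() + eps)

def deep_supervision_loss(stage_logits, target, stage_weights=(0.25, 0.5, 1.0)):
    """Hybrid BCE + Dice loss summed over V-Net stages; earlier (coarser)
    stages get smaller, assumed weights."""
    total = 0.0
    for weight, logits in zip(stage_weights, stage_logits):
        if logits.shape[2:] != target.shape[2:]:
            # Upsample auxiliary outputs to the ground-truth patch size
            logits = F.interpolate(logits, size=target.shape[2:],
                                   mode="trilinear", align_corners=False)
        total = total + weight * (F.binary_cross_entropy_with_logits(logits, target)
                                  + soft_dice_loss(logits, target))
    return total
```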
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Bo Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Ashesh B. Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA