1. Chen M, Wang K, Dohopolski M, Morgan H, Sher D, Wang J. TransAnaNet: Transformer-based anatomy change prediction network for head and neck cancer radiotherapy. Med Phys 2025;52:3015-3029. PMID: 39887473. DOI: 10.1002/mp.17655.
Abstract
BACKGROUND: Adaptive radiotherapy (ART) can compensate for the dosimetric impact of anatomic change during radiotherapy of head and neck cancer (HNC) patients. However, implementing ART universally poses challenges in clinical workflow and resource allocation, given the variability in patient response and the constraints of available resources. Predicting anatomical change during radiotherapy for HNC patients is therefore important for optimizing patient clinical benefit and treatment resources. Current studies focus on developing binary ART-eligibility classification models to identify patients who would experience significant anatomical change, but such models cannot represent the complex patterns and variations of anatomical change over time. Vision Transformers (ViTs) are a recent advance in neural network architectures that use self-attention to process image data. Unlike traditional convolutional neural networks (CNNs), ViTs can capture global contextual information more effectively, making them well suited for image analysis and image generation tasks involving complex patterns and structures, such as predicting anatomical changes in medical imaging.
PURPOSE: To assess the feasibility of using a ViT-based neural network to predict radiotherapy-induced anatomic change in HNC patients.
METHODS: We retrospectively included 121 HNC patients treated with definitive chemoradiotherapy (CRT) or radiation alone. For each patient we collected the planning computed tomography image (pCT), planned dose, cone-beam computed tomography images (CBCTs) acquired at the initial treatment (CBCT01) and at fraction 21 (CBCT21), and the primary tumor volume (GTVp) and involved nodal volume (GTVn) delineated on both pCT and CBCTs for model construction and evaluation. A UNet-style, Swin-Transformer-based ViT network was designed to learn spatial correspondence and contextual information from embedded image patches of CT, dose, CBCT01, GTVp, and GTVn. The model estimated the deformation vector field between CBCT01 and CBCT21 as the prediction of anatomic change, and the deformed CBCT01 served as the prediction of CBCT21. We also generated binary masks of GTVp, GTVn, and the patient body for volumetric change evaluation. Data from 101 patients were used for training and validation, and the remaining 20 patients for testing. Image and volumetric similarity metrics, including mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), Dice coefficient, and average surface distance, measured the similarity between the target image and the predicted CBCT. Anatomy-change prediction performance of the proposed model was compared with a CNN-based prediction model and a traditional ViT-based prediction model.
RESULTS: The image predicted by the proposed method was more similar to the real image (CBCT21) than pCT, CBCT01, or the CBCTs predicted by the comparison models. The average MSE, PSNR, and SSIM between the normalized predicted CBCT and CBCT21 were 0.009, 20.266, and 0.933, while the average Dice coefficients for the body, GTVp, and GTVn masks were 0.972, 0.792, and 0.821, respectively.
CONCLUSIONS: The proposed method showed promising performance for predicting radiotherapy-induced anatomic change, with the potential to assist decision-making in HNC ART.
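
For orientation, the deform-and-warp design described above can be sketched as follows. This is not the authors' code: a small convolutional stand-in replaces the UNet-style Swin-Transformer backbone, and only the shared idea is shown, namely predicting a deformation vector field (DVF) from the stacked inputs and resampling CBCT01 with it to obtain the predicted CBCT21.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DVFPredictor(nn.Module):
    """Toy stand-in for the UNet-style Swin-Transformer backbone: maps the
    stacked inputs (pCT, dose, CBCT01, GTVp, GTVn) to a 3-channel DVF."""
    def __init__(self, in_ch=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, 3, padding=1),  # displacement in x, y, z (normalized)
        )

    def forward(self, x):
        return self.net(x)

def warp(moving, dvf):
    """Deform CBCT01 with the predicted DVF via trilinear resampling."""
    n = moving.shape[0]
    # identity sampling grid in [-1, 1], shape (N, D, H, W, 3)
    grid = F.affine_grid(torch.eye(3, 4).unsqueeze(0).repeat(n, 1, 1),
                         moving.shape, align_corners=False)
    # DVF is (N, 3, D, H, W) in normalized coordinates; move channels last
    grid = grid + dvf.permute(0, 2, 3, 4, 1)
    return F.grid_sample(moving, grid, align_corners=False)

# toy usage: 5-channel input volume -> DVF -> deformed CBCT01 as predicted CBCT21
inputs = torch.randn(1, 5, 32, 64, 64)   # pCT, dose, CBCT01, GTVp, GTVn
cbct01 = inputs[:, 2:3]
dvf = DVFPredictor()(inputs)
pred_cbct21 = warp(cbct01, dvf)
```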

Affiliations
- Meixu Chen: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas, USA
- Kai Wang: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas, USA; Department of Radiation Oncology, University of Maryland Medical Center, Baltimore, Maryland, USA
- Michael Dohopolski: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas, USA
- Howard Morgan: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas, USA; Department of Radiation Oncology, Central Arkansas Radiation Therapy Institute, Little Rock, Arkansas, USA
- David Sher: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas, USA
- Jing Wang: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas, USA

2. He R, Sarwal V, Qiu X, Zhuang Y, Zhang L, Liu Y, Chiang J. Generative AI Models in Time-Varying Biomedical Data: Scoping Review. J Med Internet Res 2025;27:e59792. PMID: 40063929. PMCID: PMC11933772. DOI: 10.2196/59792.
Abstract
BACKGROUND: Trajectory modeling is a long-standing challenge in the application of computational methods to health care. In the age of big data, traditional statistical and machine learning methods do not achieve satisfactory results, as they often fail to capture the complex underlying distributions of multimodal health data and long-term dependencies throughout medical histories. Recent advances in generative artificial intelligence (AI) have provided powerful tools to represent complex distributions and patterns with minimal underlying assumptions, with major impact in fields such as finance and environmental sciences, prompting researchers to apply these methods to disease modeling in health care.
OBJECTIVE: While AI methods have proven powerful, their application in clinical practice remains limited due to their highly complex nature. The proliferation of AI algorithms also poses a significant challenge for nondevelopers to track and incorporate these advances into clinical research and application. In this paper, we introduce basic concepts in generative AI and discuss current algorithms and how they can be applied to health care for practitioners with little background in computer science.
METHODS: We surveyed peer-reviewed papers on generative AI models with specific applications to time-series health data. Our search included single- and multimodal generative AI models that operated over structured and unstructured data, physiological waveforms, medical imaging, and multi-omics data. We introduce current generative AI methods, review their applications, and discuss their limitations and future directions in each data modality.
RESULTS: We followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines and reviewed 155 articles on generative AI applications to time-series health care data across modalities. Furthermore, we offer a systematic framework for clinicians to easily identify suitable AI methods for their data and task at hand.
CONCLUSIONS: We reviewed and critiqued existing applications of generative AI to time-series health data with the aim of bridging the gap between computational methods and clinical application. We also identified the shortcomings of existing approaches and highlighted recent advances in generative AI that represent promising directions for health care modeling.

Affiliations
- Rosemary He: Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, United States; Department of Computational Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Varuni Sarwal: Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, United States; Department of Computational Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Xinru Qiu: Division of Biomedical Sciences, School of Medicine, University of California Riverside, Riverside, CA, United States
- Yongwen Zhuang: Department of Biostatistics, University of Michigan, Ann Arbor, MI, United States
- Le Zhang: Institute for Integrative Genome Biology, University of California Riverside, Riverside, CA, United States
- Yue Liu: Institute for Cellular and Molecular Biology, University of Texas at Austin, Austin, TX, United States
- Jeffrey Chiang: Department of Computational Medicine, University of California, Los Angeles, Los Angeles, CA, United States; Department of Neurosurgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States

3. Patel AN, Srinivasan K. Deep learning paradigms in lung cancer diagnosis: A methodological review, open challenges, and future directions. Phys Med 2025;131:104914. PMID: 39938402. DOI: 10.1016/j.ejmp.2025.104914.
Abstract
Lung cancer is the leading cause of cancer-related deaths worldwide, which underscores the critical importance of early diagnosis in improving patient outcomes. Deep learning has shown significant promise in lung cancer diagnosis, excelling in nodule detection, classification, and prognosis prediction. This methodological review comprehensively explores the application of deep learning models to lung cancer diagnosis across various imaging modalities. Deep learning consistently achieves state-of-the-art performance, occasionally surpassing human expert accuracy. Notably, deep neural networks excel at detecting lung nodules, distinguishing benign from malignant nodules, and predicting patient prognosis. They have also enabled computer-aided diagnosis systems that enhance diagnostic accuracy for radiologists. Article selection for this review followed the PRISMA framework. Despite challenges such as data quality and interpretability limitations, this review emphasizes the potential of deep learning to significantly improve the precision and efficiency of lung cancer diagnosis, and it encourages continued research to overcome these obstacles and fully harness neural networks' transformative impact in this field.

Affiliations
- Aryan Nikul Patel: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India
- Kathiravan Srinivasan: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India

4. Huang Y, Gomaa A, Höfler D, Schubert P, Gaipl U, Frey B, Fietkau R, Bert C, Putz F. Principles of artificial intelligence in radiooncology. Strahlenther Onkol 2025;201:210-235. PMID: 39105746. PMCID: PMC11839771. DOI: 10.1007/s00066-024-02272-0.
Abstract
PURPOSE: In the rapidly expanding field of artificial intelligence (AI), there is a wealth of literature detailing the myriad applications of AI, particularly in the realm of deep learning. However, a review that elucidates the technical principles of deep learning as relevant to radiation oncology in an easily understandable manner is still notably lacking. This paper aims to fill this gap by providing a comprehensive guide to the principles of deep learning that is specifically tailored toward radiation oncology.
METHODS: In light of the extensive variety of AI methodologies, this review selectively concentrates on the specific domain of deep learning. It emphasizes the principal categories of deep learning models and delineates the methodologies for training these models effectively.
RESULTS: This review initially delineates the distinctions between AI and deep learning as well as between supervised and unsupervised learning. Subsequently, it elucidates the fundamental principles of major deep learning models, encompassing multilayer perceptrons (MLPs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformers, generative adversarial networks (GANs), diffusion-based generative models, and reinforcement learning. For each category, it presents representative networks alongside their specific applications in radiation oncology. Moreover, the review outlines critical factors essential for training deep learning models, such as data preprocessing, loss functions, optimizers, and other pivotal training parameters including learning rate and batch size.
CONCLUSION: This review provides a comprehensive overview of deep learning principles tailored toward radiation oncology. It aims to enhance the understanding of AI-based research and software applications, thereby bridging the gap between complex technological concepts and clinical practice in radiation oncology.
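
To make the training factors listed above concrete, here is a minimal, generic PyTorch training loop (illustrative only, not tied to any model in the review) in which the loss function, optimizer, learning rate, and batch size named in the abstract each appear explicitly:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# toy data and model; any radiotherapy model could take their place
data = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

loader = DataLoader(data, batch_size=16, shuffle=True)      # batch size
criterion = nn.MSELoss()                                    # loss function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # optimizer + learning rate

for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()        # backpropagate gradients
        optimizer.step()       # update weights
```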

Affiliations
- Yixing Huang: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
- Ahmed Gomaa: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
- Daniel Höfler: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
- Philipp Schubert: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
- Udo Gaipl: Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany; Translational Radiobiology, Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Benjamin Frey: Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany; Translational Radiobiology, Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Rainer Fietkau: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
- Christoph Bert: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
- Florian Putz: Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany

5. Zhang Y, Xu K, Xie C, Gao Z. Emotion inference in conversations based on commonsense enhancement and graph structures. PLoS One 2024;19:e0315039. PMID: 39661617. PMCID: PMC11633976. DOI: 10.1371/journal.pone.0315039.
Abstract
In emotion inference, a common issue is the lack of commonsense knowledge; particularly in dialogue, traditional approaches have failed to effectively extract structural features, resulting in lower inference accuracy. To address this, this paper proposes a dialogue emotion inference model based on commonsense enhancement and graph modeling (CEICG). The model integrates external commonsense knowledge with graph-based techniques, dynamically constructing nodes and defining diverse edge relations to simulate the evolution of a dialogue, thereby effectively capturing its structural and semantic features. Two methods incorporate external commonsense knowledge into the graph model, overcoming previous models' limited understanding of complex dialogue structures and their lack of external knowledge. This integration strategy significantly enhances the model's emotion inference capability and improves the understanding of emotion in dialogue. Experimental results demonstrate that the CEICG model outperforms six existing baseline models on emotion inference tasks across three datasets.
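
The abstract does not give construction details, so the following is only a guess at what "dynamically constructing nodes and defining diverse edge relations" might look like: utterance nodes connected by temporal and same-speaker edges, with commonsense entries attached as extra nodes. Every structure and name here is hypothetical.

```python
def build_dialogue_graph(utterances, speakers, commonsense):
    """utterances: list of N texts; speakers: list of N speaker ids;
    commonsense: dict mapping utterance index -> list of knowledge strings.
    Returns a node list and a typed edge list of (i, j, relation)."""
    nodes = list(utterances)
    edges = []
    for i in range(len(utterances) - 1):
        edges.append((i, i + 1, "temporal"))             # dialogue order
    for i in range(len(utterances)):
        for j in range(i + 1, len(utterances)):
            if speakers[i] == speakers[j]:
                edges.append((i, j, "same_speaker"))     # intra-speaker dependency
    for i, facts in commonsense.items():
        for fact in facts:
            nodes.append(fact)
            edges.append((i, len(nodes) - 1, "commonsense"))  # knowledge injection
    return nodes, edges

nodes, edges = build_dialogue_graph(
    ["Hi!", "I failed my exam.", "Oh no, I'm sorry."],
    ["A", "B", "A"],
    {1: ["failing an exam causes sadness"]},
)
```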

Affiliations
- Yuanmin Zhang: China Unicom (Sichuan) Industrial Internet Co. Ltd., Chengdu, Sichuan, People's Republic of China
- Kexin Xu: School of Computer and Software Engineering, Xihua University, Chengdu, Sichuan, People's Republic of China
- Chunzhi Xie: School of Computer and Software Engineering, Xihua University, Chengdu, Sichuan, People's Republic of China
- Zhisheng Gao: School of Computer and Software Engineering, Xihua University, Chengdu, Sichuan, People's Republic of China

6. Wang Y, Zhou C, Ying L, Chan HP, Lee E, Chughtai A, Hadjiiski LM, Kazerooni EA. Enhancing Early Lung Cancer Diagnosis: Predicting Lung Nodule Progression in Follow-Up Low-Dose CT Scan with Deep Generative Model. Cancers (Basel) 2024;16:2229. PMID: 38927934. PMCID: PMC11201561. DOI: 10.3390/cancers16122229.
Abstract
Early diagnosis of lung cancer can significantly improve patient outcomes. We developed a Growth Predictive model based on the Wasserstein Generative Adversarial Network framework (GP-WGAN) to predict the nodule growth patterns in the follow-up LDCT scans. The GP-WGAN was trained with a training set (N = 776) containing 1121 pairs of nodule images with about 1-year intervals and deployed to an independent test set of 450 nodules on baseline LDCT scans to predict nodule images (GP-nodules) in their 1-year follow-up scans. The 450 GP-nodules were finally classified as malignant or benign by a lung cancer risk prediction (LCRP) model, achieving a test AUC of 0.827 ± 0.028, which was comparable to the AUC of 0.862 ± 0.028 achieved by the same LCRP model classifying real follow-up nodule images (p = 0.071). The net reclassification index yielded consistent outcomes (NRI = 0.04; p = 0.62). Other baseline methods, including Lung-RADS and the Brock model, achieved significantly lower performance (p < 0.05). The results demonstrated that the GP-nodules predicted by our GP-WGAN model achieved comparable performance with the nodules in the real follow-up scans for lung cancer diagnosis, indicating the potential to detect lung cancer earlier when coupled with accelerated clinical management versus the current approach of waiting until the next screening exam.
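
As a sketch of the Wasserstein-GAN machinery behind GP-WGAN (the paper's exact architecture and conditioning are not reproduced here), the snippet below writes out the critic and generator losses with the usual gradient penalty; the tiny networks and patch sizes are placeholders.

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Conv2d(1, 16, 4, 2, 1), nn.LeakyReLU(0.2),
                       nn.Flatten(), nn.Linear(16 * 16 * 16, 1))
generator = nn.Sequential(nn.Conv2d(1, 16, 3, 1, 1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, 1, 1))  # baseline patch -> follow-up patch

def gradient_penalty(critic, real, fake, lam=10.0):
    """Penalize critic gradients away from unit norm on interpolated samples."""
    eps = torch.rand(real.size(0), 1, 1, 1)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(mix).sum(), mix, create_graph=True)[0]
    return lam * ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

baseline = torch.randn(4, 1, 32, 32)      # nodule patch at baseline LDCT
real_followup = torch.randn(4, 1, 32, 32) # matching patch ~1 year later
fake_followup = generator(baseline)

# Wasserstein critic loss with gradient penalty; generator maximizes critic score
d_loss = (critic(fake_followup.detach()).mean() - critic(real_followup).mean()
          + gradient_penalty(critic, real_followup, fake_followup.detach()))
g_loss = -critic(generator(baseline)).mean()
```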

Affiliations
- Yifan Wang: Department of Radiology, The University of Michigan Medical School, Ann Arbor, MI 48109-0904, USA; Department of Electrical Engineering and Computer Science, The University of Michigan, Ann Arbor, MI 48109-2122, USA
- Chuan Zhou: Department of Radiology, The University of Michigan Medical School, Ann Arbor, MI 48109-0904, USA
- Lei Ying: Department of Electrical Engineering and Computer Science, The University of Michigan, Ann Arbor, MI 48109-2122, USA
- Heang-Ping Chan: Department of Radiology, The University of Michigan Medical School, Ann Arbor, MI 48109-0904, USA
- Elizabeth Lee: Department of Radiology, The University of Michigan Medical School, Ann Arbor, MI 48109-0904, USA
- Aamer Chughtai: Department of Radiology, The University of Michigan Medical School, Ann Arbor, MI 48109-0904, USA; Diagnostic Radiology, Cleveland Clinic, Cleveland, OH 44195, USA
- Lubomir M. Hadjiiski: Department of Radiology, The University of Michigan Medical School, Ann Arbor, MI 48109-0904, USA
- Ella A. Kazerooni: Department of Radiology, The University of Michigan Medical School, Ann Arbor, MI 48109-0904, USA; Department of Internal Medicine, The University of Michigan Medical School, Ann Arbor, MI 48109-0904, USA

7. Cheng J, Zhao B, Liu Z, Huang D, Qin N, Yang A, Chen Y, Shu J. DMGM: deformable-mechanism based cervical cancer staging via MRI multi-sequence. Phys Med Biol 2024;69:115044. PMID: 38749463. DOI: 10.1088/1361-6560/ad4c50.
Abstract
Objective. This study leverages a deep learning approach, specifically a deformable convolutional layer, to stage cervical cancer from multi-sequence MRI images. It responds to the difficulty physicians face in interpreting multiple sequences simultaneously, a task that computer-aided diagnosis systems can potentially improve given their vast information storage capabilities.
Approach. To address the challenge of limited sample sizes, we introduce a sequence enhancement strategy to diversify samples and mitigate overfitting. We propose a novel deformable ConvLSTM module that integrates a deformable mechanism with ConvLSTM, enabling the model to adapt to data with varying structures. Furthermore, we introduce the deformable multi-sequence guidance model (DMGM) as an auxiliary diagnostic tool for cervical cancer staging.
Main results. Through extensive testing, including comparative and ablation studies, we validate the effectiveness of the deformable ConvLSTM module and the DMGM. Our findings highlight the model's ability to adapt via the deformation mechanism and to address the challenges of cervical cancer tumor staging, overcoming the overfitting issue and ensuring the synchronization of asynchronous scan sequences. Multi-modal data from BraTS 2019 served as an external test dataset to validate the proposed methodology.
Significance. The DMGM is the first deep learning model to analyze multiple MRI sequences for cervical cancer, demonstrating strong generalization capabilities and effective staging in small-dataset scenarios, with significant implications for both deep learning applications and medical diagnostics. The source code will be made available subsequently.
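
A minimal sketch of how a deformable mechanism can be folded into a ConvLSTM cell, in the spirit of the deformable ConvLSTM module described above (the authors' actual design may differ): a regular convolution predicts sampling offsets from the input and hidden state, and a deformable convolution then computes the LSTM gates.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        pad = k // 2
        # predicts 2 * k * k sampling offsets from [x, h]
        self.offset = nn.Conv2d(in_ch + hid_ch, 2 * k * k, k, padding=pad)
        # deformable convolution producing the 4 LSTM gates at once
        self.gates = DeformConv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=pad)

    def forward(self, x, state):
        h, c = state
        xh = torch.cat([x, h], dim=1)
        off = self.offset(xh)
        i, f, o, g = torch.chunk(self.gates(xh, off), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

cell = DeformConvLSTMCell(in_ch=1, hid_ch=8)
h = c = torch.zeros(2, 8, 64, 64)
for frame in torch.randn(5, 2, 1, 64, 64):   # toy 5-step MRI sequence
    h, c = cell(frame, (h, c))
```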

Affiliations
- Junqiang Cheng: Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu 611756, People's Republic of China
- Binnan Zhao: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou 646000, People's Republic of China
- Ziyi Liu: State Key Laboratory of Air Traffic Management System, Nanjing 210022, People's Republic of China
- Deqing Huang: Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu 611756, People's Republic of China
- Na Qin: Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu 611756, People's Republic of China
- Aisen Yang: Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu 611756, People's Republic of China
- Yuan Chen: Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu 611756, People's Republic of China
- Jian Shu: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou 646000, People's Republic of China

8. Chen M, Wang K, Dohopolski M, Morgan H, Sher D, Wang J. TransAnaNet: Transformer-based Anatomy Change Prediction Network for Head and Neck Cancer Patient Radiotherapy. arXiv 2024:arXiv:2405.05674v2 (preprint). PMID: 38764596. PMCID: PMC11100917.
Abstract
Background: Adaptive radiotherapy (ART) can compensate for the dosimetric impact of anatomic change during radiotherapy of head and neck cancer (HNC) patients. However, implementing ART universally poses challenges in clinical workflow and resource allocation, given the variability in patient response and the constraints of available resources. Early identification of HNC patients who would experience significant anatomical change during radiotherapy (RT) is therefore important for optimizing patient clinical benefit and treatment resources.
Purpose: To assess the feasibility of using a vision-transformer (ViT)-based neural network to predict radiotherapy-induced anatomic change in HNC patients.
Methods: We retrospectively included 121 HNC patients treated with definitive RT/CRT. We collected the planning CT (pCT), planned dose, CBCTs acquired at the initial treatment (CBCT01) and fraction 21 (CBCT21), and the primary tumor volume (GTVp) and involved nodal volume (GTVn) delineated on both pCT and CBCTs for model construction and evaluation. A UNet-style ViT network was designed to learn spatial correspondence and contextual information from embedded image patches of CT, dose, CBCT01, GTVp, and GTVn. The model estimated the deformation vector field between CBCT01 and CBCT21 as the prediction of anatomic change, and the deformed CBCT01 served as the prediction of CBCT21. We also generated binary masks of GTVp, GTVn, and the patient body for volumetric change evaluation. Data from 100 patients were used for training and validation, and the remaining 21 patients for testing. Image and volumetric similarity metrics, including mean square error (MSE), structural similarity index (SSIM), Dice coefficient, and average surface distance, measured the similarity between the target image and the predicted CBCT.
Results: The image predicted by the proposed method was more similar to the real image (CBCT21) than pCT, CBCT01, or the CBCTs predicted by the comparison models. The average MSE and SSIM between the normalized predicted CBCT and CBCT21 were 0.009 and 0.933, while the average Dice coefficients for the body, GTVp, and GTVn masks were 0.972, 0.792, and 0.821, respectively.
Conclusions: The proposed method showed promising performance for predicting radiotherapy-induced anatomic change, with the potential to assist decision-making in HNC adaptive RT.

Affiliations
- Meixu Chen: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, 75235, USA
- Kai Wang: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, 75235, USA; Department of Radiation Oncology, University of Maryland Medical Center, Baltimore, MD, 21201, USA
- Michael Dohopolski: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, 75235, USA
- Howard Morgan: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, 75235, USA; Department of Radiation Oncology, Central Arkansas Radiation Therapy Institute, Little Rock, AR, 72205, USA
- David Sher: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, 75235, USA
- Jing Wang: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, 75235, USA

9. Weng J, Bhupathiraju SHV, Samant T, Dresner A, Wu J, Samant SS. Convolutional LSTM model for cine image prediction of abdominal motion. Phys Med Biol 2024;69:085024. PMID: 38518378. DOI: 10.1088/1361-6560/ad3722.
Abstract
Objective. In this study, we tackle the challenge of latency in magnetic resonance linear accelerator (MR-Linac) systems, which compromises target coverage accuracy in gated real-time radiotherapy. Our focus is on enhancing motion prediction precision in abdominal organs. To this end, we developed a convolutional long short-term memory (convLSTM) model using 2D cine magnetic resonance (cine-MR) imaging.
Approach. Our model, a sequence-to-one architecture with six input frames and one output frame, uses the structural similarity index measure (SSIM) as its loss function. Data were gathered from 17 cine-MRI datasets using the Philips Ingenia MR-sim system and an Elekta Unity MR-Linac-equivalent sequence, focusing on regions of interest (ROIs) including the stomach, liver, pancreas, and kidney. The datasets varied in duration from 1 to 10 min.
Main results. The study comprised three main phases: hyperparameter optimization, individual training, and transfer learning with or without fine-tuning. Hyperparameters were first optimized to construct the most effective model. The model was then applied to each dataset individually to predict images four frames ahead (1.24-3.28 s). We evaluated performance using SSIM, normalized mean square error, normalized correlation coefficient, and peak signal-to-noise ratio, specifically for ROIs with target motion. The average SSIM values achieved were 0.54, 0.64, 0.77, and 0.66 for the stomach, liver, kidney, and pancreas, respectively. In the transfer learning phase with fine-tuning, the model showed improved SSIM values of 0.69 for the liver and 0.78 for the kidney, compared with 0.64 and 0.37 without fine-tuning.
Significance. The study's significant contribution is demonstrating the convLSTM model's ability to accurately predict motion for multiple abdominal organs using a Unity-equivalent MR sequence. This advance is key to mitigating latency in MR-Linac radiotherapy, potentially improving the precision and effectiveness of real-time treatment for abdominal cancers.
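
Since the model above trains with SSIM as its loss, here is a minimal version of that objective applied to a sequence-to-one prediction, six frames in and one frame out. This is a sketch assuming the third-party pytorch_msssim package and a placeholder predictor; the paper's own implementation is not shown.

```python
import torch
from pytorch_msssim import ssim   # pip install pytorch-msssim

def ssim_loss(pred, target):
    # SSIM is a similarity in [0, 1]; minimize 1 - SSIM to maximize similarity
    return 1.0 - ssim(pred, target, data_range=1.0)

# sequence-to-one setup: six past cine frames stacked as channels -> future frame
frames = torch.rand(2, 6, 128, 128)          # batch of six-frame input windows
target = torch.rand(2, 1, 128, 128)          # frame four steps ahead
model = torch.nn.Conv2d(6, 1, 3, padding=1)  # placeholder for the convLSTM
loss = ssim_loss(model(frames).clamp(0, 1), target)
```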

Affiliations
- J Weng: Department of Radiation Oncology, University of Florida, Gainesville, FL, United States of America
- S H V Bhupathiraju: Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, United States of America
- T Samant: Tera Insights, Gainesville, FL, United States of America
- A Dresner: Philips Healthcare MR Oncology, Cleveland, OH, United States of America
- J Wu: Department of Radiation Oncology, University of Florida, Gainesville, FL, United States of America
- S S Samant: Department of Radiation Oncology, University of Florida, Gainesville, FL, United States of America

10. Zhang D, Wu C, Yang Z, Yin H, Liu Y, Li W, Huang H, Jin Z. [Article title and journal details missing from the source record.]
Abstract
Artificial intelligence (AI) is an epoch-making technology whose two most advanced branches, machine learning and the deep learning methods that grew out of it, have already been partially applied to assist endoscopic ultrasound (EUS) diagnosis. AI-assisted EUS has been reported to be of great value in the diagnosis of pancreatic tumors and chronic pancreatitis, gastrointestinal stromal tumors, early esophageal cancer, and biliary tract and liver lesions. Several urgent problems remain. First, developing sensitive AI diagnostic tools requires a large amount of high-quality training data. Second, current AI algorithms suffer from overfitting and bias, leading to poor diagnostic reliability. Third, the value of AI still needs to be confirmed in prospective studies. Fourth, the ethical risks of AI need to be considered and avoided.

Affiliations
- Deyu Zhang: Department of Gastroenterology, Changhai Hospital, Naval Medical University, Shanghai 200433, China
- Chang Wu: Department of Gastroenterology, Changhai Hospital, Naval Medical University, Shanghai 200433, China
- Zhenghui Yang: Department of Gastroenterology, Changhai Hospital, Naval Medical University, Shanghai 200433, China
- Hua Yin: Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan 750004, Ningxia Hui Autonomous Region, China
- Yue Liu: Department of Gastroenterology, Changhai Hospital, Naval Medical University, Shanghai 200433, China
- Wanshun Li: Department of Gastroenterology, Changhai Hospital, Naval Medical University, Shanghai 200433, China
- Haojie Huang: Department of Gastroenterology, Changhai Hospital, Naval Medical University, Shanghai 200433, China
- Zhendong Jin: Department of Gastroenterology, Changhai Hospital, Naval Medical University, Shanghai 200433, China

11. Fang J, Wang J, Li A, Yan Y, Liu H, Li J, Yang H, Hou Y, Yang X, Yang M, Liu J. Parameterized Gompertz-Guided Morphological AutoEncoder for Predicting Pulmonary Nodule Growth. IEEE Trans Med Imaging 2023;42:3602-3613. PMID: 37471191. DOI: 10.1109/tmi.2023.3297209.
Abstract
The growth rate of pulmonary nodules is a critical clue to cancer diagnosis, and it is essential to monitor their dynamic progression during pulmonary nodule management. To support research on nodule growth prediction, we organized and published a temporal dataset, NLSTt, with consecutive computed tomography (CT) scans. Based on this dataset, we develop a visual learner that qualitatively predicts growth for the following CT scan and further propose a model that quantitatively predicts the growth rate of pulmonary nodules, so that better diagnoses can be achieved with the help of the predicted results. To this end, we propose a parameterized Gompertz-guided morphological autoencoder (GM-AE) that generates high-quality visual appearances of pulmonary nodules at any future time span from the baseline CT scan. Specifically, we parameterize Gompertz, a popular mathematical model of tumor growth kinetics, to predict the future masses and volumes of pulmonary nodules. We then exploit the expected growth rate in mass and volume to guide decoders that generate the future shape and texture of the nodules. We introduce two branches in an autoencoder to encourage shape-aware and texture-aware representation learning and integrate the generated shape into the texture-aware branch to simulate the future morphology of pulmonary nodules. Extensive experiments on the self-built NLSTt dataset demonstrate the superiority of GM-AE over its competitive counterparts, and the results show that the learnable Gompertz function has promising descriptive power in accounting for inter-subject variability in the growth rate of pulmonary nodules. We also evaluate GM-AE on an in-house dataset to validate its generalizability and practicality. The code is publicly available along with the published NLSTt dataset.
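
For reference, one common parameterization of the Gompertz growth law is V(t) = K · exp(ln(V0/K) · e^(−at)), with carrying capacity K and rate a; the exact form the GM-AE makes learnable is not given here and may differ. A minimal learnable version under that assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableGompertz(nn.Module):
    """Gompertz volume growth V(t) = K * exp(log(V0 / K) * exp(-a * t))
    with learnable carrying capacity K and rate a (kept positive via softplus)."""
    def __init__(self):
        super().__init__()
        self.raw_K = nn.Parameter(torch.tensor(3.0))
        self.raw_a = nn.Parameter(torch.tensor(-1.0))

    def forward(self, v0, t):
        K = F.softplus(self.raw_K) + 1e-6
        a = F.softplus(self.raw_a) + 1e-6
        return K * torch.exp(torch.log(v0 / K) * torch.exp(-a * t))

model = LearnableGompertz()
v0 = torch.tensor([0.8])            # baseline nodule volume (e.g., cm^3)
t = torch.linspace(0.0, 3.0, 7)     # follow-up times (years ahead)
volumes = model(v0, t)              # predicted growth trajectory; V(0) == v0
```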

12. Ma M, Zhang X, Li Y, Wang X, Zhang R, Wang Y, Sun P, Wang X, Sun X. ConvLSTM coordinated longitudinal transformer under spatio-temporal features for tumor growth prediction. Comput Biol Med 2023;164:107313. PMID: 37562325. DOI: 10.1016/j.compbiomed.2023.107313.
Abstract
Accurate quantification of tumor growth patterns can reveal how the disease develops. From important features such as tumor growth rate and expansion, physicians can intervene and diagnose patients more efficiently, improving the cure rate. However, existing longitudinal growth models cannot adequately capture long-range spatiotemporal dependencies between tumor pixels and fail to fit the nonlinear growth law of tumors. We therefore propose the ConvLSTM coordinated longitudinal Transformer (LCTformer) for tumor growth prediction under spatiotemporal features. We design an Adaptive Edge Enhancement Module (AEEM) to learn static spatial features of tumors of different sizes under time series and to make the model focus on tumor edge regions. In addition, we propose a Growth Prediction Module (GPM) to characterize future tumor growth trends. It consists of a Longitudinal Transformer and a ConvLSTM. Based on adaptive abstract features of current tumors, the Longitudinal Transformer explores dynamic growth patterns between spatiotemporal CT sequences and learns future tumor morphology in parallel from the dual views of residual information and sequence motion. The ConvLSTM better learns the location of target tumors and complements the Longitudinal Transformer in jointly predicting future tumor imaging, reducing the loss of growth information. Finally, a Channel Enhancement Fusion Module (CEFM) densely fuses the generated tumor feature maps across channel and spatial dimensions, enabling accurate quantification of the whole growth process. The model was trained and tested on the NLST dataset, achieving an average Dice score of 88.52%, recall of 89.64%, and RMSE of 11.06, which can improve physicians' working efficiency.
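
The abstract does not describe the CEFM's internals, so the sketch below is only a CBAM-style guess at what a channel-then-spatial fusion of two generated feature maps could look like; every design choice here is an assumption.

```python
import torch
import torch.nn as nn

class ChannelSpatialFusion(nn.Module):
    """Fuse two feature maps, then reweight along channel and spatial dims."""
    def __init__(self, ch, r=4):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(),
                                 nn.Linear(ch // r, ch))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, a, b):
        x = a + b                                         # dense fusion of the two streams
        # channel attention from global average pooling
        w = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))))   # (N, C)
        x = x * w[:, :, None, None]
        # spatial attention from channel-wise mean and max maps
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

fused = ChannelSpatialFusion(16)(torch.randn(1, 16, 32, 32),
                                 torch.randn(1, 16, 32, 32))
```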

Affiliations
- Manfu Ma: College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Xiaoming Zhang: College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Yong Li: College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Xia Wang: Department of Pharmacy, The People's Hospital of Gansu Province, Lanzhou, 730000, China
- Ruigen Zhang: College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Yang Wang: College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Penghui Sun: College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Xuegang Wang: College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Xuan Sun: College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China

13. Wang R, Bashyam V, Yang Z, Yu F, Tassopoulou V, Chintapalli SS, Skampardoni I, Sreepada LP, Sahoo D, Nikita K, Abdulkadir A, Wen J, Davatzikos C. Applications of generative adversarial networks in neuroimaging and clinical neuroscience. Neuroimage 2023;269:119898. PMID: 36702211. PMCID: PMC9992336. DOI: 10.1016/j.neuroimage.2023.119898.
Abstract
Generative adversarial networks (GANs) are a powerful class of deep learning models that have been successfully applied in numerous fields. They belong to the broader family of generative methods, which learn to produce realistic data with a probabilistic model by learning distributions from real samples. In the clinical context, GANs have shown enhanced capability to capture spatially complex, nonlinear, and potentially subtle disease effects compared with traditional generative methods. This review critically appraises the existing literature on applications of GANs in imaging studies of various neurological conditions, including Alzheimer's disease, brain tumors, brain aging, and multiple sclerosis. We provide an intuitive explanation of the GAN methods used in each application and further discuss the main challenges, open questions, and promising future directions for leveraging GANs in neuroimaging. We aim to bridge the gap between advanced deep learning methods and neurology research by highlighting how GANs can support clinical decision making and contribute to a better understanding of the structural and functional patterns of brain diseases.
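
For readers new to the framework, the original two-player minimax objective that the surveyed GAN variants build on (due to Goodfellow et al.; standard background, not specific to this review) is

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[\log D(x)\bigr]
+ \mathbb{E}_{z \sim p_z}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

where the discriminator D tries to tell real samples from generated ones and the generator G tries to fool it.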

Affiliations
- Rongguang Wang: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Vishnu Bashyam: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Zhijian Yang: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Fanyang Yu: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Vasiliki Tassopoulou: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sai Spandana Chintapalli: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Ioanna Skampardoni: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Lasya P Sreepada: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Dushyant Sahoo: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Konstantina Nikita: School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Ahmed Abdulkadir: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Clinical Neurosciences, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Junhao Wen: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Christos Davatzikos: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA

14. Shoaib MA, Chuah JH, Ali R, Hasikin K, Khalil A, Hum YC, Tee YK, Dhanalakshmi S, Lai KW. An Overview of Deep Learning Methods for Left Ventricle Segmentation. Comput Intell Neurosci 2023;2023:4208231. PMID: 36756163. PMCID: PMC9902166. DOI: 10.1155/2023/4208231.
Abstract
Cardiac diseases are among the leading causes of death worldwide, and the number of heart patients has increased considerably during the pandemic, making it crucial to assess and analyze medical and cardiac images. Deep learning architectures, specifically convolutional neural networks, have become the primary choice for the assessment of cardiac medical images. The left ventricle is a vital part of the cardiovascular system, and its boundary and size play a significant role in the evaluation of cardiac function. Owing to its automation and promising results, deep learning-based left ventricle segmentation has attracted considerable attention. This article presents a critical review of deep learning methods for left ventricle segmentation from the most frequently used imaging modalities, including magnetic resonance imaging, ultrasound, and computed tomography. The review also details the network architectures, software, and hardware used for training, along with publicly available cardiac image datasets and self-prepared dataset details. A summary of the evaluation metrics and results reported by different researchers is also presented. Finally, this information is synthesized to help readers understand the motivation and methodology of the various deep learning models and to explore potential solutions to future challenges in LV segmentation.
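
Of the evaluation metrics the review summarizes, the Dice coefficient is the most frequently reported for segmentation; a generic implementation (not any specific paper's code) is:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# toy left-ventricle masks: predicted vs. ground truth
pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
gt = np.zeros((64, 64), dtype=bool);   gt[22:42, 22:42] = True
print(dice_coefficient(pred, gt))   # ~0.81
```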

Affiliations
- Muhammad Ali Shoaib: Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia; Faculty of Information and Communication Technology, BUITEMS, Quetta, Pakistan
- Joon Huang Chuah: Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Raza Ali: Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia; Faculty of Information and Communication Technology, BUITEMS, Quetta, Pakistan
- Khairunnisa Hasikin: Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Azira Khalil: Faculty of Science & Technology, Universiti Sains Islam Malaysia, Nilai 71800, Malaysia
- Yan Chai Hum: Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Malaysia
- Yee Kai Tee: Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Malaysia
- Samiappan Dhanalakshmi: Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, India
- Khin Wee Lai: Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia

15. Lee PY, Wei HJ, Pouliopoulos AN, Forsyth BT, Yang Y, Zhang C, Laine AF, Konofagou EE, Wu CC, Guo J. Deep Learning Enables Reduced Gadolinium Dose for Contrast-Enhanced Blood-Brain Barrier Opening. arXiv 2023:arXiv:2301.07248v1 (preprint). PMID: 36713234. PMCID: PMC9882566.
Abstract
Focused ultrasound (FUS) can be used to open the blood-brain barrier (BBB), and MRI with contrast agents can detect that opening. However, repeated use of gadolinium-based contrast agents (GBCAs) raises safety concerns for patients. This study is the first to propose modeling the volume transfer constant (Ktrans) through deep learning to reduce the dosage of contrast agents. The goal is not only to reconstruct artificial intelligence (AI)-derived Ktrans images but also to enhance image intensity from T1-weighted MRI scans acquired with a low dose of contrast agent. We first validated this idea with a previous state-of-the-art temporal network algorithm that extracts time-domain features at the voxel level. We then used a Spatiotemporal Network (ST-Net), a spatiotemporal convolutional neural network (CNN)-based deep learning architecture augmented with a three-dimensional CNN encoder, to improve model performance. We tested the ST-Net model on ten datasets of FUS-induced BBB openings acquired from different sides of the mouse brain. ST-Net successfully detected and enhanced BBB-opening signals without sacrificing spatial-domain information, and it proved a promising method for reducing the contrast agent needed to model BBB-opening Ktrans maps from time-series dynamic contrast-enhanced MRI (DCE-MRI) scans.
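
As background (this definition is standard DCE-MRI pharmacokinetics, not taken from the paper), Ktrans is conventionally defined through the Tofts model of tracer kinetics:

```latex
C_t(t) = K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau,
\qquad k_{ep} = K^{\mathrm{trans}} / v_e
```

where C_t is the tissue tracer concentration, C_p the arterial input function, and v_e the extravascular extracellular volume fraction.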

Affiliations
- Pin-Yu Lee: Department of Biomedical Engineering, The Fu Foundation School of Engineering and Applied Science, Columbia University, New York, NY 10027, USA
- Hong-Jian Wei: Department of Radiation Oncology, Columbia University Irving Medical Center, New York, NY 10032, USA
- Antonios N Pouliopoulos: Department of Biomedical Engineering, Columbia University; now with the School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Britney T Forsyth: Department of Biomedical Engineering, The Fu Foundation School of Engineering and Applied Science, Columbia University, New York, NY 10027, USA
- Yanting Yang: Department of Biomedical Engineering, The Fu Foundation School of Engineering and Applied Science, Columbia University, New York, NY 10027, USA
- Chenghao Zhang: Department of Biomedical Engineering, The Fu Foundation School of Engineering and Applied Science, Columbia University, New York, NY 10027, USA
- Andrew F Laine: Departments of Biomedical Engineering and Radiology (Physics), Columbia University, New York, NY 10027, USA
- Elisa E Konofagou: Departments of Biomedical Engineering and Radiology (Physics), Columbia University, New York, NY 10027, USA
- Cheng-Chia Wu: Department of Radiation Oncology, Columbia University Irving Medical Center, New York, NY 10032, USA
- Jia Guo: Department of Psychiatry, Columbia University Irving Medical Center, New York, NY 10032, USA

16. Miao M, Zheng L, Xu B, Yang Z, Hu W. A multiple frequency bands parallel spatial–temporal 3D deep residual learning framework for EEG-based emotion recognition. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104141.

17. Static-Dynamic coordinated Transformer for Tumor Longitudinal Growth Prediction. Comput Biol Med 2022;148:105922. DOI: 10.1016/j.compbiomed.2022.105922.

18. Xiao N, Song Z. An Algorithm for Time Prediction Signal Interference Detection Based on the LSTM-SVM Model. Comput Intell Neurosci 2022;2022:1626458. PMID: 35310589. PMCID: PMC8933116. DOI: 10.1155/2022/1626458.
Abstract
Interference detection is an important part of electronic defense systems. Interference generated at the same frequency as the original signal is difficult to detect with the traditional method of extracting characteristic parameters. For this special time-frequency-overlapping interference, this paper proposes an interference detection algorithm based on a long short-term memory-support vector machine (LSTM-SVM) model. The LSTM performs time-series prediction of the received signal; the difference between the predicted and received signals serves as the feature sample, and the SVM classifies these feature samples to decide whether each sample contains interference. The LSTM-SVM model is compared with a gated recurrent unit-support vector machine (GRU-SVM) model, and the comparison results are visualized using a confusion matrix. Simulation results show that the LSTM-SVM algorithm can not only detect the existence of an interference signal but also determine its specific position in the received waveform, with detection performance better than that of the GRU-SVM model.
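
A compact sketch of the two-stage pipeline described above, with toy data and assumed shapes (the paper's exact features and hyperparameters are not reproduced): an LSTM one-step forecaster produces residuals against the received signal, and an SVM classifies residual features as interfered or clean.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class Forecaster(nn.Module):
    """One-step-ahead signal predictor."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, x):                 # x: (N, T, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predicted next sample

model = Forecaster()                       # assume trained on clean signals
windows = torch.randn(100, 64, 1)          # received-signal windows (toy data)
next_samples = torch.randn(100, 1)         # actually received next samples

with torch.no_grad():
    residuals = (model(windows) - next_samples).abs().numpy()

# SVM separates interfered from clean windows using residual-magnitude features
labels = np.random.randint(0, 2, size=100)  # toy ground-truth interference flags
clf = SVC(kernel="rbf").fit(residuals, labels)
pred = clf.predict(residuals)
```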

Affiliations
- Ningbo Xiao: School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, China
- Zuxun Song: School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, China

19. Lee D, Hu YC, Kuo L, Alam S, Yorke E, Li A, Rimner A, Zhang P. Deep learning driven predictive treatment planning for adaptive radiotherapy of lung cancer. Radiother Oncol 2022;169:57-63. PMID: 35189155. PMCID: PMC9018570. DOI: 10.1016/j.radonc.2022.02.013.
Abstract
BACKGROUND AND PURPOSE: To develop a novel deep learning algorithm for sequential analysis, Seq2Seq, that predicts weekly anatomical changes of lung tumor and esophagus during definitive radiotherapy, to incorporate the potential tumor shrinkage into a predictive treatment planning paradigm, and to improve the therapeutic ratio.
METHODS AND MATERIALS: Seq2Seq starts with the primary tumor and esophagus observed on the planning CT to predict their geometric evolution during radiotherapy on a weekly basis, and subsequently updates the predictions with new snapshots acquired via weekly CBCTs. Seq2Seq is equipped with convolutional long short-term memory to analyze the spatial-temporal changes of longitudinal images, and was trained and validated using a dataset of sixty patients. Predictive plans were optimized according to each weekly prediction and made ready for weekly deployment to mitigate the clinical burden of online weekly replanning.
RESULTS: Seq2Seq tracks structural changes well: Dice between the predicted and actual weekly tumor and esophagus were (0.83 ± 0.10, 0.79 ± 0.14, 0.78 ± 0.12, 0.77 ± 0.12, 0.75 ± 0.12, 0.71 ± 0.17) and (0.72 ± 0.16, 0.73 ± 0.11, 0.75 ± 0.08, 0.74 ± 0.09, 0.72 ± 0.14, 0.71 ± 0.14), respectively, while the average Hausdorff distances were within 2 mm. Evaluating dose to the actual weekly tumor and esophagus, the predictive weekly plans achieved a 4.2 Gy reduction in esophagus mean dose while maintaining 60 Gy tumor coverage, compared with the plan optimized using the initial tumor and esophagus alone, primarily due to noticeable tumor shrinkage during radiotherapy.
CONCLUSION: It is feasible to predict the longitudinal changes of tumor and esophagus with Seq2Seq, which could improve the efficiency and effectiveness of lung adaptive radiotherapy.
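
The weekly update scheme reads like a rolling re-conditioning loop; the sketch below is only an interpretation of that workflow, with every function a hypothetical stub standing in for the trained Seq2Seq network and the clinical steps.

```python
def load_planning_ct_structures():
    return "geometry@week0"                  # stub: planning-CT tumor/esophagus contours

def acquire_weekly_cbct(week):
    return f"geometry@week{week}"            # stub: structures from the weekly CBCT

def optimize_plan(predicted_geometry):
    return f"plan_for({predicted_geometry})" # stub: predictive plan optimization

def predict_remaining_weeks(observed, total_weeks=6):
    """Hypothetical stand-in for Seq2Seq inference: given all snapshots seen
    so far, predict geometry for each remaining week (persistence placeholder)."""
    return [observed[-1]] * (total_weeks - len(observed) + 1)

snapshots = [load_planning_ct_structures()]           # week 0
for week in range(1, 7):
    predictions = predict_remaining_weeks(snapshots)  # forecast weeks `week`..6
    plan = optimize_plan(predictions[0])              # plan ready before the week starts
    snapshots.append(acquire_weekly_cbct(week))       # new CBCT refines the next forecast
```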
20
Koyuncu B, Melek A, Yilmaz D, Tuzer M, Unlu MB. Chemotherapy response prediction with diffuser elapser network. Sci Rep 2022; 12:1628. [PMID: 35102179 PMCID: PMC8803972 DOI: 10.1038/s41598-022-05460-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Accepted: 11/10/2021] [Indexed: 12/31/2022] Open
Abstract
In solid tumors, elevated fluid pressure and inadequate blood perfusion resulting from unbalanced angiogenesis are the main reasons for ineffective drug delivery inside tumors. Antiangiogenic treatment is an effective approach to normalizing the heterogeneous and tortuous tumor vessel structure, and combined therapy with antiangiogenic agents and chemotherapy drugs has shown promising effects on enhanced drug delivery. However, finding the appropriate scheduling and dosages of the combination therapy remains one of the main problems in anticancer therapy. Our study aims to generate realistic responses to treatment schedules, making it possible for future work to use these patient-specific responses to decide on the optimal starting time and dosages of cytotoxic drug treatment. Our dataset is based on our previous in-silico model with a framework for the tumor microenvironment, consisting of a tumor layer, vasculature network, interstitial fluid pressure, and drug diffusion maps. In this regard, the chemotherapy response prediction problem is discussed in the study, putting forth a proof of concept that deep learning models can capture tumor growth and drug response behaviors simultaneously. The proposed model utilizes multiple convolutional neural network submodels to predict future tumor microenvironment maps considering the effects of ongoing treatment. Since the model has the task of predicting future tumor microenvironment maps, we use two image quality evaluation metrics, structural similarity and peak signal-to-noise ratio, to evaluate model performance, and we track tumor cell density values of the ground-truth and predicted tumor microenvironments. The model predicts tumor microenvironment maps seven days ahead with an average structural similarity score of 0.973 and an average peak signal-to-noise ratio of 35.41 on the test set. It also predicts tumor cell density on day 7 with a mean absolute percentage error of [Formula: see text].
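For reference, the two image-quality metrics used in this study are available in scikit-image; the snippet below shows a generic computation on synthetic maps (not the study's evaluation code):

```python
# Generic SSIM / PSNR / MAPE computation on toy prediction maps.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(1)
truth = rng.random((64, 64)).astype(np.float32)   # stand-in ground-truth map
pred = np.clip(truth + rng.normal(0, 0.05, truth.shape), 0, 1).astype(np.float32)

ssim = structural_similarity(truth, pred, data_range=1.0)
psnr = peak_signal_noise_ratio(truth, pred, data_range=1.0)
mape = np.mean(np.abs(truth - pred) / np.maximum(np.abs(truth), 1e-8)) * 100
print(f"SSIM={ssim:.3f}  PSNR={psnr:.2f} dB  MAPE={mape:.2f}%")
```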
Affiliation(s)
- Batuhan Koyuncu
- Department of Computer Engineering, Bogazici University, Istanbul, 34342, Turkey
- Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Ahmet Melek
- Department of Management, Bogazici University, Istanbul, 34342, Turkey
- Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Defne Yilmaz
- Department of Physics, Bogazici University, Istanbul, 34342, Turkey
- Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Mert Tuzer
- Department of Physics, Bogazici University, Istanbul, 34342, Turkey
- Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Mehmet Burcin Unlu
- Department of Physics, Bogazici University, Istanbul, 34342, Turkey
- Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Hokkaido University, Global Station for Quantum Medical Science and Engineering, Global Institution for Collaborative Research and Education (GI-CoRE), Sapporo, 060-8648, Japan
21
Temporal refinement of 3D CNN semantic segmentations on 4D time-series of undersampled tomograms using hidden Markov models. Sci Rep 2021; 11:23279. [PMID: 34857791 PMCID: PMC8640015 DOI: 10.1038/s41598-021-02466-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Accepted: 11/09/2021] [Indexed: 11/23/2022] Open
Abstract
Recently, several convolutional neural networks have been proposed not only for 2D images but also for 3D and 4D volume segmentation. Nevertheless, due to the large data size of the latter, acquiring a sufficient amount of training annotations is much more strenuous than for 2D images. For 4D time-series tomograms, this is usually handled by segmenting the constituent tomograms independently through time with 3D convolutional neural networks. Inter-volume information is therefore not utilized, potentially leading to temporal incoherence. In this paper, we attempt to resolve this by proposing two hidden Markov model variants that refine 4D segmentation labels produced by 3D convolutional neural networks working on each time point. Our models utilize not only inter-volume information but also the prediction confidence generated by the 3D segmentation convolutional neural networks themselves. To the best of our knowledge, this is the first attempt to refine 4D segmentations made by 3D convolutional neural networks using hidden Markov models. In our experiments, we test the models qualitatively, quantitatively, and behaviourally using prespecified segmentations, and demonstrate them on time-series tomograms, which are typically undersampled to allow more frequent capture and therefore pose a particularly challenging problem. Finally, our dataset and code are publicly available.
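The core refinement idea, treating each voxel's per-timepoint CNN class probabilities as HMM emissions and decoding a temporally coherent label sequence, can be illustrated with a small Viterbi routine. This is a generic sketch; the transition stickiness is an assumed hyper-parameter, not a value from the paper:

```python
# Per-voxel Viterbi smoothing of per-timepoint CNN class probabilities.
import numpy as np

def viterbi_refine(probs: np.ndarray, stay: float = 0.9) -> np.ndarray:
    """probs: (T, K) per-timepoint class probabilities for one voxel."""
    T, K = probs.shape
    trans = np.full((K, K), (1 - stay) / (K - 1)); np.fill_diagonal(trans, stay)
    logp, logt = np.log(probs + 1e-12), np.log(trans)
    score = np.zeros((T, K)); back = np.zeros((T, K), dtype=int)
    score[0] = logp[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + logt   # (prev_state, next_state)
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + logp[t]
    path = np.zeros(T, dtype=int)
    path[-1] = score[-1].argmax()
    for t in range(T - 2, -1, -1):            # backtrack the best sequence
        path[t] = back[t + 1][path[t + 1]]
    return path

# A flickering voxel: frame 2's low-confidence flip gets smoothed away.
p = np.array([[0.9, 0.1], [0.8, 0.2], [0.45, 0.55], [0.85, 0.15], [0.9, 0.1]])
print("raw argmax:", p.argmax(axis=1), " refined:", viterbi_refine(p))
```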
22
Cwalina KK, Rajchowski P, Olejniczak A, Błaszkiewicz O, Burczyk R. Channel State Estimation in LTE-Based Heterogenous Networks Using Deep Learning. SENSORS 2021; 21:s21227716. [PMID: 34833787 PMCID: PMC8618544 DOI: 10.3390/s21227716] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Revised: 11/07/2021] [Accepted: 11/12/2021] [Indexed: 11/16/2022]
Abstract
With the continuous development of information technology, the concept of dense urban networks has evolved as well. Powerful tools such as machine learning are breaking new ground in smart network and interface design. In this paper, the concept of using deep learning to estimate the radio channel parameters of the LTE (Long Term Evolution) radio interface is presented. The deep learning approach achieved an RMSE (root mean squared error) of 10.7%, a significant gain (almost 40%) over the best linear model, whose RMSE was 17.01%. The solution can be adopted as part of the data allocation algorithm implemented in telemetry devices equipped with a 4G radio interface or, after adjustment, NB-IoT (Narrowband Internet of Things), to maximize the reliability of services in harsh indoor or urban environments. The presented results also demonstrate an inversely proportional dependence between the number of hidden layers and the number of historical samples in terms of the obtained RMSE: increasing the historical data memory allows models with fewer hidden layers to maintain a comparable RMSE in each scenario, which reduces the total computational cost.
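The reported trade-off between network depth and history length can be probed with a toy experiment. The sketch below varies the hidden-layer count and historical window of a small regressor and compares RMSE; the synthetic fading-like series and network sizes are assumptions, not the paper's setup:

```python
# Toy probe of the depth-vs-history trade-off on a synthetic series.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
t = np.arange(3000)
series = np.sin(0.02 * t) * np.sin(0.13 * t) + 0.05 * rng.standard_normal(3000)

def make_xy(series, history):
    # Each row is `history` past samples; the target is the next sample.
    X = np.stack([series[i:i + history] for i in range(len(series) - history)])
    return X, series[history:]

for layers, history in [((32,), 4), ((32,), 24), ((32, 32, 32), 4)]:
    X, y = make_xy(series, history)
    n = int(0.8 * len(X))                     # chronological train/test split
    model = MLPRegressor(hidden_layer_sizes=layers, max_iter=500,
                         random_state=0).fit(X[:n], y[:n])
    rmse = mean_squared_error(y[n:], model.predict(X[n:])) ** 0.5
    print(f"layers={layers} history={history}: RMSE={rmse:.4f}")
```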
23
Efficient Energy Management Based on Convolutional Long Short-Term Memory Network for Smart Power Distribution System. ENERGIES 2021. [DOI: 10.3390/en14196161] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
An efficient energy management system is integrated with the power grid to collect information about energy consumption and provide the appropriate control to optimize the supply-demand pattern. There is therefore a need for intelligent decisions about the generation and distribution of energy, which is only possible with correct future predictions. In the energy market, future knowledge of the energy consumption pattern helps the end-user decide when to buy or sell energy, reducing energy cost and decreasing peak consumption. The Internet of Things (IoT) and energy data analytics have made it convenient to collect data from end devices on a large scale and to manipulate all the recorded data. Forecasting an electric load is fairly challenging due to the high uncertainty and dynamic behavior of spatiotemporal consumption patterns, and existing conventional forecasting models lack the ability to deal with such spatiotemporally varying data. To overcome these challenges, this work proposes an encoder-decoder model based on convolutional long short-term memory networks (ConvLSTM) for energy load forecasting. The proposed architecture uses an encoder consisting of multiple ConvLSTM layers to extract the salient features in the data and learn the sequential dependency, and then passes the output to a decoder with LSTM layers to produce the forecast. The forecasting results produced by the proposed approach compare favorably to the existing state of the art and beat conventional methods with the lowest error rate. Quantitative analyses show a mean absolute percentage error (MAPE) of 6.966% for household energy consumption and 16.81% for city-wide energy consumption for the proposed forecasting model, in comparison with existing encoder-decoder-based deep learning models on two real-world datasets.
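The encoder-decoder pattern described above, ConvLSTM layers encoding the spatiotemporal history and LSTM layers decoding the forecast, maps directly onto standard Keras layers. The following is a minimal sketch with assumed shapes and sizes, not the paper's configuration:

```python
# ConvLSTM encoder -> LSTM decoder for multi-step load forecasting (sketch).
import tensorflow as tf
from tensorflow.keras import layers, models

history_len, horizon = 12, 6                  # assumed input/output windows
model = models.Sequential([
    layers.Input(shape=(history_len, 16, 16, 1)),  # (time, H, W, channel) load maps
    layers.ConvLSTM2D(16, 3, padding="same", return_sequences=True),
    layers.ConvLSTM2D(16, 3, padding="same"),      # encoder summary state
    layers.Flatten(),
    layers.RepeatVector(horizon),                  # hand the state to the decoder
    layers.LSTM(64, return_sequences=True),        # LSTM decoder
    layers.TimeDistributed(layers.Dense(1)),       # one load value per future step
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```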
24
Lee D, Alam SR, Jiang J, Zhang P, Nadeem S, Hu YC. Deformation driven Seq2Seq longitudinal tumor and organs-at-risk prediction for radiotherapy. Med Phys 2021; 48:4784-4798. [PMID: 34245602 DOI: 10.1002/mp.15075] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 05/21/2021] [Accepted: 06/07/2021] [Indexed: 11/06/2022] Open
Abstract
PURPOSE Radiotherapy presents unique challenges and clinical requirements for longitudinal tumor and organ-at-risk (OAR) prediction during treatment. The challenges include tumor inflammation/edema and radiation-induced changes in organ geometry, whereas the clinical requirements demand flexibility in input/output sequence timepoints to update the predictions on a rolling basis, and the grounding of all predictions in relation to the pre-treatment imaging information for response and toxicity assessment in adaptive radiotherapy. METHODS To address these challenges and comply with the clinical requirements, we present a novel 3D sequence-to-sequence model based on Convolutional Long Short-Term Memory (ConvLSTM) that makes use of a series of deformation vector fields (DVFs) between individual timepoints and reference pre-treatment/planning CTs to predict future anatomical deformations and changes in gross tumor volume as well as critical OARs. High-quality DVF training data are created by employing hyper-parameter optimization on a subset of the training data using the DICE coefficient and mutual information metrics. We validated our model on two radiotherapy datasets: a publicly available head-and-neck dataset (28 patients with manually contoured pre-, mid-, and post-treatment CTs) and an internal non-small cell lung cancer dataset (63 patients with manually contoured planning CT and 6 weekly CBCTs). RESULTS The use of the DVF representation and skip connections overcomes the blurring issue of ConvLSTM prediction with the traditional image representation. The mean and standard deviation of DICE for predictions of lung GTV at weeks 4, 5, and 6 were 0.83 ± 0.09, 0.82 ± 0.08, and 0.81 ± 0.10, respectively, and for post-treatment ipsilateral and contralateral parotids were 0.81 ± 0.06 and 0.85 ± 0.02. CONCLUSION We presented a novel DVF-based Seq2Seq model for medical images, leveraging the complete 3D imaging information of a relatively large longitudinal clinical dataset, to carry out longitudinal GTV/OAR predictions of anatomical changes in HN and lung radiotherapy patients, which has the potential to improve RT outcomes.
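Central to this model is the DVF representation: a predicted deformation vector field warps an earlier image to yield the predicted future anatomy. The snippet below shows generic DVF resampling on a toy volume (not the authors' code):

```python
# Warp a 3D volume with a deformation vector field via linear resampling.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_dvf(image: np.ndarray, dvf: np.ndarray) -> np.ndarray:
    """image: (Z, Y, X); dvf: (3, Z, Y, X) displacements in voxels."""
    grid = np.indices(image.shape).astype(np.float32)  # identity coordinates
    coords = grid + dvf                                # where each output voxel samples from
    return map_coordinates(image, coords, order=1, mode="nearest")

vol = np.zeros((16, 16, 16), dtype=np.float32)
vol[6:10, 6:10, 6:10] = 1.0                 # toy "tumor"
dvf = np.zeros((3,) + vol.shape, dtype=np.float32)
dvf[2] = -2.0                               # sample 2 voxels to the left: structure shifts right
print("centroid x before/after:",
      np.argwhere(vol).mean(0)[2],
      np.argwhere(warp_with_dvf(vol, dvf) > 0.5).mean(0)[2])
```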
Affiliation(s)
- Donghoon Lee
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Sadegh R Alam
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Pengpeng Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Saad Nadeem
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Yu-Chi Hu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
25
DeepPrognosis: Preoperative prediction of pancreatic cancer survival and surgical margin via comprehensive understanding of dynamic contrast-enhanced CT imaging and tumor-vascular contact parsing. Med Image Anal 2021; 73:102150. [PMID: 34303891 DOI: 10.1016/j.media.2021.102150] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 05/08/2021] [Accepted: 06/24/2021] [Indexed: 12/15/2022]
Abstract
Pancreatic ductal adenocarcinoma (PDAC) is one of the most lethal cancers and carries a dismal five-year survival rate of ∼10%. Surgery remains the best option for a potential cure in patients who are evaluated as eligible for initial resection of PDAC. However, outcomes vary significantly even among resected patients of the same cancer stage who received similar treatments. Accurate quantitative preoperative prediction for primary resectable PDAC is thus highly desired for personalized cancer treatment. Nevertheless, very few automated methods yet fully exploit contrast-enhanced computed tomography (CE-CT) imaging for PDAC prognosis assessment, even though CE-CT plays a critical role in PDAC staging and resectability evaluation. In this work, we propose a novel deep neural network for the survival prediction of primary resectable PDAC patients, named 3D Contrast-Enhanced Convolutional Long Short-Term Memory network (CE-ConvLSTM), which can derive tumor attenuation signatures or patterns from patient CE-CT imaging studies. Tumor-vascular relationships, which might indicate resection margin status, have also been shown to be strongly associated with the overall survival of PDAC patients. To capture such relationships, we propose a self-learning approach for automated pancreas and peripancreatic anatomy segmentation that does not require any annotations on our PDAC datasets. We then employ a multi-task convolutional neural network (CNN) to accomplish both survival outcome and margin prediction, where the network benefits from learning resection-margin-related image features to improve the survival prediction. Our framework improves overall survival prediction performance compared with existing state-of-the-art survival analysis approaches, and the new staging biomarker integrating both the proposed risk signature and the margin prediction adds evident value when combined with the current clinical staging system.
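The multi-task pattern described above, a shared CNN trunk feeding a survival head and a resection-margin head trained jointly, can be sketched as follows. The architecture, the stand-in Cox partial-likelihood loss, and the task weighting are illustrative assumptions, not the paper's model:

```python
# Shared trunk, two heads: survival risk + resection-margin classification.
import torch
import torch.nn as nn

class MultiTaskCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.risk_head = nn.Linear(8, 1)     # survival risk score
        self.margin_head = nn.Linear(8, 1)   # resection-margin logit
    def forward(self, x):
        h = self.trunk(x)
        return self.risk_head(h).squeeze(1), self.margin_head(h)

model = MultiTaskCNN()
x = torch.randn(4, 1, 16, 16, 16)            # toy CE-CT patches
risk, margin_logit = model(x)
margin_label = torch.tensor([[0.], [1.], [0.], [1.]])
surv_time = torch.tensor([30., 12., 24., 8.])

# Cox negative partial log-likelihood (all events observed, no ties) as a
# stand-in survival loss; the 0.5 task weight is an assumption.
order = torch.argsort(surv_time)             # ascending event times
r = risk[order]
cox = -(r - torch.logcumsumexp(r.flip(0), 0).flip(0)).mean()
bce = nn.functional.binary_cross_entropy_with_logits(margin_logit, margin_label)
(bce + 0.5 * cox).backward()                 # joint multi-task objective
print("margin loss:", float(bce), " cox loss:", float(cox))
```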
26
Fang C, Bai S, Chen Q, Zhou Y, Xia L, Qin L, Gong S, Xie X, Zhou C, Tu D, Zhang C, Liu X, Chen W, Bai X, Torr PHS. Deep learning for predicting COVID-19 malignant progression. Med Image Anal 2021; 72:102096. [PMID: 34051438 PMCID: PMC8112895 DOI: 10.1016/j.media.2021.102096] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2020] [Revised: 03/23/2021] [Accepted: 04/27/2021] [Indexed: 01/08/2023]
Abstract
As COVID-19 is highly infectious, many patients can flood into hospitals simultaneously for diagnosis and treatment, which has greatly challenged public medical systems. Treatment priority is often determined by symptom severity at first assessment, yet clinical observation suggests that some patients with mild symptoms may deteriorate quickly. It is therefore crucial to identify early deterioration to optimize treatment strategy. To this end, we develop an early-warning system with deep learning techniques to predict COVID-19 malignant progression. Our method leverages CT scans and the clinical data of outpatients and achieves an AUC of 0.920 in the single-center study. We also propose a domain adaptation approach to improve the generalization of our model, achieving an average AUC of 0.874 in the multicenter study. Moreover, our model automatically identifies crucial indicators that contribute to malignant progression, including troponin, brain natriuretic peptide, white cell count, aspartate aminotransferase, creatinine, and hypersensitive C-reactive protein.
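On the evaluation side, fusing image-derived features with clinical variables and scoring the classifier by AUC follows a standard recipe, sketched below on synthetic data (the study's actual model and feature extraction are more elaborate):

```python
# Fuse image embeddings with clinical features; evaluate by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
img_feat = rng.standard_normal((n, 16))      # e.g., a CNN embedding of the CT
clin_feat = rng.standard_normal((n, 6))      # e.g., troponin, BNP, WBC, ...
X = np.hstack([img_feat, clin_feat])
y = (X[:, 0] + 0.8 * X[:, 16] + rng.standard_normal(n) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print("AUC:", roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))
```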
Affiliation(s)
- Cong Fang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Song Bai
- Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, United Kingdom
- Qianlan Chen
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Yu Zhou
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Liming Xia
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Lixin Qin
- Department of Radiology, Wuhan Pulmonary Hospital, Wuhan 430030, China
- Shi Gong
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Xudong Xie
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Chunhua Zhou
- Department of Radiology, Wuhan Pulmonary Hospital, Wuhan 430030, China
- Dandan Tu
- HUST-HW Joint Innovation Lab, Wuhan 430074, China
- Xiaowu Liu
- HUST-HW Joint Innovation Lab, Wuhan 430074, China
- Weiwei Chen
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Xiang Bai
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Philip H S Torr
- Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, United Kingdom
27
Li R, Roy A, Bice N, Kirby N, Fakhreddine M, Papanikolaou N. Managing tumor changes during radiotherapy using a deep learning model. Med Phys 2021; 48:5152-5164. [PMID: 33959978 DOI: 10.1002/mp.14925] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Revised: 03/09/2021] [Accepted: 04/27/2021] [Indexed: 12/25/2022] Open
Abstract
PURPOSE We propose a treatment planning framework that accounts for weekly lung tumor shrinkage using cone beam computed tomography (CBCT) images with a deep learning-based model. METHODS Sixteen patients with non-small-cell lung cancer (NSCLC) were selected, each with one planning CT and six weekly CBCTs. A deep learning-based model was applied to predict the weekly deformation of the primary tumor based on the spatial and temporal features extracted from previous weekly CBCTs. Starting from Week 3, the tumor contour at Week N was predicted by the model based on the input from all previous weeks (1, 2 ... N - 1) and was evaluated against the manually contoured tumor using the Dice coefficient (DSC), precision, average surface distance (ASD), and Hausdorff distance (HD). Information about the predicted tumor was then entered into the treatment planning system and the plan was re-optimized every week. The objectives were to maximize dose coverage in the target region while minimizing toxicity to the surrounding healthy tissue. Dosimetric evaluation of the target and organs at risk (heart, lung, esophagus, and spinal cord) was performed on four cases, comparing a conventional plan (ignoring tumor shrinkage) with the shrinkage-based plan. RESULTS The primary tumor volumes decreased on average by 38% ± 26% during six weeks of treatment. DSC and ASD between the predicted and actual tumors for Weeks 3, 4, 5, and 6 were 0.81, 0.82, 0.79, 0.78 and 1.49, 1.59, 1.92, 2.12 mm, respectively, significantly superior to the values of 0.70, 0.68, 0.66, 0.63 and 2.81, 3.22, 3.69, 3.63 mm between rigidly transferred tumors (ignoring shrinkage) and the actual tumors. While target coverage metrics were maintained for the re-optimized plans, lung mean dose dropped by 2.85, 0.46, 2.39, and 1.48 Gy for the four sample cases compared with the original plan, and doses to other organs such as the esophagus were also reduced in some cases. CONCLUSION We developed a deep learning-based model for tumor shrinkage prediction that uses CBCTs and contours from previous weeks as input and produces reasonable tumor contours with high prediction accuracy (DSC, precision, HD, and ASD). The proposed framework maintained target coverage while reducing dose to the lungs and esophagus.
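The contour-accuracy metrics quoted above (DSC and ASD) can be computed as in the generic sketch below, shown on toy masks rather than the study's data:

```python
# Dice coefficient and symmetric average surface distance on binary masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def average_surface_distance(a: np.ndarray, b: np.ndarray) -> float:
    surf_a = a & ~binary_erosion(a)           # boundary voxels of each mask
    surf_b = b & ~binary_erosion(b)
    d_to_b = distance_transform_edt(~surf_b)  # distance map to b's surface
    d_to_a = distance_transform_edt(~surf_a)  # distance map to a's surface
    return 0.5 * (d_to_b[surf_a].mean() + d_to_a[surf_b].mean())

pred = np.zeros((32, 32, 32), bool); pred[8:20, 8:20, 8:20] = True
true = np.zeros((32, 32, 32), bool); true[10:22, 9:21, 8:20] = True
print(f"DSC={dice(pred, true):.3f}  "
      f"ASD={average_surface_distance(pred, true):.2f} voxels")
```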
Affiliation(s)
- Ruiqi Li
- Department of Radiation Oncology, University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA
- Arkajyoti Roy
- Department of Management Science and Statistics, University of Texas at San Antonio, San Antonio, Texas, USA
- Noah Bice
- Department of Radiation Oncology, University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA
- Neil Kirby
- Department of Radiation Oncology, University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA
- Mohamad Fakhreddine
- Department of Radiation Oncology, University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA
- Niko Papanikolaou
- Department of Radiation Oncology, University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA
28
Zhou SK, Greenspan H, Davatzikos C, Duncan JS, van Ginneken B, Madabhushi A, Prince JL, Rueckert D, Summers RM. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2021; 109:820-838. [PMID: 37786449 PMCID: PMC10544772 DOI: 10.1109/jproc.2021.3054390] [Citation(s) in RCA: 267] [Impact Index Per Article: 66.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/04/2023]
Abstract
Since its renaissance, deep learning has been widely used in various medical imaging tasks and has achieved remarkable success in many applications, propelling us into the so-called artificial intelligence (AI) era. It is known that the success of AI is mostly attributable to the availability of big data with annotations for a single task and advances in high-performance computing. However, medical imaging presents unique challenges that confront deep learning approaches. In this survey paper, we first present the traits of medical imaging, highlight both clinical needs and technical challenges, and describe how emerging trends in deep learning are addressing these issues. We cover the topics of network architecture, sparse and noisy labels, federated learning, interpretability, uncertainty quantification, etc. We then present several case studies commonly found in clinical practice, including digital pathology and chest, brain, cardiovascular, and abdominal imaging. Rather than presenting an exhaustive literature survey, we describe some prominent research highlights related to these case study applications. We conclude with a discussion of promising future directions.
Affiliation(s)
- S Kevin Zhou
- School of Biomedical Engineering, University of Science and Technology of China and Institute of Computing Technology, Chinese Academy of Sciences
- Hayit Greenspan
- Biomedical Engineering Department, Tel-Aviv University, Israel
- Christos Davatzikos
- Radiology Department and Electrical and Systems Engineering Department, University of Pennsylvania, USA
- James S Duncan
- Departments of Biomedical Engineering and Radiology & Biomedical Imaging, Yale University
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University and Louis Stokes Cleveland Veterans Administration Medical Center, USA
- Jerry L Prince
- Electrical and Computer Engineering Department, Johns Hopkins University, USA
- Daniel Rueckert
- Klinikum rechts der Isar, TU Munich, Germany and Department of Computing, Imperial College, UK
29
Abstract
The interest in artificial intelligence (AI) has ballooned within radiology in the past few years, primarily due to the notable successes of deep learning. With the advances brought by deep learning, AI has the potential to recognize and localize complex patterns in different radiological imaging modalities, in some recent applications even achieving performance comparable to human decision-making. In this chapter, we review several AI applications in radiology for different anatomies: chest, abdomen, and pelvis, as well as general lesion detection/identification that is not limited to specific anatomies. For each anatomical site, we focus on introducing the tasks of detection, segmentation, and classification, with an emphasis on the technology development pathway, aiming to give the reader an understanding of what AI can do in radiology and what still needs to be done for AI to fit better into the radiology workflow. Drawing on our own research experience with AI in medicine, we elaborate on how AI can enrich knowledge discovery, understanding, and decision-making in radiology, rather than replacing the radiologist.
30
Lee D, Zhang P, Nadeem S, Alam S, Jiang J, Caringi A, Allgood N, Aristophanous M, Mechalakos J, Hu YC. Predictive dose accumulation for HN adaptive radiotherapy. Phys Med Biol 2020; 65:235011. [PMID: 33007769 DOI: 10.1088/1361-6560/abbdb8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
During radiation therapy (RT) of head and neck (HN) cancer, the shape and volume of the parotid glands (PG) may change significantly, resulting in clinically relevant deviations of the delivered dose from the planned dose. Early and accurate longitudinal prediction of PG anatomical changes during RT can be valuable to inform decisions on plan adaptation. We developed a deep neural network for longitudinal predictions using the displacement fields (DFs) between the planning computed tomography (pCT) and weekly cone beam computed tomography (CBCT). Sixty-three HN patients treated with volumetric modulated arc therapy were retrospectively studied. We calculated DFs between the pCT and week 1-3 CBCTs by B-spline and Demons deformable image registration (DIR). The resultant DFs were subsequently used as input to our novel network to predict the week 4 to 6 DFs for generating predicted weekly PG contours and weekly dose distributions. For evaluation, we measured Dice similarity (DICE) and the uncertainty of the accumulated dose. Moreover, we compared the detection accuracy of candidates for adaptive radiotherapy (ART) when the trigger criterion was a mean dose difference of more than 10%, 7.5%, or 5%. The DICE of the ipsilateral/contralateral PG at weeks 4 to 6 using the prediction model trained with B-spline DIR was 0.81 ± 0.07/0.81 ± 0.04 (week 4), 0.79 ± 0.06/0.81 ± 0.05 (week 5), and 0.78 ± 0.06/0.82 ± … (week 6). The DICE with the Demons model was 0.78 ± 0.08/0.82 ± 0.03 (week 4), 0.77 ± 0.07/0.82 ± 0.04 (week 5), and 0.75 ± 0.07/0.82 ± 0.02 (week 6). Dose volume histogram (DVH) analysis with the predicted accumulated dose showed the feasibility of predicting dose uncertainty due to PG anatomical changes. The AUC of ART candidate detection with our predictive model was over 0.90. In conclusion, the proposed network was able to predict future anatomical changes and the dose uncertainty of the PGs with clinically acceptable accuracy, and hence can be readily integrated into the ART workflow.
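The ART trigger logic evaluated above reduces to a simple threshold test on the predicted mean-dose deviation. A minimal sketch, with made-up dose values and the 10%/7.5%/5% thresholds from the study:

```python
# Flag a patient for replanning when the predicted accumulated mean dose
# deviates from the planned mean dose by more than a chosen threshold.
def needs_adaptation(planned_mean_dose: float,
                     predicted_mean_dose: float,
                     threshold: float = 0.075) -> bool:
    """True if the relative mean-dose change exceeds the trigger threshold."""
    return abs(predicted_mean_dose - planned_mean_dose) / planned_mean_dose > threshold

planned, predicted = 26.0, 28.6          # Gy, toy parotid mean doses
for thr in (0.10, 0.075, 0.05):
    print(f"threshold {thr:.1%}: adapt = {needs_adaptation(planned, predicted, thr)}")
```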
Affiliation(s)
- Donghoon Lee
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States of America