1
Sinha A, Kawahara J, Pakzad A, Abhishek K, Ruthven M, Ghorbel E, Kacem A, Aouada D, Hamarneh G. DermSynth3D: Synthesis of in-the-wild annotated dermatology images. Med Image Anal 2024;95:103145. [PMID: 38615432] [DOI: 10.1016/j.media.2024.103145]
Abstract
In recent years, deep learning (DL) has shown great potential in the field of dermatological image analysis. However, existing datasets in this domain have significant limitations, including a small number of image samples, limited disease conditions, insufficient annotations, and non-standardized image acquisitions. To address these shortcomings, we propose a novel framework called DermSynth3D. DermSynth3D blends skin disease patterns onto 3D textured meshes of human subjects using a differentiable renderer and generates 2D images from various camera viewpoints under chosen lighting conditions in diverse background scenes. Our method adheres to top-down rules that constrain the blending and rendering process to create 2D images with skin conditions that mimic in-the-wild acquisitions, ensuring more meaningful results. The framework generates photo-realistic 2D dermatological images and the corresponding dense annotations for semantic segmentation of the skin, skin conditions, body parts, bounding boxes around lesions, depth maps, and other 3D scene parameters, such as camera position and lighting conditions. DermSynth3D allows for the creation of custom datasets for various dermatology tasks. We demonstrate the effectiveness of data generated using DermSynth3D by training DL models on synthetic data and evaluating them on various dermatology tasks using real 2D dermatological images. We make our code publicly available at https://github.com/sfu-mial/DermSynth3D.
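Conceptually, the core blending step combines a lesion texture with healthy skin through a soft mask. The following is a minimal 2D alpha-blending sketch for intuition only; DermSynth3D performs the blending on 3D textured meshes via a differentiable renderer, and all names here are illustrative, not from the authors' code.

```python
import numpy as np

def blend_lesion(skin: np.ndarray, lesion: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Alpha-blend a lesion patch into a skin image.

    skin, lesion: float arrays in [0, 1], shape (H, W, 3)
    mask: float array in [0, 1], shape (H, W); 1 = full lesion opacity
    """
    alpha = mask[..., None]  # broadcast the mask over the colour channels
    return (1.0 - alpha) * skin + alpha * lesion

# Toy example: a 4x4 skin patch with a fully opaque 2x2 lesion in one corner
skin = np.full((4, 4, 3), 0.8)
lesion = np.full((4, 4, 3), 0.3)
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0
out = blend_lesion(skin, lesion, mask)
```

Inside the mask the output takes the lesion value; outside, the skin value; a soft (fractional) mask would interpolate between the two.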
Affiliation(s)
- Ashish Sinha
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Jeremy Kawahara
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Arezou Pakzad
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Kumar Abhishek
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Matthieu Ruthven
- Computer Vision, Imaging & Machine Intelligence Research Group, Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, L-1855, Luxembourg
- Enjie Ghorbel
- Computer Vision, Imaging & Machine Intelligence Research Group, Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, L-1855, Luxembourg; Cristal Laboratory, National School of Computer Sciences, University of Manouba, 2010, Tunisia
- Anis Kacem
- Computer Vision, Imaging & Machine Intelligence Research Group, Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, L-1855, Luxembourg
- Djamila Aouada
- Computer Vision, Imaging & Machine Intelligence Research Group, Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, L-1855, Luxembourg
- Ghassan Hamarneh
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada.
2
Berris T, Myronakis M, Stratakis J, Perisinakis K, Karantanas A, Damilakis J. Is deep learning-enabled real-time personalized CT dosimetry feasible using only patient images as input? Phys Med 2024;122:103381. [PMID: 38810391] [DOI: 10.1016/j.ejmp.2024.103381]
Abstract
PURPOSE To propose a novel deep learning-based dosimetry method that allows quick and accurate estimation of organ doses for individual patients, using only their computed tomography (CT) images as input. METHODS Despite recent advances in medical dosimetry, personalized CT dosimetry remains a labour-intensive process. Current state-of-the-art methods utilize time-consuming Monte Carlo (MC)-based simulations for individual organ dose estimation in CT. The proposed method uses conditional generative adversarial networks (cGANs) to substitute MC simulations with fast dose image generation, based on image-to-image translation. The pix2pix architecture in conjunction with a regression model was utilized for the generation of the synthetic dose images. The lungs, heart, breast, bone and skin were manually segmented, and the organ doses calculated from the original and synthetic dose images were compared. RESULTS The average organ dose estimation error for the proposed method was 8.3% and did not exceed 20% for any of the organs considered. The performance of the method in the clinical environment was also assessed. Using segmentation tools developed in-house, an automatic organ dose calculation pipeline was set up. Calculation of organ doses for the heart and lung for each CT slice took about 2 s. CONCLUSIONS This work shows that deep learning-enabled personalized CT dosimetry is feasible in real time, using only patient CT images as input.
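The evaluation step described above reduces to computing the mean dose inside each segmented organ mask on both the MC-derived and synthetic dose images, then taking a percent error. A minimal numpy sketch under that reading (function names are illustrative, not from the paper):

```python
import numpy as np

def organ_mean_dose(dose: np.ndarray, mask: np.ndarray) -> float:
    """Mean dose over the voxels of a segmented organ (same units as the dose image)."""
    return float(dose[mask > 0].mean())

def percent_error(synthetic: float, reference: float) -> float:
    """Absolute percent error of the synthetic organ dose vs. the MC reference."""
    return 100.0 * abs(synthetic - reference) / reference

# Toy slices: reference (MC) and synthetic (cGAN) dose images, one organ mask
ref_dose = np.full((8, 8), 10.0)
syn_dose = np.full((8, 8), 9.0)
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1
err = percent_error(organ_mean_dose(syn_dose, mask), organ_mean_dose(ref_dose, mask))
```

Here the synthetic image underestimates the organ dose by 10%, which is the kind of per-organ figure the study averages over patients.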
Affiliation(s)
- Theocharis Berris
- Department of Medical Physics, School of Medicine, University of Crete, P.O. Box 2208, 71003 Iraklion, Crete, Greece
- Marios Myronakis
- Department of Medical Physics, School of Medicine, University of Crete, P.O. Box 2208, 71003 Iraklion, Crete, Greece
- John Stratakis
- Department of Medical Physics, University Hospital of Iraklion, 71110 Iraklion, Crete, Greece
- Kostas Perisinakis
- Department of Medical Physics, School of Medicine, University of Crete, P.O. Box 2208, 71003 Iraklion, Crete, Greece
- Apostolos Karantanas
- Department of Radiology, School of Medicine, University of Crete, P.O. Box 2208, 71003 Iraklion, Crete, Greece
- John Damilakis
- Department of Medical Physics, School of Medicine, University of Crete, P.O. Box 2208, 71003 Iraklion, Crete, Greece.
3
Siafarikas N. Personalized medicine in old age psychiatry and Alzheimer's disease. Front Psychiatry 2024;15:1297798. [PMID: 38751423] [PMCID: PMC11094449] [DOI: 10.3389/fpsyt.2024.1297798]
Abstract
Elderly patients show us unfolded lives with unique individual characteristics. An increasing life span is associated with an increasing physical and mental disease burden. Alzheimer's disease (AD) is an increasing challenge in old age. AD cannot be cured, but it can be treated. The complexity of old age and AD offers targets for personalized medicine (PM). Targets for stratification of patients, detection of patients at risk for AD, or future targeted therapy are plentiful and can be found at several omic levels.
Affiliation(s)
- Nikias Siafarikas
- Department of Geriatric Psychiatry, Akershus University Hospital, Lørenskog, Norway
4
Koike Y, Ohira S, Kihara S, Anetai Y, Takegawa H, Nakamura S, Miyazaki M, Konishi K, Tanigawa N. Synthetic low-energy monochromatic image generation in single-energy computed tomography system using a transformer-based deep learning model. J Imaging Inform Med 2024. [PMID: 38637424] [DOI: 10.1007/s10278-024-01111-z]
Abstract
While dual-energy computed tomography (DECT) technology introduces energy-specific information into clinical practice, single-energy CT (SECT) is predominantly used, limiting the number of people who can benefit from DECT. This study proposed a novel method to generate synthetic low-energy virtual monochromatic images at 50 keV (sVMI50keV) from SECT images using a transformer-based deep learning model, SwinUNETR. Data were obtained from 85 patients who underwent head and neck radiotherapy. Among these, the model was built using data from the 70 patients for whom only DECT images were available. The remaining 15 patients, for whom both DECT and SECT images were available, were used for evaluation, with sVMI50keV predicted from their actual SECT images. The image quality was evaluated, and the results were compared with those of a convolutional neural network-based model, U-Net. The mean absolute errors from the true VMI50keV were 36.5 ± 4.9 and 33.0 ± 4.4 Hounsfield units for U-Net and SwinUNETR, respectively. SwinUNETR yielded smaller errors in tissue attenuation values than U-Net. The contrast changes in sVMI50keV generated by SwinUNETR from SECT were closer to those of DECT-derived VMI50keV than were the contrast changes in U-Net-generated sVMI50keV. This study demonstrated the potential of transformer-based models for generating synthetic low-energy VMIs from SECT images, thereby improving the image quality of head and neck cancer imaging. It provides a practical and feasible solution to obtain low-energy VMIs from SECT data that can benefit a large number of facilities and patients without access to DECT technology.
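The headline metric above, mean absolute error in Hounsfield units between the synthetic and true VMI50keV, is straightforward to compute. A generic sketch (not the authors' code; the optional body mask is an assumption, since errors are often reported only inside the patient outline):

```python
import numpy as np

def mae_hu(synthetic: np.ndarray, reference: np.ndarray, body_mask=None) -> float:
    """Mean absolute error in Hounsfield units, optionally restricted to a body mask."""
    diff = np.abs(synthetic.astype(float) - reference.astype(float))
    return float(diff[body_mask].mean() if body_mask is not None else diff.mean())

# Toy 2x2 images: synthetic vs. reference HU values
syn = np.array([[10.0, -20.0], [30.0, 0.0]])
ref = np.zeros((2, 2))
err = mae_hu(syn, ref)  # (10 + 20 + 30 + 0) / 4 = 15.0
```

The same function applied per patient and averaged would yield figures comparable to the 36.5 HU (U-Net) and 33.0 HU (SwinUNETR) values reported.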
Affiliation(s)
- Yuhei Koike
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan.
- Shingo Ohira
- Department of Comprehensive Radiation Oncology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Department of Radiation Oncology, Osaka International Cancer Institute, 3-1-69 Otemae, Chuo-ku, Osaka, 537-8567, Japan
- Sayaka Kihara
- Department of Radiation Oncology, Osaka International Cancer Institute, 3-1-69 Otemae, Chuo-ku, Osaka, 537-8567, Japan
- Yusuke Anetai
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Hideki Takegawa
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Satoaki Nakamura
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Masayoshi Miyazaki
- Department of Radiation Oncology, Osaka International Cancer Institute, 3-1-69 Otemae, Chuo-ku, Osaka, 537-8567, Japan
- Koji Konishi
- Department of Radiation Oncology, Osaka International Cancer Institute, 3-1-69 Otemae, Chuo-ku, Osaka, 537-8567, Japan
- Noboru Tanigawa
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
5
Wang T, Yang X. Take CT, get PET free: AI-powered breakthrough in lung cancer diagnosis and prognosis. Cell Rep Med 2024;5:101486. [PMID: 38631288] [PMCID: PMC11031371] [DOI: 10.1016/j.xcrm.2024.101486]
Abstract
PET scans provide additional clinical value but are costly and not universally accessible. Salehjahromi et al. developed an AI-based pipeline to synthesize PET images from diagnostic CT scans, demonstrating its potential clinical utility across various clinical tasks for lung cancer.
Affiliation(s)
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA.
6
Yuan S, Liu Y, Wei R, Zhu J, Men K, Dai J. A novel loss function to reproduce texture features for deep learning-based MRI-to-CT synthesis. Med Phys 2024;51:2695-2706. [PMID: 38043105] [DOI: 10.1002/mp.16850]
Abstract
BACKGROUND Studies on computed tomography (CT) synthesis based on magnetic resonance imaging (MRI) have mainly focused on pixel-wise consistency, but the texture features of regions of interest (ROIs) have not received appropriate attention. PURPOSE This study aimed to propose a novel loss function to reproduce texture features of ROIs as well as pixel-wise consistency for deep learning-based MRI-to-CT synthesis. The method was expected to assist multi-modality studies for radiomics. METHODS The study retrospectively enrolled 127 patients with nasopharyngeal carcinoma. CT and MRI images were collected for each patient and then rigidly registered as preprocessing. We proposed a gray-level co-occurrence matrix (GLCM)-based loss function to improve the reproducibility of texture features. This novel loss function can be embedded into existing deep learning-based frameworks for image synthesis. In this study, a typical image synthesis model was selected as the baseline: a U-Net trained with a mean squared error (MSE) loss function. We embedded the proposed loss function and designed experiments supervising different ROIs to prove its effectiveness. The concordance correlation coefficient (CCC) was employed to evaluate the reproducibility of the GLCM features, which are typical texture features. In addition, we used a publicly available dataset of brain tumors to verify our loss function. RESULTS Compared with the baseline, the proposed method improved the pixel-wise image quality metrics (MAE: 107.5 to 106.8 HU; SSIM: 0.9728 to 0.9730). CCC values of the GLCM features in GTVnx were significantly improved from 0.78 ± 0.12 to 0.82 ± 0.11 (p < 0.05, paired t-test). Overall, >90% (22/24) of the GLCM-based features improved compared with the baseline, with the Informational Measure of Correlation feature improving the most (CCC: 0.74 to 0.83). On the public dataset, the loss function also proved effective: with the proposed loss function added, the ability to reproduce texture features in the ROIs improved. CONCLUSIONS The proposed method reproduced texture features for MRI-to-CT synthesis, which would benefit radiomics studies based on image multi-modality synthesis.
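For intuition, the two quantities at the heart of this abstract, a GLCM and the concordance correlation coefficient, can be sketched in a few lines of numpy. Note this is the plain, non-differentiable GLCM used for evaluation; the paper's training loss requires a differentiable GLCM approximation that is not shown here.

```python
import numpy as np

def glcm(img: np.ndarray, levels: int, offset=(0, 1)) -> np.ndarray:
    """Gray-level co-occurrence matrix for one pixel offset, as a joint probability."""
    dr, dc = offset
    m = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[img[r, c], img[r + dr, c + dc]] += 1  # count the co-occurring pair
    return m / m.sum()

def ccc(x, y) -> float:
    """Concordance correlation coefficient between two feature vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return float(2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2))

# Toy 2-level image: every horizontal neighbour pair is (0, 1)
img = np.array([[0, 1], [0, 1]])
p = glcm(img, levels=2)
```

Texture features such as contrast or correlation are then scalar statistics of `p`, and the CCC compares those statistics between real and synthetic CT across patients.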
Affiliation(s)
- Siqi Yuan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ran Wei
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
7
Emin S, Rossi E, Myrvold Rooth E, Dorniok T, Hedman M, Gagliardi G, Villegas F. Clinical implementation of a commercial synthetic computed tomography solution for radiotherapy treatment of glioblastoma. Phys Imaging Radiat Oncol 2024;30:100589. [PMID: 38818305] [PMCID: PMC11137592] [DOI: 10.1016/j.phro.2024.100589]
Abstract
Background and Purpose The magnetic resonance (MR)-only radiotherapy (RT) workflow eliminates uncertainties due to computed tomography (CT)-MR image registration by using synthetic CT (sCT) images generated from MR. This study describes the clinical implementation process, from the retrospective commissioning to the prospective validation stage, of a commercial artificial intelligence (AI)-based sCT product. An evaluation of the dosimetric performance of the sCT is presented, with emphasis on the impact of voxel size differences between image modalities. Materials and methods sCT performance was assessed in glioblastoma RT planning. Dose differences for 30 patients in both the commissioning and validation cohorts were calculated at various dose-volume histogram (DVH) points for the target and organs at risk (OAR). A gamma analysis was conducted on regridded image plans. Quality assurance (QA) guidelines were established based on commissioning-phase results. Results The mean dose difference to target structures was within ±0.7% regardless of image resolution and cohort. OAR mean dose differences were within ±1.3% for plans calculated on regridded images for both cohorts, while differences were higher for plans with the original voxel size, reaching up to -4.2% for chiasma D2% in the commissioning cohort. Gamma passing rates for the brain structure using the criteria 1%/1 mm, 2%/2 mm, and 3%/3 mm were 93.6%/99.8%/100% and 96.6%/99.9%/100% for the commissioning and validation cohorts, respectively. Conclusions Dosimetric outcomes in both the commissioning and validation stages confirmed the sCT's equivalence to CT. The large patient cohort in this study aided in establishing a robust QA program for the MR-only workflow, now applied in glioblastoma RT at our center.
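DVH points such as D2% (the near-maximum dose, i.e. the minimum dose received by the hottest 2% of a structure's volume) and their sCT-vs-CT percent differences can be computed directly from the voxel doses of each plan. A sketch assuming linear-interpolation percentiles (a generic reading, not the clinical treatment planning system's implementation):

```python
import numpy as np

def dvh_dx(dose_voxels: np.ndarray, x: float) -> float:
    """D_x%: minimum dose received by the hottest x% of the structure volume."""
    return float(np.percentile(dose_voxels, 100.0 - x))

def dose_diff_pct(sct_value: float, ct_value: float) -> float:
    """Percent difference of a DVH point between the sCT and CT plans."""
    return 100.0 * (sct_value - ct_value) / ct_value

ct_doses = np.arange(1.0, 101.0)   # toy voxel doses for one structure
sct_doses = ct_doses * 1.01        # sCT plan uniformly 1% hotter
d2_ct = dvh_dx(ct_doses, 2.0)
diff = dose_diff_pct(dvh_dx(sct_doses, 2.0), d2_ct)
```

With a uniform 1% scaling the D2% difference is exactly 1%, which is the scale of per-structure deviation the study tabulates.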
Affiliation(s)
- Sevgi Emin
- Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Elia Rossi
- Department of Radiation Oncology, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Torsten Dorniok
- Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Mattias Hedman
- Department of Radiation Oncology, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Department of Oncology-Pathology, Karolinska Institute, 171 77 Stockholm, Sweden
- Giovanna Gagliardi
- Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Department of Oncology-Pathology, Karolinska Institute, 171 77 Stockholm, Sweden
- Fernanda Villegas
- Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Department of Oncology-Pathology, Karolinska Institute, 171 77 Stockholm, Sweden
8
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024;4:1385742. [PMID: 38601888] [PMCID: PMC11004271] [DOI: 10.3389/fradi.2024.1385742]
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computed Tomography (sCT). The following categories are presented in this study:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images based on Cone Beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis. This was accomplished by contrasting the provided methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, which revealed that DL-based sCTs have achieved considerable popularity while also showing the potential of this technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
9
Salehjahromi M, Karpinets TV, Sujit SJ, Qayati M, Chen P, Aminu M, Saad MB, Bandyopadhyay R, Hong L, Sheshadri A, Lin J, Antonoff MB, Sepesi B, Ostrin EJ, Toumazis I, Huang P, Cheng C, Cascone T, Vokes NI, Behrens C, Siewerdsen JH, Hazle JD, Chang JY, Zhang J, Lu Y, Godoy MCB, Chung C, Jaffray D, Wistuba I, Lee JJ, Vaporciyan AA, Gibbons DL, Gladish G, Heymach JV, Wu CC, Zhang J, Wu J. Synthetic PET from CT improves diagnosis and prognosis for lung cancer: Proof of concept. Cell Rep Med 2024;5:101463. [PMID: 38471502] [PMCID: PMC10983039] [DOI: 10.1016/j.xcrm.2024.101463]
Abstract
[18F]Fluorodeoxyglucose positron emission tomography (FDG-PET) and computed tomography (CT) are indispensable components in modern medicine. Although PET can provide additional diagnostic value, it is costly and not universally accessible, particularly in low-income countries. To bridge this gap, we have developed a conditional generative adversarial network pipeline that can produce FDG-PET from diagnostic CT scans based on multi-center multi-modal lung cancer datasets (n = 1,478). Synthetic PET images are validated across imaging, biological, and clinical aspects. Radiologists confirm comparable imaging quality and tumor contrast between synthetic and actual PET scans. Radiogenomics analysis further proves that the dysregulated cancer hallmark pathways of synthetic PET are consistent with actual PET. We also demonstrate the clinical values of synthetic PET in improving lung cancer diagnosis, staging, risk prediction, and prognosis. Taken together, this proof-of-concept study testifies to the feasibility of applying deep learning to obtain high-fidelity PET translated from CT.
Affiliation(s)
- Sheeba J Sujit
- Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA
- Mohamed Qayati
- Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA
- Pingjun Chen
- Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA
- Muhammad Aminu
- Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA
- Maliazurina B Saad
- Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA
- Lingzhi Hong
- Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA; Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Ajay Sheshadri
- Department of Pulmonary Medicine, MD Anderson Cancer Center, Houston, TX, USA
- Julie Lin
- Department of Pulmonary Medicine, MD Anderson Cancer Center, Houston, TX, USA
- Mara B Antonoff
- Department of Thoracic and Cardiovascular Surgery, MD Anderson Cancer Center, Houston, TX, USA
- Boris Sepesi
- Department of Thoracic and Cardiovascular Surgery, MD Anderson Cancer Center, Houston, TX, USA
- Edwin J Ostrin
- Department of General Internal Medicine, MD Anderson Cancer Center, Houston, TX, USA
- Iakovos Toumazis
- Department of Health Services Research, MD Anderson Cancer Center, Houston, TX, USA
- Peng Huang
- Department of Oncology, The Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins, Baltimore, MD, USA
- Chao Cheng
- Institute for Clinical and Translational Research, Baylor College of Medicine, Houston, TX, USA
- Tina Cascone
- Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Natalie I Vokes
- Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Carmen Behrens
- Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Jeffrey H Siewerdsen
- Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA; Institute for Data Science in Oncology, MD Anderson Cancer Center, Houston, TX, USA
- John D Hazle
- Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA
- Joe Y Chang
- Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Jianhua Zhang
- Department of Genomic Medicine, MD Anderson Cancer Center, Houston, TX, USA
- Yang Lu
- Department of Nuclear Medicine, MD Anderson Cancer Center, Houston, TX, USA
- Myrna C B Godoy
- Department of Thoracic Imaging, MD Anderson Cancer Center, Houston, TX, USA
- Caroline Chung
- Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, USA; Institute for Data Science in Oncology, MD Anderson Cancer Center, Houston, TX, USA
- David Jaffray
- Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA; Institute for Data Science in Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Ignacio Wistuba
- Department of Translational Molecular Pathology, MD Anderson Cancer Center, Houston, TX, USA
- J Jack Lee
- Department of Biostatistics, MD Anderson Cancer Center, Houston, TX, USA
- Ara A Vaporciyan
- Department of Thoracic and Cardiovascular Surgery, MD Anderson Cancer Center, Houston, TX, USA
- Don L Gibbons
- Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Gregory Gladish
- Department of Thoracic Imaging, MD Anderson Cancer Center, Houston, TX, USA
- John V Heymach
- Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Carol C Wu
- Department of Thoracic Imaging, MD Anderson Cancer Center, Houston, TX, USA
- Jianjun Zhang
- Department of Genomic Medicine, MD Anderson Cancer Center, Houston, TX, USA; Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA; Lung Cancer Genomics Program, MD Anderson Cancer Center, Houston, TX, USA; Lung Cancer Interception Program, MD Anderson Cancer Center, Houston, TX, USA
- Jia Wu
- Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA; Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA; Institute for Data Science in Oncology, MD Anderson Cancer Center, Houston, TX, USA.
10
Posselt C, Avci MY, Yigitsoy M, Schuenke P, Kolbitsch C, Schaeffter T, Remmele S. Simulation of acquisition shifts in T2 weighted fluid-attenuated inversion recovery magnetic resonance images to stress test artificial intelligence segmentation networks. J Med Imaging (Bellingham) 2024;11:024013. [PMID: 38666039] [PMCID: PMC11042016] [DOI: 10.1117/1.jmi.11.2.024013]
Abstract
Purpose To provide a simulation framework for routine neuroimaging test data that allows "stress testing" of deep segmentation networks against acquisition shifts that commonly occur in clinical practice for T2 weighted (T2w) fluid-attenuated inversion recovery magnetic resonance imaging protocols. Approach The approach simulates "acquisition shift derivatives" of MR images based on MR signal equations. Experiments comprise the validation of the simulated images against real MR scans and example stress tests on state-of-the-art multiple sclerosis lesion segmentation networks to explore a generic model function describing the F1 score as a function of the contrast-affecting sequence parameters echo time (TE) and inversion time (TI). Results The differences between real and simulated images range up to 19% in gray and white matter for extreme parameter settings. For the segmentation networks under test, the F1 score dependency on TE and TI is well described by quadratic model functions (R² > 0.9). The coefficients of the model functions indicate that changes in TE have more influence on model performance than changes in TI. Conclusions We show that these deviations are in the range of values that may be caused by erroneous or individual differences in relaxation times as described in the literature. The coefficients of the F1 model function allow a quantitative comparison of the influences of TE and TI. Limitations arise mainly from tissues with a low baseline signal (such as cerebrospinal fluid) and when the protocol contains contrast-affecting measures that cannot be modeled due to missing information in the DICOM header.
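Fitting a quadratic model function F1(TE, TI) and reporting its R² is a small least-squares problem. A sketch on synthetic data (the coefficient values and parameter ranges below are made up for illustration, not taken from the study):

```python
import numpy as np

def fit_quadratic(te, ti, f1):
    """Least-squares fit of f1 ~ a + b*TE + c*TI + d*TE^2 + e*TI^2 + f*TE*TI.

    Returns the six coefficients and the coefficient of determination R^2.
    """
    A = np.column_stack([np.ones_like(te), te, ti, te**2, ti**2, te * ti])
    coef, *_ = np.linalg.lstsq(A, f1, rcond=None)
    pred = A @ coef
    ss_res = np.sum((f1 - pred) ** 2)
    ss_tot = np.sum((f1 - f1.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot

# Toy grid of sequence parameters with a noiseless quadratic F1 response
te, ti = np.meshgrid(np.linspace(80, 120, 5), np.linspace(2000, 3000, 5))
te, ti = te.ravel(), ti.ravel()
f1 = 0.9 - 1e-5 * (te - 100) ** 2 - 1e-8 * (ti - 2500) ** 2
coef, r2 = fit_quadratic(te, ti, f1)
```

Comparing the magnitudes of the fitted TE and TI coefficients (after scaling for the parameter ranges) gives the kind of quantitative TE-vs-TI comparison the paper describes.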
Affiliation(s)
- Christiane Posselt
- University of Applied Sciences, Faculty of Electrical and Industrial Engineering, Landshut, Germany
- Patrick Schuenke
- Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Germany
- Christoph Kolbitsch
- Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Germany
- Tobias Schaeffter
- Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Germany
- Technical University of Berlin, Department of Medical Engineering, Berlin, Germany
- Stefanie Remmele
- University of Applied Sciences, Faculty of Electrical and Industrial Engineering, Landshut, Germany
11
Zhuang Y, Mathai TS, Mukherjee P, Summers RM. Segmentation of pelvic structures in T2 MRI via MR-to-CT synthesis. Comput Med Imaging Graph 2024;112:102335. [PMID: 38271870] [PMCID: PMC10969342] [DOI: 10.1016/j.compmedimag.2024.102335]
Abstract
Segmentation of multiple pelvic structures in MRI volumes is a prerequisite for many clinical applications, such as sarcopenia assessment, bone density measurement, and muscle-to-fat volume ratio estimation. While many CT-specific datasets and automated CT-based multi-structure pelvis segmentation methods exist, there are few MRI-specific multi-structure segmentation methods in the literature. In this pilot work, we propose a lightweight and annotation-free pipeline to synthetically translate T2 MRI volumes of the pelvis to CT, and subsequently leverage an existing CT-only tool called TotalSegmentator to segment 8 pelvic structures in the generated CT volumes. The predicted masks were then mapped back to the original MR volumes as segmentation masks. We compared the predicted masks against the expert annotations of the public TCGA-UCEC dataset and an internal dataset. Experiments demonstrated that the proposed pipeline achieved Dice measures ≥65% for the 8 pelvic structures in T2 MRI. The proposed pipeline is an alternative method to obtain multi-organ and multi-structure segmentations without being encumbered by time-consuming manual annotations. By exploiting the significant research progress in CT, it is possible in principle to extend the proposed pipeline to other MRI sequences. Our research bridges the chasm between current CT-based multi-structure segmentation and MRI-based segmentation. The manually segmented structures in the TCGA-UCEC dataset are publicly available.
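The Dice measure used above to score the mapped-back masks against expert annotations is the standard overlap coefficient. A minimal sketch (a generic implementation, not the authors' evaluation code):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1.0 if both are empty)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy masks: two 4x4 squares offset by one voxel, overlapping in a 3x3 region
pred = np.zeros((8, 8), bool)
pred[2:6, 2:6] = True   # 16 voxels
gt = np.zeros((8, 8), bool)
gt[3:7, 3:7] = True     # 16 voxels, 9 shared with pred
score = dice(pred, gt)  # 2 * 9 / 32 = 0.5625
```

Applied per structure and averaged over patients, this yields the per-structure Dice values (≥65%) the pilot study reports.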
Affiliation(s)
- Yan Zhuang: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, 10 Center Dr, Bethesda, MD 20892, USA
- Tejas Sudharshan Mathai: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, 10 Center Dr, Bethesda, MD 20892, USA
- Pritam Mukherjee: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, 10 Center Dr, Bethesda, MD 20892, USA
- Ronald M Summers: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, 10 Center Dr, Bethesda, MD 20892, USA

12
Li X, Johnson JM, Strigel RM, Bancroft LCH, Hurley SA, Estakhraji SIZ, Kumar M, Fowler AM, McMillan AB. Attenuation correction and truncation completion for breast PET/MR imaging using deep learning. Phys Med Biol 2024; 69:045031. [PMID: 38252969; DOI: 10.1088/1361-6560/ad2126]
Abstract
Objective. Simultaneous PET/MR scanners combine the high sensitivity of MR imaging with the functional imaging of PET. However, attenuation correction of breast PET/MR imaging is technically challenging. The purpose of this study is to establish a robust attenuation correction algorithm for breast PET/MR images that relies on deep learning (DL) to recreate the missing portions of the patient's anatomy (truncation completion), as well as to provide bone information for attenuation correction from only the PET data. Approach. Data acquired from 23 female subjects with invasive breast cancer scanned with 18F-fluorodeoxyglucose PET/CT and PET/MR localized to the breast region were used for this study. Three DL models, a U-Net with mean absolute error loss (DLMAE), a U-Net with mean squared error loss (DLMSE), and a U-Net with perceptual loss (DLPerceptual), were trained to predict synthetic CT images (sCT) for PET attenuation correction (AC) given non-attenuation-corrected (NAC) PET images as inputs. The DL and Dixon-based sCT reconstructed PET images were compared against those reconstructed from CT images by calculating the percent error of the standardized uptake value (SUV) and conducting Wilcoxon signed-rank statistical tests. Main results. sCT images from the DLMAE, DLMSE, and DLPerceptual models were similar in mean absolute error (MAE), peak signal-to-noise ratio, and normalized cross-correlation. No significant difference in SUV was found between the PET images reconstructed using the DLMSE and DLPerceptual sCTs compared to the reference CT for AC in all tissue regions. All DL methods performed better than the Dixon-based method according to SUV analysis. Significance. A 3D U-Net with MSE or perceptual loss can be implemented into a reconstruction workflow, and the derived sCT images allow successful truncation completion and attenuation correction for breast PET/MR images.
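The three training objectives above differ only in how the residual between the synthetic and reference CT is penalized. A minimal sketch of the pixelwise MAE and MSE losses (pure Python, illustrative; the perceptual loss, which instead compares pretrained-network feature maps, is only noted in a comment):

```python
def mae_loss(pred, target):
    """Mean absolute error between two images given as flat intensity lists."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def mse_loss(pred, target):
    """Mean squared error; penalizes large residuals more strongly than MAE."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

# A perceptual loss would instead compare feature maps of a pretrained
# network applied to pred and target; that requires model weights and is
# omitted from this sketch.

sct, ct = [0.2, 0.5, 0.9], [0.1, 0.5, 0.7]  # toy intensities
print(mae_loss(sct, ct))
print(mse_loss(sct, ct))
```

In training, the chosen loss is minimized over (NAC PET, reference CT) pairs; the squared penalty of MSE tends to smooth outputs, while perceptual losses preserve texture.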
Affiliation(s)
- Xue Li: Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI, USA; Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- Jacob M Johnson: Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- Roberta M Strigel: Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA; Department of Medical Physics, University of Wisconsin, Madison, WI, USA; University of Wisconsin Carbone Cancer Center, Madison, WI, USA
- Leah C Henze Bancroft: Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- Samuel A Hurley: Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- S Iman Zare Estakhraji: Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- Manoj Kumar: Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA; ICTR Graduate Program in Clinical Investigation, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- Amy M Fowler: Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA; Department of Medical Physics, University of Wisconsin, Madison, WI, USA; University of Wisconsin Carbone Cancer Center, Madison, WI, USA
- Alan B McMillan: Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI, USA; Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA; Department of Medical Physics, University of Wisconsin, Madison, WI, USA; University of Wisconsin Carbone Cancer Center, Madison, WI, USA

13
Tsuchiya N, Kimura K, Tateishi U, Watabe T, Hatano K, Uemura M, Nonomura N, Shimizu A. Detection support of lesions in patients with prostate cancer using [Formula: see text]-PSMA 1007 PET/CT. Int J Comput Assist Radiol Surg 2024. [PMID: 38329565; DOI: 10.1007/s11548-024-03067-5]
Abstract
PURPOSE This study proposes a detection support system for primary and metastatic lesions of prostate cancer using [Formula: see text]-PSMA 1007 positron emission tomography/computed tomography (PET/CT) images together with non-image information, including patient metadata and the location of the input slice image. METHODS A convolutional neural network with condition generators and feature-wise linear modulation (FiLM) layers was employed to allow input of not only PET/CT images but also non-image information, namely the Gleason score, a flag for pre- or post-prostatectomy, and the normalized z-coordinate of the input slice. We explored the insertion position of the FiLM layers to optimize the conditioning of the network on the non-image information. RESULTS [Formula: see text]-PSMA 1007 PET/CT images were collected from 163 patients with prostate cancer and applied to the proposed system in a threefold cross-validation manner to evaluate performance. The proposed system achieved a Dice score of 0.5732 (per case) and a sensitivity of 0.8200 (per lesion), which are 3.87 and 4.16 points higher, respectively, than those of the network without non-image information. CONCLUSION This study demonstrated the effectiveness of using non-image information, including patient metadata and the location of the input slice image, in the detection of prostate cancer from [Formula: see text]-PSMA 1007 PET/CT images. Improving sensitivity to inactive and small lesions remains a future challenge.
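FiLM conditioning, as described above, scales and shifts intermediate feature maps with parameters generated from the non-image inputs. A toy sketch of the mechanism (the condition generator here is a fixed illustrative map, not the paper's trained network, and the metadata values are hypothetical):

```python
def film(features, gamma, beta):
    """Feature-wise linear modulation: scale and shift each channel.

    features: per-channel feature values
    gamma, beta: per-channel scale/shift produced by a condition generator
    from non-image inputs (e.g. Gleason score, pre/post-prostatectomy flag,
    normalized slice z-coordinate).
    """
    return [g * f + b for f, g, b in zip(features, gamma, beta)]

def condition_generator(metadata, n_channels):
    """Toy stand-in for the learned generator mapping metadata to (gamma, beta).

    In the paper this is a trained sub-network; here it is a fixed map
    chosen purely for illustration.
    """
    s = sum(metadata)
    gamma = [1.0 + 0.1 * s] * n_channels
    beta = [0.01 * s] * n_channels
    return gamma, beta

gamma, beta = condition_generator([7.0, 1.0, 0.5], n_channels=3)  # hypothetical metadata
print(film([0.2, -0.4, 1.0], gamma, beta))
```

Because FiLM only rescales features, it can be inserted at different depths of the network, which is exactly the design choice the authors explore.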
Affiliation(s)
- Naoki Tsuchiya: Institute of Engineering, Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan
- Koichiro Kimura: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo City, Tokyo, Japan
- Ukihide Tateishi: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo City, Tokyo, Japan
- Tadashi Watabe: Department of Nuclear Medicine and Tracer Kinetics, Graduate School of Medicine, Osaka University, Osaka, Japan
- Koji Hatano: Department of Urology, Graduate School of Medicine, Osaka University, Osaka, Japan
- Motohide Uemura: Department of Urology, Graduate School of Medicine, Osaka University, Osaka, Japan; Department of Urology, Fukushima Medical University School of Medicine, Fukushima, Japan
- Norio Nonomura: Department of Urology, Graduate School of Medicine, Osaka University, Osaka, Japan
- Akinobu Shimizu: Institute of Engineering, Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan

14
Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. [PMID: 38052145; DOI: 10.1016/j.media.2023.103046]
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Sergio Uribe: Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang: Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia

15
Masad IS, Abu-Qasmieh IF, Al-Quran HH, Alawneh KZ, Abdalla KM, Al-Qudah AM. CT-based generation of synthetic-pseudo MR images with different weightings for human knee. Comput Biol Med 2024; 169:107842. [PMID: 38096761; DOI: 10.1016/j.compbiomed.2023.107842]
Abstract
Synthetic MR images are generated for their high soft-tissue contrast while avoiding the discomfort of long acquisition times and of placing claustrophobic patients in the MR scanner's confined space. The aim of this study is to generate synthetic pseudo-MR images from a real CT image for the knee region in vivo. 19 healthy subjects were scanned for model training, while 13 other healthy subjects were imaged for testing. The approach used in this work is novel in that registration was performed between the MR and CT images, and the femur bone, patella, and surrounding soft tissue were segmented on the CT image. Each tissue type was mapped to the mean and standard deviation of the CT# within a window moving over each pixel of the reconstructed CT images, which enabled remapping of the tissue to its intrinsic MRI parameters: T1, T2, and proton density (ρ). To generate the synthetic MR image of a knee slice, a classic spin-echo sequence was simulated using appropriate intrinsic and contrast parameters. Results showed that the synthetic MR images were comparable to real images acquired with the same TE and TR values: the average slope between them (across all knee segments) was 0.98, while the average percentage root mean square difference (PRD) was 25.7%. In conclusion, this study has shown the feasibility and validity of generating synthetic MR images of the knee region in vivo with different weightings from a single real CT image.
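The classic spin-echo simulation referenced above follows the standard signal equation S = ρ·(1 − e^(−TR/T1))·e^(−TE/T2), where TR and TE set the contrast weighting. A minimal sketch (parameter values are illustrative, not the paper's):

```python
import math

def spin_echo_signal(rho, t1, t2, tr, te):
    """Classic spin-echo signal model: S = rho * (1 - exp(-TR/T1)) * exp(-TE/T2).

    rho: proton density; t1, t2: tissue relaxation times; tr, te: sequence
    contrast parameters (all times in the same unit, e.g. ms).
    """
    return rho * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

# Illustrative values only: a soft-tissue-like voxel under T1-weighted
# contrast (short TR, short TE).
print(spin_echo_signal(rho=1.0, t1=1000.0, t2=40.0, tr=500.0, te=15.0))
```

Short TR/TE emphasizes T1 differences, while long TR with long TE yields T2 weighting, which is how a single (T1, T2, ρ) map can produce multiple synthetic weightings.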
Affiliation(s)
- Ihssan S Masad: Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid, 21163, Jordan
- Isam F Abu-Qasmieh: Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid, 21163, Jordan
- Hiam H Al-Quran: Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid, 21163, Jordan
- Khaled Z Alawneh: Department of Diagnostic Radiology, Faculty of Medicine, Jordan University of Science and Technology, Irbid, 22110, Jordan; King Abdullah University Hospital, Irbid, 22110, Jordan
- Khalid M Abdalla: Department of Diagnostic Radiology, Faculty of Medicine, Jordan University of Science and Technology, Irbid, 22110, Jordan
- Ali M Al-Qudah: Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid, 21163, Jordan

16
Hu T, Li B, Yang J, Zhang B, Fang L, Liu Y, Xiao P, Xie Q. Application of geometric shape-based CT field-of-view extension algorithms in an all-digital positron emission tomography/computed tomography system. Med Phys 2024; 51:1034-1046. [PMID: 38103259; DOI: 10.1002/mp.16888]
Abstract
BACKGROUND Computed tomography (CT)-based positron emission tomography (PET) attenuation correction (AC) is a commonly used method in PET AC. However, the CT truncation caused by the subject's limbs outside the CT field-of-view (FOV) leads to errors in PET AC. PURPOSE In order to enhance the quantitative accuracy of PET imaging in the all-digital DigitMI 930 PET/CT system, we assessed the impact of FOV truncation on its image quality and investigated the effectiveness of geometric shape-based FOV extension algorithms in this system. METHODS We implemented two geometric shape-based FOV extension algorithms. By setting the data from different numbers of detector channels on either side of the sinogram to zero, we simulated various levels of truncation. Specific regions of interest (ROI) were selected, and the mean values of these ROIs were calculated to visually compare the differences between truncated CT, CT extended using the FOV extension algorithms, and the original CT. Furthermore, we conducted statistical analyses on the mean and standard deviation of residual maps between truncated/extended CT and the original CT at different levels of truncation. Subsequently, similar data processing was applied to PET images corrected using original CT and those corrected using simulated truncated and extended CT images. This allowed us to evaluate the influence of FOV truncation on the images produced by the DigitMI 930 PET/CT system and assess the effectiveness of the FOV extension algorithms. RESULTS Truncation caused bright artifacts at the CT FOV edge and a slight increase in pixel values within the FOV. When using truncated CT data for PET AC, the PET activity outside the CT FOV decreased, while the extension algorithm effectively reduced these effects. Patient data showed that the activity within the CT FOV decreased by 60% in the truncated image compared to the base image, but this number could be reduced to at least 17.3% after extension. 
CONCLUSION The two geometric shape-based algorithms effectively eliminate CT truncation artifacts and restore the true distribution of CT shape and PET emission data outside the FOV in the all-digital DigitMI 930 PET/CT system. These two algorithms can be used as basic solutions for CT FOV extension in all-digital PET/CT systems.
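The truncation simulation described above, setting the data from a number of detector channels on either side of the sinogram to zero, can be sketched as follows (toy array sizes, illustrative only):

```python
def truncate_sinogram(sinogram, n_channels):
    """Zero the outermost n_channels detector columns on each side of a
    sinogram (rows = projection angles, columns = detector channels) to
    simulate a limited CT field of view."""
    out = [row[:] for row in sinogram]  # copy, leave the input intact
    for row in out:
        for i in range(n_channels):
            row[i] = 0.0
            row[-1 - i] = 0.0
    return out

sino = [[1.0] * 8 for _ in range(4)]  # toy sinogram: 4 angles, 8 channels
trunc = truncate_sinogram(sino, n_channels=2)
print(trunc[0])  # [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0]
```

Reconstructing from the zeroed sinogram reproduces the edge artifacts the study analyzes; the extension algorithms then estimate the missing projection data before reconstruction.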
Affiliation(s)
- Tianjiao Hu: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
- Bingxuan Li: Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
- Jigang Yang: Nuclear Medicine Department, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Bo Zhang: Biomedical Engineering Department, Huazhong University of Science and Technology, Wuhan, China
- Lei Fang: Biomedical Engineering Department, Huazhong University of Science and Technology, Wuhan, China
- Yuqing Liu: Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
- Peng Xiao: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China; Biomedical Engineering Department, Huazhong University of Science and Technology, Wuhan, China
- Qingguo Xie: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China; Biomedical Engineering Department, Huazhong University of Science and Technology, Wuhan, China

17
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024; 25:e14155. [PMID: 37712893; PMCID: PMC10860468; DOI: 10.1002/acm2.14155]
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies published on or before December 31, 2022, and categorize them into the areas of image segmentation, image synthesis, radiomics, and real-time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and current challenges in facilitating small tumor segmentation, deriving accurate x-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight recent trends in deep learning such as the emergence of multi-modal, visual transformer, and diffusion models.
Affiliation(s)
- Zach Eidex: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Yifu Ding: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Elham Abouei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L. J. Qiu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu: Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tonghe Wang: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA

18
Rudroff T. Artificial Intelligence's Transformative Role in Illuminating Brain Function in Long COVID Patients Using PET/FDG. Brain Sci 2024; 14:73. [PMID: 38248288; PMCID: PMC10813353; DOI: 10.3390/brainsci14010073]
Abstract
Cutting-edge brain imaging techniques, particularly positron emission tomography with Fluorodeoxyglucose (PET/FDG), are being used in conjunction with Artificial Intelligence (AI) to shed light on the neurological symptoms associated with Long COVID. AI, particularly deep learning algorithms such as convolutional neural networks (CNN) and generative adversarial networks (GAN), plays a transformative role in analyzing PET scans, identifying subtle metabolic changes, and offering a more comprehensive understanding of Long COVID's impact on the brain. It aids in early detection of abnormal brain metabolism patterns, enabling personalized treatment plans. Moreover, AI assists in predicting the progression of neurological symptoms, refining patient care, and accelerating Long COVID research. It can uncover new insights, identify biomarkers, and streamline drug discovery. Additionally, the application of AI extends to non-invasive brain stimulation techniques, such as transcranial direct current stimulation (tDCS), which have shown promise in alleviating Long COVID symptoms. AI can optimize treatment protocols by analyzing neuroimaging data, predicting individual responses, and automating adjustments in real time. While the potential benefits are vast, ethical considerations and data privacy must be rigorously addressed. The synergy of AI and PET scans in Long COVID research offers hope in understanding and mitigating the complexities of this condition.
Affiliation(s)
- Thorsten Rudroff: Department of Health and Human Physiology, University of Iowa, Iowa City, IA 52242, USA; Department of Neurology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA

19
Rambojun AM, Komber H, Rossdale J, Suntharalingam J, Rodrigues JCL, Ehrhardt MJ, Repetti A. Uncertainty quantification in computed tomography pulmonary angiography. PNAS Nexus 2024; 3:pgad404. [PMID: 38737009; PMCID: PMC11087828; DOI: 10.1093/pnasnexus/pgad404]
Abstract
Computed tomography (CT) imaging of the thorax is widely used for the detection and monitoring of pulmonary embolism (PE). However, CT images can contain artifacts due to the acquisition or the processes involved in image reconstruction. Radiologists often have to distinguish between such artifacts and actual PEs. We provide a proof of concept in the form of a scalable hypothesis-testing method for CT, to enable quantifying the uncertainty of possible PEs. In particular, we introduce a Bayesian framework to quantify the uncertainty of an observed compact structure that can be identified as a PE. We assess the ability of the method to operate under high-noise conditions and with insufficient data.
Affiliation(s)
- Adwaye M Rambojun: Department of Mathematical Sciences, University of Bath, Bath BA2 7JU, UK
- Jay Suntharalingam: Royal United Hospital, Bath BA1 3NG, UK; Department of Life Sciences, University of Bath, Bath BA2 7JU, UK
- Audrey Repetti: School of Engineering and Physical Sciences, School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK; Maxwell Institute for Mathematical Sciences, Edinburgh EH8 9BT, UK

20
Lucas A, Campbell Arnold T, Okar SV, Vadali C, Kawatra KD, Ren Z, Cao Q, Shinohara RT, Schindler MK, Davis KA, Litt B, Reich DS, Stein JM. Multi-contrast high-field quality image synthesis for portable low-field MRI using generative adversarial networks and paired data. medRxiv [preprint] 2023:2023.12.28.23300409. [PMID: 38234785; PMCID: PMC10793526; DOI: 10.1101/2023.12.28.23300409]
Abstract
Introduction. Portable low-field-strength (64 mT) MRI scanners promise to increase access to neuroimaging for clinical and research purposes; however, these devices produce lower-quality images than high-field scanners. In this study, we developed and evaluated a deep learning architecture to generate high-field-quality brain images from low-field inputs using a paired dataset of multiple sclerosis (MS) patients scanned at 64 mT and 3T. Methods. A total of 49 MS patients were scanned on portable 64 mT and standard 3T scanners at Penn (n=25) or the National Institutes of Health (NIH, n=24) with T1-weighted, T2-weighted, and FLAIR acquisitions. Using this paired data, we developed a generative adversarial network (GAN) architecture for low- to high-field image translation (LowGAN). We then evaluated the synthesized images with respect to image quality, brain morphometry, and white matter lesions. Results. Synthetic high-field images demonstrated visually superior quality compared to low-field inputs and significantly higher normalized cross-correlation (NCC) to actual high-field images for T1 (p=0.001) and FLAIR (p<0.001) contrasts. LowGAN generally outperformed the current state of the art for low-field volumetrics. For example, thalamic, lateral ventricle, and total cortical volumes in LowGAN outputs did not differ significantly from 3T measurements. Synthetic outputs preserved MS lesions and captured a known inverse relationship between total lesion volume and thalamic volume. Conclusions. LowGAN generates synthetic high-field images with visual and quantitative quality comparable to actual high-field scans. Enhancing portable MRI image quality could add value and boost clinician confidence, enabling wider adoption of this technology.
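The NCC metric used above to compare synthetic and actual high-field images measures linear agreement between intensity patterns. A minimal sketch of zero-mean normalized cross-correlation (pure Python; intensity values are illustrative, not study data):

```python
import math

def normalized_cross_correlation(a, b):
    """Zero-mean normalized cross-correlation between two images given as
    flat intensity lists; 1.0 indicates perfect linear agreement."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    den_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return num / (den_a * den_b)

low = [0.1, 0.4, 0.7, 0.9]
high = [0.15, 0.42, 0.71, 0.95]  # hypothetical paired intensities
print(normalized_cross_correlation(low, high))
```

Because NCC is invariant to global intensity scaling and offset, it rewards matching structure rather than matching absolute brightness, which suits cross-field-strength comparisons.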
Affiliation(s)
- Alfredo Lucas: Perelman School of Medicine, University of Pennsylvania; Center for Neuroengineering and Therapeutics, Departments of Bioengineering and Neurology, University of Pennsylvania
- T Campbell Arnold: Center for Neuroengineering and Therapeutics, Departments of Bioengineering and Neurology, University of Pennsylvania
- Serhat V Okar: National Institute of Neurological Disorders and Stroke, National Institutes of Health
- Chetan Vadali: Center for Neuroengineering and Therapeutics, Departments of Bioengineering and Neurology, University of Pennsylvania; Department of Radiology, University of Pennsylvania
- Karan D Kawatra: National Institute of Neurological Disorders and Stroke, National Institutes of Health
- Zheng Ren: Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania
- Quy Cao: Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania
- Russell T Shinohara: Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania; Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania
- Matthew K Schindler: Perelman School of Medicine, University of Pennsylvania; Department of Neurology, University of Pennsylvania
- Kathryn A Davis: Perelman School of Medicine, University of Pennsylvania; Center for Neuroengineering and Therapeutics, Departments of Bioengineering and Neurology, University of Pennsylvania; Department of Neurology, University of Pennsylvania
- Brian Litt: Perelman School of Medicine, University of Pennsylvania; Center for Neuroengineering and Therapeutics, Departments of Bioengineering and Neurology, University of Pennsylvania; Department of Neurology, University of Pennsylvania
- Daniel S Reich: National Institute of Neurological Disorders and Stroke, National Institutes of Health
- Joel M Stein: Perelman School of Medicine, University of Pennsylvania; Center for Neuroengineering and Therapeutics, Departments of Bioengineering and Neurology, University of Pennsylvania; Department of Radiology, University of Pennsylvania

21
Tong MW, Tolpadi AA, Bhattacharjee R, Han M, Majumdar S, Pedoia V. Synthetic Knee MRI T1ρ Maps as an Avenue for Clinical Translation of Quantitative Osteoarthritis Biomarkers. Bioengineering (Basel) 2023; 11:17. [PMID: 38247894; PMCID: PMC10812962; DOI: 10.3390/bioengineering11010017]
Abstract
A 2D U-Net was trained to generate synthetic T1ρ maps from T2 maps for knee MRI to explore the feasibility of domain adaptation for enriching existing datasets and enabling rapid, reliable image reconstruction. The network was developed using 509 healthy contralateral and injured ipsilateral knee images from patients with ACL injuries and reconstruction surgeries acquired across three institutions. Network generalizability was evaluated on 343 knees acquired in a clinical setting and 46 knees from simultaneous bilateral acquisition in a research setting. The deep neural network synthesized high-fidelity reconstructions of T1ρ maps, preserving textures and local T1ρ elevation patterns in cartilage with a normalized mean square error of 2.4% and a Pearson's correlation coefficient of 0.93. Analysis of reconstructed T1ρ maps within cartilage compartments revealed minimal bias (-0.10 ms), tight limits of agreement, and a quantification error (5.7%) below the threshold for clinically significant change (6.42%) associated with osteoarthritis. In an out-of-distribution external test set, synthetic maps preserved T1ρ textures but exhibited increased bias and wider limits of agreement. This study demonstrates the capability of image synthesis to reduce acquisition time and derive meaningful information from existing datasets, and suggests a pathway for standardizing T1ρ as a quantitative biomarker for osteoarthritis.
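The bias and limits-of-agreement figures above come from a Bland-Altman-style analysis of paired measurements. A minimal sketch with hypothetical paired T1ρ values (not data from the paper):

```python
import math

def bland_altman(measured, reference):
    """Bias and 95% limits of agreement between paired measurements
    (e.g. synthetic vs. acquired T1rho per cartilage compartment)."""
    diffs = [m - r for m, r in zip(measured, reference)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))  # sample SD
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired T1rho values (ms), chosen for illustration only
synthetic = [38.2, 41.0, 44.5, 39.8]
acquired = [38.5, 41.2, 44.1, 40.1]
bias, (lo, hi) = bland_altman(synthetic, acquired)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```

A bias near zero with narrow limits, as reported in-distribution, indicates the synthetic maps can substitute for acquired ones within the stated clinical-change threshold.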
Affiliation(s)
- Michelle W. Tong
  - Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94143, USA
  - Department of Bioengineering, University of California Berkeley, Berkeley, CA 94720, USA
- Aniket A. Tolpadi
  - Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94143, USA
  - Department of Bioengineering, University of California Berkeley, Berkeley, CA 94720, USA
- Rupsa Bhattacharjee
  - Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94143, USA
- Misung Han
  - Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94143, USA
- Sharmila Majumdar
  - Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94143, USA
- Valentina Pedoia
  - Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94143, USA
22
|
Pinto-Coelho L. How Artificial Intelligence Is Shaping Medical Imaging Technology: A Survey of Innovations and Applications. Bioengineering (Basel) 2023; 10:1435. [PMID: 38136026 PMCID: PMC10740686 DOI: 10.3390/bioengineering10121435] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2023] [Revised: 12/12/2023] [Accepted: 12/15/2023] [Indexed: 12/24/2023] Open
Abstract
The integration of artificial intelligence (AI) into medical imaging has ushered in an era of transformation in healthcare. This literature review explores the latest innovations and applications of AI in the field, highlighting its profound impact on medical diagnosis and patient care. The innovation segment covers cutting-edge developments in AI, such as deep learning algorithms, convolutional neural networks, and generative adversarial networks, which have significantly improved the accuracy and efficiency of medical image analysis. These innovations have enabled rapid and accurate detection of abnormalities, from identifying tumors during radiological examinations to detecting early signs of eye disease in retinal images. The article also surveys applications of AI in medical imaging across radiology, pathology, cardiology, and other specialties. AI-based diagnostic tools not only speed up the interpretation of complex images but also improve early detection of disease, ultimately delivering better outcomes for patients. Additionally, AI-based image processing facilitates personalized treatment plans, thereby optimizing healthcare delivery. This review underscores the paradigm shift that AI has brought to medical imaging and its role in revolutionizing diagnosis and patient care. Given the combination of cutting-edge AI techniques and their practical applications, it is clear that AI will continue shaping the future of healthcare in profound and positive ways.
Affiliation(s)
- Luís Pinto-Coelho
  - ISEP—School of Engineering, Polytechnic Institute of Porto, 4200-465 Porto, Portugal
  - INESCTEC, Campus of the Engineering Faculty of the University of Porto, 4200-465 Porto, Portugal
23
|
Schaudt D, Späte C, von Schwerin R, Reichert M, von Schwerin M, Beer M, Kloth C. A Critical Assessment of Generative Models for Synthetic Data Augmentation on Limited Pneumonia X-ray Data. Bioengineering (Basel) 2023; 10:1421. [PMID: 38136012 PMCID: PMC10741143 DOI: 10.3390/bioengineering10121421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2023] [Revised: 11/28/2023] [Accepted: 12/12/2023] [Indexed: 12/24/2023] Open
Abstract
In medical imaging, deep learning models serve as invaluable tools for expediting diagnoses and aiding specialized medical professionals in making clinical decisions. However, effectively training deep learning models typically necessitates substantial quantities of high-quality data, a resource often lacking in numerous medical imaging scenarios. One way to overcome this deficiency is to artificially generate such images. Therefore, in this comparative study we train five generative models to artificially increase the amount of available data in such a scenario. This synthetic data approach is evaluated on a downstream classification task, predicting four causes of pneumonia as well as healthy cases on 1082 chest X-ray images. Quantitative and medical assessments show that a Generative Adversarial Network (GAN)-based approach significantly outperforms more recent diffusion-based approaches on this limited dataset, with better image quality and pathological plausibility. Surprisingly, by evaluating five different classification models and varying the amount of additional training data, we show that better image quality does not translate to improved classification performance. Class-specific metrics like precision, recall, and F1-score show a substantial improvement from using synthetic images, emphasizing the data rebalancing effect for less frequent classes. However, overall performance does not improve for most models and configurations, except for a DreamBooth approach which shows a +0.52 improvement in overall accuracy. The large variance in performance impact in this study warrants careful consideration before utilizing generative models in limited-data scenarios, especially given the unexpected negative correlation between image quality and downstream classification improvement.
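The class-specific metrics discussed above (precision, recall, F1-score) can be sketched in plain Python; the pneumonia class labels below are invented for illustration and do not reflect the study's actual data:

```python
def per_class_prf(y_true, y_pred, labels):
    """Per-class precision, recall, and F1 from parallel label lists."""
    stats = {}
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        stats[c] = {"precision": precision, "recall": recall, "f1": f1}
    return stats

# Toy example: a rare class ("viral") of the kind synthetic augmentation might rebalance.
y_true = ["bacterial", "viral", "healthy", "viral", "bacterial", "healthy"]
y_pred = ["bacterial", "viral", "healthy", "bacterial", "bacterial", "healthy"]
scores = per_class_prf(y_true, y_pred, ["bacterial", "viral", "healthy"])
```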
Affiliation(s)
- Daniel Schaudt
  - Institute of Databases and Information Systems, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
- Christian Späte
  - DASU Transferzentrum für Digitalisierung, Analytics und Data Science Ulm, Olgastraße 94, 89073 Ulm, Germany
- Reinhold von Schwerin
  - Department of Computer Science, Ulm University of Applied Science, Albert–Einstein–Allee 55, 89081 Ulm, Germany
- Manfred Reichert
  - Institute of Databases and Information Systems, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
- Marianne von Schwerin
  - Department of Computer Science, Ulm University of Applied Science, Albert–Einstein–Allee 55, 89081 Ulm, Germany
- Meinrad Beer
  - Department of Radiology, University Hospital of Ulm, Albert–Einstein–Allee 23, 89081 Ulm, Germany
- Christopher Kloth
  - Department of Radiology, University Hospital of Ulm, Albert–Einstein–Allee 23, 89081 Ulm, Germany
24
|
Liu Z, Wang B, Ye H, Liu H. Prior information-guided reconstruction network for positron emission tomography images. Quant Imaging Med Surg 2023; 13:8230-8246. [PMID: 38106321 PMCID: PMC10722030 DOI: 10.21037/qims-23-579] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Accepted: 10/07/2023] [Indexed: 12/19/2023]
Abstract
Background Deep learning has recently shown great potential in medical image reconstruction tasks. For positron emission tomography (PET) images, direct reconstruction from raw data to radioactivity images using deep learning without any constraint may lead to the production of nonexistent structures. The aim of this study was to develop and test a flexible deep learning-based reconstruction network guided by any form of prior knowledge to achieve high-quality, highly reliable reconstruction. Methods We developed a novel prior information-guided reconstruction network (PIGRN) with a dual-channel generator and a 2-scale discriminator based on a conditional generative adversarial network (cGAN). Besides the raw data channel, an additional channel is provided in the generator for prior information (PI) to guide the training phase. The PI can be reconstructed images obtained via conventional methods, nuclear medical images from other modalities, attenuation correction maps from time-of-flight-PET (TOF-PET) data, or any other physical parameters. For this study, the reconstructed images generated by filtered back projection (FBP) were chosen as the input of the additional channel. To improve image quality, a 2-scale discriminator was adopted, which attends to both coarse and fine details of the reconstructed images. Experiments were carried out on both a simulation dataset and a real Sprague Dawley (SD) rat dataset. Results Two classic deep learning-based reconstruction networks, U-Net and Deep-PET, were compared in our study. Compared with these two methods, our method provided much higher quality PET image reconstruction on the simulation dataset. The peak signal-to-noise ratio (PSNR) value reached 31.8498, and the structural similarity index measure (SSIM) value reached 0.9754. The real study on SD rats indicated that the proposed network also has strong generalization ability.
Conclusions The flexible PIGRN based on cGAN for PET images combines both raw data and PI. The results of comparison and generalization experiments on the simulation and SD rat datasets demonstrated that the proposed PIGRN improves image quality and generalizes well.
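The reported PSNR follows the usual definition; a minimal NumPy sketch, assuming (as one common convention) that the data range is taken from the reference image when no fixed range is specified:

```python
import numpy as np

def psnr(reference, estimate, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimate, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()  # fallback when no fixed range is known
    mse = np.mean((ref - est) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```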
Affiliation(s)
- Zhiyuan Liu
  - State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Bo Wang
  - State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Huihui Ye
  - State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
  - Jiaxing Key Laboratory of Photonic Sensing & Intelligent Imaging, Jiaxing, China
  - Intelligent Optics & Photonics Research Center, Jiaxing Research Institute Zhejiang University, Jiaxing, China
- Huafeng Liu
  - State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
  - Jiaxing Key Laboratory of Photonic Sensing & Intelligent Imaging, Jiaxing, China
  - Intelligent Optics & Photonics Research Center, Jiaxing Research Institute Zhejiang University, Jiaxing, China
25
|
Honkamaa J, Khan U, Koivukoski S, Valkonen M, Latonen L, Ruusuvuori P, Marttinen P. Deformation equivariant cross-modality image synthesis with paired non-aligned training data. Med Image Anal 2023; 90:102940. [PMID: 37666115 DOI: 10.1016/j.media.2023.102940] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 08/14/2023] [Accepted: 08/18/2023] [Indexed: 09/06/2023]
Abstract
Cross-modality image synthesis is an active research topic with multiple clinically relevant medical applications. Recently, methods allowing training with paired but misaligned data have started to emerge. However, no robust and well-performing methods applicable to a wide range of real-world datasets exist. In this work, we propose a generic solution to the problem of cross-modality image synthesis with paired but non-aligned data by introducing new loss functions that encourage deformation equivariance. The method consists of joint training of an image synthesis network together with separate registration networks, and it allows adversarial training conditioned on the input even with misaligned data. The work lowers the bar for new clinical applications by allowing effortless training of cross-modality image synthesis networks for more difficult datasets.
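A deformation-equivariance-encouraging loss of the general shape described above penalizes a synthesis network for failing to commute with spatial deformations. The sketch below illustrates that idea only; the toy "networks" and the flip used as a stand-in deformation are invented for demonstration and are not the paper's actual loss functions:

```python
import numpy as np

def equivariance_penalty(synth, deform, image):
    """Mean squared difference between synth(deform(x)) and deform(synth(x)).
    A loss term of this shape encourages a synthesis network to commute
    with spatial deformations."""
    a = synth(deform(image))
    b = deform(synth(image))
    return float(np.mean((a - b) ** 2))

img = np.arange(16.0).reshape(4, 4)
flip = lambda x: np.flip(x, axis=0)            # stand-in for a sampled deformation

# A pointwise intensity mapping commutes with any spatial deformation...
pointwise = lambda x: 2.0 * x + 1.0
# ...while a spatially varying operation generally does not.
weights = np.linspace(0.0, 1.0, 4)[:, None]
patchy = lambda x: x * weights
```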
Affiliation(s)
- Joel Honkamaa
  - Department of Computer Science, Aalto University, Finland
- Umair Khan
  - Institute of Biomedicine, University of Turku, Finland
- Sonja Koivukoski
  - Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Mira Valkonen
  - Faculty of Medicine and Health Technology, Tampere University, Finland
- Leena Latonen
  - Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Pekka Ruusuvuori
  - Institute of Biomedicine, University of Turku, Finland
  - Faculty of Medicine and Health Technology, Tampere University, Finland
26
|
Graf R, Schmitt J, Schlaeger S, Möller HK, Sideri-Lampretsa V, Sekuboyina A, Krieg SM, Wiestler B, Menze B, Rueckert D, Kirschke JS. Denoising diffusion-based MRI to CT image translation enables automated spinal segmentation. Eur Radiol Exp 2023; 7:70. [PMID: 37957426 PMCID: PMC10643734 DOI: 10.1186/s41747-023-00385-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2023] [Accepted: 09/12/2023] [Indexed: 11/15/2023] Open
Abstract
BACKGROUND Automated segmentation of spinal magnetic resonance imaging (MRI) plays a vital role both scientifically and clinically. However, accurately delineating posterior spine structures is challenging. METHODS This retrospective study, approved by the ethical committee, involved translating T1-weighted and T2-weighted images into computed tomography (CT) images in a total of 263 pairs of CT/MR series. Landmark-based registration was performed to align image pairs. We compared two-dimensional (2D) paired methods (Pix2Pix, denoising diffusion implicit models (DDIM) in image mode, and DDIM in noise mode) and unpaired methods (SynDiff, contrastive unpaired translation) for image-to-image translation, using peak signal-to-noise ratio as the quality measure. A publicly available segmentation network segmented the synthesized CT datasets, and Dice similarity coefficients (DSC) were evaluated on in-house test sets and the "MRSpineSeg Challenge" volumes. The 2D findings were extended to three-dimensional (3D) Pix2Pix and DDIM. RESULTS The 2D paired methods and SynDiff exhibited similar translation performance and DSC on paired data. DDIM image mode achieved the highest image quality. SynDiff, Pix2Pix, and DDIM image mode demonstrated similar DSC (0.77). For craniocaudal axis rotations, at least two landmarks per vertebra were required for registration. The 3D translation outperformed the 2D approach, resulting in improved DSC (0.80) and anatomically accurate segmentations with higher spatial resolution than that of the original MRI series. CONCLUSIONS Registration with two landmarks per vertebra enabled paired image-to-image translation from MRI to CT and outperformed all unpaired approaches. The 3D techniques provided anatomically correct segmentations, avoiding underprediction of small structures like the spinous process. RELEVANCE STATEMENT This study addresses the unresolved issue of translating spinal MRI to CT, making CT-based tools usable for MRI data.
It generates whole-spine segmentation, previously unavailable in MRI, a prerequisite for biomechanical modeling and feature extraction for clinical applications. KEY POINTS • Unpaired image translation falls short in converting spine MRI to CT effectively. • Paired translation requires registration with at least two landmarks per vertebra. • Paired image-to-image translation enables segmentation transfer to other domains. • 3D translation enables super-resolution from MRI to CT. • 3D translation prevents underprediction of small structures.
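The Dice similarity coefficient (DSC) used for evaluation above is a standard overlap measure for binary masks; a minimal sketch (the convention of returning 1.0 when both masks are empty is one common choice):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```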
Affiliation(s)
- Robert Graf
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Joachim Schmitt
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Sarah Schlaeger
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Hendrik Kristian Möller
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Vasiliki Sideri-Lampretsa
  - Institut für KI und Informatik in der Medizin, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Anjany Sekuboyina
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
  - Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Sandro Manuel Krieg
  - Department of Neurosurgery, Klinikum rechts der Isar, School of Medicine, Technical University of Munich, Munich, Germany
- Benedikt Wiestler
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Bjoern Menze
  - Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Daniel Rueckert
  - Institut für KI und Informatik in der Medizin, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
  - Visual Information Processing, Imperial College London, London, UK
- Jan Stefan Kirschke
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
27
|
Yuan S, Chen X, Liu Y, Zhu J, Men K, Dai J. Comprehensive evaluation of similarity between synthetic and real CT images for nasopharyngeal carcinoma. Radiat Oncol 2023; 18:182. [PMID: 37936196 PMCID: PMC10629140 DOI: 10.1186/s13014-023-02349-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Accepted: 09/11/2023] [Indexed: 11/09/2023] Open
Abstract
BACKGROUND Although deep learning-based magnetic resonance imaging (MRI)-to-computed tomography (CT) synthesis studies have progressed significantly, the similarity between synthetic CT (sCT) and real CT (rCT) has so far been evaluated only with image quality metrics (IQMs). To evaluate the similarity between sCT and rCT comprehensively, we assessed both IQMs and radiomic features for the first time. METHODS This study enrolled 127 patients with nasopharyngeal carcinoma who underwent CT and MRI scans. Supervised-learning (Unet) and unsupervised-learning (CycleGAN) methods were applied to build MRI-to-CT synthesis models. The regions of interest (ROIs) included the nasopharynx gross tumor volume (GTVnx), brainstem, parotid glands, and temporal lobes. The peak signal-to-noise ratio (PSNR), mean absolute error (MAE), root mean square error (RMSE), and structural similarity (SSIM) were used to evaluate image quality. Additionally, 837 radiomic features were extracted for each ROI, and the correlation was evaluated using the concordance correlation coefficient (CCC). RESULTS The MAE, RMSE, SSIM, and PSNR of the body were 91.99, 187.12, 0.97, and 51.15 for Unet and 108.30, 211.63, 0.96, and 49.84 for CycleGAN. For these metrics, Unet was superior to CycleGAN (P < 0.05). For the radiomic features, the percentages at four levels of agreement (excellent, good, moderate, and poor, respectively) were as follows: GTVnx, 8.5%, 14.6%, 26.5%, and 50.4% for Unet and 12.3%, 25%, 38.4%, and 24.4% for CycleGAN; other ROIs, 5.44% ± 3.27%, 5.56% ± 2.92%, 21.38% ± 6.91%, and 67.58% ± 8.96% for Unet and 5.16% ± 1.69%, 3.5% ± 1.52%, 12.68% ± 7.51%, and 78.62% ± 8.57% for CycleGAN. CONCLUSIONS Unet-sCT was superior to CycleGAN-sCT on the IQMs. However, neither exhibited absolute superiority in radiomic features, and both were far less similar to rCT. Therefore, further work is required to improve the radiomic similarity of MRI-to-CT synthesis.
TRIAL REGISTRATION As a retrospective study, this work was exempt from registration.
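The concordance correlation coefficient used for the radiomic comparison above is Lin's CCC; a minimal NumPy sketch over two feature vectors (population variances are used here, one common convention):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two feature vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                  # population (biased) variances
    covariance = ((x - mx) * (y - my)).mean()
    return 2.0 * covariance / (vx + vy + (mx - my) ** 2)
```

Unlike the Pearson correlation, the CCC penalizes both location and scale shifts, which is why it is preferred for agreement analyses such as this one.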
Affiliation(s)
- Siqi Yuan
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Xinyuan Chen
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Yuxiang Liu
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Ji Zhu
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Kuo Men
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Jianrong Dai
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
28
|
Liu Y, Yang B, Chen X, Zhu J, Ji G, Liu Y, Chen B, Lu N, Yi J, Wang S, Li Y, Dai J, Men K. Efficient segmentation using domain adaptation for MRI-guided and CBCT-guided online adaptive radiotherapy. Radiother Oncol 2023; 188:109871. [PMID: 37634767 DOI: 10.1016/j.radonc.2023.109871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2023] [Revised: 07/31/2023] [Accepted: 08/20/2023] [Indexed: 08/29/2023]
Abstract
BACKGROUND Delineation of regions of interest (ROIs) is important for adaptive radiotherapy (ART), but it is time-consuming and labor-intensive. AIM This study aims to develop efficient segmentation methods for magnetic resonance imaging-guided ART (MRIgART) and cone-beam computed tomography-guided ART (CBCTgART). MATERIALS AND METHODS The MRIgART and CBCTgART studies enrolled 242 prostate cancer patients and 530 nasopharyngeal carcinoma patients, respectively. A public dataset of CBCT from 35 pancreatic cancer patients was adopted to test the framework. We designed two domain adaptation methods to learn and adapt features from planning computed tomography (pCT) to the MRI or CBCT modality. The pCT was transformed to synthetic MRI (sMRI) for MRIgART, while CBCT was transformed to synthetic CT (sCT) for CBCTgART. Generalized segmentation models were trained on large popular datasets in which the inputs were sMRI for MRIgART and pCT for CBCTgART. Finally, a personalized model for each patient was established by fine-tuning the generalized model with the contours on that patient's pCT. The proposed method was compared with deformable image registration (DIR), a regular deep learning (DL) model trained on the same modality (DL-regular), and the generalized model in our framework (DL-generalized). RESULTS The proposed method achieved better or comparable performance. For MRIgART of the prostate cancer patients, the mean dice similarity coefficient (DSC) of four ROIs was 87.2%, 83.75%, 85.36%, and 92.20% for the DIR, DL-regular, DL-generalized, and proposed method, respectively. For CBCTgART of the nasopharyngeal carcinoma patients, the mean DSC of two target volumes was 90.81% and 91.18%, 75.17% and 58.30%, for the DIR, DL-regular, DL-generalized, and the proposed method, respectively.
For CBCTgART of the pancreatic cancer patients, the mean DSC of two ROIs was 61.94% and 61.44%, 63.94% and 81.56%, for the DIR, DL-regular, DL-generalized, and the proposed method, respectively. CONCLUSION The proposed method, utilizing personalized modeling, improved the segmentation accuracy of ART.
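The personalization step described above (fine-tune a generalized model on one patient's planning data) can be illustrated schematically. The sketch below substitutes a linear least-squares model trained by gradient descent for the segmentation network, so it shows only the warm-start workflow, not the paper's actual architecture; all data are synthetic:

```python
import numpy as np

def gd_fit(X, y, w0, steps=200, lr=0.1):
    """Least-squares fit by gradient descent, starting from the weights w0."""
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)

# "Generalized model": trained on pooled multi-patient data.
X_pop = rng.normal(size=(200, 3))
w_pop = np.array([1.0, -2.0, 0.5])          # invented population relationship
y_pop = X_pop @ w_pop
w_general = gd_fit(X_pop, y_pop, np.zeros(3))

# "Personalized model": a handful of patient-specific samples (the pCT
# contours in the paper), warm-started from the generalized weights.
X_pat = rng.normal(size=(10, 3))
w_pat = w_pop + np.array([0.2, 0.0, -0.1])  # slight patient-specific shift
y_pat = X_pat @ w_pat
w_personal = gd_fit(X_pat, y_pat, w_general, steps=50)
```

The design rationale mirrors the paper's: the warm start lets a few patient-specific samples refine, rather than relearn, the population-level solution.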
Affiliation(s)
- Yuxiang Liu
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Bining Yang
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Xinyuan Chen
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Ji Zhu
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Guangqian Ji
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yueping Liu
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Bo Chen
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Ningning Lu
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Junlin Yi
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Shulian Wang
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yexiong Li
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Jianrong Dai
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Kuo Men
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
29
|
Thomsen FSL, Iarussi E, Borggrefe J, Boyd SK, Wang Y, Battié MC. Bone-GAN: Generation of virtual bone microstructure of high resolution peripheral quantitative computed tomography. Med Phys 2023; 50:6943-6954. [PMID: 37264564 DOI: 10.1002/mp.16482] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Revised: 04/06/2023] [Accepted: 04/25/2023] [Indexed: 06/03/2023] Open
Abstract
BACKGROUND Data-driven development of medical biomarkers of bone requires a large amount of image data, but physical measurements are generally too restricted in size and quality to perform robust training. PURPOSE This study aims to provide a reliable in silico method for the generation of realistic bone microstructure with defined microarchitectural properties. Synthetic bone samples may improve the training of neural networks and serve for the development of new diagnostic parameters of bone architecture and mineralization. METHODS One hundred fifty cadaveric lumbar vertebrae from 48 different male human spines were scanned with a high-resolution peripheral quantitative CT. After preprocessing the scans, we extracted 10,795 purely spongeous bone patches, each with a side length of 32 voxels (5 mm) and an isotropic voxel size of 164 μm. We trained a volumetric generative adversarial network (GAN) in a progressive manner to create synthetic microstructural bone samples. We then added a style transfer technique to allow the generation of synthetic samples with defined microstructure and gestalt by simultaneously optimizing two entangled loss functions. Reliability testing was performed by comparing real and synthetic bone samples on 10 well-understood microstructural parameters. RESULTS The method was able to create synthetic bone samples whose visual and quantitative properties effectively matched those of the real samples. The GAN contained a well-formed latent space, allowing bone samples to be smoothly morphed by their microstructural parameters, visual appearance, or both. Optimum performance was obtained for bone samples of 32 × 32 × 32 voxels, but samples of 64 × 64 × 64 voxels could also be synthesized. CONCLUSIONS Our two-step approach combines a parameter-agnostic GAN with a parameter-specific style transfer technique.
It allows the generation of an unlimited, anonymous database of microstructural bone samples with sufficient realism to be used for the development of new data-driven bone biomarker methods. In particular, the style transfer technique can generate datasets of bone samples with specific conditions to simulate certain bone pathologies.
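The patch-extraction step described in the methods (cubic patches with a side length of 32 voxels) can be sketched as follows; the non-overlapping stride is an assumption for illustration, and the zero-filled volume is a stand-in for a real HR-pQCT scan:

```python
import numpy as np

def extract_patches(volume, size=32, stride=32):
    """Extract cubic patches from a 3D volume (non-overlapping by default)."""
    patches = []
    zs, ys, xs = volume.shape
    for z in range(0, zs - size + 1, stride):
        for y in range(0, ys - size + 1, stride):
            for x in range(0, xs - size + 1, stride):
                patches.append(volume[z:z + size, y:y + size, x:x + size])
    return np.stack(patches)

# Toy stand-in for a segmented spongeous region of a scan.
volume = np.zeros((64, 64, 64), dtype=np.float32)
patches = extract_patches(volume, size=32)
```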
Affiliation(s)
- Felix S L Thomsen
  - National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina
  - Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
  - Department of Electrical and Computer Engineering, Institute for Computer Science and Engineering, National University of the South (DIEC-ICIC-UNS), Bahía Blanca, Argentina
- Emmanuel Iarussi
  - National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina
  - Laboratory of Artificial Intelligence, University Torcuato Di Tella, Buenos Aires, Argentina
- Jan Borggrefe
  - Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Steven K Boyd
  - McCaig Institute for Bone and Joint Health, University of Calgary, Canada
- Yue Wang
  - Spine Lab, Department of Orthopedic Surgery, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Michele C Battié
  - Common Spinal Disorders Research Group, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, Canada
30
|
Dorent R, Haouchine N, Kogl F, Joutard S, Juvekar P, Torio E, Golby A, Ourselin S, Frisken S, Vercauteren T, Kapur T, Wells WM. Unified Brain MR-Ultrasound Synthesis using Multi-Modal Hierarchical Representations. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2023; 2023:448-458. [PMID: 38655383 PMCID: PMC7615858 DOI: 10.1007/978-3-031-43999-5_43] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 04/26/2024]
Abstract
We introduce MHVAE, a deep hierarchical variational autoencoder (VAE) that synthesizes missing images from various modalities. Extending multi-modal VAEs with a hierarchical latent structure, we introduce a probabilistic formulation for fusing multi-modal images in a common latent representation while having the flexibility to handle incomplete image sets as input. Moreover, adversarial learning is employed to generate sharper images. Extensive experiments are performed on the challenging problem of joint intra-operative ultrasound (iUS) and Magnetic Resonance (MR) synthesis. Our model outperformed multi-modal VAEs, conditional GANs, and the current state-of-the-art unified method (ResViT) for synthesizing missing images, demonstrating the advantage of using a hierarchical latent representation and a principled probabilistic fusion operation. Our code is publicly available.
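One common way to realize the "principled probabilistic fusion operation" mentioned above is a product of Gaussian experts over per-modality posteriors; the sketch below illustrates that generic idea only and is not necessarily the exact operation used by MHVAE:

```python
import numpy as np

def poe_fusion(mus, logvars):
    """Fuse per-modality Gaussian posteriors by a product of experts:
    precisions add, and the fused mean is the precision-weighted mean.
    (A common probabilistic fusion rule for multi-modal VAEs.)"""
    precisions = [np.exp(-lv) for lv in logvars]
    total_precision = sum(precisions)
    fused_var = 1.0 / total_precision
    fused_mu = fused_var * sum(p * m for p, m in zip(precisions, mus))
    return fused_mu, np.log(fused_var)

# Missing modalities are handled by simply omitting that expert from the lists,
# which is what gives this style of fusion its flexibility with incomplete inputs.
```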
Affiliation(s)
- Reuben Dorent
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Nazim Haouchine
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Fryderyk Kogl
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Parikshit Juvekar
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Erickson Torio
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Alexandra Golby
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Sarah Frisken
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Tina Kapur
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- William M Wells
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Massachusetts Institute of Technology, Cambridge, MA, USA
31
Sun H, Wang L, Daskivich T, Qiu S, Han F, D'Agnolo A, Saouaf R, Christodoulou AG, Kim H, Li D, Xie Y. Retrospective T2 quantification from conventional weighted MRI of the prostate based on deep learning. Front Radiol 2023; 3:1223377. [PMID: 37886239] [PMCID: PMC10598780] [DOI: 10.3389/fradi.2023.1223377] [Received: 05/16/2023] [Accepted: 09/28/2023] [Indexed: 10/28/2023]
Abstract
Purpose: To develop a deep learning-based method to retrospectively quantify T2 from conventional T1- and T2-weighted images. Methods: Twenty-five subjects were imaged using a multi-echo spin-echo sequence to estimate reference prostate T2 maps. Conventional T1- and T2-weighted images were acquired as the input images. A U-Net-based neural network was developed to directly estimate T2 maps from the weighted images using a four-fold cross-validation training strategy. The structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), mean percentage error (MPE), and Pearson correlation coefficient were calculated to evaluate the quality of network-estimated T2 maps. To explore the potential of this approach in clinical practice, a retrospective T2 quantification was performed on a high-risk prostate cancer cohort (Group 1) and a low-risk active surveillance cohort (Group 2). Tumor and non-tumor T2 values were evaluated by an experienced radiologist based on region of interest (ROI) analysis. Results: The T2 maps generated by the trained network were consistent with the corresponding reference. Prostate tissue structures and contrast were well preserved, with a PSNR of 26.41 ± 1.17 dB, an SSIM of 0.85 ± 0.02, and a Pearson correlation coefficient of 0.86. Quantitative ROI analyses performed on 38 prostate cancer patients revealed estimated T2 values of 80.4 ± 14.4 ms and 106.8 ± 16.3 ms for tumor and non-tumor regions, respectively. ROI measurements showed a significant difference between tumor and non-tumor regions of the estimated T2 maps (P < 0.001). In the two-timepoint active surveillance cohort, patients defined as progressors exhibited lower estimated T2 values of the tumor ROIs at the second time point compared to the first. Additionally, the T2 difference between the two time points was significantly greater for progressors than for non-progressors (P = 0.010).
Conclusion: A deep learning method was developed to estimate prostate T2 maps retrospectively from clinically acquired T1- and T2-weighted images, which has the potential to improve prostate cancer diagnosis and characterization without requiring extra scans.
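The image-quality metrics quoted above (PSNR in dB, MPE in %) are standard and straightforward to reproduce. A generic numpy sketch, not the authors' evaluation code (the data_range convention in particular is an assumption):

```python
import numpy as np

def psnr(reference, estimate, data_range):
    """Peak signal-to-noise ratio (dB) of an estimated map against a reference."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def mean_percentage_error(reference, estimate):
    """Mean percentage error of the estimate relative to the reference."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return 100.0 * np.mean((estimate - reference) / reference)
```

For quantitative maps such as T2, the metrics are usually computed within a tissue mask; that refinement is omitted here for brevity.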
Affiliation(s)
- Haoran Sun
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Lixia Wang
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Timothy Daskivich
- Minimal Invasive Urology, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Shihan Qiu
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Fei Han
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Alessandro D'Agnolo
- Imaging/Nuclear Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Rola Saouaf
- Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Anthony G. Christodoulou
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Hyung Kim
- Minimal Invasive Urology, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Debiao Li
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Yibin Xie
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States
32
Aouadi S, Yoganathan SA, Torfeh T, Paloor S, Caparrotti P, Hammoud R, Al-Hammadi N. Generation of synthetic CT from CBCT using deep learning approaches for head and neck cancer patients. Biomed Phys Eng Express 2023; 9:055020. [PMID: 37489854] [DOI: 10.1088/2057-1976/acea27] [Received: 04/16/2023] [Accepted: 07/25/2023] [Indexed: 07/26/2023]
Abstract
Purpose. To create a synthetic CT (sCT) from daily CBCT using either a deep residual U-Net (DRUnet) or a conditional generative adversarial network (cGAN) for adaptive radiotherapy planning (ART). Methods. First-fraction CBCT and planning CT (pCT) were collected from 93 head and neck cancer patients who underwent external beam radiotherapy. The dataset was divided into training, validation, and test sets of 58, 10, and 25 patients, respectively. Three methods were used to generate sCT: (1) a nonlocal means patch-based method modified to include multiscale patches, defining the multiscale patch-based method (MPBM); (2) an encoder-decoder 2D U-Net with imbricated deep residual units; (3) DRUnet integrated into the generator of a cGAN, with a convolutional PatchGAN classifier as the discriminator. The accuracy of sCT was evaluated geometrically using the mean absolute error (MAE). Clinical volumetric modulated arc therapy (VMAT) plans were copied from the pCT to the registered CBCT and sCT, and dosimetric analysis was performed by comparing dose-volume histogram (DVH) parameters of planning target volumes (PTVs) and organs at risk (OARs). Furthermore, 3D gamma analysis (2%/2 mm, global) between the dose on the sCT or CBCT and that on the pCT was performed. Results. The average MAE calculated between pCT and CBCT was 180.82 ± 27.37 HU. Overall, all approaches significantly reduced the uncertainties in CBCT. Deep learning approaches outperformed patch-based methods, with MAE = 67.88 ± 8.39 HU (DRUnet) and MAE = 72.52 ± 8.43 HU (cGAN) compared to MAE = 90.69 ± 14.3 HU (MPBM). The percentages of DVH metric deviations were below 0.55% for PTVs and 1.17% for OARs using DRUnet. The average gamma pass rate was 99.45 ± 1.86% for sCT generated using DRUnet. Conclusion. DL approaches outperformed MPBM. Specifically, DRUnet could be used to generate sCT with accurate intensities and a realistic description of patient anatomy, which could be beneficial for CBCT-based ART.
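The geometric accuracy figures above are mean absolute errors in Hounsfield units between aligned volumes. A rough, generic sketch (not the study's code; the optional body-mask argument is our assumption):

```python
import numpy as np

def mae_hu(ct_a, ct_b, mask=None):
    """Mean absolute error in Hounsfield units between two aligned CT volumes.

    mask (optional): boolean array selecting the voxels to evaluate,
    e.g. a body contour that excludes the surrounding air.
    """
    diff = np.abs(np.asarray(ct_a, dtype=float) - np.asarray(ct_b, dtype=float))
    if mask is not None:
        diff = diff[mask]
    return float(diff.mean())
```

The cohort-level values reported in the abstract (mean ± SD) would then come from aggregating one such MAE per patient.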
Affiliation(s)
- Souha Aouadi
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- S A Yoganathan
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Tarraf Torfeh
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Satheesh Paloor
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Palmira Caparrotti
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Rabih Hammoud
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Noora Al-Hammadi
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
33
Yang Y, Hu S, Zhang L, Shen D. Deep learning based brain MRI registration driven by local-signed-distance fields of segmentation maps. Med Phys 2023; 50:4899-4915. [PMID: 36880373] [DOI: 10.1002/mp.16291] [Received: 06/03/2022] [Revised: 12/21/2022] [Accepted: 01/16/2023] [Indexed: 03/08/2023]
Abstract
BACKGROUND Deep learning based unsupervised registration utilizes the intensity information to align images. To avoid the influence of intensity variation and improve the registration accuracy, unsupervised and weakly-supervised registration are combined, namely, dually-supervised registration. However, the estimated dense deformation fields (DDFs) will focus on the edges among adjacent tissues when the segmentation labels are directly used to drive the registration process, which decreases the plausibility of brain MRI registration. PURPOSE To increase the accuracy of registration and at the same time ensure its plausibility, we combine local-signed-distance fields (LSDFs) and intensity images to dually supervise the registration process. The proposed method uses not only the intensity and segmentation information but also the voxelwise geometric distance to the edges. Hence, accurate voxelwise correspondence relationships are guaranteed both inside and outside the edges. METHODS The proposed dually-supervised registration method mainly includes three enhancement strategies. Firstly, we leverage the segmentation labels to construct their LSDFs to provide more geometrical information for guiding the registration process. Secondly, to calculate LSDFs, we construct an LSDF-Net, which is composed of 3D dilation layers and erosion layers. Finally, we design the dually-supervised registration network (VMLSDF) by combining the unsupervised VoxelMorph (VM) registration network and the weakly-supervised LSDF-Net, to utilize intensity and LSDF information, respectively. RESULTS Experiments were carried out on four public brain image datasets: LPBA40, HBN, OASIS1, and OASIS3. The experimental results show that the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD) of VMLSDF are higher than those of the original unsupervised VM and of a dually-supervised registration network (VMseg) using intensity images and segmentation labels. At the same time, the percentage of negative Jacobian determinants (NJD) of VMLSDF is lower than that of VMseg. Our code is freely available at https://github.com/1209684549/LSDF. CONCLUSIONS The experimental results show that LSDFs can improve the registration accuracy compared with VM and VMseg, and enhance the plausibility of the DDFs compared with VMseg.
Affiliation(s)
- Yue Yang
- School of Information Science and Engineering, Linyi University, Linyi, Shandong, China
- Shunbo Hu
- School of Information Science and Engineering, Linyi University, Linyi, Shandong, China
- Lintao Zhang
- School of Information Science and Engineering, Linyi University, Linyi, Shandong, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
34
Müller-Franzes G, Niehues JM, Khader F, Arasteh ST, Haarburger C, Kuhl C, Wang T, Han T, Nolte T, Nebelung S, Kather JN, Truhn D. A multimodal comparison of latent denoising diffusion probabilistic models and generative adversarial networks for medical image synthesis. Sci Rep 2023; 13:12098. [PMID: 37495660] [PMCID: PMC10372018] [DOI: 10.1038/s41598-023-39278-0] [Received: 02/09/2023] [Accepted: 07/22/2023] [Indexed: 07/28/2023]
Abstract
Although generative adversarial networks (GANs) can produce large datasets, their limited diversity and fidelity have recently been addressed by denoising diffusion probabilistic models (DDPMs), which have demonstrated superiority in natural image synthesis. In this study, we introduce Medfusion, a conditional latent DDPM designed for medical image generation, and evaluate its performance against GANs, which currently represent the state-of-the-art. Medfusion was trained and compared with StyleGAN-3 using fundoscopy images from the AIROGS dataset, radiographs from the CheXpert dataset, and histopathology images from the CRCDX dataset. Based on previous studies, Progressively Growing GAN (ProGAN) and Conditional GAN (cGAN) were used as additional baselines on the CheXpert and CRCDX datasets, respectively. Medfusion exceeded GANs in terms of diversity (recall), achieving better scores of 0.40 compared to 0.19 in the AIROGS dataset, 0.41 compared to 0.02 (cGAN) and 0.24 (StyleGAN-3) in the CRCDX dataset, and 0.32 compared to 0.17 (ProGAN) and 0.08 (StyleGAN-3) in the CheXpert dataset. Furthermore, Medfusion exhibited equal or higher fidelity (precision) across all three datasets. Our study shows that Medfusion constitutes a promising alternative to GAN-based models for generating high-quality medical images, with improved diversity and fewer artifacts in the generated images.
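Diversity (recall) and fidelity (precision) scores of the kind quoted above are commonly computed with k-nearest-neighbour manifold estimates in a feature space, in the style of improved precision and recall. A heavily simplified numpy sketch of the idea (not the paper's implementation; brute-force distances and illustrative names only):

```python
import numpy as np

def knn_radii(feats, k):
    """Distance from each feature to its k-th nearest neighbour (excluding itself)."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 is the zero self-distance

def coverage(candidates, references, k=3):
    """Fraction of candidates that land inside the reference k-NN manifold."""
    radii = knn_radii(references, k)
    d = np.linalg.norm(candidates[:, None, :] - references[None, :, :], axis=-1)
    inside = (d <= radii[None, :]).any(axis=1)
    return float(inside.mean())

# precision ~ coverage(fake_features, real_features): fidelity of the fakes
# recall    ~ coverage(real_features, fake_features): diversity of the fakes
```

In practice the features come from a pretrained embedding network rather than raw pixels, and much larger sample sets are used.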
Affiliation(s)
- Gustav Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Firas Khader
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Soroosh Tayebi Arasteh
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Christiane Kuhl
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Tianci Wang
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Tianyu Han
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Teresa Nolte
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Sven Nebelung
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Jakob Nikolas Kather
- Department of Medicine III, University Hospital Aachen, Aachen, Germany
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
35
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. [PMID: 37509272] [PMCID: PMC10377683] [DOI: 10.3390/cancers15143608] [Received: 06/22/2023] [Revised: 07/10/2023] [Accepted: 07/10/2023] [Indexed: 07/30/2023]
Abstract
(1) Background: The application of deep learning technology to cancer diagnosis based on medical images is one of the research hotspots in the fields of artificial intelligence and computer vision. Given the rapid development of deep learning methods, the high accuracy and timeliness required in cancer diagnosis, and the inherent particularity and complexity of medical imaging, a comprehensive review of relevant studies is necessary to help readers better understand the current research status and ideas. (2) Methods: Five types of radiological images, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architectures of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced approaches emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Methods for preventing overfitting, including batch normalization, dropout, weight initialization, and data augmentation, are summarized. The application of deep learning technology in medical image-based cancer analysis is then surveyed. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, which also faces challenges in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: More public standard databases for cancer are needed. Pre-training models based on deep neural networks have the potential to be improved, and special attention should be paid to research on multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
Grants
- RM32G0178B8 BBSRC
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- RM32G0178B8 BBSRC, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
36
Zhu J, Chen X, Liu Y, Yang B, Wei R, Qin S, Yang Z, Hu Z, Dai J, Men K. Improving accelerated 3D imaging in MRI-guided radiotherapy for prostate cancer using a deep learning method. Radiat Oncol 2023; 18:108. [PMID: 37393282] [DOI: 10.1186/s13014-023-02306-4] [Received: 01/13/2023] [Accepted: 06/21/2023] [Indexed: 07/03/2023]
Abstract
PURPOSE This study aimed to improve image quality for high-speed MR imaging in online adaptive radiotherapy for prostate cancer using a deep learning method. We then evaluated its benefits for image registration. METHODS Sixty pairs of 1.5 T MR images acquired with an MR-linac were included. The data comprised low-speed, high-quality (LSHQ) and high-speed, low-quality (HSLQ) MR images. We proposed a CycleGAN, based on the data augmentation technique, to learn the mapping between the HSLQ and LSHQ images and then generate synthetic LSHQ (synLSHQ) images from the HSLQ images. Five-fold cross-validation was employed to test the CycleGAN model. The normalized mean absolute error (nMAE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and edge keeping index (EKI) were calculated to assess image quality. The Jacobian determinant value (JDV), Dice similarity coefficient (DSC), and mean distance to agreement (MDA) were used to analyze deformable registration. RESULTS Compared with the LSHQ, the proposed synLSHQ achieved comparable image quality and reduced imaging time by ~66%. Compared with the HSLQ, the synLSHQ had better image quality, with improvements of 57%, 3.4%, 26.9%, and 3.6% in nMAE, SSIM, PSNR, and EKI, respectively. Furthermore, the synLSHQ enhanced registration accuracy, with a superior mean JDV (6%) and preferable DSC and MDA values compared with the HSLQ. CONCLUSION The proposed method can generate high-quality images from high-speed scanning sequences. As a result, it shows potential to shorten the scan time while ensuring the accuracy of radiotherapy.
Affiliation(s)
- Ji Zhu
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Xinyuan Chen
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Yuxiang Liu
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- School of Physics and Technology, Wuhan University, Wuhan, 430072, China
- Bining Yang
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Ran Wei
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Shirui Qin
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Zhuanbo Yang
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Zhihui Hu
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Jianrong Dai
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Kuo Men
- National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
37
Prieto Canalejo MA, Palau San Pedro A, Geronazzo R, Minsky DM, Juárez-Orozco LE, Namías M. Synthetic Attenuation Correction Maps for SPECT Imaging Using Deep Learning: A Study on Myocardial Perfusion Imaging. Diagnostics (Basel) 2023; 13:2214. [PMID: 37443608] [DOI: 10.3390/diagnostics13132214] [Received: 05/03/2023] [Revised: 06/24/2023] [Accepted: 06/27/2023] [Indexed: 07/15/2023]
Abstract
(1) Background: The CT-based attenuation correction of SPECT images is essential for obtaining accurate quantitative images in cardiovascular imaging. However, there are still many SPECT cameras without associated CT scanners throughout the world, especially in developing countries. Performing additional CT scans implies troublesome planning logistics and larger radiation doses for patients, making it a suboptimal solution. Deep learning (DL) offers a revolutionary way to generate complementary images for individual patients at a large scale. Hence, we aimed to generate linear attenuation coefficient maps from SPECT emission images reconstructed without attenuation correction using deep learning. (2) Methods: A total of 384 SPECT myocardial perfusion studies that used 99mTc-sestamibi were included. A DL model based on a 2D U-Net architecture was trained using information from 312 patients. The quality of the generated synthetic attenuation correction maps (ACMs) and reconstructed emission values was evaluated using three metrics and compared to standard-of-care data using Bland-Altman plots. Finally, a quantitative evaluation of myocardial uptake was performed, followed by a semi-quantitative evaluation of myocardial perfusion. (3) Results: In a test set of 66 patients, the ACM quality metrics were MSSIM = 0.97 ± 0.001 and NMAE = 3.08 ± 1.26 (%), and the reconstructed emission quality metrics were MSSIM = 0.99 ± 0.003 and NMAE = 0.23 ± 0.13 (%). The 95% limits of agreement (LoAs) for reconstructed SPECT images were [-9.04; 9.00]% at the voxel level and [-11; 10]% at the segment level. The 95% LoAs for the Summed Stress Score values between the reconstructed images were [-2.8, 3.0]. When global perfusion scores were assessed, only 2 out of 66 patients showed changes in perfusion categories. (4) Conclusion: Deep learning can generate accurate attenuation correction maps from non-attenuation-corrected cardiac SPECT images. These high-quality attenuation maps are suitable for attenuation correction in myocardial perfusion SPECT imaging and could obviate the need for additional imaging in standalone SPECT scanners.
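The 95% limits of agreement (LoAs) quoted above come from Bland-Altman analysis: the mean of the paired differences plus or minus 1.96 sample standard deviations. A minimal numpy sketch (generic, not the study's code):

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for paired measurements a and b."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    mean_diff = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return mean_diff - 1.96 * sd, mean_diff + 1.96 * sd
```

For the voxel-level and segment-level comparisons, the paired values would be the corresponding measurements from the synthetic-ACM and CT-ACM reconstructions, expressed as percentages.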
Affiliation(s)
- Ricardo Geronazzo
- Fundación Centro Diagnóstico Nuclear (FCDN), Buenos Aires C1417CVE, Argentina
- Daniel Mauricio Minsky
- Centro Atómico Constituyentes, Comisión Nacional de Energía Atómica, San Martín B1650LWP, Argentina
- Mauro Namías
- Fundación Centro Diagnóstico Nuclear (FCDN), Buenos Aires C1417CVE, Argentina
38
Capobianco E, Dominietto M. Assessment of brain cancer atlas maps with multimodal imaging features. J Transl Med 2023; 21:385. [PMID: 37308956] [DOI: 10.1186/s12967-023-04222-3] [Received: 04/18/2023] [Accepted: 05/22/2023] [Indexed: 06/14/2023]
Abstract
BACKGROUND Glioblastoma multiforme (GBM) is a fast-growing and highly aggressive brain tumor that invades the nearby brain tissue and presents secondary nodular lesions across the whole brain, but generally does not spread to distant organs. Without treatment, GBM can result in death within about 6 months. The challenges are known to depend on multiple factors: brain localization, resistance to conventional therapy, disrupted tumor blood supply inhibiting effective drug delivery, complications from peritumoral edema, intracranial hypertension, seizures, and neurotoxicity. MAIN TEXT Imaging techniques are routinely used to obtain accurate detections of lesions that localize brain tumors. In particular, magnetic resonance imaging (MRI) delivers multimodal images both before and after the administration of contrast, displaying enhancement and describing physiological features such as hemodynamic processes. This review considers one possible extension of the use of radiomics in GBM studies, one that recalibrates the analysis of targeted segmentations to the whole-organ scale. After identifying critical areas of research, the focus is on illustrating the potential utility of an integrated approach with multimodal imaging, radiomic data processing, and brain atlases as the main components. The templates associated with the outcome of straightforward analyses represent promising inference tools able to inform spatio-temporally on the GBM evolution while also being generalizable to other cancers. CONCLUSIONS The focus on novel inference strategies applicable to complex cancer systems and based on building radiomic models from multimodal imaging data can be well supported by machine learning and other computational tools potentially able to translate suitably processed information into more accurate patient stratifications and evaluations of treatment efficacy.
Affiliation(s)
- Enrico Capobianco
- The Jackson Laboratory, 10 Discovery Drive, Farmington, CT, 06032, USA.
- Marco Dominietto
- Paul Scherrer Institute (PSI), Forschungsstrasse 111, 5232, Villigen, Switzerland
- Gate To Brain SA, Via Livio 7, 6830, Chiasso, Switzerland
39
Zhao F, Liu M, Gao Z, Jiang X, Wang R, Zhang L. Dual-scale similarity-guided cycle generative adversarial network for unsupervised low-dose CT denoising. Comput Biol Med 2023; 161:107029. [PMID: 37230021] [DOI: 10.1016/j.compbiomed.2023.107029] [Received: 02/20/2023] [Revised: 04/10/2023] [Accepted: 05/09/2023] [Indexed: 05/27/2023]
Abstract
Removing the noise in low-dose CT (LDCT) is crucial to improving diagnostic quality. Many supervised and unsupervised deep learning-based LDCT denoising algorithms have been proposed. Unsupervised LDCT denoising algorithms are more practical than supervised ones since they do not need paired samples. However, unsupervised LDCT denoising algorithms are rarely used clinically due to their unsatisfactory denoising ability. In unsupervised LDCT denoising, the lack of paired samples leaves the direction of gradient descent full of uncertainty, whereas the paired samples used in supervised denoising give the network parameters a clear direction of gradient descent. To bridge the performance gap between unsupervised and supervised LDCT denoising, we propose the dual-scale similarity-guided cycle generative adversarial network (DSC-GAN), which uses similarity-based pseudo-pairing to better accomplish unsupervised LDCT denoising. We design a Vision Transformer-based global similarity descriptor and a residual neural network-based local similarity descriptor for DSC-GAN to effectively describe the similarity between two samples. During training, pseudo-pairs, i.e., similar LDCT and normal-dose CT (NDCT) samples, dominate the parameter updates, so training can achieve an effect equivalent to training with paired samples. Experiments on two datasets demonstrate that DSC-GAN beats state-of-the-art unsupervised algorithms and reaches a level close to supervised LDCT denoising algorithms.
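The similarity-based pseudo-pairing idea is to match each LDCT sample with its most similar NDCT sample under some descriptor. A toy numpy sketch with cosine similarity standing in for the paper's dual-scale learned descriptors (all names are illustrative, not from the DSC-GAN code):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def pseudo_pair(ldct_descriptor, ndct_descriptors):
    """Index of the NDCT sample most similar to the given LDCT sample."""
    scores = [cosine_similarity(ldct_descriptor, d) for d in ndct_descriptors]
    return int(np.argmax(scores))
```

In DSC-GAN the descriptors are produced by trained networks at global and local scales, and the selected pseudo-pairs then drive the adversarial and cycle losses much as true pairs would in supervised training.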
Affiliation(s)
- Feixiang Zhao
- College of Nuclear Technology and Automation Engineering, Chengdu University of Technology, Chengdu, 610000, China.
- Mingzhe Liu
- College of Nuclear Technology and Automation Engineering, Chengdu University of Technology, Chengdu, 610000, China; School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou, 325000, China.
- Zhihong Gao
- Department of Big Data in Health Science, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China.
- Xin Jiang
- School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou, 325000, China.
- Ruili Wang
- School of Mathematical and Computational Science, Massey University, Auckland, 0632, New Zealand.
- Lejun Zhang
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, 510006, China; College of Information Engineering, Yangzhou University, Yangzhou, 225127, China.
| |
Collapse
|
40
|
Szmul A, Taylor S, Lim P, Cantwell J, Moreira I, Zhang Y, D’Souza D, Moinuddin S, Gaze MN, Gains J, Veiga C. Deep learning based synthetic CT from cone beam CT generation for abdominal paediatric radiotherapy. Phys Med Biol 2023; 68:105006. [PMID: 36996837 PMCID: PMC10160738 DOI: 10.1088/1361-6560/acc921] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Revised: 03/13/2023] [Accepted: 03/30/2023] [Indexed: 04/01/2023]
Abstract
Objective. Adaptive radiotherapy workflows require images with the quality of computed tomography (CT) for re-calculation and re-optimisation of radiation doses. In this work we aim to improve the quality of on-board cone beam CT (CBCT) images for dose calculation using deep learning. Approach. We propose a novel framework for CBCT-to-CT synthesis using cycle-consistent Generative Adversarial Networks (cycleGANs). The framework was tailored for paediatric abdominal patients, a challenging application due to the inter-fractional variability in bowel filling and small patient numbers. We introduced to the networks the concept of global residuals only learning and modified the cycleGAN loss function to explicitly promote structural consistency between source and synthetic images. Finally, to compensate for the anatomical variability and address the difficulties in collecting large datasets in the paediatric population, we applied a smart 2D slice selection based on the common field-of-view (abdomen) to our imaging dataset. This acted as a weakly paired data approach that allowed us to take advantage of scans from patients treated for a variety of malignancies (thoracic-abdominal-pelvic) for training purposes. We first optimised the proposed framework and benchmarked its performance on a development dataset. Later, a comprehensive quantitative evaluation was performed on an unseen dataset, which included calculating global image similarity metrics, segmentation-based measures and proton therapy-specific metrics. Main results. We found improved performance for our proposed method, compared to a baseline cycleGAN implementation, on image-similarity metrics such as Mean Absolute Error calculated for a matched virtual CT (55.0 ± 16.6 HU proposed versus 58.9 ± 16.8 HU baseline). There was also a higher level of structural agreement for gastrointestinal gas between source and synthetic images, measured using the dice similarity coefficient (0.872 ± 0.053 proposed versus 0.846 ± 0.052 baseline). Differences found in water-equivalent thickness metrics were also smaller for our method (3.3 ± 2.4% proposed versus 3.7 ± 2.8% baseline). Significance. Our findings indicate that our innovations to the cycleGAN framework improved the quality and structural consistency of the synthetic CTs generated.
Affiliation(s)
- Adam Szmul
  - Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Sabrina Taylor
  - Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Pei Lim
  - Department of Oncology, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Jessica Cantwell
  - Radiotherapy, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Isabel Moreira
  - Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Ying Zhang
  - Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Derek D’Souza
  - Radiotherapy Physics Services, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Syed Moinuddin
  - Radiotherapy, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Mark N. Gaze
  - Department of Oncology, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Jennifer Gains
  - Department of Oncology, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Catarina Veiga
  - Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom

41
Weyts K, Quak E, Licaj I, Ciappuccini R, Lasnon C, Corroyer-Dulmont A, Foucras G, Bardet S, Jaudet C. Deep Learning Denoising Improves and Homogenizes Patient [18F]FDG PET Image Quality in Digital PET/CT. Diagnostics (Basel) 2023; 13:1626. [PMID: 37175017 PMCID: PMC10177812 DOI: 10.3390/diagnostics13091626] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2023] [Revised: 04/18/2023] [Accepted: 04/23/2023] [Indexed: 05/15/2023] Open
Abstract
Given the constant pressure to increase patient throughput while respecting radiation protection, global body PET image quality (IQ) is not satisfactory in all patients. We first studied the association between IQ and other variables, in particular body habitus, on a digital PET/CT. Second, to improve and homogenize IQ, we evaluated a deep learning PET denoising solution (Subtle PET™) using convolutional neural networks. We retrospectively analysed, in 113 patients, visual IQ (using a 5-point Likert score assessed by two readers) and semi-quantitative IQ (using the coefficient of variation in the liver, CVliv), as well as lesion detection and quantification, in native and denoised PET. In native PET, visual and semi-quantitative IQ were lower in patients with larger body habitus (p < 0.0001 for both) and in men vs. women (p ≤ 0.03 for CVliv). After PET denoising, visual IQ scores increased and became more homogeneous between patients (4.8 ± 0.3 in denoised vs. 3.6 ± 0.6 in native PET; p < 0.0001). CVliv was lower in denoised PET than in native PET, 6.9 ± 0.9% vs. 12.2 ± 1.6%; p < 0.0001. The slope calculated by linear regression of CVliv according to weight was significantly lower in denoised than in native PET (p = 0.0002), demonstrating more uniform CVliv. The lesion concordance rate between both PET series was 369/371 (99.5%), with two lesions exclusively detected in native PET. SUVmax and SUVpeak of up to the five most intense native PET lesions per patient were lower in denoised PET (p < 0.001), with an average relative bias of -7.7% and -2.8%, respectively. DL-based PET denoising by Subtle PET™ allowed [18F]FDG PET global image quality to be improved and homogenized, while maintaining satisfactory lesion detection and quantification. DL-based denoising may render body habitus adaptive PET protocols unnecessary, and pave the way for the improvement and homogenization of PET modalities.
Affiliation(s)
- Kathleen Weyts
  - Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
- Elske Quak
  - Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
- Idlir Licaj
  - Department of Biostatistics, Baclesse Cancer Centre, 14076 Caen, France
  - Department of Community Medicine, Faculty of Health Sciences, UiT The Arctic University of Norway, 9019 Tromsø, Norway
- Renaud Ciappuccini
  - Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
- Charline Lasnon
  - Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
- Aurélien Corroyer-Dulmont
  - Department of Medical Physics, Baclesse Cancer Centre, 14076 Caen, France
  - ISTCT Unit, CNRS, UNICAEN, Normandy University, GIP CYCERON, 14074 Caen, France
- Gauthier Foucras
  - Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
- Stéphane Bardet
  - Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
- Cyril Jaudet
  - Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
  - Department of Medical Physics, Baclesse Cancer Centre, 14076 Caen, France

42
Pang J, Jiang C, Chen Y, Chang J, Feng M, Wang R, Yao J. 3D Shuffle-Mixer: An Efficient Context-Aware Vision Learner of Transformer-MLP Paradigm for Dense Prediction in Medical Volume. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1241-1253. [PMID: 35849668 DOI: 10.1109/tmi.2022.3191974] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Dense prediction in medical volumes provides enriched guidance for clinical analysis. CNN backbones have met a bottleneck due to their lack of long-range dependencies and global context modeling power. Recent works proposed to combine the vision transformer with CNNs, owing to its strong global capture ability and learning capability. However, most works are limited to simply applying a pure transformer, with several fatal flaws (i.e., lack of inductive bias, heavy computation, and little consideration for 3D data). Therefore, designing an elegant and efficient vision transformer learner for dense prediction in medical volumes is promising and challenging. In this paper, we propose a novel 3D Shuffle-Mixer network of a new Local Vision Transformer-MLP paradigm for medical dense prediction. In our network, a local vision transformer block is utilized to shuffle and learn spatial context from full-view slices of the rearranged volume, a residual axial-MLP is designed to mix and capture the remaining volume context in a slice-aware manner, and an MLP view aggregator is employed to project the learned full-view rich context to the volume feature in a view-aware manner. Moreover, an Adaptive Scaled Enhanced Shortcut is proposed for the local vision transformer to enhance features along the spatial and channel dimensions adaptively, and a CrossMerge is proposed to skip-connect the multi-scale features appropriately in the pyramid architecture. Extensive experiments demonstrate that the proposed model outperforms other state-of-the-art medical dense prediction methods.
43
Ebadi N, Li R, Das A, Roy A, Nikos P, Najafirad P. CBCT-guided adaptive radiotherapy using self-supervised sequential domain adaptation with uncertainty estimation. Med Image Anal 2023; 86:102800. [PMID: 37003101 DOI: 10.1016/j.media.2023.102800] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 01/29/2023] [Accepted: 03/14/2023] [Indexed: 03/17/2023]
Abstract
Adaptive radiotherapy (ART) is an advanced technology in modern cancer treatment that incorporates progressive changes in patient anatomy into active plan/dose adaptation during the fractionated treatment. However, its clinical application relies on the accurate segmentation of cancer tumors on low-quality on-board images, which has posed challenges for both manual delineation and deep learning-based models. In this paper, we propose a novel sequence transduction deep neural network with an attention mechanism to learn the shrinkage of the cancer tumor based on patients' weekly cone-beam computed tomography (CBCT). We design a self-supervised domain adaptation (SDA) method to learn and adapt the rich textural and spatial features from pre-treatment high-quality computed tomography (CT) to the CBCT modality in order to address the poor image quality and lack of labels. We also provide uncertainty estimation for sequential segmentation, which aids not only in the risk management of treatment planning but also in the calibration and reliability of the model. Our experimental results based on a clinical non-small cell lung cancer (NSCLC) dataset with sixteen patients and ninety-six longitudinal CBCTs show that our model correctly learns weekly deformation of the tumor over time with an average dice score of 0.92 on the immediate next step, and is able to predict multiple steps (up to 5 weeks) for future patient treatments with an average dice score reduction of 0.05. By incorporating the tumor shrinkage predictions into a weekly re-planning strategy, our proposed method demonstrates a significant decrease in the risk of radiation-induced pneumonitis of up to 35% while maintaining a high tumor control probability.
Affiliation(s)
- Nima Ebadi
  - Department of Electrical and Computer Engineering, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America.
- Ruiqi Li
  - Department of Radiation Oncology, UT Health San Antonio, San Antonio, TX 78229, United States of America.
- Arun Das
  - Department of Electrical and Computer Engineering, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America; Department of Medicine, The University of Pittsburgh, Pittsburgh, PA 15260, United States of America.
- Arkajyoti Roy
  - Department of Management Science and Statistics, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America.
- Papanikolaou Nikos
  - Department of Radiation Oncology, UT Health San Antonio, San Antonio, TX 78229, United States of America.
- Peyman Najafirad
  - Department of Computer Science, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America.

44
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. ARXIV 2023:arXiv:2303.11378v2. [PMID: 36994167 PMCID: PMC10055493] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 03/31/2023]
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise, adaptive approach to treatment planning. Deep learning applications which augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on underlying methods. Studies are further categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
  - School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Yifu Ding
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Jing Wang
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Elham Abouei
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Richard L.J. Qiu
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
  - Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
- Tonghe Wang
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Xiaofeng Yang
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
  - School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA

45
Abdusalomov AB, Nasimov R, Nasimova N, Muminov B, Whangbo TK. Evaluating Synthetic Medical Images Using Artificial Intelligence with the GAN Algorithm. SENSORS (BASEL, SWITZERLAND) 2023; 23:3440. [PMID: 37050503 PMCID: PMC10098960 DOI: 10.3390/s23073440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Revised: 03/18/2023] [Accepted: 03/18/2023] [Indexed: 06/19/2023]
Abstract
In recent years, considerable work has been conducted on the development of synthetic medical images, but there are no satisfactory methods for evaluating their medical suitability. Existing methods mainly evaluate the quality of noise in the images and the similarity of the images to the real images used to generate them. For this purpose, they use feature maps of images extracted in different ways, or the distribution of the image set. Then, the proximity of the synthetic images to the real set is evaluated using different distance metrics. However, it is not possible to determine whether only one synthetic image was generated repeatedly, or whether the synthetic set exactly repeats the training set. In addition, most evaluation metrics take a long time to calculate. Taking these issues into account, we have proposed a method that can quantitatively and qualitatively evaluate synthetic images. This method is a combination of two methods, namely, FMD- and CNN-based evaluation methods. The estimation methods were compared with the FID method, and it was found that the FMD method has a great advantage in terms of speed, while the CNN method is able to estimate more accurately. To evaluate the reliability of the methods, a dataset of different real images was checked.
Affiliation(s)
- Rashid Nasimov
  - Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Nigorakhon Nasimova
  - Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Bahodir Muminov
  - Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Taeg Keun Whangbo
  - Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea

46
Artificial intelligence-aided method to detect uterine fibroids in ultrasound images: a retrospective study. Sci Rep 2023; 13:3714. [PMID: 36878941 PMCID: PMC9988965 DOI: 10.1038/s41598-022-26771-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2022] [Accepted: 12/20/2022] [Indexed: 03/08/2023] Open
Abstract
We explored a new artificial intelligence-assisted method to help junior ultrasonographers improve their diagnostic performance for uterine fibroids, and further compared it with senior ultrasonographers to confirm the effectiveness and feasibility of the artificial intelligence method. In this retrospective study, we collected a total of 3870 ultrasound images from 667 patients (mean age, 42.45 years ± 6.23 [SD]) with a pathologically confirmed diagnosis of uterine fibroids and 570 women (mean age, 39.24 years ± 5.32 [SD]) without uterine lesions from Shunde Hospital of Southern Medical University between 2015 and 2020. The deep convolutional neural network (DCNN) model was trained and developed on the training dataset (2706 images) and internal validation dataset (676 images). To evaluate the performance of the model on the external validation dataset (488 images), we assessed the diagnostic performance of the DCNN alongside ultrasonographers possessing different levels of seniority. The DCNN model aided the junior ultrasonographers (averaged) in diagnosing uterine fibroids with higher accuracy (94.72% vs. 86.63%, P < 0.001), sensitivity (92.82% vs. 83.21%, P = 0.001), specificity (97.05% vs. 90.80%, P = 0.009), positive predictive value (97.45% vs. 91.68%, P = 0.007), and negative predictive value (91.73% vs. 81.61%, P = 0.001) than they achieved alone. Their ability was comparable to that of senior ultrasonographers (averaged) in terms of accuracy (94.72% vs. 95.24%, P = 0.66), sensitivity (92.82% vs. 93.66%, P = 0.73), specificity (97.05% vs. 97.16%, P = 0.79), positive predictive value (97.45% vs. 97.57%, P = 0.77), and negative predictive value (91.73% vs. 92.63%, P = 0.75). The DCNN-assisted strategy can considerably improve the uterine fibroid diagnosis performance of junior ultrasonographers, making them more comparable to senior ultrasonographers.
47
Hampel H, Gao P, Cummings J, Toschi N, Thompson PM, Hu Y, Cho M, Vergallo A. The foundation and architecture of precision medicine in neurology and psychiatry. Trends Neurosci 2023; 46:176-198. [PMID: 36642626 PMCID: PMC10720395 DOI: 10.1016/j.tins.2022.12.004] [Citation(s) in RCA: 18] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Revised: 11/18/2022] [Accepted: 12/14/2022] [Indexed: 01/15/2023]
Abstract
Neurological and psychiatric diseases have high degrees of genetic and pathophysiological heterogeneity, irrespective of clinical manifestations. Traditional medical paradigms have focused on late-stage syndromic aspects of these diseases, with little consideration of the underlying biology. Advances in disease modeling and methodological design have paved the way for the development of precision medicine (PM), an established concept in oncology with growing attention from other medical specialties. We propose a PM architecture for central nervous system diseases built on four converging pillars: multimodal biomarkers, systems medicine, digital health technologies, and data science. We discuss Alzheimer's disease (AD), an area of significant unmet medical need, as a case in point for the proposed framework. AD can be seen as one of the most advanced PM-oriented disease models and as a compelling catalyst towards PM-oriented neuroscience drug development and advanced healthcare practice.
Affiliation(s)
- Harald Hampel
  - Alzheimer's Disease & Brain Health, Eisai Inc., Nutley, NJ, USA.
- Peng Gao
  - Alzheimer's Disease & Brain Health, Eisai Inc., Nutley, NJ, USA
- Jeffrey Cummings
  - Chambers-Grundy Center for Transformative Neuroscience, Department of Brain Health, School of Integrated Health Sciences, University of Nevada Las Vegas (UNLV), Las Vegas, NV, USA
- Nicola Toschi
  - Department of Biomedicine and Prevention, University of Rome Tor Vergata, Rome, Italy; Athinoula A. Martinos Center for Biomedical Imaging and Harvard Medical School, Boston, MA, USA
- Paul M Thompson
  - Imaging Genetics Center, Mark & Mary Stevens Institute for Neuroimaging & Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Yan Hu
  - Alzheimer's Disease & Brain Health, Eisai Inc., Nutley, NJ, USA
- Min Cho
  - Alzheimer's Disease & Brain Health, Eisai Inc., Nutley, NJ, USA
- Andrea Vergallo
  - Alzheimer's Disease & Brain Health, Eisai Inc., Nutley, NJ, USA

48
Douglass M, Gorayski P, Patel S, Santos A. Synthetic cranial MRI from 3D optical surface scans using deep learning for radiation therapy treatment planning. Phys Eng Sci Med 2023; 46:367-375. [PMID: 36752996 PMCID: PMC10030422 DOI: 10.1007/s13246-023-01229-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Accepted: 01/29/2023] [Indexed: 02/09/2023]
Abstract
BACKGROUND Optical scanning technologies are increasingly being utilised to supplement treatment workflows in radiation oncology, such as surface-guided radiotherapy or 3D printing custom bolus. One limitation of optical scanning devices is the absence of internal anatomical information of the patient being scanned. As a result, conventional radiation therapy treatment planning using this imaging modality is not feasible. Deep learning is useful for automating various manual tasks in radiation oncology, most notably, organ segmentation and treatment planning. Deep learning models have also been used to transform MRI datasets into synthetic CT datasets, facilitating the development of MRI-only radiation therapy planning. AIMS To train a pix2pix generative adversarial network to transform 3D optical scan data into estimated MRI datasets for a given patient to provide additional anatomical data for a select few radiation therapy treatment sites. The proposed network may provide useful anatomical information for treatment planning of surface mould brachytherapy, total body irradiation, and total skin electron therapy, for example, without delivering any imaging dose. METHODS A 2D pix2pix GAN was trained on 15,000 axial MRI slices of healthy adult brains paired with corresponding external mask slices. The model was validated on a further 5000 previously unseen external mask slices. The predictions were compared with the "ground-truth" MRI slices using the multi-scale structural similarity index (MSSI) metric. A certified neuro-radiologist was subsequently consulted to provide an independent review of the model's performance in terms of anatomical accuracy and consistency. The network was then applied to a 3D photogrammetry scan of a test subject to demonstrate the feasibility of this novel technique. 
RESULTS The trained pix2pix network predicted MRI slices with a mean MSSI of 0.831 ± 0.057 for the 5000 validation images indicating that it is possible to estimate a significant proportion of a patient's gross cranial anatomy from a patient's exterior contour. When independently reviewed by a certified neuro-radiologist, the model's performance was described as "quite amazing, but there are limitations in the regions where there is wide variation within the normal population." When the trained network was applied to a 3D model of a human subject acquired using optical photogrammetry, the network could estimate the corresponding MRI volume for that subject with good qualitative accuracy. However, a ground-truth MRI baseline was not available for quantitative comparison. CONCLUSIONS A deep learning model was developed, to transform 3D optical scan data of a patient into an estimated MRI volume, potentially increasing the usefulness of optical scanning in radiation therapy planning. This work has demonstrated that much of the human cranial anatomy can be predicted from the external shape of the head and may provide an additional source of valuable imaging data. Further research is required to investigate the feasibility of this approach for use in a clinical setting and further improve the model's accuracy.
Affiliation(s)
- Michael Douglass
  - Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia.
  - Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia.
  - School of Physical Sciences, University of Adelaide, Adelaide, SA, 5005, Australia.
- Peter Gorayski
  - Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia
  - Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia
  - University of South Australia, Allied Health & Human Performance, Adelaide, SA, 5000, Australia
- Sandy Patel
  - Department of Radiology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia
- Alexandre Santos
  - Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia
  - Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia
  - School of Physical Sciences, University of Adelaide, Adelaide, SA, 5005, Australia

49
Chen X, Cao Y, Zhang K, Wang Z, Xie X, Wang Y, Men K, Dai J. Technical note: A method to synthesize magnetic resonance images in different patient rotation angles with deep learning for gantry-free radiotherapy. Med Phys 2023; 50:1746-1755. [PMID: 36135718 DOI: 10.1002/mp.15981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Revised: 08/29/2022] [Accepted: 08/31/2022] [Indexed: 11/11/2022] Open
Abstract
BACKGROUND Recently, patient rotating devices for gantry-free radiotherapy, a new approach to implement external beam radiotherapy, have been introduced. When a patient is rotated in the horizontal position, gravity causes anatomic deformation. For treatment planning, one feasible method is to acquire simulation images at different horizontal rotation angles. PURPOSE This study aimed to investigate the feasibility of synthesizing magnetic resonance (MR) images at patient rotation angles of 180° (prone position) and 90° (lateral position) from those at a rotation angle of 0° (supine position) using deep learning. METHODS This study included 23 healthy male volunteers. They underwent MR imaging (MRI) in the supine position and then in the prone (23 volunteers) and lateral (16 volunteers) positions. T1-weighted fast spin echo was performed for all positions with the same parameters. Two two-dimensional deep learning networks, pix2pix generative adversarial network (pix2pix GAN) and CycleGAN, were developed for synthesizing MR images in the prone and lateral positions from those in the supine position, respectively. For the evaluation of the models, leave-one-out cross-validation was performed. The mean absolute error (MAE), Dice similarity coefficient (DSC), and Hausdorff distance (HD) were used to determine the agreement between the prediction and ground truth for the entire body and four specific organs. RESULTS For pix2pix GAN, the synthesized images were visually bad, and no quantitative evaluation was performed. The quantitative evaluation metrics of the body outlines calculated for the synthesized prone and lateral images using CycleGAN were as follows: MAE, 35.63 ± 3.98 and 40.45 ± 5.83, respectively; DSC, 0.97 ± 0.01 and 0.94 ± 0.01, respectively; and HD (in pixels), 16.74 ± 3.55 and 31.69 ± 12.03, respectively. 
The quantitative metrics for the bladder and prostate were also promising for both the prone and lateral images, with mean DSC values >0.90 (p > 0.05). The mean DSC and HD values of the bilateral femur were 0.96 and 3.63 pixels, respectively, for the prone images, and 0.78 and 12.65 pixels, respectively, for the lateral images (p < 0.05). CONCLUSIONS CycleGAN could synthesize MR images in the lateral and prone positions from images acquired in the supine position, which could benefit gantry-free radiation therapy.
Collapse
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- National Cancer Center/National Clinical Research Center for Cancer/Hebei Cancer Hospital, Chinese Academy of Medical Sciences, Langfang, China
- Ying Cao
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kaixuan Zhang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhen Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xuejie Xie
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yunxiang Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
50
Fu Y, Dong S, Niu M, Xue L, Guo H, Huang Y, Xu Y, Yu T, Shi K, Yang Q, Shi Y, Zhang H, Tian M, Zhuo C. AIGAN: Attention-encoding Integrated Generative Adversarial Network for the reconstruction of low-dose CT and low-dose PET images. Med Image Anal 2023; 86:102787. [PMID: 36933386 DOI: 10.1016/j.media.2023.102787] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 11/05/2022] [Accepted: 02/22/2023] [Indexed: 03/04/2023]
Abstract
X-ray computed tomography (CT) and positron emission tomography (PET) are two of the most commonly used medical imaging technologies for the evaluation of many diseases. Full-dose imaging for CT and PET ensures image quality but usually raises concerns about the potential health risks of radiation exposure. The contradiction between reducing radiation exposure and maintaining diagnostic performance can be addressed effectively by reconstructing low-dose CT (L-CT) and low-dose PET (L-PET) images to the quality of their full-dose counterparts (F-CT and F-PET). In this paper, we propose an Attention-encoding Integrated Generative Adversarial Network (AIGAN) to achieve efficient and universal full-dose reconstruction for L-CT and L-PET images. AIGAN consists of three modules: the cascade generator, the dual-scale discriminator, and the multi-scale spatial fusion module (MSFM). A sequence of consecutive L-CT (L-PET) slices is first fed into the cascade generator, which is built around a generation-encoding-generation pipeline. The generator plays a zero-sum game with the dual-scale discriminator over two stages: a coarse stage and a fine stage. In both stages, the generator produces estimated F-CT (F-PET) images that are as close to the original F-CT (F-PET) images as possible. After the fine stage, the estimated fine full-dose images are fed into the MSFM, which fully explores the inter- and intra-slice structural information, to output the final generated full-dose images. Experimental results show that the proposed AIGAN achieves state-of-the-art performance on commonly used metrics and satisfies the reconstruction needs of clinical standards.
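The coarse-to-fine generator objective described in this abstract can be illustrated with a toy numpy sketch. Everything here is hypothetical stand-in code: the simple smoothing functions `coarse_g` and `fine_g` take the place of the real cascade generator, the low-dose input is simulated by adding noise, and the adversarial terms from the dual-scale discriminator are omitted, leaving only per-stage L1 fidelity terms.

```python
import numpy as np

rng = np.random.default_rng(0)

def l1(a, b):
    """L1 fidelity between an estimate and the full-dose target."""
    return np.abs(a - b).mean()

# Hypothetical stand-ins for the coarse and fine generator stages
# (the paper uses deep networks in a generation-encoding-generation pipeline).
def coarse_g(x):
    return (x + np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0)) / 3.0

def fine_g(x):
    return 0.5 * x + 0.5 * np.roll(x, 1, axis=1)

full = rng.random((8, 8))                          # "full-dose" target slice
low = full + 0.1 * rng.standard_normal((8, 8))     # simulated low-dose input

coarse = coarse_g(low)       # coarse-stage estimate
fine = fine_g(coarse)        # fine-stage estimate refines the coarse one

# Generator fidelity loss summed over both stages; a real training loop
# would add the adversarial losses from the dual-scale discriminator.
loss = l1(coarse, full) + l1(fine, full)
```

The point of the two-stage structure is that the fine stage only has to correct the residual error left by the coarse stage, which is what lets the final estimate be passed on to the fusion module (MSFM) for inter-slice refinement.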
Affiliation(s)
- Yu Fu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China; Binjiang Institute, Zhejiang University, Hangzhou, China
- Shunjie Dong
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Meng Niu
- Department of Radiology, The First Hospital of Lanzhou University, Lanzhou, China
- Le Xue
- Department of Nuclear Medicine and Medical PET Center, The Second Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Hanning Guo
- Institute of Neuroscience and Medicine, Medical Imaging Physics (INM-4), Forschungszentrum Jülich, Jülich, Germany
- Yanyan Huang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yuanfan Xu
- Hangzhou Universal Medical Imaging Diagnostic Center, Hangzhou, China
- Tianbai Yu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Kuangyu Shi
- Department of Nuclear Medicine, University Hospital Bern, Bern, Switzerland
- Qianqian Yang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yiyu Shi
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA
- Hong Zhang
- Binjiang Institute, Zhejiang University, Hangzhou, China; Department of Nuclear Medicine and Medical PET Center, The Second Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Mei Tian
- Human Phenome Institute, Fudan University, Shanghai, China
- Cheng Zhuo
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China; Key Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province, Hangzhou, China