1. Sayaque L, Leporq B, Bouyer C, Pilleul F, Hamelin O, Gregoire V, Beuf O. Magnetic resonance imaging with ultra-short echo time sequence for head and neck radiotherapy planning. Phys Med 2025; 133:104974. PMID: 40209545. DOI: 10.1016/j.ejmp.2025.104974. Open access.
Abstract
BACKGROUND Radiotherapy treatments are usually planned on computed tomography (CT) images. For head and neck localizations, magnetic resonance imaging (MRI) is also increasingly used for delineation as it provides better soft-tissue contrast. PURPOSE Treatment planning based exclusively on MRI is currently not straightforward, as there is no direct link between MRI signal intensity and electron density. This study aims to generate treatment plans from a UTE sequence for regions with high anatomical variability. METHODS An ultra-short echo time pulse sequence (1H MRI UTE) was performed on 25 patients with head and neck cancers treatable by radiotherapy (protocol number R201-004-314), without exclusion for dental-induced artifacts. The hydrogen tissue content measurable with this sequence can be linked to the electron density of tissues. Generated synthetic CT (sCT) images were compared with the reference CT using mean absolute error (MAE) computation. Patient dose calculations were performed on CT and sCT and compared using dose differences, Bland-Altman analysis, and global gamma pass rate computation. RESULTS The mean MAE was 210.9 HU across all patients. The mean 3D global gamma pass rates were 93.1%, 89.2% and 80.9% for the 3%/3 mm, 2%/2 mm and 1%/1 mm criteria, respectively. The mean of the median dose difference for the planning target volume (PTV) was 0.87% of 70 Gy. CONCLUSIONS The UTE sequence enables a direct, physics-based method suitable for radiotherapy planning. The proposed method, based on a dedicated acquisition sequence compatible with clinical scan durations, provided dosimetry results similar to the reference CT in a region with high anatomical variability.
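The MAE figure quoted here is a voxel-wise comparison in Hounsfield units between the synthetic and reference CT. A minimal sketch of the metric (the function name, array shapes, and optional body-contour mask are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def mae_hu(sct, ct, mask=None):
    """Mean absolute error in HU between a synthetic CT and the reference CT.

    sct, ct : arrays of HU values on the same grid (after registration).
    mask    : optional boolean array restricting the comparison,
              e.g. to the patient's body contour.
    """
    sct = np.asarray(sct, dtype=float)
    ct = np.asarray(ct, dtype=float)
    diff = np.abs(sct - ct)
    if mask is not None:
        diff = diff[mask]
    return float(diff.mean())

# Toy example: a 2x2 slice with a uniform +10 HU offset.
print(mae_hu([[10, 20], [30, 40]], [[0, 10], [20, 30]]))  # 10.0
```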
Affiliation(s)
- Laura Sayaque
- INSA-Lyon, Universite Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69100 Lyon, France
- Benjamin Leporq
- INSA-Lyon, Universite Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69100 Lyon, France
- Charlène Bouyer
- CRLCC Léon Bérard - Département de Radiothérapie, Lyon 69008, France; CRLCC Léon Bérard - Département de Radiologie, Lyon 69008, France
- Frank Pilleul
- INSA-Lyon, Universite Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69100 Lyon, France; CRLCC Léon Bérard - Département de Radiologie, Lyon 69008, France
- Olivier Hamelin
- CRLCC Léon Bérard - Département de Radiologie, Lyon 69008, France
- Vincent Gregoire
- CRLCC Léon Bérard - Département de Radiothérapie, Lyon 69008, France
- Olivier Beuf
- INSA-Lyon, Universite Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69100 Lyon, France
2. Chen J, Ye Z, Zhang R, Li H, Fang B, Zhang LB, Wang W. Medical image translation with deep learning: Advances, datasets and perspectives. Med Image Anal 2025; 103:103605. PMID: 40311301. DOI: 10.1016/j.media.2025.103605.
Abstract
Traditional medical image generation often lacks patient-specific clinical information, limiting its clinical utility despite enhancing downstream task performance. In contrast, medical image translation precisely converts images from one modality to another, preserving both anatomical structures and cross-modal features, thus enabling efficient and accurate modality transfer and offering unique advantages for model development and clinical practice. This paper reviews the latest advancements in deep learning (DL)-based medical image translation. It first elaborates on the diverse tasks and practical applications of medical image translation, then provides an overview of fundamental models, including convolutional neural networks (CNNs), transformers, and state space models (SSMs). It also covers generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), autoregressive models (ARs), diffusion models, and flow models. Evaluation metrics for assessing translation quality are discussed, emphasizing their importance, and commonly used datasets in this field are analyzed, highlighting their characteristics and applications. Looking ahead, the paper identifies future trends and challenges and proposes research directions and solutions in medical image translation, aiming to serve as a valuable reference and inspiration for researchers, driving continued progress and innovation in this area.
Affiliation(s)
- Junxin Chen
- School of Software, Dalian University of Technology, Dalian 116621, China
- Zhiheng Ye
- School of Software, Dalian University of Technology, Dalian 116621, China
- Renlong Zhang
- Institute of Research and Clinical Innovations, Neusoft Medical Systems Co., Ltd., Beijing, China
- Hao Li
- School of Computing Science, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- Bo Fang
- School of Computer Science, The University of Sydney, Sydney, NSW 2006, Australia
- Li-Bo Zhang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang 110840, China
- Wei Wang
- Guangdong-Hong Kong-Macao Joint Laboratory for Emotion Intelligence and Pervasive Computing, Artificial Intelligence Research Institute, Shenzhen MSU-BIT University, Shenzhen 518172, China; School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
3. Antunes J, Young T, Pittock D, Jacobs P, Nelson A, Piper J, Deshpande S. Assessing multiple MRI sequences in deep learning-based synthetic CT generation for MR-only radiation therapy of head and neck cancers. Radiother Oncol 2025; 205:110782. PMID: 39929288. DOI: 10.1016/j.radonc.2025.110782.
Abstract
PURPOSE This study investigated the effect of multiple magnetic resonance (MR) sequences on the quality of deep-learning-based synthetic computed tomography (sCT) generation in the head and neck region. MATERIALS AND METHODS Twelve MR series (pre-contrast T1, post-contrast T1, and T2, each with 4 Dixon images) were collected from 26 patients with head and neck cancers. Fourteen unique deep-learning models using the U-Net framework were trained with multiple MR series as inputs to generate sCTs. Mean absolute error (MAE), Dice similarity coefficient (DSC), and gamma pass rates were used to compare sCTs to the actual CT across the different multi-channel MR-sCT models. RESULTS Using all available MR series yielded sCTs with the lowest pixel-wise error (MAE = 80.5 ± 9.9 HU), but increasing the number of input channels also increased artificial tissue, which led to poorer auto-contouring and lower dosimetric accuracy. Models with T2 protocols generally resulted in poorer-quality sCTs. Pre-contrast T1 with all Dixon images was the best multi-channel MR-sCT model, consistently ranking high on all sCT quality measurements (average DSC across all structures = 80.0% ± 13.6%, global gamma pass rate = 97.9% ± 1.7% at the 2%/2 mm dose criterion with a 20% of maximum dose threshold). CONCLUSIONS Deep-learning networks using all Dixon images from a pre-contrast T1 sequence as multi-channel inputs produced the most clinically viable sCTs. The proposed method may enable MR-only radiotherapy planning in a clinical setting for head and neck cancers.
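The Dice similarity coefficient used above to score contours compares two binary masks: twice the overlap divided by the sum of the mask sizes. A minimal sketch of the metric (illustrative, not the study's implementation; the empty-mask convention is an assumption):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

print(dice([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1/(2+1) ≈ 0.667
```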
Affiliation(s)
- Tony Young
- Liverpool and Macarthur Cancer Therapy Centres, Sydney, Australia; Ingham Institute, Sydney, Australia
- Paul Jacobs
- MIM Software Inc, Cleveland, OH, United States
- Jon Piper
- MIM Software Inc, Cleveland, OH, United States
- Shrikant Deshpande
- Ingham Institute, Sydney, Australia; South Western Sydney Clinical School, University of New South Wales, Sydney, Australia
4. Lauwers I, Capala M, Kaushik S, Ruskó L, Cozzini C, Kleijnen JP, Wyatt J, McCallum H, Verduijn G, Wiesinger F, Hernandez-Tamames J, Petit S. Synthetic CT generation using Zero TE MR for head-and-neck radiotherapy. Radiother Oncol 2025; 205:110762. PMID: 39889967. DOI: 10.1016/j.radonc.2025.110762.
Abstract
BACKGROUND AND PURPOSE MRI-based synthetic CTs (synCTs) show promise to replace planning CT scans in various anatomical regions. However, the head-and-neck region remains challenging because of patient-specific air, bone and soft-tissue interfaces and oropharyngeal cavities. Zero-echo-time (ZTE) MRI can be fast and silent, accurately discriminates bone and air, and could potentially yield high dose-calculation accuracy, but it is relatively unexplored for the head-and-neck region. Here, we prospectively evaluated the dosimetric accuracy of a novel, fast ZTE sequence for synCT generation. MATERIALS AND METHODS The method was developed on 127 patients and validated in an independent test set (n = 17). SynCTs were generated from ZTE MRIs using a multi-task 2D U-net (scanning time: 2:33 min for the normal scan or 56 s for the accelerated scan). Clinical treatment plans were recalculated on the synCT, and Hounsfield unit (HU) values and dose-volume-histogram metrics were compared between synCT and CT. Subsequently, synthetic treatment plans were generated to systematically assess dosimetric accuracy in different anatomical regions using dose-volume-histogram metrics. RESULTS The mean absolute error between the synCT and CT was 94 ± 11 HU inside the patient contour. For the clinical plans, 98.8% of PTV metrics deviated by less than 2% between synCT and CT, and all OAR metrics deviated by less than 1 Gy. The synthetic plans showed larger dose differences depending on the location of the PTV. CONCLUSIONS Excellent dose agreement was found between the CT and a ZTE-MR-based synCT in the head-and-neck region based on clinical plans. Synthetic plans are an important addition to clinical plans for evaluating the dosimetric accuracy of synCT scans.
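Dose-volume-histogram metrics such as those compared above can be read directly off a dose array and a structure mask: D_x is the dose received by at least x% of the structure's volume. A minimal sketch under the assumption of uniform voxel volumes (function name and toy values are illustrative, not the study's code):

```python
import numpy as np

def d_metric(dose, mask, volume_pct):
    """D_x: dose (Gy) received by at least x% of the structure's volume.

    Equivalent to the (100 - x)th percentile of the voxel doses inside
    the mask, assuming uniform voxel volumes.
    """
    voxels = np.asarray(dose, dtype=float)[np.asarray(mask, dtype=bool)]
    return float(np.percentile(voxels, 100.0 - volume_pct))

dose = np.array([60.0, 65.0, 68.0, 70.0, 72.0])
ptv = np.array([True] * 5)
print(d_metric(dose, ptv, 50))  # D50: median dose, 68.0
```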
Affiliation(s)
- Iris Lauwers
- Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, the Netherlands
- Marta Capala
- Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, the Netherlands
- Sandeep Kaushik
- GE HealthCare, Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- László Ruskó
- GE Healthcare Magyarország Kft., Budapest, Hungary
- Jean-Paul Kleijnen
- Department of Medical Physics, Haaglanden MC, The Hague, the Netherlands
- Jonathan Wyatt
- Translational and Clinical Research Institute, Newcastle University, Newcastle, UK; Northern Centre for Cancer Care, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle, UK
- Hazel McCallum
- Translational and Clinical Research Institute, Newcastle University, Newcastle, UK; Northern Centre for Cancer Care, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle, UK
- Gerda Verduijn
- Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, the Netherlands
- Juan Hernandez-Tamames
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands; Department of Imaging Physics, TU Delft, Delft, the Netherlands
- Steven Petit
- Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, the Netherlands
5. Wang Y, Li K, Zhang R, Fan Y, Huang L, Zhou F. GraCEImpute: A novel graph clustering autoencoder approach for imputation of single-cell RNA-seq data. Comput Biol Med 2025; 184:109400. PMID: 39561511. DOI: 10.1016/j.compbiomed.2024.109400.
Abstract
Single-cell RNA sequencing (scRNA-seq) technology establishes a unique view for elucidating cellular heterogeneity in various biological systems. Yet scRNA-seq data are compromised by a high dropout rate due to technological limitations, and the substantial data loss poses computational challenges for subsequent analyses. This study introduces a novel graph clustering autoencoder (GCAE)-based imputation approach (GraCEImpute) to address the challenge of missing data in scRNA-seq. Our comprehensive evaluation demonstrates that the GraCEImpute model outperforms existing approaches in accurately imputing dropout zeros within scRNA-seq data. The GraCEImpute model also significantly enhances the quality of downstream scRNA-seq analyses, including clustering, differential gene expression (DEG) analysis, and cell trajectory inference. These improvements underscore the model's potential to facilitate a deeper understanding of cellular processes and heterogeneity through scRNA-seq data analyses. The source code is released at https://www.healthinformaticslab.org/supp/.
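The idea of imputing dropout zeros from a cell-similarity graph can be illustrated with a far simpler scheme than the paper's GCAE: replace each zero with the mean of the same gene across the cell's nearest-neighbor cells. This sketch is only a conceptual stand-in, not the GraCEImpute model:

```python
import numpy as np

def neighbor_impute(X, k=2):
    """Impute zeros in a cells x genes matrix from k nearest-neighbor cells.

    Neighbors are found by Euclidean distance between cell expression
    profiles; each zero entry is replaced by the mean of that gene over
    the cell's k neighbors (stays zero if the neighbors are zero too).
    """
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # Pairwise Euclidean distances between cells; exclude self-matches.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    out = X.copy()
    for i in range(n):
        nbrs = np.argsort(d[i])[:k]
        zeros = X[i] == 0
        out[i, zeros] = X[nbrs][:, zeros].mean(axis=0)
    return out

X = np.array([[5.0, 0.0], [4.0, 3.0], [6.0, 2.0]])
print(neighbor_impute(X, k=2))  # cell 0's zero becomes 2.5 (mean of 3 and 2)
```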
Affiliation(s)
- Yueying Wang
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Kewei Li
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Ruochi Zhang
- School of Artificial Intelligence, Jilin University, Changchun, 130012, China
- Yusi Fan
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Lan Huang
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Fengfeng Zhou
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China; School of Biology and Engineering, Guizhou Medical University, Guiyang, 550025, Guizhou, China
6. Chauhan V, Harikishore K, Girdhar S, Kaushik S, Wiesinger F, Cozzini C, Carl M, Fung M, Mehta BB, Thomas B, Kesavadas C. Utility of zero echo time (ZTE) sequence for assessing bony lesions of skull base and calvarium. Clin Radiol 2024; 79:e1504-e1513. PMID: 39322533. DOI: 10.1016/j.crad.2024.08.029.
Abstract
BACKGROUND The emergence of zero echo time (ZTE) imaging has transformed bone imaging, overcoming historical limitations in capturing detailed bone structures. By minimizing the time gap between radiofrequency excitation and data acquisition, ZTE generates CT-like images. While ZTE has shown promise in various applications, its potential in assessing skull base and calvarium lesions remains unexplored. We therefore introduce a novel perspective by investigating the utility of inverted ZTE (iZTE) and pseudo-CT (pCT) images for studying lesions of the skull base and calvarium. MATERIALS AND METHODS A total of 35 eligible patients, with an average age of 42 years and a male-to-female ratio of 1:4, underwent ZTE MRI; iZTE and pCT images were generated through a series of steps including intensity equalization, thresholding, and deep learning-based pCT generation. These images were then compared to CT scans using a rating scale; the inter-rater kappa coefficient evaluated observer consensus, while statistical metrics such as sensitivity and specificity assessed performance in capturing bone-related characteristics. RESULTS The study revealed excellent interobserver agreement for lesion assessment with both pCT and iZTE imaging, with kappa coefficients of 0.91 and 0.92, respectively (both P < 0.0001). pCT and iZTE also accurately predicted various lesion characteristics, with sensitivities ranging from 84.3% to 95.1% and 82.6% to 94.2% (95% CI) and diagnostic accuracies of 95.56% and 94.44%, respectively. Although both encountered challenges with ground-glass change, hyperostosis, and intralesional bony fragments, they performed well in other bony lesion assessments. CONCLUSIONS This pilot study suggests strong potential for integrating ZTE imaging into standard care for the assessment of skull base and calvarial bony lesions. Larger-scale studies are needed for a comprehensive assessment of its efficacy.
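The inter-rater kappa coefficient quoted above measures agreement between two observers beyond what chance would produce. A minimal sketch of Cohen's kappa for two raters' categorical labels (illustrative only; the toy labels are assumptions, not study data):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    assert len(r1) == len(r2)
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement from each rater's marginal label frequencies.
    p_exp = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

# Two raters agreeing on 4 of 5 lesions.
print(cohens_kappa(["yes", "yes", "no", "no", "yes"],
                   ["yes", "yes", "no", "no", "no"]))
```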
Affiliation(s)
- V Chauhan
- Sree Chitra Tirunal Institute of Medical Sciences and Technology, Trivandrum, Kerala, India
- K Harikishore
- Sree Chitra Tirunal Institute of Medical Sciences and Technology, Trivandrum, Kerala, India
- S Girdhar
- Sree Chitra Tirunal Institute of Medical Sciences and Technology, Trivandrum, Kerala, India
- B Thomas
- Sree Chitra Tirunal Institute of Medical Sciences and Technology, Trivandrum, Kerala, India
- C Kesavadas
- Sree Chitra Tirunal Institute of Medical Sciences and Technology, Trivandrum, Kerala, India
7. Dai L, Md Johar MG, Alkawaz MH. The diagnostic value of MRI segmentation technique for shoulder joint injuries based on deep learning. Sci Rep 2024; 14:28885. PMID: 39572780. PMCID: PMC11582322. DOI: 10.1038/s41598-024-80441-y. Open access.
Abstract
This work investigates the diagnostic value of a deep learning-based magnetic resonance imaging (MRI) image segmentation (IS) technique for shoulder joint injuries (SJIs) in swimmers. A novel multi-scale feature fusion network (MSFFN) is developed by optimizing and integrating the AlexNet and U-Net algorithms for the segmentation of MRI images of the shoulder joint. The model is evaluated using metrics such as the Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity (SE). A cohort of 52 swimmers with SJIs from Guangzhou Hospital serves as the subjects of this study, wherein the accuracy of the developed shoulder joint MRI IS model in diagnosing swimmers' SJIs is analyzed. The results reveal that the DSC for segmenting joint bones in MRI images with the MSFFN algorithm is 92.65%, with a PPV of 95.83% and SE of 96.30%. Similarly, the DSC for segmenting humerus bones is 92.93%, with a PPV of 95.56% and SE of 92.78%. The MRI IS algorithm exhibits an accuracy of 86.54% in diagnosing types of SJIs in swimmers, surpassing the conventional diagnostic accuracy of 71.15%. The consistency between the diagnostic results for complete tear, superior surface tear, inferior surface tear, and intratendinous tear of SJIs in swimmers and arthroscopic diagnoses yields a kappa value of 0.785 and an accuracy of 87.89%. These findings underscore the significant diagnostic value and potential of the MSFFN-based MRI IS technique in diagnosing SJIs in swimmers.
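The PPV and SE figures reported above follow directly from the usual confusion counts: PPV = TP/(TP+FP) and sensitivity = TP/(TP+FN). A minimal sketch (the counts are hypothetical, chosen only to illustrate the arithmetic, and this is not the study's code):

```python
def ppv_sensitivity(tp, fp, fn):
    """PPV = TP/(TP+FP): fraction of predicted positives that are correct.
    Sensitivity = TP/(TP+FN): fraction of actual positives that are found."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts: 92 true positives, 4 false positives, 8 false negatives.
ppv, se = ppv_sensitivity(92, 4, 8)
print(round(ppv, 4), round(se, 4))  # 0.9583 0.92
```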
Affiliation(s)
- Lina Dai
- School of Information Technology and Engineering, Guangzhou College of Commerce, Guangzhou, China
- School of Graduate Studies, Management and Science University, Shah Alam, 40100, Selangor, Malaysia
- Md Gapar Md Johar
- Software Engineering and Digital Innovation Center, Management and Science University, Shah Alam, 40100, Selangor, Malaysia
- Mohammed Hazim Alkawaz
- Department of Computer Science, College of Education for Pure Science, University of Mosul, Mosul, Nineveh, Iraq
8. Bahloul MA, Jabeen S, Benoumhani S, Alsaleh HA, Belkhatir Z, Al-Wabil A. Advancements in synthetic CT generation from MRI: A review of techniques, and trends in radiation therapy planning. J Appl Clin Med Phys 2024; 25:e14499. PMID: 39325781. PMCID: PMC11539972. DOI: 10.1002/acm2.14499. Open access.
Abstract
BACKGROUND Magnetic resonance imaging (MRI) and computed tomography (CT) are crucial imaging techniques in both diagnostic imaging and radiation therapy. MRI provides excellent soft tissue contrast but lacks the direct electron density data needed to calculate dose. CT remains the gold standard in radiation therapy planning (RTP) because of its accurate electron density information, but it exposes patients to ionizing radiation. Synthetic CT (sCT) generation from MRI has been a focus of study in recent years owing to its cost-effectiveness and the objective of minimizing the side effects of using more than one imaging modality for treatment simulation. It offers significant time and cost efficiencies, bypasses the complexities of co-registration, and can potentially improve treatment accuracy by minimizing registration-related errors. This paper investigates recent advancements in sCT generation techniques, particularly those using machine learning (ML) and deep learning (DL), highlighting their potential to improve the efficiency and accuracy of sCT generation for RTP, improving patient care and reducing healthcare costs. PURPOSE This review aims to provide an overview of the most recent advancements in sCT generation from MRI, with a particular focus on its use within RTP, emphasizing techniques, performance evaluation, clinical applications, future research trends and open challenges in the field. METHODS A thorough search strategy was employed to conduct a systematic literature review across major scientific databases. Focusing on the past decade's advancements, this review critically examines approaches introduced from 2013 to 2023 for generating sCT from MRI, providing a comprehensive analysis of their methodologies. The synthesis process classified the identified approaches, contrasted their advantages and disadvantages, and identified broad trends, significant contributions, and challenges within RTP. RESULTS The review identifies various sCT generation approaches, comprising atlas-based, segmentation-based, multi-modal fusion, hybrid, and ML/DL-based techniques. These approaches are evaluated for image quality, dosimetric accuracy, and clinical acceptability, and are used for MRI-only radiation treatment, adaptive radiotherapy, and MR/PET attenuation correction. Each methodology has its own advantages and limitations. Emerging trends incorporate the integration of advanced imaging inputs, including various MRI sequences such as Dixon, T1-weighted (T1W) and T2-weighted (T2W) images, as well as hybrid approaches for enhanced accuracy. CONCLUSIONS Reviewing 2013-2023 studies on MRI-to-sCT generation, this work aims to advance RTP by reducing the use of ionizing radiation and improving patient outcomes. It provides insights for researchers and practitioners, emphasizing the need for standardized validation procedures and collaborative efforts to refine methods and address limitations, and anticipates the continued evolution of techniques to improve the precision of sCT in RTP.
Affiliation(s)
- Mohamed A. Bahloul
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Translational Biomedical Engineering Research Lab, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Saima Jabeen
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Translational Biomedical Engineering Research Lab, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Sara Benoumhani
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Zehor Belkhatir
- School of Electronics and Computer Science, University of Southampton, Southampton, UK
- Areej Al-Wabil
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
9. Wang N, Jin Z, Liu F, Chen L, Zhao Y, Lin L, Liu A, Song Q. Bone injury imaging in knee and ankle joints using fast-field-echo resembling a CT using restricted echo-spacing MRI: a feasibility study. Front Endocrinol (Lausanne) 2024; 15:1421876. PMID: 39072275. PMCID: PMC11273369. DOI: 10.3389/fendo.2024.1421876. Open access.
Abstract
Purpose To explore the consistency of FRACTURE (fast-field-echo resembling a CT using restricted echo-spacing) MRI with X-ray/computed tomography (CT) in the evaluation of bone injuries in knee and ankle joints. Methods From November 2020 to July 2023, 42 patients with knee or ankle joint injuries who underwent FRACTURE MRI examinations were retrospectively collected; 11 patients were examined by both X-ray and FRACTURE, and 31 patients by both CT and FRACTURE. Fracture, osteophyte, and bone destruction of the joints were evaluated by two radiologists using X-ray/CT and FRACTURE images, respectively. The kappa test was used for consistency analysis. Results The consistency of fracture, osteophyte and bone destruction evaluation between X-ray and FRACTURE images was 0.879, 0.867 and 0.847, respectively, for radiologist 1, and 0.899, 0.930 and 0.879, respectively, for radiologist 2. The consistency between CT and FRACTURE images was 0.938, 0.937 and 0.868, respectively, for radiologist 1, and 0.961, 0.930 and 0.818, respectively, for radiologist 2. Conclusion For fracture, osteophyte, and bone destruction of knee and ankle joints, FRACTURE MRI showed high consistency with X-ray/CT examinations.
Affiliation(s)
- Nan Wang
- Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Zhengshi Jin
- Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Funing Liu
- Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Lihua Chen
- Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Ying Zhao
- Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Liangjie Lin
- Clinical and Technical Support, Philips Healthcare, Beijing, China
- Ailian Liu
- Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Qingwei Song
- Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
10. Singhrao K, Dugan CL, Calvin C, Pelayo L, Yom SS, Chan JW, Scholey JE, Singer L. Evaluating the Hounsfield unit assignment and dose differences between CT-based standard and deep learning-based synthetic CT images for MRI-only radiation therapy of the head and neck. J Appl Clin Med Phys 2024; 25:e14239. PMID: 38128040. PMCID: PMC10795453. DOI: 10.1002/acm2.14239. Open access.
Abstract
BACKGROUND Magnetic resonance image only (MRI-only) simulation for head and neck (H&N) radiotherapy (RT) could allow for single-image modality planning with excellent soft tissue contrast. In the MRI-only simulation workflow, synthetic computed tomography (sCT) is generated from MRI to provide electron density information for dose calculation. Bone/air regions produce little MRI signal which could lead to electron density misclassification in sCT. Establishing the dosimetric impact of this error could inform quality assurance (QA) procedures using MRI-only RT planning or compensatory methods for accurate dosimetric calculation. PURPOSE The aim of this study was to investigate if Hounsfield unit (HU) voxel misassignments from sCT images result in dosimetric errors in clinical treatment plans. METHODS Fourteen H&N cancer patients undergoing same-day CT and 3T MRI simulation were retrospectively identified. MRI was deformed to the CT using multimodal deformable image registration. sCTs were generated from T1w DIXON MRIs using a commercially available deep learning-based generator (MRIplanner, Spectronic Medical AB, Helsingborg, Sweden). Tissue voxel assignment was quantified by creating a CT-derived HU threshold contour. CT/sCT HU differences for anatomical/target contours and tissue classification regions including air (<250 HU), adipose tissue (-250 HU to -51 HU), soft tissue (-50 HU to 199 HU), spongy (200 HU to 499 HU) and cortical bone (>500 HU) were quantified. t-test was used to determine if sCT/CT HU differences were significant. The frequency of structures that had a HU difference > 80 HU (the CT window-width setting for intra-cranial structures) was computed to establish structure classification accuracy. Clinical intensity modulated radiation therapy (IMRT) treatment plans created on CT were retrospectively recalculated on sCT images and compared using the gamma metric. 
RESULTS The mean ratios of sCT HUs relative to CT for air, adipose tissue, soft tissue, spongy and cortical bone were 1.7 ± 0.3, 1.1 ± 0.1, 1.0 ± 0.1, 0.9 ± 0.1 and 0.8 ± 0.1 (a value of 1 indicates perfect agreement). T-tests (significance level set at 0.05) identified differences in HU values for air, spongy and cortical bone in sCT images compared to CT. The structures of note with sCT/CT HU differences > 80 HU were the left and right (L/R) cochlea and mandible (>79% of the tested cohort), the oral cavity (57% of the tested cohort), the epiglottis (43% of the tested cohort) and the L/R TM joints (occurring in >29% of the cohort). In the case of the cochlea and TM joints, these structures contain dense bone/air interfaces. In the case of the oral cavity and mandible, these structures face the additional challenge of being positionally altered between CT and MRI simulation (the immobilizing bite block is not MR-safe and must be absent in MR). Finally, the epiglottis HU assignment suffers from its small size and unstable position. Plans recalculated on sCT yielded global/local gamma pass rates of 95.5% ± 2% (3 mm, 3%) and 92.7% ± 2.1% (2 mm, 2%). The largest mean differences in D95, Dmean, and D50 dose volume histogram (DVH) metrics for organ-at-risk (OAR) and planning tumor volumes (PTVs) were 2.3% ± 3.0% and 0.7% ± 1.9% respectively. CONCLUSIONS In this cohort, HU differences between CT and sCT were observed but did not translate into a reduction in gamma pass rates or differences in average PTV/OAR dose metrics greater than 3%. For sites such as the H&N, where there are many tissue interfaces, we did not observe large-scale dose deviations, but further studies using larger retrospective cohorts are merited to establish the variation in sCT dosimetric accuracy, which could help to inform QA limits on clinical sCT usage.
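The global gamma pass rates reported above combine a dose-difference criterion with a distance-to-agreement criterion. A minimal 1-D sketch of that computation follows; clinical evaluations run on 3-D dose grids with interpolation and low-dose thresholds, typically via dedicated tools, and all names and toy values here are illustrative.

```python
import numpy as np

def global_gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                           dose_crit_pct=3.0, dist_crit_mm=3.0):
    """Percentage of points with gamma <= 1 under a global dose criterion."""
    positions = np.arange(len(dose_ref)) * spacing_mm
    # "Global" normalization: dose tolerance as a percentage of the maximum
    # reference dose (a local gamma would instead normalize per point).
    dose_tol = dose_crit_pct / 100.0 * dose_ref.max()
    gammas = []
    for i, d_eval in enumerate(dose_eval):
        dd = (dose_ref - d_eval) / dose_tol             # dose deviation
        dr = (positions - positions[i]) / dist_crit_mm  # spatial deviation
        gammas.append(np.sqrt(dd**2 + dr**2).min())     # gamma at point i
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

# Toy 1-D dose profiles on a 1 mm grid: small deviations pass 3%/3 mm.
ref = np.array([10.0, 20.0, 30.0, 20.0, 10.0])
ev = np.array([10.1, 20.3, 29.8, 20.2, 10.0])
rate = global_gamma_pass_rate(ref, ev, spacing_mm=1.0)  # 100.0 here
```

Tightening the criteria (e.g. 2%/2 mm or 1%/1 mm) shrinks both tolerances, which is why pass rates fall as the criteria sharpen in the studies listed here.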
Affiliation(s)
- Kamal Singhrao
- Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Catherine Lu Dugan
- Department of Radiation Oncology, University of California, San Francisco, San Francisco, California, USA
- Christina Calvin
- Department of Radiation Oncology, University of California, San Francisco, San Francisco, California, USA
- Luis Pelayo
- Department of Radiation Oncology, University of California, San Francisco, San Francisco, California, USA
- Sue Sun Yom
- Department of Radiation Oncology, University of California, San Francisco, San Francisco, California, USA
- Jason Wing-Hong Chan
- Department of Radiation Oncology, University of California, San Francisco, San Francisco, California, USA
- Lisa Singer
- Department of Radiation Oncology, University of California, San Francisco, San Francisco, California, USA
11
Tsilivigkos C, Athanasopoulos M, Micco RD, Giotakis A, Mastronikolis NS, Mulita F, Verras GI, Maroulis I, Giotakis E. Deep Learning Techniques and Imaging in Otorhinolaryngology-A State-of-the-Art Review. J Clin Med 2023; 12:6973. [PMID: 38002588 PMCID: PMC10672270 DOI: 10.3390/jcm12226973] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2023] [Revised: 11/02/2023] [Accepted: 11/06/2023] [Indexed: 11/26/2023] Open
Abstract
Over the last decades, the field of medicine has witnessed significant progress in artificial intelligence (AI), the Internet of Medical Things (IoMT), and deep learning (DL) systems. Otorhinolaryngology, and imaging in its various subspecialties, has not remained untouched by this transformative trend. As the medical landscape evolves, the integration of these technologies becomes imperative in augmenting patient care, fostering innovation, and actively participating in the ever-evolving synergy between computer vision techniques in otorhinolaryngology and AI. To that end, we conducted a thorough search on MEDLINE for papers published until June 2023, using the keywords 'otorhinolaryngology', 'imaging', 'computer vision', 'artificial intelligence', and 'deep learning', while also manually searching the reference sections of the included articles. Our search retrieved 121 related articles, which were subsequently subdivided into the following categories: imaging in head and neck, otology, and rhinology. Our objective is to provide a comprehensive introduction to this burgeoning field, tailored both for experienced specialists and for residents beginning to work with deep learning algorithms in otorhinolaryngologic imaging.
Affiliation(s)
- Christos Tsilivigkos
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
- Michail Athanasopoulos
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece
- Riccardo di Micco
- Department of Otolaryngology and Head and Neck Surgery, Medical School of Hannover, 30625 Hannover, Germany
- Aris Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
- Nicholas S. Mastronikolis
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece
- Francesk Mulita
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Georgios-Ioannis Verras
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Ioannis Maroulis
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Evangelos Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
12
Fujima N, Kamagata K, Ueda D, Fujita S, Fushimi Y, Yanagawa M, Ito R, Tsuboyama T, Kawamura M, Nakaura T, Yamada A, Nozaki T, Fujioka T, Matsui Y, Hirata K, Tatsugami F, Naganawa S. Current State of Artificial Intelligence in Clinical Applications for Head and Neck MR Imaging. Magn Reson Med Sci 2023; 22:401-414. [PMID: 37532584 PMCID: PMC10552661 DOI: 10.2463/mrms.rev.2023-0047] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Accepted: 07/09/2023] [Indexed: 08/04/2023] Open
Abstract
Owing primarily to its excellent soft-tissue contrast, head and neck MRI is widely applied in clinical practice to assess various diseases. Artificial intelligence (AI)-based methodologies, particularly deep learning analyses using convolutional neural networks, have recently gained global recognition and have been extensively investigated in clinical research for their applicability across a range of categories within medical imaging, including head and neck MRI. Analytical approaches using AI have shown potential for addressing the clinical limitations associated with head and neck MRI. In this review, we focus primarily on the technical advancements in deep-learning-based methodologies and their clinical utility within the field of head and neck MRI, encompassing aspects such as image acquisition and reconstruction, lesion segmentation, disease classification and diagnosis, and prognostic prediction for patients presenting with head and neck diseases. We then discuss the limitations of current deep-learning-based approaches and offer insights regarding future challenges in this field.
Affiliation(s)
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Tokyo, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Osaka, Japan
- Shohei Fujita
- Department of Radiology, University of Tokyo, Tokyo, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Kyoto, Kyoto, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Kumamoto, Kumamoto, Japan
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, Tokyo, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, Okayama, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, Hiroshima, Hiroshima, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
13
McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023; 10:1078. [PMID: 37760180 PMCID: PMC10525905 DOI: 10.3390/bioengineering10091078] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Revised: 07/30/2023] [Accepted: 09/07/2023] [Indexed: 09/29/2023] Open
Abstract
BACKGROUND CT scans are often the first and only form of brain imaging performed to inform treatment plans for neurological patients, due to their time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies which use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies were included in this review, all of which were published since 2017. Of these, 74% of studies investigated MRI to CT synthesis, and the remaining studies investigated CT to MRI, cross-MRI, PET to CT, and MRI to PET. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI to CT synthesis, despite CT to MRI synthesis yielding specific benefits. A limitation on medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and make available more datasets. Finally, it is recommended that work be carried out to establish all uses of synthesized medical scans in clinical practice and to discover which evaluation methods are suitable for assessing the synthesized images for these needs.
Affiliation(s)
- Jake McNaughton
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
- Samantha Holdsworth
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Benjamin Chong
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Alan Wang
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand