1. Khatun R, Chatterjee S, Bert C, Wadepohl M, Ott OJ, Semrau S, Fietkau R, Nürnberger A, Gaipl US, Frey B. Complex-valued neural networks to speed-up MR thermometry during hyperthermia using Fourier PD and PDUNet. Sci Rep 2025; 15:11765. PMID: 40189690; PMCID: PMC11973158; DOI: 10.1038/s41598-025-96071-x.
Abstract
Hyperthermia (HT) in combination with radio- and/or chemotherapy has become an accepted cancer treatment for distinct solid tumour entities. In HT, tumour tissue is exogenously heated to temperatures between 39 and 43 °C for 60 min. Temperature monitoring can be performed non-invasively using dynamic magnetic resonance imaging (MRI). However, the slow nature of MRI leads to motion artefacts in the images because patients move during image acquisition. Acquisition can be accelerated by discarding part of the data, a strategy known as undersampling. However, because the Nyquist criterion is then violated, the resulting images can be blurred and exhibit aliasing artefacts. The aim of this work was, therefore, to reconstruct highly undersampled MR thermometry acquisitions with better resolution and fewer artefacts than conventional methods. Deep learning has recently gained traction in the medical field, and various studies have shown its potential to solve inverse problems such as MR image reconstruction. However, most published work focuses only on the magnitude images and ignores the phase images, which are fundamental for MR thermometry. This work, for the first time, presents deep learning-based solutions for reconstructing undersampled MR thermometry data. Two deep learning models, the Fourier Primal-Dual network and the Fourier Primal-Dual UNet, were employed to reconstruct highly undersampled complex MR thermometry images. MR images of 44 patients with different sarcoma types who received HT treatment in combination with radiotherapy and/or chemotherapy were used in this study. The method reduced the temperature difference between the undersampled and fully sampled MRIs from 1.3 °C to 0.6 °C over the full volume and from 0.49 °C to 0.06 °C in the tumour region for a theoretical acceleration factor of 10.
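For context, proton resonance frequency (PRF) shift thermometry derives temperature maps from the phase difference of complex images, which is why reconstructing the phase, and not only the magnitude, matters. A minimal sketch of PRF-based temperature mapping is shown below; the constants, array names, and toy data are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def prf_temperature_change(img_ref, img_dyn, te=0.012, b0=1.5,
                           alpha=-0.01e-6, gamma=2 * np.pi * 42.58e6):
    """Estimate the temperature change (degC) between two complex MR images.

    te is the echo time in s, b0 the field strength in T, alpha the assumed
    PRF change coefficient (about -0.01 ppm/degC), and gamma the proton
    gyromagnetic ratio in rad/s/T.
    """
    # Phase difference via the complex conjugate product; for small changes
    # this avoids explicit phase unwrapping.
    dphi = np.angle(img_dyn * np.conj(img_ref))
    return dphi / (gamma * alpha * b0 * te)

# Hypothetical usage with synthetic complex data standing in for scanner output.
ref = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
dyn = ref * np.exp(1j * 0.05)  # simulate a small uniform phase shift from heating
print(prf_temperature_change(ref, dyn).mean())
```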
Affiliation(s)
- Rupali Khatun
- Translational Radiobiology, Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
- Soumick Chatterjee
- Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany.
- Genomics Research Centre, Human Technopole, Milan, Italy.
- Christoph Bert
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
- Oliver J Ott
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
- Sabine Semrau
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
- Rainer Fietkau
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
- Andreas Nürnberger
- Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Udo S Gaipl
- Translational Radiobiology, Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
- Benjamin Frey
- Translational Radiobiology, Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
2. Gearhart A, Anjewierden S, Buddhe S, Tandon A. Review of the Current State of Artificial Intelligence in Pediatric Cardiovascular Magnetic Resonance Imaging. Children (Basel) 2025; 12:416. PMID: 40310065; PMCID: PMC12025873; DOI: 10.3390/children12040416.
Abstract
Cardiovascular magnetic resonance (CMR) imaging is essential for the management of congenital heart disease (CHD) because it enables both anatomic and physiologic assessment of patients. However, CMR scans can be time-consuming to perform and analyze, creating roadblocks to broader use of CMR in CHD. Recent publications have shown that artificial intelligence (AI) has the potential to increase efficiency, improve image quality, and reduce errors. This review examines the use of AI techniques to improve CMR in CHD, focusing on deep learning techniques applied to image acquisition and reconstruction, image processing and reporting, clinical use cases, and future directions.
Affiliation(s)
- Addison Gearhart
- Department of Cardiology, Seattle Children’s Hospital, Seattle, WA 98105, USA
- Department of Pediatrics, University of Washington, Seattle, WA 98195, USA
- Scott Anjewierden
- Division of Pediatric Cardiology, University of Utah, Salt Lake City, UT 84112, USA
- Sujatha Buddhe
- Division of Pediatric Cardiology, School of Medicine, Stanford University, Stanford, CA 94305, USA
- Animesh Tandon
- Department of Heart, Vascular and Thoracic, Division of Cardiology and Cardiovascular Medicine, Children’s Institute, Cleveland Clinic, Cleveland, OH 44195, USA
- Cleveland Clinic Children’s Center for Artificial Intelligence (C4AI), Cleveland Clinic Children’s, Cleveland, OH 44195, USA
- Department of Pediatrics, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH 44106, USA
3. Ma ZP, Zhu YM, Zhang XD, Zhao YX, Zheng W, Yuan SR, Li GY, Zhang TL. Investigating the Use of Generative Adversarial Networks-Based Deep Learning for Reducing Motion Artifacts in Cardiac Magnetic Resonance. J Multidiscip Healthc 2025; 18:787-799. PMID: 39963324; PMCID: PMC11830935; DOI: 10.2147/jmdh.s492163.
Abstract
Objective To evaluate the effectiveness of deep learning technology based on generative adversarial networks (GANs) in reducing motion artifacts in cardiac magnetic resonance (CMR) cine sequences. Methods The training and testing datasets consisted of 2000 and 200 pairs of clear and blurry images, respectively, acquired through simulated motion artifacts in CMR cine sequences. These datasets were used to establish and train a deep learning GAN model. To assess the efficacy of the deep learning network in mitigating motion artifacts, 100 images with simulated motion artifacts and 37 images with real-world motion artifacts encountered in clinical practice were selected. Image quality pre- and post-optimization was assessed using metrics including Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), the Tenengrad Focus Measure, and a 5-point Likert scale. Results After GAN optimization, notable improvements were observed in the PSNR, SSIM, and focus measure metrics for the 100 images with simulated artifacts. These metrics increased from initial values of 23.85±2.85, 0.71±0.08, and 4.56±0.67, respectively, to 27.91±1.74, 0.83±0.05, and 7.74±0.39 post-optimization. Additionally, the subjective assessment scores significantly improved from 2.44±1.08 to 4.44±0.66 (P<0.001). For the 37 images with real-world artifacts, the Tenengrad Focus Measure showed a significant enhancement, rising from 6.06±0.91 to 10.13±0.48 after artifact removal. Subjective ratings also increased from 3.03±0.73 to 3.73±0.87 (P<0.001). Conclusion GAN-based deep learning technology effectively reduces motion artifacts in CMR cine images, demonstrating significant potential for clinical application in optimizing CMR motion artifact management.
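For reference, the Tenengrad focus measure is a gradient-based sharpness score and PSNR a pixel-wise fidelity metric. A minimal sketch of how such metrics might be computed is given below; it is illustrative only and not the study's exact implementation.

```python
import numpy as np
from scipy.ndimage import sobel

def tenengrad(image):
    """Gradient-magnitude sharpness score: higher values indicate sharper edges."""
    gx = sobel(image.astype(float), axis=0)
    gy = sobel(image.astype(float), axis=1)
    return np.mean(gx ** 2 + gy ** 2)

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# Hypothetical usage on a clean image and a blurred copy.
clean = np.random.rand(128, 128) * 255
blurred = (clean + np.roll(clean, 1, axis=0)) / 2
print(tenengrad(blurred) < tenengrad(clean), psnr(clean, blurred))
```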
Affiliation(s)
- Ze-Peng Ma
- Department of Radiology, Affiliated Hospital of Hebei University/ Clinical Medical College, Hebei University, Baoding, 071000, People’s Republic of China
- Hebei Key Laboratory of Precise Imaging of inflammation Tumors, Baoding, Hebei Province, 071000, People’s Republic of China
- Yue-Ming Zhu
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei Province, 071002, People’s Republic of China
- Xiao-Dan Zhang
- Department of Ultrasound, Affiliated Hospital of Hebei University, Baoding, Hebei Province, 071000, People’s Republic of China
- Yong-Xia Zhao
- Department of Radiology, Affiliated Hospital of Hebei University/ Clinical Medical College, Hebei University, Baoding, 071000, People’s Republic of China
- Wei Zheng
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei Province, 071002, People’s Republic of China
- Shuang-Rui Yuan
- Department of Radiology, Affiliated Hospital of Hebei University/ Clinical Medical College, Hebei University, Baoding, 071000, People’s Republic of China
- Gao-Yang Li
- Department of Radiology, Affiliated Hospital of Hebei University/ Clinical Medical College, Hebei University, Baoding, 071000, People’s Republic of China
- Tian-Le Zhang
- Department of Radiology, Affiliated Hospital of Hebei University/ Clinical Medical College, Hebei University, Baoding, 071000, People’s Republic of China
4. Chen Z, Gong Y, Chen H, Emu Y, Gao J, Zhou Z, Shen Y, Tang X, Hua S, Jin W, Hu C. Joint suppression of cardiac bSSFP cine banding and flow artifacts using twofold phase-cycling and a dual-encoder neural network. J Cardiovasc Magn Reson 2024; 26:101123. PMID: 39521347; DOI: 10.1016/j.jocmr.2024.101123.
Abstract
BACKGROUND Cardiac balanced steady-state free precession (bSSFP) cine imaging suffers from banding and flow artifacts induced by off-resonance. This work aimed to develop a twofold phase-cycling sequence with a neural network-based reconstruction (2P-SSFP+Network) for joint suppression of banding and flow artifacts in cardiac cine imaging. METHODS A dual-encoder neural network was trained on 1620 pairs of phase-cycled left ventricular (LV) cine images collected from 18 healthy subjects. Twenty healthy subjects and 25 patients were prospectively scanned using the proposed 2P-SSFP sequence. bSSFP cine with a single radiofrequency (RF) phase increment (1P-SSFP), 1P-SSFP with network-based artifact reduction (1P-SSFP+Network), averaging of the two phase-cycled images (2P-SSFP+Average), and the proposed method were mutually compared in terms of artifact suppression performance in the LV, generalizability over altered scan parameters and scanners, suppression of large-area banding artifacts in the left atrium (LA), and accuracy of downstream segmentation tasks. RESULTS In the healthy subjects, 2P-SSFP+Network showed robust suppression of artifacts across a range of phase combinations. Compared with 1P-SSFP and 2P-SSFP+Average, 2P-SSFP+Network improved banding artifacts (3.85 ± 0.67 and 4.50 ± 0.45 vs 5.00 ± 0.00, P < 0.01 and P = 0.02, respectively), flow artifacts (3.35 ± 0.78 and 2.10 ± 0.77 vs 4.90 ± 0.20, both P < 0.01), and overall image quality (3.25 ± 0.51 and 2.30 ± 0.60 vs 4.75 ± 0.25, both P < 0.01). 1P-SSFP+Network and 2P-SSFP+Network achieved similar artifact suppression performance, yet the latter had fewer hallucinations (two-chamber, 4.25 ± 0.51 vs 4.85 ± 0.45, P = 0.04; four-chamber, 3.45 ± 1.21 vs 4.65 ± 0.50, P = 0.03; and LA, 3.35 ± 1.00 vs 4.65 ± 0.45, P < 0.01). Furthermore, in the pulmonary veins and LA, 1P-SSFP+Network could not eliminate banding artifacts because they occupied a large area, whereas 2P-SSFP+Network reliably suppressed them. In the downstream automated myocardial segmentation task, 2P-SSFP+Network achieved more accurate segmentations than 1P-SSFP with different phase increments. CONCLUSIONS 2P-SSFP+Network jointly suppresses banding and flow artifacts while showing good generalizability against variations in anatomy and scan parameters. It provides a feasible solution for robust suppression of both types of artifacts in bSSFP cine imaging.
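The dual-encoder idea, each phase-cycled acquisition passing through its own encoder before a shared decoder fuses their features, can be sketched as below. Layer sizes, the concatenation-based fusion, and the single-channel input are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class DualEncoderFusion(nn.Module):
    """Toy dual-encoder network: each phase-cycled image is encoded separately,
    the features are concatenated, and a decoder outputs one artifact-reduced image."""

    def __init__(self, ch=32):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            )
        self.enc_a = encoder()  # encoder for RF phase increment A
        self.enc_b = encoder()  # encoder for RF phase increment B
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, img_a, img_b):
        feats = torch.cat([self.enc_a(img_a), self.enc_b(img_b)], dim=1)
        return self.decoder(feats)

# Hypothetical usage with two phase-cycled images of size 128 x 128.
net = DualEncoderFusion()
out = net(torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128))
print(out.shape)  # torch.Size([1, 1, 128, 128])
```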
Affiliation(s)
- Zhuo Chen
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiwen Gong
- Department of Cardiovascular Medicine, Heart Failure Center, Ruijin Hospital Lu Wan Branch, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Haiyang Chen
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yixin Emu
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Juan Gao
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zhongjie Zhou
- Department of Cardiovascular Medicine, Heart Failure Center, Ruijin Hospital Lu Wan Branch, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yiwen Shen
- Department of Cardiovascular Medicine, Heart Failure Center, Ruijin Hospital Lu Wan Branch, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xin Tang
- United Imaging Healthcare Co., Ltd, Shanghai, China
- Sha Hua
- Department of Cardiovascular Medicine, Heart Failure Center, Ruijin Hospital Lu Wan Branch, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wei Jin
- Department of Cardiovascular Medicine, Heart Failure Center, Ruijin Hospital Lu Wan Branch, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Chenxi Hu
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China.
5. Lin J, Zhu S, Gao X, Liu X, Xu C, Xu Z, Zhu J. Evaluation of super resolution technology for digestive endoscopic images. Heliyon 2024; 10:e38920. PMID: 39430485; PMCID: PMC11489312; DOI: 10.1016/j.heliyon.2024.e38920.
Abstract
Objective This study aims to evaluate the value of super-resolution (SR) technology in augmenting the quality of digestive endoscopic images. Methods In this retrospective study, we employed two advanced SR models, SwinIR and ESRGAN. Two discrete datasets were utilized: training was conducted using the dataset of the First Affiliated Hospital of Soochow University (12,212 high-resolution images) and evaluation using the HyperKvasir dataset (2,566 low-resolution images). Furthermore, the impact of enhancing low-resolution images was assessed by endoscopists using a 5-point Likert scale. Finally, two endoscopic image classification tasks were employed to evaluate the effect of SR technology on computer vision (CV). Results SwinIR demonstrated superior performance, achieving a PSNR of 32.60, an SSIM of 0.90, and a VIF of 0.47 on the test set. 90% of endoscopists agreed that SR preprocessing moderately improved the readability of endoscopic images. For CV, enhanced images bolstered the performance of convolutional neural networks, both in the classification of Barrett's esophagus (improved F1-score: 0.04) and of the Mayo Endoscopy Score (improved F1-score: 0.04). Conclusions SR technology can produce high-resolution endoscopic images, enhancing both the clinical readability of low-resolution endoscopic images and the performance of CV models.
Affiliation(s)
- Jiaxi Lin
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatoaplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Shiqi Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatoaplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Xin Gao
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatoaplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Xiaolin Liu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatoaplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Chunfang Xu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Zhonghua Xu
- Department of Orthopedics, Jintan Affiliated Hospital to Jiangsu University, Changzhou, China
- Jinzhou Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatoaplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
6. Rafiee MJ, Eyre K, Leo M, Benovoy M, Friedrich MG, Chetrit M. Comprehensive review of artifacts in cardiac MRI and their mitigation. Int J Cardiovasc Imaging 2024; 40:2021-2039. PMID: 39292396; DOI: 10.1007/s10554-024-03234-4.
Abstract
Cardiac magnetic resonance imaging (CMR) is an important clinical tool that obtains high-quality images for assessment of cardiac morphology, function, and tissue characteristics. However, the technique may be prone to artifacts that limit the diagnostic interpretation of images. This article reviews common artifacts which may appear in CMR exams by describing their appearance, the ways in which they can mimic or obscure true pathology, and possible solutions to reduce their impact. Additionally, this article serves as an update to previous reports on CMR artifacts by discussing recent CMR innovations.
Affiliation(s)
- Katerina Eyre
- Research Institute, McGill University Health Centre, Montreal, Canada
- Margherita Leo
- Research Institute, McGill University Health Centre, Montreal, Canada
- Matthias G Friedrich
- Research Institute, McGill University Health Centre, Montreal, Canada
- Area19 Medical Inc, Montreal, Canada
- Department of Diagnostic Radiology, Division of Cardiology, McGill University Health Centre, Montreal, Canada
- Michael Chetrit
- Research Institute, McGill University Health Centre, Montreal, Canada
- Department of Diagnostic Radiology, Division of Cardiology, McGill University Health Centre, Montreal, Canada
7. Aromiwura AA, Cavalcante JL, Kwong RY, Ghazipour A, Amini A, Bax J, Raman S, Pontone G, Kalra DK. The role of artificial intelligence in cardiovascular magnetic resonance imaging. Prog Cardiovasc Dis 2024; 86:13-25. PMID: 38925255; DOI: 10.1016/j.pcad.2024.06.004.
Abstract
Cardiovascular magnetic resonance (CMR) imaging is the gold standard test for myocardial tissue characterization and chamber volumetric and functional evaluation. However, manual CMR analysis can be time-consuming and is subject to intra- and inter-observer variability. Artificial intelligence (AI) is a field that permits automated task performance through the identification of high-level and complex data relationships. Here, we review the rapidly growing role of AI in CMR, including image acquisition, sequence prescription, artifact detection, reconstruction, segmentation, and data reporting and analysis, covering quantification of volumes and function, myocardial infarction (MI) and scar detection, and prediction of outcomes. We conclude with a discussion of the emerging challenges to widespread adoption and solutions that will allow for successful, broader uptake of this powerful technology.
Affiliation(s)
- Raymond Y Kwong
- Division of Cardiovascular Medicine, Brigham and Women's Hospital, Boston, MA, USA
- Aryan Ghazipour
- Medical Imaging Laboratory, Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY, USA
- Amir Amini
- Medical Imaging Laboratory, Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY, USA
- Jeroen Bax
- Department of Cardiology, Leiden University, Leiden, the Netherlands
- Subha Raman
- Division of Cardiology, Indiana University School of Medicine, Indianapolis, IN, USA
- Gianluca Pontone
- Department of Cardiology and Cardiovascular Imaging, Centro Cardiologico Monzino IRCCS, University of Milan, Milan, Italy
- Dinesh K Kalra
- Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA; Center for Artificial Intelligence in Radiological Sciences (CAIRS), Department of Radiology, University of Louisville, Louisville, KY, USA.
8. Wang S, Wu R, Jia S, Diakite A, Li C, Liu Q, Zheng H, Ying L. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024; 92:496-518. PMID: 38624162; DOI: 10.1002/mrm.30105.
Abstract
Deep learning (DL) has emerged as a leading approach for accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks. This domain knowledge needs to be integrated with data-driven approaches. Our review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MRI, along with several notable solutions covering network design and different imaging application scenarios. We also summarize the traits and trends of these techniques, which have shifted from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods. In addition, we survey MR vendors' choices of DL reconstruction and discuss open questions and future directions, which are critical for reliable imaging systems.
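As background for such reconstruction methods, undersampled Cartesian acquisition is often simulated by masking phase-encode lines in k-space and taking the zero-filled inverse FFT as the degraded input that a network then restores. The sketch below uses assumed sampling parameters and is not tied to any specific method from the review.

```python
import numpy as np

def undersample_kspace(image, acceleration=4, center_fraction=0.08, seed=0):
    """Simulate Cartesian undersampling: keep a fully sampled centre plus a random
    subset of phase-encode lines, then return the zero-filled reconstruction."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    ny = kspace.shape[0]
    mask = rng.random(ny) < (1.0 / acceleration)
    centre = int(ny * center_fraction / 2)
    mask[ny // 2 - centre: ny // 2 + centre] = True  # always keep low frequencies
    kspace_us = kspace * mask[:, None]
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_us)))
    return zero_filled, mask

# Hypothetical usage on a random phantom image.
img = np.random.rand(256, 256)
recon, mask = undersample_kspace(img)
print(recon.shape, round(mask.mean(), 2))  # effective sampling fraction
```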
Affiliation(s)
- Shanshan Wang
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ruoyou Wu
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Alou Diakite
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Cheng Li
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Hairong Zheng
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Leslie Ying
- Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, New York, USA
9. Zhou Z, Hu P, Qi H. Stop moving: MR motion correction as an opportunity for artificial intelligence. MAGMA 2024; 37:397-409. PMID: 38386151; DOI: 10.1007/s10334-023-01144-5.
Abstract
Subject motion is a long-standing problem in magnetic resonance imaging (MRI) that can seriously degrade image quality. Various prospective and retrospective methods have been proposed for MRI motion correction, among which deep learning approaches have achieved state-of-the-art performance. This survey aims to provide a comprehensive review of deep learning-based MRI motion correction methods. Neural networks used for motion artifact reduction and motion estimation in the image domain or frequency domain are detailed. Furthermore, beyond motion-corrected MRI reconstruction, we briefly introduce how estimated motion is applied in other downstream tasks, aiming to strengthen the interaction between different research areas. Finally, we identify current limitations and point out future directions of deep learning-based MRI motion correction.
Affiliation(s)
- Zijian Zhou
- School of Biomedical Engineering, ShanghaiTech University, 4th Floor, BME Building, 393 Middle Huaxia Road, Pudong District, Shanghai, 201210, China
- Shanghai Clinical Research and Trial Center, ShanghaiTech University, Shanghai, China
- Peng Hu
- School of Biomedical Engineering, ShanghaiTech University, 4th Floor, BME Building, 393 Middle Huaxia Road, Pudong District, Shanghai, 201210, China.
- Shanghai Clinical Research and Trial Center, ShanghaiTech University, Shanghai, China.
- Haikun Qi
- School of Biomedical Engineering, ShanghaiTech University, 4th Floor, BME Building, 393 Middle Huaxia Road, Pudong District, Shanghai, 201210, China.
- Shanghai Clinical Research and Trial Center, ShanghaiTech University, Shanghai, China.
10. Lyu J, Wang S, Tian Y, Zou J, Dong S, Wang C, Aviles-Rivero AI, Qin J. STADNet: Spatial-Temporal Attention-Guided Dual-Path Network for cardiac cine MRI super-resolution. Med Image Anal 2024; 94:103142. PMID: 38492252; DOI: 10.1016/j.media.2024.103142.
Abstract
Cardiac cine magnetic resonance imaging (MRI) is a commonly used clinical tool for evaluating cardiac function and morphology. However, its diagnostic accuracy may be compromised by the low spatial resolution. Current methods for cine MRI super-resolution reconstruction still have limitations. They typically rely on 3D convolutional neural networks or recurrent neural networks, which may not effectively capture long-range or non-local features due to their limited receptive fields. Optical flow estimators are also commonly used to align neighboring frames, which may cause information loss and inaccurate motion estimation. Additionally, pre-warping strategies may involve interpolation, leading to potential loss of texture details and complicated anatomical structures. To overcome these challenges, we propose a novel Spatial-Temporal Attention-Guided Dual-Path Network (STADNet) for cardiac cine MRI super-resolution. We utilize transformers to model long-range dependencies in cardiac cine MR images and design a cross-frame attention module in the location-aware spatial path, which enhances the spatial details of the current frame by using complementary information from neighboring frames. We also introduce a recurrent flow-enhanced attention module in the motion-aware temporal path that exploits the correlation between cine MRI frames and extracts the motion information of the heart. Experimental results demonstrate that STADNet outperforms SOTA approaches and has significant potential for clinical practice.
Affiliation(s)
- Jun Lyu
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Shuo Wang
- School of Basic Medical Sciences, Fudan University, Shanghai, China
- Yapeng Tian
- Department of Computer Science, The University of Texas at Dallas, Richardson, TX, USA
- Jing Zou
- Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
- Shunjie Dong
- College of Information Science & Electronic Engineering, Zhejiang University, Hangzhou, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China.
- Angelica I Aviles-Rivero
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK
- Jing Qin
- Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
11. Jiang N, Zhang Y, Li Q, Fu X, Fang D. A cardiac MRI motion artifact reduction method based on edge enhancement network. Phys Med Biol 2024; 69:095004. PMID: 38537303; DOI: 10.1088/1361-6560/ad3884.
Abstract
Cardiac magnetic resonance imaging (MRI) usually requires a long acquisition time, and patient movement during acquisition produces image artifacts. Previous studies have shown that clear MR image texture edges are of great significance for pathological diagnosis. In this paper, a motion artifact reduction method for cardiac MRI based on an edge enhancement network is proposed. Firstly, the four-plane normal vector adaptive fractional differential mask is applied to extract the edge features of blurred images. The four-plane normal vector method can reduce the noise information in the edge feature maps. The adaptive fractional order is selected according to the normal mean gradient and the local Gaussian curvature entropy of the images. Secondly, the extracted edge feature maps and blurred images are input into the de-artifact network. In this network, the edge fusion feature extraction network and the edge fusion transformer network are specially designed. The former combines the edge feature maps with the fuzzy feature maps to extract the edge feature information. The latter combines the edge attention network and the fuzzy attention network, which can focus on the blurred image edges. Finally, extensive experiments show that the proposed method can obtain higher peak signal-to-noise ratio and structural similarity index measure compared to state-of-the-art methods. The artifact-reduced images have clear texture edges.
Affiliation(s)
- Nanhe Jiang
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, Hebei, People's Republic of China
- Yucun Zhang
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, Hebei, People's Republic of China
- Qun Li
- School of Mechanical Engineering, Yanshan University, Qinhuangdao, 066004, Hebei, People's Republic of China
- Xianbin Fu
- Hebei University of Environmental Engineering, Qinhuangdao, 066102, Hebei, People's Republic of China
- Dongqing Fang
- Capital Aerospace Machinery Co, Ltd, Fengtai, 100076, Beijing, People's Republic of China
12. Kang SH, Lee Y. Motion Artifact Reduction Using U-Net Model with Three-Dimensional Simulation-Based Datasets for Brain Magnetic Resonance Images. Bioengineering (Basel) 2024; 11:227. PMID: 38534500; DOI: 10.3390/bioengineering11030227.
Abstract
This study aimed to remove motion artifacts from brain magnetic resonance (MR) images using a U-Net model. In addition, a simulation method was proposed to increase the size of the dataset required to train the U-Net model while avoiding the overfitting problem. The volume data were rotated and translated in three dimensions with random intensity and frequency, and this process was repeated for each slice in the volume. Then, for every slice, a portion of the motion-free k-space data was replaced with motion-corrupted k-space data. Based on the combined k-space data, MR images with motion artifacts and the corresponding residual maps were generated to construct the datasets. For quantitative evaluation, the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), coefficient of correlation (CC), and universal image quality index (UQI) were measured. The U-Net models for motion artifact reduction trained with the residual map-based dataset showed the best performance across all evaluation factors. In particular, the RMSE, PSNR, CC, and UQI improved by approximately 5.35×, 1.51×, 1.12×, and 1.01×, respectively, for the U-Net model with the residual map-based dataset compared with the direct images. In conclusion, our simulation-based dataset demonstrates that U-Net models can be effectively trained for motion artifact reduction.
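The described simulation, replacing part of the motion-free k-space with k-space from a moved volume, can be illustrated in 2D as below. The rigid in-plane shift and the corruption fraction are assumptions standing in for the paper's 3D random rotations and translations.

```python
import numpy as np

def simulate_motion_artifact(image, corrupted_fraction=0.3, max_shift=5, seed=0):
    """Corrupt an image by swapping a random subset of phase-encode lines in
    k-space with lines from a shifted copy, then return the corrupted image
    and the residual map that can serve as a training target."""
    rng = np.random.default_rng(seed)
    k_clean = np.fft.fftshift(np.fft.fft2(image))
    shift = int(rng.integers(1, max_shift + 1))           # rigid shift as toy "motion"
    k_moved = np.fft.fftshift(np.fft.fft2(np.roll(image, shift, axis=0)))
    ny = k_clean.shape[0]
    lines = rng.random(ny) < corrupted_fraction           # which lines saw motion
    k_mixed = np.where(lines[:, None], k_moved, k_clean)
    corrupted = np.abs(np.fft.ifft2(np.fft.ifftshift(k_mixed)))
    return corrupted, corrupted - image

# Hypothetical usage on a random 2D slice.
slice2d = np.random.rand(256, 256)
corrupted, residual = simulate_motion_artifact(slice2d)
print(corrupted.shape, residual.std())
```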
Affiliation(s)
- Seong-Hyeon Kang
- Department of Biomedical Engineering, Eulji University, Seongnam 13135, Republic of Korea
- Youngjin Lee
- Department of Radiological Science, Gachon University, Incheon 21936, Republic of Korea
13. Andrews A, Doctor P, Gaur L, Greil FG, Hussain T, Zou Q. Manifold-based denoising for Ferumoxytol-enhanced 3D cardiac cine MRI. Math Biosci Eng 2024; 21:3695-3712. PMID: 38549302; DOI: 10.3934/mbe.2024163.
Abstract
The two-dimensional (2D) cine cardiovascular magnetic resonance (CMR) technique is the reference standard for assessing cardiac function. However, one challenge with 2D cine is that the acquisition time for the whole cine stack is long and requires multiple breath holds, which may not be feasible for pediatric or ill patients. Though single breath-hold multi-slice cine may address the issue, it can only acquire low-resolution images and hence affects the accuracy of cardiac function assessment. To address these challenges, a Ferumoxytol-enhanced, free-breathing, isotropic high-resolution 3D cine technique was developed. The method produces high-contrast cine images with short acquisition times by using compressed sensing together with a manifold-based method for image denoising. This study included fifteen patients (9.1 ± 5.6 years) who were referred for clinical cardiovascular magnetic resonance imaging (MRI) with Ferumoxytol contrast and were prescribed the 3D cine sequence. The data were acquired on a 1.5T scanner. Statistical analysis shows that the manifold-based denoised 3D cine can accurately measure ventricular function, with no significant differences when compared to the conventional 2D breath-hold (BH) cine. The multiplanar reconstructed images of the proposed 3D cine method are visually comparable to the gold-standard 2D BH cine method in terms of clarity, contrast, and anatomical precision. The proposed method eliminated the need for breath holds, reduced scan times, enabled multiplanar reconstruction within an isotropic data set, and has the potential to be used as an effective tool to assess cardiovascular conditions.
Affiliation(s)
- Anna Andrews
- Department of Biomedical Engineering, Mercer University, Macon, USA
- Pezad Doctor
- Division of Pediatric Cardiology, Department of Pediatrics, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Lasya Gaur
- Division of Pediatric Cardiology, Department of Pediatrics, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- F Gerald Greil
- Division of Pediatric Cardiology, Department of Pediatrics, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Advanced Imaging Research Center, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Tarique Hussain
- Division of Pediatric Cardiology, Department of Pediatrics, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Advanced Imaging Research Center, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Qing Zou
- Division of Pediatric Cardiology, Department of Pediatrics, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Advanced Imaging Research Center, The University of Texas Southwestern Medical Center, Dallas, TX, USA
14. Spieker V, Eichhorn H, Hammernik K, Rueckert D, Preibisch C, Karampinos DC, Schnabel JA. Deep Learning for Retrospective Motion Correction in MRI: A Comprehensive Review. IEEE Trans Med Imaging 2024; 43:846-859. PMID: 37831582; DOI: 10.1109/tmi.2023.3323215.
Abstract
Motion represents one of the major challenges in magnetic resonance imaging (MRI). Since the MR signal is acquired in frequency space, any motion of the imaged object leads to complex artefacts in the reconstructed image in addition to other MR imaging artefacts. Deep learning has been frequently proposed for motion correction at several stages of the reconstruction process. The wide range of MR acquisition sequences, anatomies and pathologies of interest, and motion patterns (rigid vs. deformable and random vs. regular) makes a comprehensive solution unlikely. To facilitate the transfer of ideas between different applications, this review provides a detailed overview of proposed methods for learning-based motion correction in MRI together with their common challenges and potentials. This review identifies differences and synergies in underlying data usage, architectures, training and evaluation strategies. We critically discuss general trends and outline future directions, with the aim to enhance interaction between different application areas and research fields.
15. Hu P, Tong X, Lin L, Wang LV. Data-driven system matrix manipulation enabling fast functional imaging and intra-image nonrigid motion correction in tomography. bioRxiv [Preprint] 2024:2024.01.07.574504. PMID: 38260429; PMCID: PMC10802502; DOI: 10.1101/2024.01.07.574504.
Abstract
Tomographic imaging modalities are described by large system matrices. Sparse sampling and tissue motion degrade the system matrix and image quality. Various existing techniques improve image quality without correcting the system matrices. Here, we compress the system matrices to improve computational efficiency (e.g., 42-fold) using singular value decomposition and the fast Fourier transform. Enabled by this efficiency, we propose (1) fast, sparsely sampled functional imaging by incorporating a densely sampled prior image into the system matrix, which maintains the critical linearity while mitigating artifacts, and (2) intra-image nonrigid motion correction by incorporating the motion as subdomain translations into the system matrix and reconstructing the translations together with the image iteratively. We demonstrate the methods in 3D photoacoustic computed tomography with significantly improved image quality and clarify their applicability to X-ray CT and MRI, as well as to other types of imperfections, due to the similarities in system matrices.
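The core idea of compressing a large system matrix can be sketched with a truncated singular value decomposition, as below. The dimensions and rank are toy values, and the FFT-based and subdomain-translation components of the actual method are not shown.

```python
import numpy as np

def compress_system_matrix(A, rank):
    """Low-rank compression via truncated SVD: returns two thin factors whose
    product approximates A with far fewer stored entries."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank, :]

def apply_compressed(Us, Vt, x):
    """Forward projection with the compressed factors, evaluated as two
    small matrix-vector products instead of one large one."""
    return Us @ (Vt @ x)

# Hypothetical toy example: an approximately low-rank 2000 x 1000 system matrix.
A = np.random.randn(2000, 40) @ np.random.randn(40, 1000)
Us, Vt = compress_system_matrix(A, rank=50)
x = np.random.randn(1000)
rel_err = np.linalg.norm(apply_compressed(Us, Vt, x) - A @ x) / np.linalg.norm(A @ x)
print(Us.shape, Vt.shape, rel_err)
```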
Affiliation(s)
- Peng Hu
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- Xin Tong
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- Li Lin
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- Present address: College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Lihong V. Wang
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125, USA
16. Deng Z, Zhang W, Chen K, Zhou Y, Tian J, Quan G, Zhao J. TT U-Net: Temporal Transformer U-Net for Motion Artifact Reduction Using PAD (Pseudo All-Phase Clinical-Dataset) in Cardiac CT. IEEE Trans Med Imaging 2023; 42:3805-3816. PMID: 37651491; DOI: 10.1109/tmi.2023.3310933.
Abstract
Involuntary motion of the heart remains a challenge for cardiac computed tomography (CT) imaging. Although the electrocardiogram (ECG) gating strategy is widely adopted to perform CT scans at the quasi-quiescent cardiac phase, motion-induced artifacts are still unavoidable for patients with high heart rates or irregular rhythms. Dynamic cardiac CT, which provides functional information of the heart, suffers even more severe motion artifacts. In this paper, we develop a deep learning based framework for motion artifact reduction in dynamic cardiac CT. First, we build a PAD (Pseudo All-phase clinical-Dataset) based on a whole-heart motion model and single-phase cardiac CT images. This dataset provides dynamic CT images with realistic-looking motion artifacts that help to develop data-driven approaches. Second, we formulate the problem of motion artifact reduction as a video deblurring task according to its dynamic nature. A novel TT U-Net (Temporal Transformer U-Net) is proposed to excavate the spatiotemporal features for better motion artifact reduction. The self-attention mechanism along the temporal dimension effectively encodes motion information and thus aids image recovery. Experiments show that the TT U-Net trained on the proposed PAD performs well on clinical CT scans, which substantiates the effectiveness and fine generalization ability of our method. The source code, trained models, and dynamic demo will be available at https://github.com/ivy9092111111/TT-U-Net.
17. Amirian M, Montoya-Zegarra JA, Herzig I, Eggenberger Hotz P, Lichtensteiger L, Morf M, Züst A, Paysan P, Peterlik I, Scheib S, Füchslin RM, Stadelmann T, Schilling FP. Mitigation of motion-induced artifacts in cone beam computed tomography using deep convolutional neural networks. Med Phys 2023; 50:6228-6242. PMID: 36995003; DOI: 10.1002/mp.16405.
Abstract
BACKGROUND Cone beam computed tomography (CBCT) is often employed on radiation therapy treatment devices (linear accelerators) used in image-guided radiation therapy (IGRT). For each treatment session, it is necessary to obtain the image of the day in order to accurately position the patient and to enable adaptive treatment capabilities including auto-segmentation and dose calculation. Reconstructed CBCT images often suffer from artifacts, in particular those induced by patient motion. Deep-learning based approaches promise ways to mitigate such artifacts. PURPOSE We propose a novel deep-learning based approach with the goal to reduce motion induced artifacts in CBCT images and improve image quality. It is based on supervised learning and includes neural network architectures employed as pre- and/or post-processing steps during CBCT reconstruction. METHODS Our approach is based on deep convolutional neural networks which complement the standard CBCT reconstruction, which is performed either with the analytical Feldkamp-Davis-Kress (FDK) method, or with an iterative algebraic reconstruction technique (SART-TV). The neural networks, which are based on refined U-net architectures, are trained end-to-end in a supervised learning setup. Labeled training data are obtained by means of a motion simulation, which uses the two extreme phases of 4D CT scans, their deformation vector fields, as well as time-dependent amplitude signals as input. The trained networks are validated against ground truth using quantitative metrics, as well as by using real patient CBCT scans for a qualitative evaluation by clinical experts. RESULTS The presented novel approach is able to generalize to unseen data and yields significant reductions in motion induced artifacts as well as improvements in image quality compared with existing state-of-the-art CBCT reconstruction algorithms (up to +6.3 dB and +0.19 improvements in peak signal-to-noise ratio, PSNR, and structural similarity index measure, SSIM, respectively), as evidenced by validation with an unseen test dataset, and confirmed by a clinical evaluation on real patient scans (up to 74% preference for motion artifact reduction over standard reconstruction). CONCLUSIONS For the first time, it is demonstrated, also by means of clinical evaluation, that inserting deep neural networks as pre- and post-processing plugins in the existing 3D CBCT reconstruction and trained end-to-end yield significant improvements in image quality and reduction of motion artifacts.
Affiliation(s)
- Mohammadreza Amirian
- Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Javier A Montoya-Zegarra
- Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Ivo Herzig
- Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Peter Eggenberger Hotz
- Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Lukas Lichtensteiger
- Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Marco Morf
- Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Alexander Züst
- Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Pascal Paysan
- Varian Medical Systems Imaging Laboratory GmbH, Baden, Switzerland
- Igor Peterlik
- Varian Medical Systems Imaging Laboratory GmbH, Baden, Switzerland
- Stefan Scheib
- Varian Medical Systems Imaging Laboratory GmbH, Baden, Switzerland
- Rudolf Marcel Füchslin
- Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- European Centre for Living Technology, Venice, Italy
- Thilo Stadelmann
- Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- European Centre for Living Technology, Venice, Italy
- Frank-Peter Schilling
- Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
18. Kim H, Kang SW, Kim JH, Nagar H, Sabuncu M, Margolis DJA, Kim CK. The role of AI in prostate MRI quality and interpretation: Opportunities and challenges. Eur J Radiol 2023; 165:110887. PMID: 37245342; DOI: 10.1016/j.ejrad.2023.110887.
Abstract
Prostate MRI plays an important role in imaging the prostate gland and surrounding tissues, particularly in the diagnosis and management of prostate cancer. With the widespread adoption of multiparametric magnetic resonance imaging in recent years, the concerns surrounding the variability of imaging quality have garnered increased attention. Several factors contribute to the inconsistency of image quality, such as acquisition parameters, scanner differences and interobserver variabilities. While efforts have been made to standardize image acquisition and interpretation via the development of systems, such as PI-RADS and PI-QUAL, the scoring systems still depend on the subjective experience and acumen of humans. Artificial intelligence (AI) has been increasingly used in many applications, including medical imaging, due to its ability to automate tasks and lower human error rates. These advantages have the potential to standardize the tasks of image interpretation and quality control of prostate MRI. Despite its potential, thorough validation is required before the implementation of AI in clinical practice. In this article, we explore the opportunities and challenges of AI, with a focus on the interpretation and quality of prostate MRI.
Affiliation(s)
- Heejong Kim
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Shin Won Kang
- Research Institute for Future Medicine, Samsung Medical Center, Republic of Korea
- Jae-Hun Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Republic of Korea
- Himanshu Nagar
- Department of Radiation Oncology, Weill Cornell Medical College, 525 E 68th St, New York, NY 10021, United States
- Mert Sabuncu
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Daniel J A Margolis
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States.
- Chan Kyo Kim
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Republic of Korea
19. Palla A, Ramanarayanan S, Ram K, Sivaprakasam M. Generalizable Deep Learning Method for Suppressing Unseen and Multiple MRI Artifacts Using Meta-learning. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. PMID: 38082950; DOI: 10.1109/embc40787.2023.10341123.
Abstract
Magnetic Resonance (MR) images suffer from various types of artifacts due to motion, spatial resolution, and under-sampling. Conventional deep learning methods deal with removing a specific type of artifact, leading to separately trained models for each artifact type that lack the shared knowledge generalizable across artifacts. Moreover, training a model for each type and amount of artifact is a tedious process that consumes more training time and model storage. On the other hand, the shared knowledge learned by jointly training the model on multiple artifacts might be inadequate to generalize under deviations in the types and amounts of artifacts. Model-agnostic meta-learning (MAML), a nested bi-level optimization framework, is a promising technique to learn common knowledge across artifacts in the outer level of optimization, and artifact-specific restoration in the inner level. We propose curriculum-MAML (CMAML), a learning process that integrates MAML with curriculum learning to impart the knowledge of variable artifact complexity and adaptively learn restoration of multiple artifacts during training. Comparative studies against Stochastic Gradient Descent and MAML, using two cardiac datasets, reveal that CMAML exhibits (i) better generalization with improved PSNR for 83% of unseen types and amounts of artifacts and improved SSIM in all cases, and (ii) better artifact suppression in 4 out of 5 cases of composite artifacts (scans with multiple artifacts). Clinical relevance: Our results show that CMAML has the potential to minimize the number of artifact-specific models, which is essential for deploying deep learning models in clinical use. Furthermore, we also consider the practical scenario of an image affected by multiple artifacts and show that our method performs better in 80% of cases.
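MAML's nested bi-level optimization can be sketched as an inner adaptation step on each artifact-specific task followed by an outer meta-update on held-out query data. The toy example below uses an assumed single-convolution restoration model and random tensors, and omits the curriculum component of CMAML.

```python
import torch
import torch.nn.functional as F

def forward(x, weights):
    """Toy restoration model: one 3x3 convolution with explicit weights so the
    inner-loop adaptation can be written functionally."""
    w, b = weights
    return F.conv2d(x, w, b, padding=1)

def maml_outer_step(weights, tasks, inner_lr=1e-2, outer_lr=1e-3):
    """One MAML meta-update over a batch of tasks, each a tuple of
    (support_x, support_y, query_x, query_y) tensors."""
    meta_grads = [torch.zeros_like(w) for w in weights]
    for sup_x, sup_y, qry_x, qry_y in tasks:
        # Inner loop: one gradient step adapts the weights to this artifact type.
        loss = F.mse_loss(forward(sup_x, weights), sup_y)
        grads = torch.autograd.grad(loss, weights, create_graph=True)
        fast = [w - inner_lr * g for w, g in zip(weights, grads)]
        # Outer loss: evaluate the adapted weights on held-out query data.
        qry_loss = F.mse_loss(forward(qry_x, fast), qry_y)
        meta_grads = [m + g for m, g in
                      zip(meta_grads, torch.autograd.grad(qry_loss, weights))]
    return [(w - outer_lr * m / len(tasks)).detach().requires_grad_()
            for w, m in zip(weights, meta_grads)]

# Hypothetical usage: two tasks standing in for two different artifact types.
w = torch.randn(1, 1, 3, 3, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
tasks = [tuple(torch.randn(4, 1, 32, 32) for _ in range(4)) for _ in range(2)]
weights = maml_outer_step([w, b], tasks)
print([t.shape for t in weights])
```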
Collapse
|
20
|
Lu J, Jin R, Wang M, Song E, Ma G. A bidirectional registration neural network for cardiac motion tracking using cine MRI images. Comput Biol Med 2023; 160:107001. [PMID: 37187138 DOI: 10.1016/j.compbiomed.2023.107001] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2022] [Revised: 03/15/2023] [Accepted: 05/02/2023] [Indexed: 05/17/2023]
Abstract
Using cine magnetic resonance imaging (cine MRI) to track cardiac motion helps users to analyze myocardial strain and is of great importance in clinical applications. At present, most automatic deep learning-based motion tracking methods compare two images without considering the temporal information between MRI frames, which easily leads to a lack of consistency in the generated motion fields. Even though a small number of works take the temporal factor into account, they are usually computationally intensive or have limitations on sequence length. To solve this problem, we propose a bidirectional convolutional neural network for motion tracking of cardiac cine MRI images. This network leverages convolutional blocks to extract spatial features from three-dimensional (3D) image registration pairs, and models the temporal relations through a bidirectional recurrent neural network to obtain the Lagrangian motion field between the reference image and the other images. Compared with previous pairwise registration methods, the proposed method can automatically learn spatiotemporal information from multiple images with fewer parameters. We evaluated our model on three public cardiac cine MRI datasets. The experimental results demonstrated that the proposed method can significantly improve motion tracking accuracy. The average Dice coefficient between the estimated and manual segmentations reached almost 0.85 on the widely used Automated Cardiac Diagnosis Challenge (ACDC) dataset.
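The overall idea of pairing a shared spatial encoder with a bidirectional recurrent model over the cine sequence can be sketched as follows. This is a simplified 2D illustration under assumed layer sizes; the authors work with 3D registration pairs, and the GRU, the decoder and the tensor layout here are illustrative choices rather than the published architecture.

```python
# Simplified bidirectional motion-tracking sketch for cine MRI (2D, assumed sizes).
import torch
import torch.nn as nn

class BiDirMotionTracker(nn.Module):
    def __init__(self, feat=32, hidden=64):
        super().__init__()
        # Shared CNN encoder applied to each (reference, moving) frame pair.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU())
        # Bidirectional GRU models temporal relations across the cine sequence.
        self.rnn = nn.GRU(feat, hidden, batch_first=True, bidirectional=True)
        # Decoder maps temporal features back to a dense 2-channel motion field.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * hidden, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, 2, 4, stride=2, padding=1))

    def forward(self, reference, frames):
        # reference: (B, 1, H, W); frames: (B, T, 1, H, W); H and W divisible by 4.
        B, T, _, H, W = frames.shape
        pairs = torch.cat([reference.unsqueeze(1).expand(-1, T, -1, -1, -1), frames], dim=2)
        feats = self.encoder(pairs.reshape(B * T, 2, H, W))           # (B*T, C, h, w)
        C, h, w = feats.shape[1:]
        # Run the RNN over time independently for every spatial location.
        seq = feats.reshape(B, T, C, h * w).permute(0, 3, 1, 2).reshape(B * h * w, T, C)
        temporal, _ = self.rnn(seq)                                   # (B*h*w, T, 2*hidden)
        temporal = temporal.reshape(B, h * w, T, -1).permute(0, 2, 3, 1)
        temporal = temporal.reshape(B * T, -1, h, w)
        return self.decoder(temporal).reshape(B, T, 2, H, W)          # per-frame motion fields

# Usage: flows = BiDirMotionTracker()(ref, cine)  # warp frames with
# torch.nn.functional.grid_sample if strain analysis is needed downstream.
```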
Collapse
Affiliation(s)
- Jiayi Lu
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Renchao Jin
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China.
| | - Manyang Wang
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Enmin Song
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Guangzhi Ma
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| |
Collapse
|
21
|
Dai M, Xiao G, Shao M, Zhang YS. The Synergy between Deep Learning and Organs-on-Chips for High-Throughput Drug Screening: A Review. BIOSENSORS 2023; 13:389. [PMID: 36979601 PMCID: PMC10046732 DOI: 10.3390/bios13030389] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2022] [Revised: 02/22/2023] [Accepted: 03/07/2023] [Indexed: 06/18/2023]
Abstract
Organs-on-chips (OoCs) are miniature microfluidic systems that have arguably become a class of advanced in vitro models. Deep learning, an emerging topic in machine learning, has the ability to extract hidden statistical relationships from input data. Recently, these two areas have been integrated to achieve synergy in accelerating drug screening. This review briefly describes the basic concepts of deep learning used with OoCs and exemplifies successful use cases for different types of OoCs. Such microfluidic chips have the potential to be assembled into highly potent human-on-chips with complex physiological or pathological functions. Finally, we discuss future perspectives and potential challenges in combining OoCs and deep learning for image processing and automation design.
Collapse
Affiliation(s)
- Manna Dai
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Computing and Intelligence Department, Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore 138632, Singapore
| | - Gao Xiao
- College of Environment and Safety Engineering, Fuzhou University, Fuzhou 350108, China
- Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China
| | - Ming Shao
- Department of Computer and Information Science, College of Engineering, University of Massachusetts Dartmouth, North Dartmouth, MA 02747, USA
| | - Yu Shrike Zhang
- Division of Engineering in Medicine, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Cambridge, MA 02139, USA
| |
Collapse
|
22
|
Huang P, Zhou X, He P, Feng P, Tian S, Sun Y, Mercaldo F, Santone A, Qin J, Xiao H. Interpretable laryngeal tumor grading of histopathological images via depth domain adaptive network with integration gradient CAM and priori experience-guided attention. Comput Biol Med 2023; 154:106447. [PMID: 36706570 DOI: 10.1016/j.compbiomed.2022.106447] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Revised: 11/29/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
Tumor grading and its interpretability for laryngeal cancer are key yet challenging tasks in clinical diagnosis, mainly because the commonly used low-magnification pathological images lack fine cellular structure information and accurate localization, the diagnostic results of pathologists differ from those of attentional convolutional network-based methods, and the gradient-weighted class activation mapping (Grad-CAM) method cannot be optimized to create the best visualization map. To address this problem, we propose an end-to-end depth domain adaptive network (DDANet) with integration gradient CAM and priori experience-guided attention to improve tumor grading performance and interpretability by introducing the pathologist's a priori experience with high-magnification images into the deep model. Specifically, a novel priori experience-guided attention (PE-GA) method is developed to solve the traditional unsupervised attention optimization problem. Besides, a novel integration gradient CAM is proposed to mitigate the overfitting, information redundancy and low sparsity of the Grad-CAM maps generated by the PE-GA method. Furthermore, we establish a set of quantitative evaluation metrics for model visual interpretation. Extensive experimental results show that, compared with state-of-the-art methods, the average grading accuracy increases to 88.43% (↑4.04%) and the effective interpretable rate increases to 52.73% (↑11.45%). Additionally, the method effectively reduces the difference in diagnostic results between the CV-based method and pathologists. Importantly, the visualized interpretive maps are closer to the regions of interest identified by pathologists, and our model outperforms pathologists with different levels of experience.
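For readers unfamiliar with the baseline, the sketch below shows a standard Grad-CAM computation for a stand-in grading classifier. It is the conventional method that the paper's integration gradient CAM and PE-GA modify, not the authors' formulation; the small backbone, the three grades and the chosen feature layer are assumptions.

```python
# Standard Grad-CAM sketch (baseline only; not the paper's integration gradient CAM).
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in grading network with a clearly identified final convolutional block.
features = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
classifier = nn.Linear(64, 3)                        # three tumor grades (assumed)

def forward(image):
    fmap = features(image)                           # (B, 64, h, w) activation maps
    logits = classifier(fmap.mean(dim=(2, 3)))       # global-average-pooling head
    return fmap, logits

def grad_cam(image, class_idx):
    """image: (1, 3, H, W); returns an (H, W) heatmap in [0, 1]."""
    fmap, logits = forward(image)
    grads = torch.autograd.grad(logits[0, class_idx], fmap)[0]   # d(score)/d(activations)
    weights = grads.mean(dim=(2, 3), keepdim=True)               # channel weights via GAP
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))      # weighted sum + ReLU
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]

heatmap = grad_cam(torch.rand(1, 3, 224, 224), class_idx=1)
```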
Collapse
Affiliation(s)
- Pan Huang
- Key Laboratory of Optoelectronic Technology & Systems (Ministry of Education), College of Optoelectronic Engineering, Chongqing University, Chongqing, China
| | - Xiaoli Zhou
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
| | - Peng He
- Key Laboratory of Optoelectronic Technology & Systems (Ministry of Education), College of Optoelectronic Engineering, Chongqing University, Chongqing, China.
| | - Peng Feng
- Key Laboratory of Optoelectronic Technology & Systems (Ministry of Education), College of Optoelectronic Engineering, Chongqing University, Chongqing, China.
| | - Sukun Tian
- Center of Digital Dentistry, School and Hospital of Stomatology, Peking University, Beijing, China
| | - Yuchun Sun
- Center of Digital Dentistry, School and Hospital of Stomatology, Peking University, Beijing, China.
| | - Francesco Mercaldo
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
| | - Antonella Santone
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
| | - Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
| | - Hualiang Xiao
- Department of Pathology, Daping Hospital, Army Medical University, Chongqing, China
| |
Collapse
|
23
|
Poonkodi S, Kanchana M. 3D-MedTranCSGAN: 3D Medical Image Transformation using CSGAN. Comput Biol Med 2023; 153:106541. [PMID: 36652868 DOI: 10.1016/j.compbiomed.2023.106541] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 11/30/2022] [Accepted: 01/10/2023] [Indexed: 01/15/2023]
Abstract
Computer vision techniques are a rapidly growing means of transforming medical images for various specific medical applications. This paper proposes an end-to-end 3D medical image transformation model based on a Cyclic Synthesized Generative Adversarial Network (CSGAN), named 3D-MedTranCSGAN. The 3D-MedTranCSGAN model integrates non-adversarial loss components with the cyclic synthesized adversarial framework. The proposed model utilizes PatchGAN's discriminator network to penalize the difference between the synthesized image and the original image, and additionally computes non-adversarial loss functions such as content, perceptual, and style transfer losses. 3DCascadeNet, a new generator architecture introduced in the paper, is used to enhance the perceptiveness of the transformed medical image through encoding-decoding pairs. We apply the 3D-MedTranCSGAN model to various tasks without application-specific modification: PET-to-CT image transformation, reconstruction of CT to PET, correction of motion artefacts in MR images, and removal of noise in PET images. In our experiments, 3D-MedTranCSGAN outperformed other transformation methods. For the first task, the proposed model yields an SSIM of 0.914, PSNR of 26.12, MSE of 255.5, VIF of 0.4862, UQI of 0.9067 and LPIPS of 0.2284. For the second task, the corresponding values are 0.9197, 25.7, 257.56, 0.4962, 0.9027 and 0.2262; for the third task, 0.8862, 24.94, 0.4071, 0.6410 and 0.2196; and for the final task, 0.9521, 33.67, 33.57, 0.6091, 0.9255 and 0.0244. Based on this analysis, the proposed model outperforms the other techniques.
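The loss composition described above, a PatchGAN-style adversarial term plus content, perceptual and style transfer terms, can be sketched roughly as follows. The shallow 3D discriminator, the least-squares adversarial formulation, the Gram-matrix style term, the feature extractor and the weights are generic assumptions and do not reproduce the exact 3D-MedTranCSGAN design.

```python
# Generic sketch of a PatchGAN + non-adversarial loss mix for 3D image translation.
import torch
import torch.nn as nn

class PatchDiscriminator3D(nn.Module):
    """Shallow 3D PatchGAN: outputs a grid of real/fake scores, one per patch."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(ch, 2 * ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(2 * ch, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

def gram(f):
    """Channel-wise Gram matrix used for a simple style term."""
    b, c = f.shape[:2]
    f = f.reshape(b, c, -1)
    return f @ f.transpose(1, 2) / f.shape[-1]

def generator_loss(disc, fake, real, feat_fn, w_adv=1.0, w_content=10.0,
                   w_perc=1.0, w_style=1.0):
    adv = torch.mean((disc(fake) - 1) ** 2)                  # least-squares adversarial term
    content = nn.functional.l1_loss(fake, real)              # voxel-wise content loss
    f_fake, f_real = feat_fn(fake), feat_fn(real)            # features from a fixed network
    perc = nn.functional.l1_loss(f_fake, f_real)             # perceptual loss
    style = nn.functional.l1_loss(gram(f_fake), gram(f_real))
    return w_adv * adv + w_content * content + w_perc * perc + w_style * style

# Toy usage: reuse the discriminator's first block as a crude feature extractor.
disc = PatchDiscriminator3D()
feat_fn = lambda x: disc.net[:2](x)
loss = generator_loss(disc, torch.rand(1, 1, 32, 64, 64), torch.rand(1, 1, 32, 64, 64), feat_fn)
```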
Collapse
Affiliation(s)
- S Poonkodi
- Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, India
| | - M Kanchana
- Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, India.
| |
Collapse
|
24
|
Sarasaen C, Chatterjee S, Breitkopf M, Rose G, Nürnberger A, Speck O. Fine-tuning deep learning model parameters for improved super-resolution of dynamic MRI with prior-knowledge. Artif Intell Med 2021; 121:102196. [PMID: 34763811 DOI: 10.1016/j.artmed.2021.102196] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 10/07/2021] [Accepted: 10/12/2021] [Indexed: 10/20/2022]
Abstract
Dynamic imaging is a beneficial tool for interventions to assess physiological changes. Nonetheless, during dynamic MRI the spatial resolution is compromised in order to achieve a high temporal resolution. To overcome this spatio-temporal trade-off, this research presents a super-resolution (SR) MRI reconstruction with prior-knowledge-based fine-tuning to maximise spatial information while reducing the required scan time for dynamic MRI. A U-Net based network with perceptual loss is trained on a benchmark dataset and fine-tuned using one subject-specific static high-resolution MRI as prior knowledge to obtain high-resolution dynamic images during the inference stage. 3D dynamic data for three subjects were acquired with different parameters to test the generalisation capabilities of the network. The method was tested for different levels of in-plane undersampling of the dynamic MRI. The reconstructed dynamic SR results after fine-tuning showed higher similarity with the high-resolution ground truth, while achieving statistically significant quantitative improvement. The average SSIM for the lowest resolution examined in this research (6.25% of the k-space), before and after fine-tuning, was 0.939 ± 0.008 and 0.957 ± 0.006 respectively. This could theoretically result in an acceleration factor of 16, which could potentially be acquired in less than half a second. The proposed approach shows that super-resolution MRI reconstruction with prior information can alleviate the spatio-temporal trade-off in dynamic MRI, even for high acceleration factors.
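The two-stage idea of pretraining on a benchmark dataset and then fine-tuning on a single subject-specific static high-resolution prior can be sketched as follows. The stand-in network, the simulated degradation, the L1-only fine-tuning loss (the paper also uses a perceptual loss) and the number of fine-tuning steps are assumptions for illustration; the checkpoint path is hypothetical.

```python
# Sketch of prior-knowledge fine-tuning for dynamic MRI super-resolution (assumed details).
import torch
import torch.nn as nn
import torch.nn.functional as F

sr_net = nn.Sequential(                       # stand-in for the U-Net used in the paper
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1))
# sr_net.load_state_dict(torch.load("pretrained_benchmark.pt"))  # hypothetical checkpoint

def degrade(hr, factor=4):
    """Simulate the low-resolution input by down/up-sampling (stand-in for undersampling)."""
    lr = F.interpolate(hr, scale_factor=1 / factor, mode="bilinear", align_corners=False)
    return F.interpolate(lr, size=hr.shape[-2:], mode="bilinear", align_corners=False)

# Stage 2: fine-tune on the single subject-specific static high-resolution prior.
prior_hr = torch.rand(1, 1, 256, 256)         # placeholder for the subject's static scan
opt = torch.optim.Adam(sr_net.parameters(), lr=1e-5)
for step in range(200):                       # short fine-tuning schedule (assumed)
    pred = sr_net(degrade(prior_hr))
    loss = F.l1_loss(pred, prior_hr)          # perceptual term of the paper omitted here
    opt.zero_grad(); loss.backward(); opt.step()

# Inference: apply the fine-tuned network to each dynamic low-resolution frame.
dynamic_lr = torch.rand(10, 1, 256, 256)      # placeholder dynamic series (already upsampled)
with torch.no_grad():
    dynamic_sr = sr_net(dynamic_lr)
```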
Collapse
Affiliation(s)
- Chompunuch Sarasaen
- Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany; Institute for Medical Engineering, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany.
| | - Soumick Chatterjee
- Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany; Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany; Data and Knowledge Engineering Group, Otto von Guericke University Magdeburg, Germany
| | - Mario Breitkopf
- Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany
| | - Georg Rose
- Institute for Medical Engineering, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany
| | - Andreas Nürnberger
- Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany; Data and Knowledge Engineering Group, Otto von Guericke University Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
| | - Oliver Speck
- Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany; German Center for Neurodegenerative Disease, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; Leibniz Institute for Neurobiology, Magdeburg, Germany
| |
Collapse
|