1. Vega F, Addeh A, Ganesh A, Smith EE, MacDonald ME. Image Translation for Estimating Two-Dimensional Axial Amyloid-Beta PET From Structural MRI. J Magn Reson Imaging 2024; 59:1021-1031. [PMID: 37921361] [DOI: 10.1002/jmri.29070]
Abstract
BACKGROUND Amyloid-beta deposition and brain atrophy are hallmarks of Alzheimer's disease that can be targeted with positron emission tomography (PET) and MRI, respectively. MRI is cheaper, less invasive, and more available than PET, and the known relationship between amyloid-beta and brain atrophy suggests that PET images could be inferred from MRI. PURPOSE To build an image translation model using a conditional generative adversarial network able to synthesize amyloid-beta PET images from structural MRI. STUDY TYPE Retrospective. POPULATION Eight hundred eighty-two adults (348 males/534 females) at different stages of cognitive decline (control, mild cognitive impairment, moderate cognitive impairment, and severe cognitive impairment); 552 subjects were used for model training and 331 for testing (80%:20%). FIELD STRENGTH/SEQUENCE 3 T, T1-weighted structural (T1w). ASSESSMENT The testing cohort was used to evaluate model performance using the Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR), comparing the likeness of the synthetic PET images created from structural MRI with the true PET images. SSIM was computed over the whole image to include its luminance, contrast, and structural similarity components. Experienced observers reviewed the images for quality and performance and attempted to distinguish real from synthetic images. STATISTICAL TESTS Pixel-wise Pearson correlation was significant, with R2 greater than 0.96 in example images. For the blinded readings, a Pearson chi-squared test showed no significant difference between the observers' ratings of real and synthetic images (P = 0.68). RESULTS The synthetic images showed a high degree of likeness across the evaluation set, with mean SSIM = 0.905 and PSNR = 2.685. The two observers were unable to distinguish the real from the synthetic images, with accuracies of 54% and 46%, respectively.
CONCLUSION Amyloid-beta PET images can be synthesized from structural MRI with a high degree of similarity to the real PET images. EVIDENCE LEVEL 3. TECHNICAL EFFICACY: Stage 1.
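As a concrete illustration of the similarity metrics this abstract reports (not code from the paper), below is a minimal NumPy sketch of PSNR and a single-window ("overall image") SSIM with the standard luminance, contrast, and structure terms; the test images are synthetic stand-ins.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, img, data_range=1.0):
    """SSIM computed over the whole image in a single window,
    combining luminance, contrast, and structure components."""
    x, y = ref.astype(float), img.astype(float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Hypothetical "true" and "degraded" images for demonstration only
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
noisy = np.clip(truth + 0.05 * rng.standard_normal(truth.shape), 0.0, 1.0)
```

Note that production pipelines typically use a sliding-window SSIM (e.g. scikit-image's `structural_similarity`); the single-window form above matches the whole-image usage described in the abstract.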
Affiliation(s)
- Fernando Vega
- Department of Biomedical Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Electrical and Software Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Abdoljalil Addeh
- Department of Biomedical Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Electrical and Software Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Aravind Ganesh
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Eric E Smith
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- M Ethan MacDonald
- Department of Biomedical Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Electrical and Software Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
2. Huang P, Zhang C, Zhang X, Li X, Dong L, Ying L. Self-Supervised Deep Unrolled Reconstruction Using Regularization by Denoising. IEEE Trans Med Imaging 2024; 43:1203-1213. [PMID: 37962993] [PMCID: PMC11056277] [DOI: 10.1109/tmi.2023.3332614]
Abstract
Deep learning methods have been used successfully in various computer vision tasks. Inspired by that success, deep learning has been explored for magnetic resonance imaging (MRI) reconstruction, where integrating deep learning with model-based optimization methods has shown considerable advantages. However, a large amount of labeled training data is typically needed for high reconstruction quality, which is challenging for some MRI applications. In this paper, we propose a novel reconstruction method, named DURED-Net, that enables interpretable self-supervised learning for MR image reconstruction by combining a self-supervised denoising network with a plug-and-play method. We aim to boost the reconstruction performance of Noise2Noise in MR reconstruction by adding an explicit prior that utilizes the imaging physics. Specifically, the denoising network is leveraged for MRI reconstruction via Regularization by Denoising (RED). Experimental results demonstrate that the proposed method requires less training data than state-of-the-art approaches utilizing Noise2Noise to achieve high reconstruction quality.
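To illustrate the RED idea this abstract builds on (a generic sketch, not the DURED-Net implementation): the RED prior adds a gradient term lam * (x - f(x)), where f is any denoiser. Here a trivial moving-average filter stands in for the learned denoising network, and the forward operator is the identity, so the whole restoration reduces to gradient descent on 0.5*||x - y||^2 + (lam/2) * x.T @ (x - f(x)).

```python
import numpy as np

def box_denoise(x, k=5):
    """Simple moving-average denoiser f(x); a stand-in for a learned network."""
    kernel = np.ones(k) / k
    padded = np.pad(x, k // 2, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

def red_restore(y, lam=1.0, mu=0.2, iters=200):
    """Gradient descent on the RED objective with H = identity:
    grad = (x - y) + lam * (x - f(x))."""
    x = y.copy()
    for _ in range(iters):
        grad = (x - y) + lam * (x - box_denoise(x))
        x = x - mu * grad
    return x

# Hypothetical 1-D signal: smooth sine corrupted by Gaussian noise
rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(clean.size)
restored = red_restore(noisy)
```

At the fixed point, low-frequency content (where the denoiser is nearly transparent) passes through almost unchanged, while high-frequency noise is attenuated by roughly 1/(1+lam), which is why the restored signal's error drops below the noisy input's.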
3. Shao L, Chen B, Zhang Z, Zhang Z, Chen X. Artificial intelligence generated content (AIGC) in medicine: A narrative review. Math Biosci Eng 2024; 21:1672-1711. [PMID: 38303483] [DOI: 10.3934/mbe.2024073]
Abstract
Recently, artificial intelligence generated content (AIGC) has been receiving increased attention and is growing exponentially. AIGC is produced by generative artificial intelligence (AI) models from the intentional information extracted from human-provided instructions, and it can quickly and automatically generate large amounts of high-quality content. Medicine currently faces shortages of resources and increasingly complex procedures, problems that AIGC's characteristics make it well suited to help alleviate. As a result, the application of AIGC in medicine has gained increasing attention in recent years. This paper therefore provides a comprehensive review of recent studies involving AIGC in medicine. First, we present an overview of AIGC. Then, based on recent studies, the application of AIGC in medicine is reviewed from two aspects: medical image processing and medical text generation. The basic generative AI models, tasks, target organs, datasets, and contributions of the studies are summarized. Finally, we discuss the limitations and challenges faced by AIGC and propose possible solutions with reference to relevant studies. We hope this review helps readers understand the potential of AIGC in medicine and obtain innovative ideas in this field.
Affiliation(s)
- Liangjing Shao
- Academy for Engineering & Technology, Fudan University, Shanghai 200433, China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Fudan University, Shanghai 200032, China
- Benshuang Chen
- Academy for Engineering & Technology, Fudan University, Shanghai 200433, China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Fudan University, Shanghai 200032, China
- Ziqun Zhang
- Information Office, Fudan University, Shanghai 200032, China
- Zhen Zhang
- Baoshan Branch of Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200444, China
- Xinrong Chen
- Academy for Engineering & Technology, Fudan University, Shanghai 200433, China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Fudan University, Shanghai 200032, China
4. Sheikhi M, Sina S, Karimipourfard M. Deep-learned generation of renal dual-energy CT from a single-energy scan. Clin Radiol 2024; 79:e17-e25. [PMID: 37923626] [DOI: 10.1016/j.crad.2023.09.021]
Abstract
AIM To investigate the role of deep learning (DL) in the generation of dual-energy computed tomography (DECT) images from single-energy images for precise diagnosis of kidney stone type. MATERIALS AND METHODS DECT images of 23 patients were acquired, and the stone types were investigated based on the DECT software suggestions. The data were divided into two paired groups (120 kVp input with 80 kVp target, and 120 kVp input with 135 kVp target), and a pix2pix U-Net GAN (p2p-UNet-GAN) was used to generate the different energy images from the common CT protocols. RESULTS The images generated by the generative adversarial network (GAN) were evaluated using the SSIM, PSNR, and MSE metrics, with values of 0.85-0.95, 28-32, and 0.85-0.89, respectively. The attenuation ratio of the test patients' images was estimated and compared with the real patient reports. The network achieved high accuracy in stone region localisation and produced accurate stone type predictions. CONCLUSION This study presents a useful DL-based method to reduce patient radiation dose and facilitate the prediction of urinary stone types using single-energy CT imaging.
Affiliation(s)
- M Sheikhi
- Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran; Abu Ali Sina Hospital, Shiraz, Iran
- S Sina
- Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran; Radiation Research Center, Shiraz University, Shiraz, Iran
- M Karimipourfard
- Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran
5. Liu H, Deng D, Zeng W, Huang Y, Zheng C, Li X, Li H, Xie C, He H, Xu G. AI-assisted compressed sensing and parallel imaging sequences for MRI of patients with nasopharyngeal carcinoma: comparison of their capabilities in terms of examination time and image quality. Eur Radiol 2023; 33:7686-7696. [PMID: 37219618] [PMCID: PMC10598173] [DOI: 10.1007/s00330-023-09742-6]
Abstract
OBJECTIVE To compare examination time and image quality between the artificial intelligence (AI)-assisted compressed sensing (ACS) technique and the parallel imaging (PI) technique in MRI of patients with nasopharyngeal carcinoma (NPC). METHODS Sixty-six patients with pathologically confirmed NPC underwent nasopharynx and neck examination on a 3.0-T MRI system. A transverse T2-weighted fast spin-echo (FSE) sequence, transverse T1-weighted FSE sequence, post-contrast transverse T1-weighted FSE sequence, and post-contrast coronal T1-weighted FSE sequence were obtained with both the ACS and PI techniques. The signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and scanning duration of the two sets of images were compared. The ACS and PI images were scored for lesion detection, margin sharpness of lesions, artifacts, and overall image quality on a 5-point Likert scale. RESULTS The examination time with the ACS technique was significantly shorter than with the PI technique (p < 0.0001). The SNR and CNR of the ACS technique were significantly superior to those of the PI technique (p < 0.005). Qualitative image analysis showed that the scores for lesion detection, margin sharpness of lesions, artifacts, and overall image quality were higher for the ACS sequences than for the PI sequences (p < 0.0001). Inter-observer agreement, evaluated for all qualitative indicators of each method, was satisfactory to excellent (p < 0.0001). CONCLUSION Compared with the PI technique, the ACS technique for MR examination of NPC not only shortens scanning time but also improves image quality. CLINICAL RELEVANCE STATEMENT The artificial intelligence (AI)-assisted compressed sensing (ACS) technique shortens examination time for patients with nasopharyngeal carcinoma while improving image quality and examination success rate, which will benefit more patients.
KEY POINTS • Compared with the parallel imaging (PI) technique, the artificial intelligence (AI)-assisted compressed sensing (ACS) technique not only reduced examination time, but also improved image quality. • Artificial intelligence (AI)-assisted compressed sensing (ACS) pulls the state-of-the-art deep learning technique into the reconstruction procedure and helps find an optimal balance of imaging speed and image quality.
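For reference on the quantitative comparison above (a generic sketch with hypothetical ROI values, not the study's data): SNR and CNR are typically computed from region-of-interest (ROI) statistics as mean signal over background noise standard deviation, and absolute ROI mean difference over background noise standard deviation, respectively.

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """SNR = mean signal intensity / standard deviation of background noise."""
    return signal_roi.mean() / noise_roi.std()

def cnr(roi_a, roi_b, noise_roi):
    """CNR = |mean(A) - mean(B)| / standard deviation of background noise."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

# Hypothetical ROI samples (arbitrary units) for demonstration only
rng = np.random.default_rng(2)
lesion = 100 + 5 * rng.standard_normal(500)   # lesion ROI
muscle = 60 + 5 * rng.standard_normal(500)    # reference tissue ROI
background = 2 * rng.standard_normal(500)     # background/air ROI
```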
Affiliation(s)
- Haibin Liu
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
- Dele Deng
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
- Weilong Zeng
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
- Yingyi Huang
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
- Chunling Zheng
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
- Xinyang Li
- United Imaging Healthcare, Shanghai, People's Republic of China
- Hui Li
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
- Chuanmiao Xie
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
- Haoqiang He
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
- Guixiao Xu
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
6. Lyu J, Tian Y, Cai Q, Wang C, Qin J. Adaptive channel-modulated personalized federated learning for magnetic resonance image reconstruction. Comput Biol Med 2023; 165:107330. [PMID: 37611426] [DOI: 10.1016/j.compbiomed.2023.107330]
Abstract
Magnetic resonance imaging (MRI) is extensively utilized in clinical practice for diagnostic purposes, owing to its non-invasive nature and remarkable ability to provide detailed characterization of soft tissues. However, its drawback lies in the prolonged scanning time. To accelerate MR imaging, reconstructing MR images from under-sampled data quickly and accurately has drawn intensive research interest; it, however, remains a challenging task. While some deep learning models have achieved promising performance in MRI reconstruction, these models usually require a substantial quantity of paired data for training, which is difficult to gather and share owing to high scanning costs and data privacy concerns. Federated learning (FL) is a potential tool to alleviate these difficulties: it enables multiple clinical clients to collaboratively train a global model without compromising privacy. However, it is extremely challenging to fit a single model to the diverse data distributions of different clients. Moreover, existing FL algorithms treat the features of each channel equally, lacking discriminative learning ability across feature channels and hence limiting their representational capability. In this study, we propose a novel Adaptive Channel-Modulated Federated learning framework for personalized MRI reconstruction, dubbed ACM-FedMRI. Specifically, considering that each local client may focus on features in different channels, we first design a client-specific hypernetwork to guide the channel selection operation in order to optimize the extracted features. Additionally, we introduce a performance-based channel decoupling scheme, which dynamically separates the global model at the channel level to facilitate personalized adjustments based on the performance of individual clients. This approach eliminates the need for heuristic design of specific personalization layers. Extensive experiments on four datasets under two different settings show that our ACM-FedMRI achieves outstanding results compared to other cutting-edge federated learning techniques in the field of MRI reconstruction.
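As background for the federated setting this abstract describes (a minimal FedAvg sketch on a toy linear model, not the paper's channel-modulated method): each client performs local gradient steps on its own data, and the server averages the resulting weights each round.

```python
import numpy as np

def local_update(weights, data, targets, lr=0.1, epochs=5):
    """One client's local training: gradient steps on a linear model (MSE loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * data.T @ (data @ w - targets) / len(targets)
        w -= lr * grad
    return w

def fedavg(global_w, client_sets, rounds=20):
    """Federated averaging: clients train locally, server averages the weights."""
    w = global_w
    for _ in range(rounds):
        local_ws = [local_update(w, X, y) for X, y in client_sets]
        w = np.mean(local_ws, axis=0)  # equal-sized clients -> plain average
    return w

# Four hypothetical clients sharing the same underlying linear relationship
rng = np.random.default_rng(3)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.standard_normal((50, 2))
    y = X @ true_w + 0.01 * rng.standard_normal(50)
    clients.append((X, y))
w = fedavg(np.zeros(2), clients)
```

Personalized FL methods such as the one reviewed here depart from this baseline by keeping some parameters (here, selected channels) client-specific instead of averaging everything.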
Affiliation(s)
- Jun Lyu
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong
- Yapeng Tian
- Department of Computer Science, The University of Texas at Dallas, Richardson, TX, USA
- Qing Cai
- School of Information Science and Engineering, Ocean University of China, Qingdao, Shandong, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China
- Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong
7. Cai X, Hou X, Yang G, Nie S. [Application of generative adversarial network in magnetic resonance image reconstruction]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi 2023; 40:582-588. Chinese. [PMID: 37380400] [PMCID: PMC10307593] [DOI: 10.7507/1001-5515.202204007]
Abstract
Magnetic resonance imaging (MRI) is an important medical imaging method, whose major limitation is its long scan time due to the imaging mechanism, which increases patients' cost and waiting time for the examination. Currently, parallel imaging (PI), compressed sensing (CS), and other reconstruction technologies have been proposed to accelerate image acquisition. However, the image quality of PI and CS depends on the image reconstruction algorithms, which remain far from satisfactory with respect to both image quality and reconstruction speed. In recent years, image reconstruction based on generative adversarial networks (GAN) has become a research hotspot in the field of magnetic resonance imaging because of its excellent performance. In this review, we summarize the recent development of GAN applications in MRI reconstruction for both single- and multi-modality acceleration, hoping to provide a useful reference for interested researchers. In addition, we analyze the characteristics and limitations of existing technologies and forecast some development trends in this field.
Affiliation(s)
- Xin Cai
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, P. R. China
- Xuewen Hou
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, P. R. China
- Guang Yang
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, P. R. China
- Shengdong Nie
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, P. R. China
8. Gao Z, Guo Y, Zhang J, Zeng T, Yang G. Hierarchical Perception Adversarial Learning Framework for Compressed Sensing MRI. IEEE Trans Med Imaging 2023; 42:1859-1874. [PMID: 37022266] [DOI: 10.1109/tmi.2023.3240862]
Abstract
The long acquisition time has limited the accessibility of magnetic resonance imaging (MRI) because it leads to patient discomfort and motion artifacts. Although several MRI techniques have been proposed to reduce the acquisition time, compressed sensing in magnetic resonance imaging (CS-MRI) enables fast acquisition without compromising SNR and resolution. However, existing CS-MRI methods suffer from aliasing artifacts, which produce noise-like textures and loss of fine detail, leading to unsatisfactory reconstruction performance. To tackle this challenge, we propose a hierarchical perception adversarial learning framework (HP-ALF). HP-ALF perceives image information through a hierarchical mechanism: image-level perception and patch-level perception. The former reduces the visual perception difference across the entire image and thus removes aliasing artifacts; the latter reduces this difference within regions of the image and thus recovers fine details. Specifically, HP-ALF achieves the hierarchical mechanism by utilizing multilevel perspective discrimination, which provides information from two perspectives (overall and regional) for adversarial learning. It also utilizes a global and local coherent discriminator to provide structure information to the generator during training. In addition, HP-ALF contains a context-aware learning block to effectively exploit the slice information between individual images for better reconstruction performance. Experiments on three datasets demonstrate the effectiveness of HP-ALF and its superiority to the comparative methods.
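To make the aliasing-artifact problem concrete (a generic retrospective-undersampling sketch, not the paper's method): CS-MRI acquires only a fraction of k-space, and the naive zero-filled inverse FFT of the masked data exhibits the aliasing that reconstruction networks like HP-ALF are trained to remove.

```python
import numpy as np

# Hypothetical smooth 2-D "image" standing in for an MR slice
x = np.linspace(-1, 1, 64)
img = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 0.2)

# Retrospective undersampling: keep ~30% of phase-encode lines at random,
# always retaining the fully sampled low-frequency center of k-space
rng = np.random.default_rng(4)
kspace = np.fft.fftshift(np.fft.fft2(img))
mask = rng.random(64) < 0.3
mask[28:36] = True                       # center lines always sampled
undersampled = kspace * mask[:, None]    # zero out unsampled lines

# Zero-filled reconstruction: inverse FFT of the masked k-space
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
err = np.linalg.norm(recon - img) / np.linalg.norm(img)
```

The residual `err` quantifies the aliasing left by zero-filling; CS and learned reconstructions aim to drive it down while keeping the same undersampled input.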
9. Jafari M, Shoeibi A, Khodatars M, Ghassemi N, Moridian P, Alizadehsani R, Khosravi A, Ling SH, Delfan N, Zhang YD, Wang SH, Gorriz JM, Alinejad-Rokny H, Acharya UR. Automated diagnosis of cardiovascular diseases from cardiac magnetic resonance imaging using deep learning models: A review. Comput Biol Med 2023; 160:106998. [PMID: 37182422] [DOI: 10.1016/j.compbiomed.2023.106998]
Abstract
In recent years, cardiovascular diseases (CVDs) have become one of the leading causes of mortality globally. At early stages, CVDs appear with minor symptoms and progressively worsen. At the onset of CVD, most people experience symptoms such as exhaustion, shortness of breath, ankle swelling, and fluid retention. Coronary artery disease (CAD), arrhythmia, cardiomyopathy, congenital heart defect (CHD), mitral regurgitation, and angina are the most common CVDs. Clinical methods such as blood tests, electrocardiography (ECG) signals, and medical imaging are the most effective methods for the detection of CVDs. Among these diagnostic methods, cardiac magnetic resonance imaging (CMRI) is increasingly used to diagnose and monitor disease, plan treatment, and predict CVDs. Despite the advantages of CMR data, CVD diagnosis is challenging for physicians because each scan contains many slices of data and the contrast may be low. To address these issues, deep learning (DL) techniques have been employed in the diagnosis of CVDs using CMR data, and much research is currently being conducted in this field. This review provides an overview of studies on CVD detection using CMR images and DL techniques. The introduction examines CVD types, diagnostic methods, and the most important medical imaging techniques. The following sections present research on detecting CVDs using CMR images and the most significant DL methods, and then discuss the challenges in diagnosing CVDs from CMRI data. The discussion section summarizes the results of this review, and future work on CVD diagnosis from CMR images and DL techniques is outlined. Finally, the most important findings of this study are presented in the conclusion.
Affiliation(s)
- Mahboobeh Jafari
- Internship in BioMedical Machine Learning Lab, The Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, NSW, 2052, Australia
- Afshin Shoeibi
- Internship in BioMedical Machine Learning Lab, The Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, NSW, 2052, Australia; Data Science and Computational Intelligence Institute, University of Granada, Spain
- Marjane Khodatars
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Navid Ghassemi
- Internship in BioMedical Machine Learning Lab, The Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, NSW, 2052, Australia
- Parisa Moridian
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Australia
- Abbas Khosravi
- Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Australia
- Sai Ho Ling
- Faculty of Engineering and IT, University of Technology Sydney (UTS), Australia
- Niloufar Delfan
- Faculty of Computer Engineering, Dept. of Artificial Intelligence Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
- Shui-Hua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
- Juan M Gorriz
- Data Science and Computational Intelligence Institute, University of Granada, Spain; Department of Psychiatry, University of Cambridge, UK
- Hamid Alinejad-Rokny
- BioMedical Machine Learning Lab, The Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, NSW, 2052, Australia; UNSW Data Science Hub, The University of New South Wales, Sydney, NSW, 2052, Australia; Health Data Analytics Program, Centre for Applied Artificial Intelligence, Macquarie University, Sydney, 2109, Australia
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia; Dept. of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
10. Qiu D, Cheng Y, Wang X. Medical image super-resolution reconstruction algorithms based on deep learning: A survey. Comput Methods Programs Biomed 2023; 238:107590. [PMID: 37201252] [DOI: 10.1016/j.cmpb.2023.107590]
Abstract
BACKGROUND AND OBJECTIVE With the high-resolution (HR) requirements of medical images in clinical practice, super-resolution (SR) reconstruction algorithms based on low-resolution (LR) medical images have become a research hotspot. These methods can significantly improve image resolution without upgrading hardware, so a review of them is of great significance. METHODS Focusing on SR reconstruction algorithms specific to medical imaging, we organized the review by subfield: magnetic resonance (MR) images, computed tomography (CT) images, and ultrasound images. First, we analyzed the research progress of SR reconstruction algorithms in depth, and summarized and compared the different types of algorithms. Second, we introduced the evaluation indicators corresponding to SR reconstruction algorithms. Finally, we discussed expected development trends of SR reconstruction technology in the medical field. RESULTS Medical image SR reconstruction based on deep learning can provide richer lesion information, relieve experts' diagnostic burden, and improve diagnostic efficiency and accuracy. CONCLUSION Deep-learning-based medical image SR reconstruction helps improve image quality, supports expert diagnosis, and lays a solid foundation for subsequent computer analysis and identification tasks, which is of great significance for improving diagnostic efficiency and realizing intelligent medical care.
Affiliation(s)
- Defu Qiu
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Yuhu Cheng
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Xuesong Wang
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
11. Karimipourfard M, Sina S, Khodadai Shoshtari F, Alavi M. Synthesis of Prospective Multiple Time Points F-18 FDG PET Images from a Single Scan Using a Supervised Generative Adversarial Network. Nuklearmedizin 2023; 62:61-72. [PMID: 36878470] [DOI: 10.1055/a-2026-0784]
Abstract
Cumulative activity maps are essential for accurate patient-specific dosimetry; for reasons of cost and time, they are usually estimated from biokinetic models rather than from dynamic patient data or a series of static PET scans. In the era of deep learning applications in medicine, pix2pix (p2p) GAN neural networks play a significant role in image translation between imaging modalities. In this pilot study, we extended p2p GANs to generate PET images of patients at different time points from a single scan acquired 60 min after injection of F-18 FDG. The study was conducted in two parts: a phantom study and a patient study. In the phantom study, the SSIM, PSNR, and MSE of the generated images ranged over 0.98-0.99, 31-34, and 1-2, respectively, and a fine-tuned ResNet-50 network classified the different timing images with high performance. In the patient study, these values ranged over 0.88-0.93, 36-41, and 1.7-2.2, respectively, and the classification network assigned the generated images to the correct group with high accuracy. The phantom studies yielded high evaluation metrics owing to ideal image-quality conditions. The patient study also achieved promising results, showing that image quality and the amount of training data affect network performance. This study assesses the feasibility of applying p2p GAN networks to generate images at different time points.
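The SSIM, PSNR, and MSE figures quoted in this and several of the following abstracts are standard full-reference image-quality metrics. As a minimal numpy sketch of how such numbers could be computed (the SSIM here uses the single-window global luminance/contrast/structure form rather than the usual sliding-window variant, and the test images are synthetic):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((x - y) ** 2))

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    return float(10 * np.log10(data_range ** 2 / mse(x, y)))

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM combining luminance, contrast and structure."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# Synthetic reference image and a noisy "generated" counterpart.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.05 * rng.standard_normal((64, 64)), 0, 1)
```

Libraries such as scikit-image provide windowed SSIM implementations; the sketch above is only meant to make the reported quantities concrete.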
Affiliation(s)
- Mehrsadat Alavi
- Shiraz University of Medical Sciences, Shiraz, Iran (the Islamic Republic of)
12
Lyu J, Li Y, Yan F, Chen W, Wang C, Li R. Multi-channel GAN-based calibration-free diffusion-weighted liver imaging with simultaneous coil sensitivity estimation and reconstruction. Front Oncol 2023; 13:1095637. [PMID: 36845688] [PMCID: PMC9945270] [DOI: 10.3389/fonc.2023.1095637]
Abstract
Introduction Diffusion-weighted imaging (DWI) with parallel reconstruction may suffer from a mismatch between the coil calibration scan and the imaging scan due to motion, especially in abdominal imaging. Methods This study aimed to construct an iterative multichannel generative adversarial network (iMCGAN)-based framework for simultaneous sensitivity map estimation and calibration-free image reconstruction. The study included 106 healthy volunteers and 10 patients with tumors. Results The performance of iMCGAN was evaluated in healthy participants and patients and compared with the SAKE, ALOHA-net, and DeepcomplexMRI reconstructions. The peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), root mean squared error (RMSE), and histograms of apparent diffusion coefficient (ADC) maps were calculated to assess image quality. The proposed iMCGAN outperformed the other methods in terms of PSNR (iMCGAN: 41.82 ± 2.14; SAKE: 17.38 ± 1.78; ALOHA-net: 20.43 ± 2.11; DeepcomplexMRI: 39.78 ± 2.78) for b = 800 DWI with an acceleration factor of 4. In addition, the ghosting artifacts seen in SENSE reconstructions due to the mismatch between the DW image and the sensitivity maps were avoided by the iMCGAN model. Discussion The current model iteratively refines the sensitivity maps and the reconstructed images without additional acquisitions. Thus, the quality of the reconstructed image is improved, and aliasing artifacts are alleviated when motion occurs during the imaging procedure.
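The coil-sensitivity dependence this paper works around is easiest to see from the standard least-squares coil combination rule, where each coil image is the product of a sensitivity map and the underlying image. A toy numpy sketch (image size, coil count, and sensitivity maps are made-up stand-ins, not the paper's model):

```python
import numpy as np

def coil_combine(coil_imgs, sens):
    """Least-squares combination of coil images given sensitivity maps:
    x = sum_c conj(S_c) * y_c / sum_c |S_c|^2."""
    num = np.sum(np.conj(sens) * coil_imgs, axis=0)
    den = np.sum(np.abs(sens) ** 2, axis=0)
    return num / np.maximum(den, 1e-12)  # guard against empty coverage

rng = np.random.default_rng(1)
x = rng.random((32, 32))              # "true" image
sens = rng.random((4, 32, 32)) + 0.1  # 4 toy coil sensitivity maps
coil_imgs = sens * x                  # ideal (noise-free) coil images
recon = coil_combine(coil_imgs, sens)
```

When the sensitivity maps come from a separate calibration scan that no longer matches the data (e.g. after motion), this combination produces the ghosting the abstract describes; estimating the maps jointly with the image, as iMCGAN does, removes that dependency.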
Affiliation(s)
- Jun Lyu
- School of Computer and Control Engineering, Yantai University, Yantai, Shandong, China
- Yan Li
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fuhua Yan
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Weibo Chen
- Philips Healthcare (China), Shanghai, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China (correspondence: Chengyan Wang; Ruokun Li)
- Ruokun Li
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
13
Abstract
Magnetic resonance imaging (MRI) is a widely used non-radiative and non-invasive method for clinical interrogation of organ structures and metabolism, with an inherently long scanning time. Methods based on k-space undersampling and deep learning-based reconstruction have been popularised to accelerate the scanning process. This work investigates how powerful transformers are for fast MRI by exploiting and comparing different novel network architectures. In particular, a generative adversarial network (GAN)-based Swin transformer (ST-GAN) was introduced for fast MRI reconstruction. To further preserve edge and texture information, an edge-enhanced GAN-based Swin transformer (EES-GAN) and a texture-enhanced GAN-based Swin transformer (TES-GAN) were also developed, where a dual-discriminator GAN structure was applied. We compared our proposed GAN-based transformers, a standalone Swin transformer, and other convolutional neural network-based GAN models in terms of the evaluation metrics PSNR, SSIM, and FID. We showed that transformers work well for MRI reconstruction under different undersampling conditions. The adversarial structure of the GAN improves the quality of the reconstructed images when the data are undersampled by 30% or more. The code is publicly available at https://github.com/ayanglab/SwinGANMR.
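The k-space undersampling these networks reconstruct from can be simulated retrospectively on any image; a minimal numpy sketch of a random phase-encode mask with a fully sampled centre, followed by a zero-filled reconstruction (the mask density and centre-region size are illustrative choices, not this paper's protocol):

```python
import numpy as np

def undersample(img, accel=4, seed=0):
    """Keep a random subset of phase-encode lines (plus the k-space
    centre), then return the zero-filled reconstruction and the mask."""
    k = np.fft.fftshift(np.fft.fft2(img))
    rng = np.random.default_rng(seed)
    ny = img.shape[0]
    mask = rng.random(ny) < 1.0 / accel
    mask[ny // 2 - 4: ny // 2 + 4] = True  # always keep low frequencies
    k_us = k * mask[:, None]
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(k_us)))
    return zero_filled, mask

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0  # toy square phantom
zf, mask = undersample(img)
```

The aliased zero-filled image is what a reconstruction network such as ST-GAN takes as input; the training target is the fully sampled image.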
14
Wu W, Hu D, Cong W, Shan H, Wang S, Niu C, Yan P, Yu H, Vardhanabhuti V, Wang G. Stabilizing deep tomographic reconstruction: Part A. Hybrid framework and experimental results. Patterns (N Y) 2022; 3:100474. [PMID: 35607623] [PMCID: PMC9122961] [DOI: 10.1016/j.patter.2022.100474]
Abstract
A recent PNAS paper reveals that several popular deep reconstruction networks are unstable. Specifically, three kinds of instabilities were reported: (1) strong image artefacts from tiny perturbations, (2) small features missed in a deeply reconstructed image, and (3) decreased imaging performance with increased input data. Here, we propose an analytic compressed iterative deep (ACID) framework to address this challenge. ACID synergizes a deep network trained on big data, kernel awareness from compressed sensing (CS)-inspired processing, and iterative refinement to minimize the data residual relative to the real measurement. Our study demonstrates that ACID reconstruction is accurate and stable, and sheds light on the convergence mechanism of the ACID iteration under a bounded relative error norm assumption. ACID not only stabilizes an unstable deep reconstruction network but is also resilient against adversarial attacks on the whole ACID workflow, being superior to classic sparsity-regularized reconstruction and eliminating the three kinds of instabilities.
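ACID's core loop, alternating a learned prior with data-consistency refinement against the real measurement, can be caricatured on a toy linear inverse problem, with soft-thresholding standing in for the trained network. This is a sketch of the general prior-plus-data-consistency pattern only; all sizes, step sizes, and the shrinkage prior are illustrative assumptions, not the paper's operators:

```python
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_pix = 24, 32
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)  # toy measurement operator
x_true = np.zeros(n_pix)
x_true[[3, 10, 20]] = 1.0  # sparse "image"
y = A @ x_true             # real measurement

def soft_threshold(v, t):
    """Stand-in for the trained prior: sparsity-promoting shrinkage."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def acid_like(y, A, iters=300, step=0.1, t=0.005):
    """Alternate a data-consistency gradient step with a prior step,
    driving down the residual ||Ax - y|| (the spirit of ACID's loop)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - y)  # data consistency
        x = soft_threshold(x, t)          # prior / "denoiser"
    return x

x_hat = acid_like(y, A)
```

The kernel-awareness point in the abstract is that the prior must respect the null space of A; a prior that hallucinates components invisible to the measurement is exactly what produces the reported instabilities.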
Affiliation(s)
- Weiwen Wu
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, Guangdong, China
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong SAR, China
- Dianlin Hu
- The Laboratory of Image Science and Technology, Southeast University, Nanjing, China
- Wenxiang Cong
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Hongming Shan
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai, China
- Shaoyu Wang
- Department of Electrical & Computer Engineering, University of Massachusetts Lowell, Lowell, MA, USA
- Chuang Niu
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Pingkun Yan
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Hengyong Yu
- Department of Electrical & Computer Engineering, University of Massachusetts Lowell, Lowell, MA, USA
- Varut Vardhanabhuti
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong SAR, China
- Ge Wang
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
15
Pal A, Rathi Y. A review and experimental evaluation of deep learning methods for MRI reconstruction. J Mach Learn Biomed Imaging 2022; 1:001. [PMID: 35722657] [PMCID: PMC9202830]
Abstract
Following the success of deep learning in a wide range of applications, neural network-based machine-learning techniques have received significant interest for accelerating magnetic resonance imaging (MRI) acquisition and reconstruction strategies. A number of ideas inspired by deep learning techniques for computer vision and image processing have been successfully applied to nonlinear image reconstruction in the spirit of compressed sensing for accelerated MRI. Given the rapidly growing nature of the field, it is imperative to consolidate and summarize the large number of deep learning methods that have been reported in the literature, to obtain a better understanding of the field in general. This article provides an overview of the recent developments in neural-network based approaches that have been proposed specifically for improving parallel imaging. A general background and introduction to parallel MRI is also given from a classical view of k-space based reconstruction methods. Image domain based techniques that introduce improved regularizers are covered along with k-space based methods which focus on better interpolation strategies using neural networks. While the field is rapidly evolving with plenty of papers published each year, in this review, we attempt to cover broad categories of methods that have shown good performance on publicly available data sets. Limitations and open problems are also discussed and recent efforts for producing open data sets and benchmarks for the community are examined.
16
Bian W, Chen Y, Ye X. An optimal control framework for joint-channel parallel MRI reconstruction without coil sensitivities. Magn Reson Imaging 2022. [DOI: 10.1016/j.mri.2022.01.011]
17
Huang J, Ding W, Lv J, Yang J, Dong H, Del Ser J, Xia J, Ren T, Wong ST, Yang G. Edge-enhanced dual discriminator generative adversarial network for fast MRI with parallel imaging using multi-view information. Appl Intell 2022; 52:14693-14710. [PMID: 36199853] [PMCID: PMC9526695] [DOI: 10.1007/s10489-021-03092-w]
Abstract
In clinical medicine, magnetic resonance imaging (MRI) is one of the most important tools for diagnosis, triage, prognosis, and treatment planning. However, MRI suffers from an inherently slow data acquisition process because data are collected sequentially in k-space. In recent years, most MRI reconstruction methods proposed in the literature have focused on holistic image reconstruction rather than enhancing edge information. This work departs from that general trend by elaborating on the enhancement of edge information. Specifically, we introduce a novel parallel imaging coupled dual discriminator generative adversarial network (PIDD-GAN) for fast multi-channel MRI reconstruction by incorporating multi-view information. The dual discriminator design aims to improve the edge information in MRI reconstruction. One discriminator is used for holistic image reconstruction, whereas the other is responsible for enhancing edge information. An improved U-Net with local and global residual learning is proposed for the generator. Frequency channel attention blocks (FCA Blocks) are embedded in the generator to incorporate attention mechanisms. A content loss is introduced to train the generator for better reconstruction quality. We performed comprehensive experiments on the Calgary-Campinas public brain MR dataset and compared our method with state-of-the-art MRI reconstruction methods. Ablation studies of residual learning were conducted on the MICCAI13 dataset to validate the proposed modules. Results show that our PIDD-GAN provides high-quality reconstructed MR images with well-preserved edge information. The time for single-image reconstruction is below 5 ms, which meets the demand for faster processing.
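An edge discriminator needs an explicit edge map of each image, and a Sobel gradient magnitude is the kind of operator commonly used for this. A hand-rolled numpy sketch of such an edge map (PIDD-GAN's exact edge extraction may differ; this only illustrates the kind of input the second discriminator would see):

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map via 3x3 Sobel kernels."""
    kx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()  # horizontal gradient
            gy[i, j] = (patch * ky).sum()  # vertical gradient
    return np.hypot(gx, gy)

img = np.zeros((16, 16))
img[:, 8:] = 1.0  # vertical step edge
edges = sobel_edges(img)
```

The holistic discriminator sees the image itself; feeding this edge map to a second discriminator penalizes blurry boundaries that an image-level loss alone tends to tolerate.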
Affiliation(s)
- Jiahao Huang
- College of Information Science and Technology, Zhejiang Shuren University, 310015 Hangzhou, China
- National Heart and Lung Institute, Imperial College London, London, UK
- Weiping Ding
- School of Information Science and Technology, Nantong University, 226019 Nantong, China
- Jun Lv
- School of Computer and Control Engineering, Yantai University, 264005 Yantai, China
- Jingwen Yang
- Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China
- Hao Dong
- Center on Frontiers of Computing Studies, Peking University, Beijing, China
- Javier Del Ser
- TECNALIA, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain
- University of the Basque Country (UPV/EHU), 48013 Bilbao, Spain
- Jun Xia
- Department of Radiology, Shenzhen Second People’s Hospital, The First Affiliated Hospital of Shenzhen University Health Science Center, Shenzhen, China
- Tiaojuan Ren
- College of Information Science and Technology, Zhejiang Shuren University, 310015 Hangzhou, China
- Stephen T. Wong
- Systems Medicine and Bioengineering Department, Departments of Radiology and Pathology, Houston Methodist Cancer Center, Houston Methodist Hospital, Weill Cornell Medicine, 77030 Houston, TX, USA
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London, UK
- Cardiovascular Research Centre, Royal Brompton Hospital, London, UK
18
Wu X, Li C, Zeng X, Wei H, Deng HW, Zhang J, Xu M. CryoETGAN: Cryo-Electron Tomography Image Synthesis via Unpaired Image Translation. Front Physiol 2022; 13:760404. [PMID: 35370760] [PMCID: PMC8970048] [DOI: 10.3389/fphys.2022.760404]
Abstract
Cryo-electron tomography (Cryo-ET) has been regarded as a revolution in structural biology and can reveal molecular sociology. Its unprecedented quality enables it to visualize cellular organelles and macromolecular complexes at nanometer resolution with native conformations. Motivated by developments in nanotechnology and machine learning, establishing machine learning approaches such as classification, detection and averaging for Cryo-ET image analysis has inspired broad interest. Yet, deep learning-based methods for biomedical imaging typically require large labeled datasets for good results, which can be a great challenge due to the expense of obtaining and labeling training data. To deal with this problem, we propose a generative model to simulate Cryo-ET images efficiently and reliably: CryoETGAN. This cycle-consistent and Wasserstein generative adversarial network (GAN) is able to generate images with an appearance similar to the original experimental data. Quantitative and visual grading results on generated images are provided to show that the results of our proposed method achieve better performance compared to the previous state-of-the-art simulation methods. Moreover, CryoETGAN is stable to train and capable of generating plausibly diverse image samples.
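The cycle-consistency constraint at the heart of unpaired translation methods like the one above is simple to state: mapping a sample to the other domain and back should return the original. A toy numpy sketch, with invertible affine maps standing in for the two generators (real generators are neural networks; this only illustrates the loss):

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y):
    """L1 cycle loss: F(G(x)) should return to x, and G(F(y)) to y."""
    return float(np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y)))

# Toy "generators": an affine map and its exact inverse.
G = lambda a: 2.0 * a + 1.0        # domain A -> domain B
F = lambda b: (b - 1.0) / 2.0      # domain B -> domain A

x = np.linspace(0.0, 1.0, 5)       # samples from domain A
y = G(x)                           # samples from domain B
loss_perfect = cycle_consistency_loss(G, F, x, y)
```

In CryoETGAN this term is combined with a Wasserstein adversarial loss; the cycle term is what lets training proceed without paired examples from the two domains.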
Affiliation(s)
- Xindi Wu
- Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, United States
- Chengkun Li
- École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Xiangrui Zeng
- Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, United States
- Haocheng Wei
- Department of Electrical & Computer Engineering, University of Toronto, Toronto, ON, Canada
- Hong-Wen Deng
- Center for Biomedical Informatics & Genomics, Tulane University, New Orleans, LA, United States
- Jing Zhang
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Min Xu
- Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, United States
19
Ng WY, Zhang S, Wang Z, Ong CJT, Gunasekeran DV, Lim GYS, Zheng F, Tan SCY, Tan GSW, Rim TH, Schmetterer L, Ting DSW. Updates in deep learning research in ophthalmology. Clin Sci (Lond) 2021; 135:2357-2376. [PMID: 34661658] [DOI: 10.1042/CS20210207]
Abstract
Ophthalmology has been one of the early adopters of artificial intelligence (AI) within the medical field. Deep learning (DL), in particular, has garnered significant attention due to the availability of large amounts of data and digitized ocular images. Currently, AI in ophthalmology is mainly focused on improving disease classification and supporting decision-making when treating ophthalmic diseases such as diabetic retinopathy, age-related macular degeneration (AMD), glaucoma and retinopathy of prematurity (ROP). However, most of the DL systems (DLSs) developed thus far remain in the research stage and only a handful have achieved clinical translation. This phenomenon is due to a combination of factors including concerns over security and privacy, poor generalizability, trust and explainability issues, unfavorable end-user perceptions and uncertain economic value. Overcoming this challenge will require a combined approach. Firstly, emerging techniques such as federated learning (FL), generative adversarial networks (GANs), autonomous AI and blockchain will play an increasingly critical role in enhancing privacy, collaboration and DLS performance. Next, compliance with reporting and regulatory guidelines, such as CONSORT-AI and STARD-AI, will be required in order to improve transparency, minimize abuse and ensure reproducibility. Thirdly, frameworks will be required to obtain patient consent, perform ethical assessment and evaluate end-user perception. Lastly, proper health economic assessment (HEA) must be performed to provide financial visibility during the early phases of DLS development. This is necessary to manage resources prudently and guide the development of DLSs.
20
Wang F, Zhang H, Dai F, Chen W, Wang C, Wang H. MAGnitude-Image-to-Complex K-space (MAGIC-K) Net: A Data Augmentation Network for Image Reconstruction. Diagnostics (Basel) 2021; 11:1935. [PMID: 34679632] [PMCID: PMC8534839] [DOI: 10.3390/diagnostics11101935]
Abstract
Deep learning has demonstrated superior performance in image reconstruction compared to most conventional iterative algorithms. However, its effectiveness and generalization capability are highly dependent on the sample size and diversity of the training data. Deep learning-based reconstruction requires multi-coil raw k-space data, which are not collected in routine scans. On the other hand, large amounts of magnitude images are readily available in hospitals. Hence, we proposed the MAGnitude Images to Complex K-space (MAGIC-K) Net to generate multi-coil k-space data from existing magnitude images and a limited amount of required raw k-space data to facilitate reconstruction. Compared to basic data augmentation methods that apply global intensity and displacement transformations to the source images, the MAGIC-K Net can generate more realistic intensity variations and displacements from pairs of anatomical Digital Imaging and Communications in Medicine (DICOM) images. The reconstruction performance was validated in 30 healthy volunteers and 6 patients with different types of tumors. The experimental results demonstrated that high-resolution Diffusion Weighted Image (DWI) reconstruction benefited from the proposed augmentation method. The MAGIC-K Net enabled the deep learning network to reconstruct images with superior performance in both healthy subjects and tumor patients, qualitatively and quantitatively.
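The forward model behind this kind of data generation, producing multi-coil k-space from a magnitude image, amounts to coil-weighted Fourier transforms. A numpy sketch under simplifying assumptions (the random "sensitivity maps" are stand-ins; MAGIC-K learns this mapping with a network rather than assuming known maps):

```python
import numpy as np

def magnitude_to_multicoil_kspace(mag, sens):
    """Simulate multi-coil k-space from a magnitude image and coil
    sensitivity maps: k_c = FFT2(S_c * m) for each coil c."""
    return np.fft.fft2(sens * mag, axes=(-2, -1))

rng = np.random.default_rng(3)
mag = rng.random((32, 32))       # a magnitude image (e.g. from DICOM)
sens = rng.random((8, 32, 32))   # 8 toy coil sensitivity maps
kspace = magnitude_to_multicoil_kspace(mag, sens)

# Inverting each coil's k-space recovers the coil-weighted images.
recon_coils = np.fft.ifft2(kspace, axes=(-2, -1))
```

Synthetic k-space generated this way (or, as in the paper, by a learned generator) can then augment the scarce raw multi-coil data that reconstruction networks require.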
Affiliation(s)
- Fanwen Wang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Hui Zhang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Fei Dai
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Weibo Chen
- Philips Healthcare, Shanghai 200072, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai 201203, China
- He Wang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Human Phenome Institute, Fudan University, Shanghai 201203, China
21
Mabu S, Miyake M, Kuremoto T, Kido S. Semi-supervised CycleGAN for domain transformation of chest CT images and its application to opacity classification of diffuse lung diseases. Int J Comput Assist Radiol Surg 2021; 16:1925-1935. [PMID: 34661818] [PMCID: PMC8522550] [DOI: 10.1007/s11548-021-02490-2]
Abstract
Purpose The performance of deep learning may fluctuate depending on the imaging devices and settings. Although domain transformation such as CycleGAN is useful for normalizing images, CycleGAN does not use information on the disease classes. We therefore propose a semi-supervised CycleGAN with an additional classification loss that transforms images so that they are suitable for diagnosis. The method is evaluated on opacity classification of chest CT. Methods (1) CT images taken at two hospitals (source and target domains) are used. (2) A classifier is trained on the target domain. (3) Class labels are given to a small number of source domain images for semi-supervised learning. (4) The source domain images are transformed to the target domain. (5) A classification loss is computed on the transformed images that have class labels. Results The proposed method achieved an F-measure of 0.727 in the domain transformation from hospital A to B, and 0.745 from hospital B to A, with significant differences between the proposed method and the other three methods. Conclusions The proposed method not only transforms the appearance of the images but also retains the features that are important for classifying opacities, and it shows the best precision, recall, and F-measure.
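Step (5), the added classification loss, is what makes the CycleGAN semi-supervised: the usual translation objective is augmented with a cross-entropy term computed only on the small labeled subset. A numpy sketch of how such a combined objective could be assembled (the weighting and the toy classifier outputs are illustrative assumptions, not the paper's values):

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean cross-entropy over the labeled samples only."""
    picked = probs[np.arange(len(labels)), labels]
    return float(-np.mean(np.log(picked + 1e-12)))

def semi_supervised_loss(cycle_loss, probs_labeled, labels, weight=1.0):
    """Total objective: CycleGAN-style translation loss plus a weighted
    classification loss on the labeled subset."""
    return cycle_loss + weight * cross_entropy(probs_labeled, labels)

# Toy classifier outputs for three labeled, transformed images.
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.7, 0.3]])
labels = np.array([0, 1, 0])
total = semi_supervised_loss(0.5, probs, labels, weight=2.0)
```

Because the gradient of the classification term flows back through the generator, the transformation is pushed to preserve class-discriminative features, not just overall appearance.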
Affiliation(s)
- Shingo Mabu
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University, 2-16-1, Tokiwadai, Ube, Yamaguchi, 755-8611, Japan.
- Masashi Miyake
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University, 2-16-1, Tokiwadai, Ube, Yamaguchi, 755-8611, Japan
- Takashi Kuremoto
- Department of Information Technology and Media Design, Nippon Institute of Technology, 4-1 Gakuendai, Miyashiro-machi, Minamisaitama-gun, Saitama, 345-8501, Japan
- Shoji Kido
- Graduate School of Medicine, Osaka University, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
22
Sui B, Lv J, Tong X, Li Y, Wang C. Simultaneous image reconstruction and lesion segmentation in accelerated MRI using multitasking learning. Med Phys 2021; 48:7189-7198. [PMID: 34542180] [DOI: 10.1002/mp.15213]
Abstract
PURPOSE Magnetic resonance imaging (MRI) serves as an important medical imaging modality for a variety of clinical applications. However, its long imaging time limits its wide usage, and prolonged scan times cause patient discomfort and can lead to severe image artifacts. On the other hand, manual lesion segmentation is time-consuming, and algorithm-based automatic lesion segmentation remains challenging, especially for accelerated imaging with low image quality. METHODS In this paper, we proposed a multitask learning-based method, called "RecSeg", to perform image reconstruction and lesion segmentation simultaneously. Our hypothesis is that both tasks can benefit from the proposed combined model. In the experiments, we validated the proposed multitask model on MR k-space data with different acceleration factors (2×, 4×, and 6×). Two connected U-nets were used for the tasks of liver and renal image reconstruction and segmentation. A total of 50 healthy subjects and 100 patients with hepatocellular carcinoma were included for training and testing. For the segmentation task, we used healthy subjects to verify organ segmentation and hepatocellular carcinoma patients to verify lesion segmentation. The organs and lesions were manually contoured by an experienced radiologist. RESULTS Experimental results show that the proposed RecSeg yielded the highest PSNR (RecSeg: 32.39 ± 1.64 vs. KSVD: 29.53 ± 2.74 and single U-net: 31.18 ± 1.68, respectively, p < 0.05) and the highest structural similarity index measure (SSIM) (RecSeg: 0.93 ± 0.01 vs. KSVD: 0.88 ± 0.02 and single U-net: 0.90 ± 0.01, respectively, p < 0.05) under 6× acceleration. Moreover, in the lesion segmentation task, the proposed RecSeg produced the highest Dice score (RecSeg: 0.86 ± 0.01 vs. KSVD: 0.82 ± 0.01 and single U-net: 0.84 ± 0.01, respectively, p < 0.05).
CONCLUSIONS This study focused on the simultaneous reconstruction of medical images and the segmentation of organs and lesions. The results show that the multitask learning-based method can improve the performance of both image reconstruction and lesion segmentation.
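The joint-training idea, one loss term per task, can be sketched with MSE for reconstruction and a Dice-based term for segmentation. The weighting `alpha` and the exact loss composition are illustrative assumptions, not RecSeg's published objective:

```python
import numpy as np

def dice_score(pred, target, eps=1e-6):
    """Dice overlap between two binary masks."""
    inter = np.sum(pred * target)
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

def multitask_loss(recon, image, pred_mask, true_mask, alpha=0.5):
    """Joint objective: reconstruction MSE plus (1 - Dice) for
    segmentation, mirroring the shared-training idea behind RecSeg."""
    recon_term = float(np.mean((recon - image) ** 2))
    seg_term = 1.0 - dice_score(pred_mask, true_mask)
    return alpha * recon_term + (1 - alpha) * seg_term

img = np.ones((8, 8))
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0
loss_perfect = multitask_loss(img, img, mask, mask)  # both tasks perfect
```

Minimizing both terms through shared layers is what lets the segmentation supervision regularize the reconstruction, and vice versa.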
Affiliation(s)
- Bin Sui
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Xiangrong Tong
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Yan Li
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China
23
Li GY, Wang CY, Lv J. Current status of deep learning in abdominal image reconstruction. Artif Intell Med Imaging 2021; 2:86-94. [DOI: 10.35711/aimi.v2.i4.86]
Abstract
Abdominal magnetic resonance imaging (MRI) and computed tomography (CT) are commonly used for disease screening, diagnosis, and treatment guidance. However, abdominal MRI has disadvantages including slow acquisition and vulnerability to motion, while CT involves radiation exposure. It has been reported that deep learning-based reconstruction can address these problems while maintaining good image quality. Recently, deep learning-based image reconstruction has become a hot topic in the field of medical imaging. This study reviews the latest research on deep learning reconstruction in abdominal imaging, including the widely used convolutional neural network, generative adversarial network, and recurrent neural network.
Affiliation(s)
- Guang-Yuan Li
- School of Computer and Control Engineering, Yantai University, Yantai 264000, Shandong Province, China
- Cheng-Yan Wang
- Human Phenome Institute, Fudan University, Shanghai 201203, China
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai 264000, Shandong Province, China
24
Li G, Lv J, Tong X, Wang C, Yang G. High-Resolution Pelvic MRI Reconstruction Using a Generative Adversarial Network With Attention and Cyclic Loss. IEEE Access 2021; 9:105951-105964. [DOI: 10.1109/access.2021.3099695]