1. Yang S, Kim KD, Ariji E, Kise Y. Generative adversarial networks in dental imaging: a systematic review. Oral Radiol 2024;40:93-108. PMID: 38001347. DOI: 10.1007/s11282-023-00719-1.
Abstract
OBJECTIVES This systematic review of generative adversarial network (GAN) architectures for dental image analysis provides readers with a comprehensive overview of current GAN trends in dental imagery and potential future applications. METHODS Electronic databases (PubMed/MEDLINE, Scopus, Embase, and Cochrane Library) were searched to identify studies involving GANs for dental image analysis. Eighteen full-text articles describing the applications of GANs in dental imagery were reviewed. Risk of bias and applicability concerns were assessed using the QUADAS-2 tool. RESULTS GANs were used for various imaging modalities, including two-dimensional and three-dimensional images. In dental imaging, GANs were utilized for tasks such as artifact reduction, denoising, super-resolution, domain transfer, image generation for augmentation, outcome prediction, and identification. The generated images were incorporated into tasks such as landmark detection, object detection, and classification. Because of heterogeneity among the studies, a meta-analysis could not be conducted. Most studies (72%) had a low risk of bias in all four domains. However, only three (17%) studies had a low risk of applicability concerns. CONCLUSIONS This extensive analysis of GANs in dental imaging highlighted their broad application potential within the dental field. Future studies should address limitations related to the stability, repeatability, and overall interpretability of GAN architectures. By overcoming these challenges, the applicability of GANs in dentistry can be enhanced, ultimately benefiting the dental field in its use of GANs and artificial intelligence.
Affiliations
- Sujin Yang: Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Kee-Deog Kim: Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Eiichiro Ariji: Department of Oral and Maxillofacial Radiology, School of Dentistry, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan
- Yoshitaka Kise: Department of Oral and Maxillofacial Radiology, School of Dentistry, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan
2. Lu X, Liang X, Liu W, Miao X, Guan X. ReeGAN: MRI image edge-preserving synthesis based on GANs trained with misaligned data. Med Biol Eng Comput 2024. PMID: 38396277. DOI: 10.1007/s11517-024-03035-w.
Abstract
As a crucial medical examination technique, different modalities of magnetic resonance imaging (MRI) complement each other, offering multi-angle and multi-dimensional insights into the body's internal information. Therefore, research on MRI cross-modality conversion is of great significance, and many innovative techniques have been explored. However, most methods are trained on well-aligned data, and the impact of misaligned data has not received sufficient attention. Additionally, many methods focus on transforming the entire image and ignore crucial edge information. To address these challenges, we propose a generative adversarial network based on multi-feature fusion, which effectively preserves edge information while training on noisy data. Notably, we consider images with limited range random transformations as noisy labels and use an additional small auxiliary registration network to help the generator adapt to the noise distribution. Moreover, we inject auxiliary edge information to improve the quality of synthesized target modality images. Our goal is to find the best solution for cross-modality conversion. Comprehensive experiments and ablation studies demonstrate the effectiveness of the proposed method.
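For orientation, here is a minimal sketch of the kind of objective described in this abstract: an L1 reconstruction term computed against a target that is assumed to have been corrected by an auxiliary registration step, plus a Sobel-based edge-preservation term. The generator, the registration step, and the weighting are placeholders, not the authors' ReeGAN implementation.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img):
    """Approximate image edges with Sobel filters (img: [B, 1, H, W])."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx.to(img), padding=1)
    gy = F.conv2d(img, ky.to(img), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def synthesis_loss(fake_tgt, warped_tgt, lambda_edge=1.0):
    """L1 against a (hypothetically registration-corrected) target plus an edge term."""
    l1 = F.l1_loss(fake_tgt, warped_tgt)
    edge = F.l1_loss(sobel_edges(fake_tgt), sobel_edges(warped_tgt))
    return l1 + lambda_edge * edge

# toy usage with random tensors standing in for generator output and target
fake = torch.rand(2, 1, 64, 64)
target = torch.rand(2, 1, 64, 64)
print(synthesis_loss(fake, target).item())
```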
Affiliations
- Xiangjiang Lu, Xiaoshuang Liang, Wenjing Liu, Xiuxia Miao, Xianglong Guan: Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
3. Li W, Liu J, Wang S, Feng C. MTFN: multi-temporal feature fusing network with co-attention for DCE-MRI synthesis. BMC Med Imaging 2024;24:47. PMID: 38373915. PMCID: PMC10875895. DOI: 10.1186/s12880-024-01201-y.
Abstract
BACKGROUND Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays an important role in the diagnosis and treatment of breast cancer. However, obtaining all eight temporal images of DCE-MRI requires a long scanning time, which causes patient discomfort during the examination. To reduce this time, the multi-temporal feature fusing network with co-attention (MTFN) is proposed to generate the eighth temporal images of DCE-MRI, enabling their acquisition without scanning. METHODS In this paper, we propose MTFN for DCE-MRI synthesis, in which the co-attention module fully fuses the features of the first and third temporal images to obtain hybrid features. The co-attention explores long-range dependencies, not just relationships between pixels. Therefore, the hybrid features are more helpful for generating the eighth temporal images. RESULTS We conduct experiments on a private breast DCE-MRI dataset from hospitals and the multimodal Brain Tumor Segmentation Challenge 2018 dataset (BraTS2018). Compared with existing methods, the experimental results show that our method generates more realistic images. We also use the synthetic images to classify the molecular subtype of breast cancer: the classification accuracies on the original and generated eighth time-series images are 89.53% and 92.46%, respectively, an improvement of about 3%, which verifies the practicability of the synthetic images. CONCLUSIONS The results of subjective evaluation and objective image quality metrics show the effectiveness of our method, which can obtain comprehensive and useful information. The improvement in classification accuracy shows that the images generated by our method are practical.
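As an illustration of the co-attention idea described here (long-range dependencies between two temporal phases), the snippet below uses a standard multi-head cross-attention layer to let features of one phase attend to another before fusion. The module name, channel sizes, and fusion by 1x1 convolution are illustrative assumptions, not the MTFN architecture.

```python
import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    """Illustrative cross-attention block: phase-A features query phase-B features,
    capturing long-range dependencies before the two phases are fused."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_a, feat_b):
        b, c, h, w = feat_a.shape
        qa = feat_a.flatten(2).transpose(1, 2)      # [B, HW, C]
        kb = feat_b.flatten(2).transpose(1, 2)
        attended, _ = self.attn(qa, kb, kb)         # phase A attends to phase B
        attended = attended.transpose(1, 2).reshape(b, c, h, w)
        return self.proj(torch.cat([feat_a, attended], dim=1))

# toy usage: fuse features from two DCE-MRI temporal phases
block = CoAttentionFusion(32)
out = block(torch.rand(1, 32, 16, 16), torch.rand(1, 32, 16, 16))
print(out.shape)  # torch.Size([1, 32, 16, 16])
```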
Affiliations
- Wei Li: Key Laboratory of Intelligent Computing in Medical Image (MIIC), Northeastern University, Shenyang, China
- Jiaye Liu: School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Shanshan Wang: School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Chaolu Feng: Key Laboratory of Intelligent Computing in Medical Image (MIIC), Northeastern University, Shenyang, China
4. Hognon C, Conze PH, Bourbonne V, Gallinato O, Colin T, Jaouen V, Visvikis D. Contrastive image adaptation for acquisition shift reduction in medical imaging. Artif Intell Med 2024;148:102747. PMID: 38325919. DOI: 10.1016/j.artmed.2023.102747.
Abstract
The domain shift, or acquisition shift in medical imaging, is responsible for potentially harmful differences between development and deployment conditions of medical image analysis techniques. There is a growing need in the community for advanced methods that could mitigate this issue better than conventional approaches. In this paper, we consider configurations in which we can expose a learning-based pixel-level adaptor to a large variability of unlabeled images during its training, i.e., sufficient to span the acquisition shift expected during the training or testing of a downstream task model. We leverage the ability of convolutional architectures to efficiently learn domain-agnostic features and train a many-to-one unsupervised mapping between a source collection of heterogeneous images from multiple unknown domains subjected to the acquisition shift and a homogeneous subset of this source set of lower cardinality, potentially constituted of a single image. To this end, we propose a new cycle-free image-to-image architecture based on a combination of three loss functions: a contrastive PatchNCE loss, an adversarial loss, and an edge-preserving loss, allowing for rich domain adaptation to the target image even under strong domain imbalance and low data regimes. Experiments support the value of the proposed contrastive image adaptation approach for regularizing downstream deep supervised segmentation and cross-modality synthesis models.
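The contrastive PatchNCE term mentioned above is, in its usual form, an InfoNCE loss over feature vectors sampled at matching spatial locations of the input and the translated output. A minimal sketch follows; the feature sampling, temperature, and encoder are placeholder assumptions rather than this paper's implementation.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_out, tau=0.07):
    """PatchNCE-style loss: features at the same spatial position in the source and
    the translated output are positives; all other positions act as negatives.
    feat_*: [N, C] feature vectors sampled at N matching locations."""
    f_src = F.normalize(feat_src, dim=1)
    f_out = F.normalize(feat_out, dim=1)
    logits = f_out @ f_src.t() / tau              # [N, N] similarity matrix
    targets = torch.arange(logits.size(0))        # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

# toy usage with 256 sampled patch features of dimension 64
print(patch_nce_loss(torch.randn(256, 64), torch.randn(256, 64)).item())
```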
Affiliations
- Clément Hognon: UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France; SOPHiA Genetics, Pessac, France
- Pierre-Henri Conze: UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
- Vincent Bourbonne: UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
- Vincent Jaouen: UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
- Dimitris Visvikis: UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
5. Honkamaa J, Khan U, Koivukoski S, Valkonen M, Latonen L, Ruusuvuori P, Marttinen P. Deformation equivariant cross-modality image synthesis with paired non-aligned training data. Med Image Anal 2023;90:102940. PMID: 37666115. DOI: 10.1016/j.media.2023.102940.
Abstract
Cross-modality image synthesis is an active research topic with multiple clinically relevant medical applications. Recently, methods allowing training with paired but misaligned data have started to emerge. However, no robust, well-performing methods applicable to a wide range of real-world data sets exist. In this work, we propose a generic solution to the problem of cross-modality image synthesis with paired but non-aligned data by introducing new loss functions that encourage deformation equivariance. The method consists of joint training of an image synthesis network together with separate registration networks, and it allows adversarial training conditioned on the input even with misaligned data. The work lowers the bar for new clinical applications by allowing effortless training of cross-modality image synthesis networks for more difficult data sets.
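To make the notion of a deformation-equivariance-encouraging loss concrete, the sketch below penalizes the difference between synthesizing-then-warping and warping-then-synthesizing under the same random spatial transform. A small random rotation stands in for the deformation and a 1x1 convolution for the synthesis network; this is only an illustration of the principle, not the authors' losses.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def equivariance_loss(synth_net, src, angle_deg=5.0):
    """Encourage synth(T(x)) ≈ T(synth(x)) for the same random spatial transform T."""
    a = math.radians(angle_deg * (2 * torch.rand(1).item() - 1))
    mat = torch.tensor([[math.cos(a), -math.sin(a), 0.0],
                        [math.sin(a),  math.cos(a), 0.0]]).unsqueeze(0)

    def warp(img):
        grid = F.affine_grid(mat.expand(img.size(0), -1, -1), list(img.shape),
                             align_corners=False)
        return F.grid_sample(img, grid, align_corners=False)

    return F.l1_loss(synth_net(warp(src)), warp(synth_net(src)))

# toy usage: a 1x1 conv stands in for the synthesis network
net = nn.Conv2d(1, 1, kernel_size=1)
print(equivariance_loss(net, torch.rand(2, 1, 32, 32)).item())
```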
Affiliations
- Joel Honkamaa: Department of Computer Science, Aalto University, Finland
- Umair Khan: Institute of Biomedicine, University of Turku, Finland
- Sonja Koivukoski: Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Mira Valkonen: Faculty of Medicine and Health Technology, Tampere University, Finland
- Leena Latonen: Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Pekka Ruusuvuori: Institute of Biomedicine, University of Turku, Finland; Faculty of Medicine and Health Technology, Tampere University, Finland
6. Raza O, Lawson M, Zouari F, Wong EC, Chan RW, Cao P. CycleGAN with mutual information loss constraint generates structurally aligned CT images from functional EIT images. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-4. PMID: 38082767. DOI: 10.1109/embc40787.2023.10340711.
Abstract
Electrical impedance tomography (EIT) has been employed in medical imaging because of its cost-effectiveness, safety profile, and portability, but the images it generates are relatively low resolution. To address these limitations, we present a method that uses EIT images to generate high-resolution, structurally aligned lung images like those from CT scans. One way to achieve this transformation is via cycle-consistent generative adversarial networks (CycleGAN), which have demonstrated image-to-image translation capabilities across different modalities. However, a generic implementation yields images which may not be aligned with their input image. To solve this issue, we construct and incorporate a mutual information (MI) constraint in CycleGAN to translate functional lung EIT images to structural high-resolution CT images. The CycleGAN is first trained on unpaired EIT and CT lung images. Afterwards, we generate CT image pairs from EIT images via CycleGANs constrained with the MI loss and without it. Finally, by generating 1560 such CT image pairs and comparing visual results and quantitative metrics, we show that the MI-constrained CycleGAN produces more structurally aligned CT images, with normalised mutual information (NMI) increased to 0.2621 ± 0.0052 versus 0.2600 ± 0.0066 (p < 0.0001) for non-MI-constrained images. By this process, we simultaneously provide functional and structural information and potentially enable more detailed assessment of the lungs. Clinical Relevance: By establishing a structurally aligning generative process via the MI loss in CycleGAN, this study enables EIT-to-CT conversion, thereby providing functional and structural images for enhanced lung assessment from EIT images alone.
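The NMI figures quoted above can be reproduced in spirit with a simple histogram-based estimate of normalized mutual information between two images; the bin count and the particular normalization below are illustrative choices and not necessarily the ones used in the study.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    """Histogram-based NMI = (H(A) + H(B)) / H(A, B); higher values indicate
    stronger statistical (structural) dependence between the two images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    eps = 1e-12
    h_a = -np.sum(px * np.log(px + eps))
    h_b = -np.sum(py * np.log(py + eps))
    h_ab = -np.sum(pxy * np.log(pxy + eps))
    return (h_a + h_b) / h_ab

# toy usage: NMI of an image with a noisy copy vs. an unrelated image
rng = np.random.default_rng(0)
a = rng.random((128, 128))
print(normalized_mutual_information(a, a + 0.05 * rng.random((128, 128))))
print(normalized_mutual_information(a, rng.random((128, 128))))
```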
7. Zhou T, Cheng Q, Lu H, Li Q, Zhang X, Qiu S. Deep learning methods for medical image fusion: A review. Comput Biol Med 2023;160:106959. PMID: 37141652. DOI: 10.1016/j.compbiomed.2023.106959.
Abstract
Image fusion methods based on deep learning have become a research hotspot in computer vision in recent years. This paper reviews these methods from five aspects. First, the principles and advantages of deep learning-based image fusion methods are expounded. Second, the methods are summarized as end-to-end and non-end-to-end: according to the task deep learning performs in the feature-processing stage, non-end-to-end methods are divided into deep learning for decision mapping and deep learning for feature extraction; according to the network type, end-to-end methods are divided into those based on convolutional neural networks, generative adversarial networks, and encoder-decoder networks. Third, the application of deep learning-based image fusion in the medical imaging field is summarized in terms of methods and data sets. Fourth, evaluation metrics commonly used in medical image fusion are sorted into 14 aspects. Fifth, the main challenges faced by medical image fusion are discussed with respect to data sets and fusion methods, and future development directions are prospected. This paper systematically summarizes deep learning-based image fusion methods, which has positive guiding significance for the in-depth study of multimodal medical images.
Affiliations
- Tao Zhou: School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Key Laboratory of Image and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan, 750021, China
- QianRu Cheng: School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Key Laboratory of Image and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan, 750021, China
- HuiLing Lu: School of Science, Ningxia Medical University, Yinchuan, 750004, China
- Qi Li: School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Key Laboratory of Image and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan, 750021, China
- XiangXiang Zhang: School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Key Laboratory of Image and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan, 750021, China
- Shi Qiu: Key Laboratory of Spectral Imaging Technology CAS, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
8. Liu Z, Wolfe S, Yu Z, Laforest R, Mhlanga JC, Fraum TJ, Itani M, Dehdashti F, Siegel BA, Jha AK. Observer-study-based approaches to quantitatively evaluate the realism of synthetic medical images. Phys Med Biol 2023;68. PMID: 36863028. PMCID: PMC10411234. DOI: 10.1088/1361-6560/acc0ce.
Abstract
Objective. Synthetic images generated by simulation studies have a well-recognized role in developing and evaluating imaging systems and methods. However, for clinically relevant development and evaluation, the synthetic images must be clinically realistic and, ideally, have the same distribution as that of clinical images. Thus, mechanisms that can quantitatively evaluate this clinical realism and, ideally, the similarity in distributions of the real and synthetic images are much needed. Approach. We investigated two observer-study-based approaches to quantitatively evaluate the clinical realism of synthetic images. In the first approach, we presented a theoretical formalism for the use of an ideal-observer study to quantitatively evaluate the similarity in distributions between the real and synthetic images. This theoretical formalism provides a direct relationship between the area under the receiver operating characteristic curve (AUC) for an ideal observer and the distributions of real and synthetic images. The second approach is based on the use of expert-human-observer studies to quantitatively evaluate the realism of synthetic images. In this approach, we developed web-based software to conduct two-alternative forced-choice (2-AFC) experiments with expert human observers. The usability of this software was evaluated by conducting a system usability scale (SUS) survey with seven expert human readers and five observer-study designers. Further, we demonstrated the application of this software to evaluate a stochastic and physics-based image-synthesis technique for oncologic positron emission tomography (PET). In this evaluation, the 2-AFC study with our software was performed by six expert human readers who were highly experienced in reading PET scans, with expertise ranging from 7 to 40 years (median: 12 years, average: 20.4 years). Main results. In the ideal-observer-study-based approach, we theoretically demonstrated that the AUC for an ideal observer can be expressed, to an excellent approximation, by the Bhattacharyya distance between the distributions of the real and synthetic images. This relationship shows that a decrease in the ideal-observer AUC indicates a decrease in the distance between the two image distributions. Moreover, an ideal-observer AUC at its lower bound of 0.5 implies that the distributions of synthetic and real images exactly match. For the expert-human-observer-study-based approach, our software for performing the 2-AFC experiments is available at https://apps.mir.wustl.edu/twoafc. Results from the SUS survey demonstrate that the web application is very user friendly and accessible. As a secondary finding, evaluation of a stochastic and physics-based PET image-synthesis technique using our software showed that expert human readers had limited ability to distinguish the real images from the synthetic images. Significance. This work addresses the important need for mechanisms to quantitatively evaluate the clinical realism of synthetic images. The mathematical treatment in this paper shows that quantifying the similarity in the distribution of real and synthetic images is theoretically possible by using an ideal-observer-study-based approach. Our developed software provides a platform for designing and performing 2-AFC experiments with human observers in a highly accessible, efficient, and secure manner. Additionally, our results on the evaluation of the stochastic and physics-based image-synthesis technique motivate the application of this technique to develop and evaluate a wide array of PET imaging methods.
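The central quantity in the ideal-observer argument above is the Bhattacharyya distance between the real- and synthetic-image distributions. As a stand-alone illustration (not the paper's derivation), the snippet below estimates that distance from histograms of a scalar feature computed on two sample sets; a distance near zero indicates closely matching distributions, consistent with an ideal-observer AUC near 0.5.

```python
import numpy as np

def bhattacharyya_distance(samples_p, samples_q, bins=64, value_range=(0.0, 1.0)):
    """Histogram estimate of the Bhattacharyya coefficient BC = sum(sqrt(p * q))
    and distance D_B = -ln(BC); D_B = 0 when the two distributions coincide."""
    p, _ = np.histogram(samples_p, bins=bins, range=value_range)
    q, _ = np.histogram(samples_q, bins=bins, range=value_range)
    p, q = p / p.sum(), q / q.sum()
    bc = np.sum(np.sqrt(p * q))
    return -np.log(bc + 1e-12)

# toy usage: a matched distribution gives a distance near 0, a mismatched one does not
rng = np.random.default_rng(1)
real = rng.beta(2, 5, size=10_000)
synthetic_good = rng.beta(2, 5, size=10_000)
synthetic_poor = rng.beta(5, 2, size=10_000)
print(bhattacharyya_distance(real, synthetic_good))
print(bhattacharyya_distance(real, synthetic_poor))
```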
Affiliations
- Ziping Liu: Department of Biomedical Engineering, Washington University, St. Louis, MO 63130, USA
- Scott Wolfe: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Zitong Yu: Department of Biomedical Engineering, Washington University, St. Louis, MO 63130, USA
- Richard Laforest: Mallinckrodt Institute of Radiology and Alvin J. Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO 63110, USA
- Joyce C Mhlanga: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Tyler J Fraum: Mallinckrodt Institute of Radiology and Alvin J. Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO 63110, USA
- Malak Itani: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Farrokh Dehdashti: Mallinckrodt Institute of Radiology and Alvin J. Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO 63110, USA
- Barry A Siegel: Mallinckrodt Institute of Radiology and Alvin J. Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO 63110, USA
- Abhinav K Jha: Department of Biomedical Engineering, Washington University, St. Louis, MO 63130, USA; Mallinckrodt Institute of Radiology and Alvin J. Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO 63110, USA
9. Chen J, Chen S, Wee L, Dekker A, Bermejo I. Deep learning based unpaired image-to-image translation applications for medical physics: a systematic review. Phys Med Biol 2023;68. PMID: 36753766. DOI: 10.1088/1361-6560/acba74.
Abstract
Purpose. There is a growing number of publications on the application of unpaired image-to-image (I2I) translation in medical imaging. However, a systematic review covering the current state of this topic for medical physicists is lacking. The aim of this article is to provide a comprehensive review of current challenges and opportunities for medical physicists and engineers to apply I2I translation in practice. Methods and materials. The PubMed electronic database was searched using terms referring to unpaired (unsupervised) I2I translation and medical imaging. This review has been reported in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. From each full-text article, we extracted information regarding technical and clinical applications of methods, Transparent Reporting for Individual Prognosis Or Diagnosis (TRIPOD) study type, algorithm performance, and accessibility of source code and pre-trained models. Results. Among 461 unique records, 55 full-text articles were included in the review. The major technical applications described in the selected literature are segmentation (26 studies), unpaired domain adaptation (18 studies), and denoising (8 studies). In terms of clinical applications, unpaired I2I translation has been used for automatic contouring of regions of interest in MRI, CT, x-ray and ultrasound images, fast MRI or low-dose CT imaging, CT- or MRI-only based radiotherapy planning, etc. Only 5 studies validated their models using an independent test set and none were externally validated by independent researchers. Finally, 12 articles published their source code and only one study published their pre-trained models. Conclusion. I2I translation of medical images offers a range of valuable applications for medical physicists. However, the scarcity of external validation studies of I2I models and the shortage of publicly available pre-trained models limit the immediate applicability of the proposed methods in practice.
Affiliations
- Junhua Chen, Shenlun Chen, Leonard Wee, Andre Dekker, Inigo Bermejo: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
10. Inan MSK, Hossain S, Uddin MN. Data augmentation guided breast cancer diagnosis and prognosis using an integrated deep-generative framework based on breast tumor's morphological information. Informatics in Medicine Unlocked 2023. DOI: 10.1016/j.imu.2023.101171.
11. Zou J, Gao B, Song Y, Qin J. A review of deep learning-based deformable medical image registration. Front Oncol 2022;12:1047215. PMID: 36568171. PMCID: PMC9768226. DOI: 10.3389/fonc.2022.1047215.
Abstract
The alignment of images through deformable image registration is vital to clinical applications (e.g., atlas creation, image fusion, and tumor targeting in image-guided navigation systems) and is still a challenging problem. Recent progress in the field of deep learning has significantly advanced the performance of medical image registration. In this review, we present a comprehensive survey on deep learning-based deformable medical image registration methods. These methods are classified into five categories: Deep Iterative Methods, Supervised Methods, Unsupervised Methods, Weakly Supervised Methods, and Latest Methods. A detailed review of each category is provided with discussions about contributions, tasks, and inadequacies. We also provide statistical analysis for the selected papers from the point of view of image modality, the region of interest (ROI), evaluation metrics, and method categories. In addition, we summarize 33 publicly available datasets that are used for benchmarking the registration algorithms. Finally, the remaining challenges, future directions, and potential trends are discussed in our review.
Affiliations
- Jing Zou: Hong Kong Polytechnic University, Hong Kong SAR, China
12. Tavse S, Varadarajan V, Bachute M, Gite S, Kotecha K. A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. Future Internet 2022;14:351. DOI: 10.3390/fi14120351.
Abstract
With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving as a popular radiological tool in clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field besides augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized dummy images in brain disease diagnosis. Web of Science and Scopus databases were extensively searched to find relevant studies from the last 6 years to write this systematic literature review (SLR). Predefined inclusion and exclusion criteria helped in filtering the search results. Data extraction is based on related research questions (RQ). This SLR identifies various loss functions used in the above applications and software to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps choose the proper metric for an application. GAN-synthesized images will have a crucial role in the clinical sector in the coming years, and this paper gives a baseline for other researchers in the field.
13. Zhang W, Zhou Z, Gao Z, Yang G, Xu L, Wu W, Zhang H. Multiple Adversarial Learning based Angiography Reconstruction for Ultra-low-dose Contrast Medium CT. IEEE J Biomed Health Inform 2022;27:409-420. PMID: 36219660. DOI: 10.1109/jbhi.2022.3213595.
Abstract
Iodinated contrast medium (ICM) dose reduction is beneficial for decreasing the potential health risk to renal-insufficiency patients in CT scanning. Because of the low-intensity vessels in ultra-low-dose-ICM CT angiography, such scans cannot directly support the clinical diagnosis of vascular diseases. Angiography reconstruction for ultra-low-dose-ICM CT can enhance vascular intensity for direct diagnosis of vascular diseases. However, angiography reconstruction is challenging because of individual patient differences and the diversity of vascular diseases. In this paper, we propose a Multiple Adversarial Learning based Angiography Reconstruction (MALAR) framework to enhance vascular intensity. Specifically, a bilateral learning mechanism is developed to map a relationship between source and target domains rather than an image-to-image mapping. Then, a dual correlation constraint is introduced to simultaneously characterize distribution uniformity from cross-domain features and sample inconsistency within domains. Finally, an adaptive fusion module combining multi-scale information and long-range interactive dependency is explored to alleviate the interference of high-noise metal. Experiments are performed on CT sequences with different ICM doses. Quantitative results based on multiple metrics demonstrate the effectiveness of MALAR for angiography reconstruction. Qualitative assessments by radiographers confirm the potential of MALAR for the clinical diagnosis of vascular diseases. The code and model are available at https://github.com/HIC-SYSU/MALAR.
Affiliations
- Weiwei Zhang: School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Zhen Zhou: Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Zhifan Gao: School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Guang Yang: Cardiovascular Research Centre, Royal Brompton Hospital, London, UK
- Lei Xu: Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Weiwen Wu: School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Heye Zhang: School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
14. Wang K, Kesavadas T. Validation of FEA-based breast deformation simulation using an artificial neural network. Informatics in Medicine Unlocked 2022. DOI: 10.1016/j.imu.2022.101044.
15. Reaungamornrat S, Sari H, Catana C, Kamen A. Multimodal image synthesis based on disentanglement representations of anatomical and modality specific features, learned using uncooperative relativistic GAN. Med Image Anal 2022;80:102514. PMID: 35717874. PMCID: PMC9810205. DOI: 10.1016/j.media.2022.102514.
Abstract
A growing number of methods for attenuation-coefficient map estimation from magnetic resonance (MR) images have recently been proposed because of the increasing interest in MR-guided radiotherapy and the introduction of positron emission tomography (PET)/MR hybrid systems. We propose a deep-network ensemble incorporating stochastic-binary anatomical encoders and imaging-modality variational autoencoders to disentangle image latent spaces into a space of modality-invariant anatomical features and spaces of modality attributes. The ensemble integrates modality-modulated decoders to normalize features and image intensities based on imaging modality. Besides promoting disentanglement, the architecture fosters uncooperative learning, offering the ability to maintain anatomical structure in cross-modality reconstruction. Introduction of a modality-invariant structural consistency constraint further enforces faithful embedding of anatomy. To improve training stability and the fidelity of synthesized modalities, the ensemble is trained in a relativistic generative adversarial framework incorporating multiscale discriminators. Analyses of priors and network architectures as well as performance validation were performed on computed tomography (CT) and MR pelvis datasets. The proposed method demonstrated robustness against intensity inhomogeneity, improved tissue-class differentiation, and offered synthetic CT in Hounsfield units with intensities consistent and smooth across slices compared with state-of-the-art approaches, with a median normalized mutual information of 1.28, normalized cross-correlation of 0.97, and gradient cross-correlation of 0.59 over 324 images.
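For readers unfamiliar with the relativistic adversarial framework mentioned above, the snippet below shows the generic relativistic average GAN losses, in which each sample is judged relative to the mean critic score of the opposite class. This is the standard RaGAN formulation, not the ensemble architecture or multiscale discriminators of this paper.

```python
import torch
import torch.nn.functional as F

def ragan_d_loss(real_logits, fake_logits):
    """Discriminator: real samples should score higher than the average fake,
    and fake samples lower than the average real."""
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.zeros_like(fake_logits))
    return 0.5 * (loss_real + loss_fake)

def ragan_g_loss(real_logits, fake_logits):
    """Generator: make fakes score higher than the average real, and vice versa."""
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.zeros_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.ones_like(fake_logits))
    return 0.5 * (loss_real + loss_fake)

# toy usage with random critic logits
r, f = torch.randn(8, 1), torch.randn(8, 1)
print(ragan_d_loss(r, f).item(), ragan_g_loss(r, f).item())
```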
Affiliations
- Hasan Sari: Harvard Medical School, Boston, MA 02115, USA
- Ali Kamen: Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ 08540, USA
16. Arora A, Arora A. Generative adversarial networks and synthetic patient data: current challenges and future perspectives. Future Healthc J 2022;9:190-193. DOI: 10.7861/fhj.2022-0013.
17. Kamran SA, Hossain KF, Moghnieh H, Riar S, Bartlett A, Tavakkoli A, Sanders KM, Baker SA. New open-source software for subcellular segmentation and analysis of spatiotemporal fluorescence signals using deep learning. iScience 2022;25:104277. PMID: 35573197. PMCID: PMC9095751. DOI: 10.1016/j.isci.2022.104277.
Abstract
Advances in cellular imaging instrumentation, together with readily available optogenetic and fluorescence sensors, have created a profound need for fast, accurate, and standardized analysis. Deep-learning architectures have revolutionized the field of biomedical image analysis and have achieved state-of-the-art accuracy. Despite these advancements, deep learning architectures for the segmentation of subcellular fluorescence signals are lacking. Cellular dynamic fluorescence signals can be plotted and visualized using spatiotemporal maps (STMaps), and currently their segmentation and quantification are hindered by slow workflow speed and lack of accuracy, especially for large datasets. In this study, we provide a software tool that utilizes a deep-learning methodology to fundamentally overcome signal segmentation challenges. The software framework demonstrates highly optimized and accurate calcium signal segmentation and provides a fast analysis pipeline that can accommodate different patterns of signals across multiple cell types. The software allows seamless data accessibility, quantification, and graphical visualization and enables large-dataset analysis throughput. Highlights: 4SM is an automated software solution for cellular dynamic fluorescence signal analysis; 4SM relies on a novel machine-learning pipeline for fluorescence signal segmentation; 4SM is fast and provides a consistent method for high-throughput analysis of datasets; 4SM provides instant signal quantification and graphical representation of the results.
18. Wu X, Li C, Zeng X, Wei H, Deng HW, Zhang J, Xu M. CryoETGAN: Cryo-Electron Tomography Image Synthesis via Unpaired Image Translation. Front Physiol 2022;13:760404. PMID: 35370760. PMCID: PMC8970048. DOI: 10.3389/fphys.2022.760404.
Abstract
Cryo-electron tomography (Cryo-ET) has been regarded as a revolution in structural biology and can reveal molecular sociology. Its unprecedented quality enables it to visualize cellular organelles and macromolecular complexes at nanometer resolution with native conformations. Motivated by developments in nanotechnology and machine learning, establishing machine learning approaches such as classification, detection and averaging for Cryo-ET image analysis has inspired broad interest. Yet, deep learning-based methods for biomedical imaging typically require large labeled datasets for good results, which can be a great challenge due to the expense of obtaining and labeling training data. To deal with this problem, we propose a generative model to simulate Cryo-ET images efficiently and reliably: CryoETGAN. This cycle-consistent and Wasserstein generative adversarial network (GAN) is able to generate images with an appearance similar to the original experimental data. Quantitative and visual grading results on generated images are provided to show that the results of our proposed method achieve better performance compared to the previous state-of-the-art simulation methods. Moreover, CryoETGAN is stable to train and capable of generating plausibly diverse image samples.
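As a rough sketch of the two loss families combined in a cycle-consistent Wasserstein GAN, the snippet below pairs a Wasserstein critic term on the target domain with an A-to-B-to-A cycle-consistency penalty. The generators and critic are tiny placeholder networks, the weighting is arbitrary, and the Lipschitz constraint (weight clipping or gradient penalty) is omitted; it is not the CryoETGAN implementation.

```python
import torch
import torch.nn as nn

# placeholder generators (A->B, B->A) and a critic on domain B; real models are deeper
g_ab = nn.Conv2d(1, 1, 3, padding=1)
g_ba = nn.Conv2d(1, 1, 3, padding=1)
critic_b = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten())

def cycle_wgan_losses(real_a, real_b, lambda_cyc=10.0):
    """Wasserstein critic loss on domain B plus A->B->A cycle-consistency for the generators."""
    fake_b = g_ab(real_a)
    rec_a = g_ba(fake_b)
    critic_loss = critic_b(fake_b.detach()).mean() - critic_b(real_b).mean()
    gen_loss = -critic_b(fake_b).mean() + lambda_cyc * torch.mean(torch.abs(rec_a - real_a))
    return critic_loss, gen_loss

c_loss, g_loss = cycle_wgan_losses(torch.rand(2, 1, 32, 32), torch.rand(2, 1, 32, 32))
print(c_loss.item(), g_loss.item())
```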
Affiliations
- Xindi Wu: Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, United States
- Chengkun Li: École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Xiangrui Zeng: Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, United States
- Haocheng Wei: Department of Electrical & Computer Engineering, University of Toronto, Toronto, ON, Canada
- Hong-Wen Deng: Center for Biomedical Informatics & Genomics, Tulane University, New Orleans, LA, United States
- Jing Zhang: Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Min Xu: Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, United States
19. Wang P, Nie P, Dang Y, Wang L, Zhu K, Wang H, Wang J, Liu R, Ren J, Feng J, Fan H, Yu J, Chen B. Synthesizing the First Phase of Dynamic Sequences of Breast MRI for Enhanced Lesion Identification. Front Oncol 2021;11:792516. PMID: 34950593. PMCID: PMC8689139. DOI: 10.3389/fonc.2021.792516.
Abstract
Objective To develop a deep learning model for synthesizing the first phases of dynamic (FP-Dyn) sequences to supplement the lack of information in unenhanced breast MRI examinations. Methods In total, 97 patients with breast MRI images were divided into a training set (n = 45), a validation set (n = 31), and a test set (n = 21). An enhance border lifelike synthesize (EDLS) model was developed on the training set and used to synthesize FP-Dyn images from T1WI images in the validation set. The peak signal-to-noise ratio (PSNR), structural similarity (SSIM), mean square error (MSE) and mean absolute error (MAE) of the synthesized images were measured, and three radiologists subjectively assessed image quality. The diagnostic value of the synthesized FP-Dyn sequences was further evaluated in the test set. Results The image synthesis performance of the EDLS model was superior to that of conventional models in terms of PSNR, SSIM, MSE, and MAE. Subjective results showed remarkable visual consistency between the synthesized and original FP-Dyn images. Moreover, using a combination of the synthesized FP-Dyn sequence and an unenhanced protocol, the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of MRI were 100%, 72.73%, 76.92%, and 100%, respectively, a diagnostic value similar to that of full MRI protocols. Conclusions The EDLS model could synthesize realistic FP-Dyn sequences to supplement the lack of enhanced images. Compared with full MRI examinations, it thus provides a new approach for reducing examination time and cost, and it avoids the use of contrast agents without influencing diagnostic accuracy.
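The objective image-quality metrics reported above (PSNR, SSIM, MSE, MAE) are standard and can be computed as in the sketch below with scikit-image; this is generic evaluation code under the assumption of images scaled to [0, 1], not the study's own pipeline.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality_metrics(reference, synthesized):
    """PSNR, SSIM, MSE, and MAE between a reference image and a synthesized image
    (both float arrays scaled to [0, 1])."""
    mse = np.mean((reference - synthesized) ** 2)
    mae = np.mean(np.abs(reference - synthesized))
    psnr = peak_signal_noise_ratio(reference, synthesized, data_range=1.0)
    ssim = structural_similarity(reference, synthesized, data_range=1.0)
    return {"PSNR": psnr, "SSIM": ssim, "MSE": mse, "MAE": mae}

# toy usage: compare a reference with a slightly noisy version of itself
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.02 * rng.standard_normal((64, 64)), 0, 1)
print(image_quality_metrics(ref, noisy))
```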
Affiliations
- Pingping Wang: Clinical Experimental Centre, Xi'an International Medical Center Hospital, Xi'an, China
- Pin Nie: Imaging Diagnosis and Treatment Center, Xi'an International Medical Center Hospital, Xi'an, China
- Yanli Dang: Imaging Diagnosis and Treatment Center, Xi'an International Medical Center Hospital, Xi'an, China
- Lifang Wang: Imaging Diagnosis and Treatment Center, Xi'an International Medical Center Hospital, Xi'an, China
- Kaiguo Zhu: Imaging Diagnosis and Treatment Center, Xi'an International Medical Center Hospital, Xi'an, China
- Hongyu Wang: The School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, China
- Jiawei Wang: Imaging Diagnosis and Treatment Center, Xi'an International Medical Center Hospital, Xi'an, China
- Rumei Liu: Imaging Diagnosis and Treatment Center, Xi'an International Medical Center Hospital, Xi'an, China
- Jun Feng: The School of Information of Science and Technology, Northwest University, Xi'an, China
- Haiming Fan: The School of Medicine, Northwest University, Xi'an, China
- Jun Yu: Clinical Experimental Centre, Xi'an International Medical Center Hospital, Xi'an, China
- Baoying Chen: Imaging Diagnosis and Treatment Center, Xi'an International Medical Center Hospital, Xi'an, China
20. Wang S, Celebi ME, Zhang YD, Yu X, Lu S, Yao X, Zhou Q, Miguel MG, Tian Y, Gorriz JM, Tyukin I. Advances in Data Preprocessing for Biomedical Data Fusion: An Overview of the Methods, Challenges, and Prospects. Information Fusion 2021;76:376-421. DOI: 10.1016/j.inffus.2021.07.001.
21. Mabu S, Miyake M, Kuremoto T, Kido S. Semi-supervised CycleGAN for domain transformation of chest CT images and its application to opacity classification of diffuse lung diseases. Int J Comput Assist Radiol Surg 2021;16:1925-1935. PMID: 34661818. PMCID: PMC8522550. DOI: 10.1007/s11548-021-02490-2.
Abstract
Purpose The performance of deep learning may fluctuate depending on the imaging devices and settings. Although domain transformation such as CycleGAN for normalizing images is useful, CycleGAN does not use information on the disease classes. Therefore, we propose a semi-supervised CycleGAN with an additional classification loss to transform images so that they are suitable for diagnosis. The method is evaluated on opacity classification of chest CT. Methods (1) CT images taken at two hospitals (source and target domains) are used. (2) A classifier is trained on the target domain. (3) Class labels are given to a small number of source-domain images for semi-supervised learning. (4) The source-domain images are transformed to the target domain. (5) A classification loss of the transformed images with class labels is calculated. Results The proposed method showed an F-measure of 0.727 for the domain transformation from hospital A to B, and 0.745 for that from hospital B to A, with significant differences between the proposed method and the other three methods. Conclusions The proposed method not only transforms the appearance of the images but also retains the features that are important for classifying opacities, and it shows the best precision, recall, and F-measure.
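Steps (3) to (5) above amount to adding a classification loss on the domain-transformed images to the usual CycleGAN generator objective. The sketch below shows that combination with placeholder networks and an arbitrary weighting; it is an illustration of the idea rather than the authors' training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Conv2d(1, 1, 3, padding=1)                          # source -> target (placeholder)
classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 3))    # classifier trained on the target domain

def semi_supervised_gen_loss(src_imgs, labels, adv_loss, lambda_cls=1.0):
    """Adds a cross-entropy term on class-labelled, domain-transformed images so the
    translation preserves features relevant for opacity classification."""
    transformed = generator(src_imgs)
    cls_loss = F.cross_entropy(classifier(transformed), labels)
    return adv_loss + lambda_cls * cls_loss

# toy usage: a few labelled source images; the adversarial part is given as a scalar here
imgs = torch.rand(4, 1, 32, 32)
labels = torch.tensor([0, 1, 2, 1])
print(semi_supervised_gen_loss(imgs, labels, adv_loss=torch.tensor(0.8)).item())
```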
Affiliations
- Shingo Mabu: Graduate School of Sciences and Technology for Innovation, Yamaguchi University, 2-16-1 Tokiwadai, Ube, Yamaguchi, 755-8611, Japan
- Masashi Miyake: Graduate School of Sciences and Technology for Innovation, Yamaguchi University, 2-16-1 Tokiwadai, Ube, Yamaguchi, 755-8611, Japan
- Takashi Kuremoto: Department of Information Technology and Media Design, Nippon Institute of Technology, 4-1 Gakuendai, Miyashiro-machi, Minamisaitama-gun, Saitama, 345-8501, Japan
- Shoji Kido: Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita, Osaka, 565-0871, Japan
22. Casamitjana A, Mancini M, Iglesias JE. Synth-by-Reg (SbR): Contrastive learning for synthesis-based registration of paired images. Simul Synth Med Imaging 2021;12965:44-54. PMID: 34778892. PMCID: PMC8582976. DOI: 10.1007/978-3-030-87592-3_5.
Abstract
Nonlinear inter-modality registration is often challenging due to the lack of objective functions that are good proxies for alignment. Here we propose a synthesis-by-registration method to convert this problem into an easier intra-modality task. We introduce a registration loss for weakly supervised image translation between domains that does not require perfectly aligned training data. This loss capitalises on a registration U-Net with frozen weights, to drive a synthesis CNN towards the desired translation. We complement this loss with a structure preserving constraint based on contrastive learning, which prevents blurring and content shifts due to overfitting. We apply this method to the registration of histological sections to MRI slices, a key step in 3D histology reconstruction. Results on two public datasets show improvements over registration based on mutual information (13% reduction in landmark error) and synthesis-based algorithms such as CycleGAN (11% reduction), and are comparable to registration with label supervision. Code and data are publicly available at https://github.com/acasamitjana/SynthByReg.
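The core trick of driving synthesis with a registration network whose weights are frozen can be sketched as follows: the frozen network aligns the translated image to the fixed image, and the remaining intra-modality dissimilarity back-propagates only into the synthesis network. Both networks below are tiny placeholders and the similarity is plain L1, so this is only a schematic of the idea, not the SbR method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

synth_net = nn.Conv2d(1, 1, 3, padding=1)   # e.g. histology -> MR appearance (placeholder)
reg_net = nn.Conv2d(2, 2, 3, padding=1)     # predicts a dense 2-channel displacement field

for p in reg_net.parameters():              # freeze the registration weights
    p.requires_grad_(False)

def registration_driven_loss(moving, fixed):
    """Translate the moving image, let the frozen registration net align it to the
    fixed image, and penalise the residual intra-modality dissimilarity."""
    translated = synth_net(moving)
    flow = reg_net(torch.cat([translated, fixed], dim=1))           # [B, 2, H, W]
    identity = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]]).expand(moving.size(0), -1, -1)
    grid = F.affine_grid(identity, list(moving.shape), align_corners=False)
    warped = F.grid_sample(translated, grid + flow.permute(0, 2, 3, 1), align_corners=False)
    return F.l1_loss(warped, fixed)

print(registration_driven_loss(torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32)).item())
```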
Affiliations
- Matteo Mancini: Department of Neuroscience, University of Sussex, Brighton, UK; NeuroPoly Lab, Polytechnique Montreal, Canada; CUBRIC, Cardiff University, UK
- Juan Eugenio Iglesias: Center for Medical Image Computing, University College London, UK; Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, USA; Computer Science and AI Laboratory, Massachusetts Institute of Technology, USA
23. Abu-Srhan A, Almallahi I, Abushariah MAM, Mahafza W, Al-Kadi OS. Paired-unpaired Unsupervised Attention Guided GAN with transfer learning for bidirectional brain MR-CT synthesis. Comput Biol Med 2021;136:104763. PMID: 34449305. DOI: 10.1016/j.compbiomed.2021.104763.
Abstract
Medical image acquisition plays a significant role in the diagnosis and management of diseases. Magnetic Resonance (MR) and Computed Tomography (CT) are considered two of the most popular modalities for medical image acquisition. Some considerations, such as cost and radiation dose, may limit the acquisition of certain image modalities. Therefore, medical image synthesis can be used to generate required medical images without actual acquisition. In this paper, we propose a paired-unpaired Unsupervised Attention Guided Generative Adversarial Network (uagGAN) model to translate MR images to CT images and vice versa. The uagGAN model is pre-trained with a paired dataset for initialization and then retrained on an unpaired dataset using a cascading process. In the paired pre-training stage, we enhance the loss function of our model by combining the Wasserstein GAN adversarial loss function with a new combination of non-adversarial losses (content loss and L1) to generate fine structure images. This will ensure global consistency, and better capture of the high and low frequency details of the generated images. The uagGAN model is employed as it generates more accurate and sharper images through the production of attention masks. Knowledge from a non-medical pre-trained model is also transferred to the uagGAN model for improved learning and better image translation performance. Quantitative evaluation and qualitative perceptual analysis by radiologists indicate that employing transfer learning with the proposed paired-unpaired uagGAN model can achieve better performance as compared to other rival image-to-image translation models.
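The generator objective described above mixes a Wasserstein adversarial term with non-adversarial content and L1 terms. A minimal sketch of such a combination is given below; the critic and the feature extractor are random placeholder layers (the latter standing in for a pretrained network), and the weights are arbitrary, so this is not the uagGAN loss itself.

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
feature_net = nn.Conv2d(1, 8, 3, padding=1)   # stand-in for a pretrained feature extractor

def generator_loss(fake_ct, real_ct, lambda_content=1.0, lambda_l1=10.0):
    """Wasserstein adversarial term + content (feature-space) loss + pixel-wise L1."""
    adv = -critic(fake_ct).mean()
    content = torch.mean((feature_net(fake_ct) - feature_net(real_ct)) ** 2)
    l1 = torch.mean(torch.abs(fake_ct - real_ct))
    return adv + lambda_content * content + lambda_l1 * l1

# toy usage with random tensors standing in for synthesized and reference CT slices
print(generator_loss(torch.rand(2, 1, 32, 32), torch.rand(2, 1, 32, 32)).item())
```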
Affiliations
- Alaa Abu-Srhan: Department of Basic Science, The Hashemite University, Zarqa, Jordan
- Israa Almallahi: Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Mohammad A M Abushariah: King Abdullah II School of Information Technology, The University of Jordan, Amman, 11942, Jordan
- Waleed Mahafza: Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Omar S Al-Kadi: King Abdullah II School of Information Technology, The University of Jordan, Amman, 11942, Jordan
24. Slart RHJA, Williams MC, Juarez-Orozco LE, Rischpler C, Dweck MR, Glaudemans AWJM, Gimelli A, Georgoulias P, Gheysens O, Gaemperli O, Habib G, Hustinx R, Cosyns B, Verberne HJ, Hyafil F, Erba PA, Lubberink M, Slomka P, Išgum I, Visvikis D, Kolossváry M, Saraste A. Position paper of the EACVI and EANM on artificial intelligence applications in multimodality cardiovascular imaging using SPECT/CT, PET/CT, and cardiac CT. Eur J Nucl Med Mol Imaging 2021;48:1399-1413. PMID: 33864509. PMCID: PMC8113178. DOI: 10.1007/s00259-021-05341-z.
Abstract
In daily clinical practice, clinicians integrate available data to ascertain the diagnostic and prognostic probability of a disease or clinical outcome for their patients. For patients with suspected or known cardiovascular disease, several anatomical and functional imaging techniques are commonly performed to aid this endeavor, including coronary computed tomography angiography (CCTA) and nuclear cardiology imaging. Continuous improvement in positron emission tomography (PET), single-photon emission computed tomography (SPECT), and CT hardware and software has resulted in improved diagnostic performance and wide implementation of these imaging techniques in daily clinical practice. However, the human ability to interpret, quantify, and integrate these data sets is limited. The identification of novel markers and the application of machine learning (ML) algorithms, including deep learning (DL), to cardiovascular imaging techniques will further improve diagnosis and prognostication for patients with cardiovascular diseases. The goal of this position paper of the European Association of Nuclear Medicine (EANM) and the European Association of Cardiovascular Imaging (EACVI) is to provide an overview of the general concepts behind modern machine learning-based artificial intelligence, to highlight currently preferred methods, practices, and computational models, and to propose new strategies to support the clinical application of ML in the field of cardiovascular imaging using nuclear cardiology (hybrid) and CT techniques.
Affiliations
- Riemer H J A Slart: Medical Imaging Centre, Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Hanzeplein 1, PO 9700 RB, Groningen, The Netherlands; Faculty of Science and Technology, Biomedical Photonic Imaging, University of Twente, Enschede, The Netherlands
- Michelle C Williams: British Heart Foundation Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK; Edinburgh Imaging facility QMRI, Edinburgh, UK
- Luis Eduardo Juarez-Orozco: Department of Cardiology, Division Heart & Lungs, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands; University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Christoph Rischpler: Department of Nuclear Medicine, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Marc R Dweck: British Heart Foundation Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK; Edinburgh Imaging facility QMRI, Edinburgh, UK
- Andor W J M Glaudemans: Medical Imaging Centre, Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Hanzeplein 1, PO 9700 RB, Groningen, The Netherlands
- Panagiotis Georgoulias: Department of Nuclear Medicine, Faculty of Medicine, University of Thessaly, University Hospital of Larissa, Larissa, Greece
- Olivier Gheysens: Department of Nuclear Medicine, Cliniques Universitaires Saint-Luc and Institute of Clinical and Experimental Research (IREC), Université catholique de Louvain (UCLouvain), Brussels, Belgium
- Gilbert Habib: APHM, Cardiology Department, La Timone Hospital, Marseille, France; IRD, APHM, MEPHI, IHU-Méditerranée Infection, Aix Marseille Université, Marseille, France
- Roland Hustinx: Division of Nuclear Medicine and Oncological Imaging, Department of Medical Physics, ULiège, Liège, Belgium
- Bernard Cosyns: Department of Cardiology, Centrum voor Hart en Vaatziekten, Universitair Ziekenhuis Brussel, 101 Laarbeeklaan, 1090, Brussels, Belgium
- Hein J Verberne: Department of Radiology and Nuclear Medicine, Amsterdam UMC, location AMC, University of Amsterdam, Amsterdam, The Netherlands
- Fabien Hyafil: Department of Nuclear Medicine, DMU IMAGINA, Georges-Pompidou European Hospital, Assistance Publique - Hôpitaux de Paris, F-75015, Paris, France; University of Paris, PARCC, INSERM, F-75006, Paris, France
- Paola A Erba: Medical Imaging Centre, Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands; Department of Nuclear Medicine, University of Pisa, Pisa, Italy; Department of Translational Research and New Technology in Medicine, University of Pisa, Pisa, Italy
- Mark Lubberink: Department of Surgical Sciences/Radiology, Uppsala University, Uppsala, Sweden; Medical Physics, Uppsala University Hospital, Uppsala, Sweden
- Piotr Slomka: Department of Imaging, Medicine, and Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Ivana Išgum: Department of Radiology and Nuclear Medicine, Amsterdam UMC, location AMC, University of Amsterdam, Amsterdam, The Netherlands; Department of Biomedical Engineering and Physics, Amsterdam UMC, location AMC, University of Amsterdam, Amsterdam, The Netherlands
- Márton Kolossváry: MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68 Városmajor Street, Budapest, Hungary
- Antti Saraste: Turku PET Centre, Turku University Hospital, University of Turku, Turku, Finland; Heart Center, Turku University Hospital, Turku, Finland