1
Zeng Q, Liu W, Li B, Didier R, Grant PE, Karimi D. Towards automatic US-MR fetal brain image registration with learning-based methods. Neuroimage 2025; 310:121104. PMID: 40058533; PMCID: PMC12021370; DOI: 10.1016/j.neuroimage.2025.121104.
Abstract
Fetal brain imaging is essential for prenatal care, with ultrasound (US) and magnetic resonance imaging (MRI) providing complementary strengths. While MRI has superior soft tissue contrast, US offers portable and inexpensive screening of neurological abnormalities. Despite the great potential synergy of combined fetal brain US and MR imaging to enhance diagnostic accuracy, little effort has been made to integrate these modalities. An essential step towards this integration is accurate automatic spatial alignment, which is technically very challenging due to the inherent differences in contrast and modality-specific imaging artifacts. In this work, we present a novel atlas-assisted multi-task learning technique to address this problem. Instead of training the registration model solely with intra-subject US-MR image pairs, our approach enables the network to also learn from domain-specific image-to-atlas registration tasks. This leads to an end-to-end multi-task learning framework with superior registration performance. Our proposed method was validated using a dataset of same-day intra-subject 3D US-MR image pairs. The results show that our method outperforms conventional optimization-based methods and recent learning-based techniques for rigid image registration. Specifically, the average target registration error for our method is less than 4 mm, which is significantly better than existing methods. Extensive experiments have also shown that our method has a much wider capture range and is robust to brain abnormalities. Given these advantages over existing techniques, our method is more suitable for deployment in clinical workflows and may contribute to streamlined multimodal imaging pipelines for fetal brain assessment.
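A target registration error of the kind quoted above is usually computed as the mean distance between corresponding anatomical landmarks after applying the estimated rigid transform. The sketch below illustrates that computation only; it is not the paper's code, and the landmark coordinates and transforms are placeholders assumed to be in millimetres.

```python
# Minimal sketch (not the paper's implementation): mean target registration
# error (TRE) between MR landmarks and US landmarks mapped by an estimated
# 3D rigid transform. Coordinates are assumed to be in millimetres.
import numpy as np

def rigid_transform(points, rotation, translation):
    """Apply a 3D rigid transform (3x3 rotation, 3-vector translation) to Nx3 points."""
    return points @ rotation.T + translation

def target_registration_error(fixed_landmarks, moving_landmarks, rotation, translation):
    """Mean Euclidean distance between fixed landmarks and transformed moving landmarks."""
    mapped = rigid_transform(moving_landmarks, rotation, translation)
    return np.linalg.norm(mapped - fixed_landmarks, axis=1).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mr_landmarks = rng.uniform(0, 60, size=(10, 3))            # hypothetical MR landmarks (mm)
    true_rotation = np.eye(3)                                   # identity rotation for illustration
    true_translation = np.array([2.0, -1.0, 0.5])
    us_landmarks = (mr_landmarks - true_translation) @ true_rotation  # corresponding US landmarks
    est_rotation, est_translation = np.eye(3), np.array([1.5, -0.5, 0.0])  # imperfect estimate
    tre = target_registration_error(mr_landmarks, us_landmarks, est_rotation, est_translation)
    print(f"TRE: {tre:.2f} mm")
```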
Affiliation(s)
- Qi Zeng: Department of Radiology, Boston Children's Hospital, USA; Harvard Medical School, USA.
- Weide Liu: Department of Radiology, Boston Children's Hospital, USA; Harvard Medical School, USA.
- Bo Li: Department of Radiology, Boston Children's Hospital, USA; Harvard Medical School, USA.
- Ryne Didier: Department of Radiology, Boston Children's Hospital, USA; Harvard Medical School, USA.
- P Ellen Grant: Department of Radiology, Boston Children's Hospital, USA; Harvard Medical School, USA.
- Davood Karimi: Department of Radiology, Boston Children's Hospital, USA; Harvard Medical School, USA.
2
Lasala A, Fiorentino MC, Bandini A, Moccia S. FetalBrainAwareNet: Bridging GANs with anatomical insight for fetal ultrasound brain plane synthesis. Comput Med Imaging Graph 2024; 116:102405. PMID: 38824716; DOI: 10.1016/j.compmedimag.2024.102405.
Abstract
Over the past decade, deep-learning (DL) algorithms have become a promising tool to aid clinicians in identifying fetal head standard planes (FHSPs) during ultrasound (US) examination. However, the adoption of these algorithms in clinical settings is still hindered by the lack of large annotated datasets. To overcome this barrier, we introduce FetalBrainAwareNet, an innovative framework designed to synthesize anatomically accurate images of FHSPs. FetalBrainAwareNet introduces a cutting-edge approach that utilizes class activation maps as a prior in its conditional adversarial training process. This approach fosters the presence of the specific anatomical landmarks in the synthesized images. Additionally, we investigate specialized regularization terms within the adversarial training loss function to control the morphology of the fetal skull and foster the differentiation between the standard planes, ensuring that the synthetic images faithfully represent real US scans in both structure and overall appearance. The versatility of our FetalBrainAwareNet framework is highlighted by its ability to generate high-quality images of three predominant FHSPs using a singular, integrated framework. Quantitative (Fréchet inception distance of 88.52) and qualitative (t-SNE) results suggest that our framework generates US images with greater variability compared to state-of-the-art methods. By using the synthetic images generated with our framework, we increase the accuracy of FHSP classifiers by 3.2% compared to training the same classifiers solely with real acquisitions. These achievements suggest that using our synthetic images to increase the training set could provide benefits to enhance the performance of DL algorithms for FHSPs classification that could be integrated in real clinical scenarios.
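The Fréchet inception distance reported above measures the distance between Gaussian fits to real and synthetic feature embeddings. The following is a minimal sketch of that formula on precomputed feature vectors; the features here are random placeholders rather than Inception activations from the paper's pipeline.

```python
# Minimal sketch of the Frechet distance between two sets of feature vectors
# (real vs. synthetic). In practice the features come from an Inception network;
# here they are random placeholders.
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):            # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=(500, 64))
    fake = rng.normal(0.2, 1.1, size=(500, 64))
    print(f"FID (toy features): {frechet_distance(real, fake):.2f}")
```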
Affiliation(s)
- Angelo Lasala: The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy.
- Andrea Bandini: The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy; Health Science Interdisciplinary Research Center, Scuola Superiore Sant'Anna, Pisa, Italy.
- Sara Moccia: The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy.
3
Jing C, Kuai H, Matsumoto H, Yamaguchi T, Liao IY, Wang S. Addiction-related brain networks identification via Graph Diffusion Reconstruction Network. Brain Inform 2024; 11:1. PMID: 38190053; PMCID: PMC10774517; DOI: 10.1186/s40708-023-00216-5.
Abstract
Functional magnetic resonance imaging (fMRI) provides insights into complex patterns of brain functional changes, making it a valuable tool for exploring addiction-related brain connectivity. However, effectively extracting addiction-related brain connectivity from fMRI data remains challenging due to the intricate and non-linear nature of brain connections. Therefore, this paper proposes the Graph Diffusion Reconstruction Network (GDRN), a novel framework designed to capture addiction-related brain connectivity from fMRI data acquired from addicted rats. The proposed GDRN incorporates a diffusion reconstruction module that effectively maintains the unity of the data distribution by reconstructing the training samples, thereby enhancing the model's ability to reconstruct nicotine addiction-related brain networks. Experimental evaluations conducted on a nicotine addiction rat dataset demonstrate that the proposed GDRN effectively explores nicotine addiction-related brain connectivity. The findings suggest that the GDRN holds promise for uncovering and understanding the complex neural mechanisms underlying addiction using fMRI data.
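The diffusion reconstruction idea follows the usual denoising-diffusion recipe of corrupting training samples with Gaussian noise and learning to reconstruct them. Below is a generic, hedged sketch of one such training step; the tiny MLP denoiser and the 90-region connectivity matrices are illustrative placeholders, not the GDRN architecture.

```python
# Generic denoising-diffusion training step (not the GDRN model): corrupt a sample
# with Gaussian noise at a random timestep and train a network to predict the noise.
import torch
import torch.nn as nn

T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

# placeholder denoiser on flattened 90x90 connectivity matrices plus a timestep input
denoiser = nn.Sequential(nn.Linear(90 * 90 + 1, 256), nn.ReLU(), nn.Linear(256, 90 * 90))
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

def diffusion_step(x0):
    """One training step on a batch of flattened 90x90 connectivity matrices."""
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].unsqueeze(1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise        # forward (noising) process
    pred = denoiser(torch.cat([x_t, t.unsqueeze(1).float() / T], dim=1))
    loss = nn.functional.mse_loss(pred, noise)                   # reconstruct by predicting the noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    fake_connectivity = torch.rand(8, 90 * 90)                   # placeholder brain-network matrices
    print(diffusion_step(fake_connectivity))
```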
Affiliation(s)
- Changhong Jing: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
- Hongzhi Kuai: Faculty of Engineering, Maebashi Institute of Technology, Maebashi, 371-0816, Japan.
- Hiroki Matsumoto: Faculty of Engineering, Maebashi Institute of Technology, Maebashi, 371-0816, Japan.
- Iman Yi Liao: University of Nottingham Malaysia Campus, Semenyih, Malaysia.
- Shuqiang Wang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
4
Vukovic D, Ruvinov I, Antico M, Steffens M, Fontanarosa D. Automatic GAN-based MRI volume synthesis from US volumes: a proof of concept investigation. Sci Rep 2023; 13:21716. PMID: 38066019; PMCID: PMC10709581; DOI: 10.1038/s41598-023-48595-3.
Abstract
Usually, a baseline image, acquired with either magnetic resonance imaging (MRI) or computed tomography (CT), is captured as a reference before medical procedures such as respiratory interventions (e.g., thoracentesis). In these procedures, ultrasound (US) imaging is often employed for guiding needle placement during thoracentesis or for providing image guidance in MISS procedures within the thoracic region. Following the procedure, a post-procedure image is acquired to monitor and evaluate the patient's progress. Currently, there are no real-time guidance and tracking capabilities that allow a surgeon to perform the procedure using the familiarity of the reference imaging modality. In this work, we propose a real-time volumetric indirect registration using a deep learning approach in which the fusion of multiple imaging modalities allows for guidance and tracking of surgical procedures using US while displaying the resultant changes in a clinically friendly reference imaging modality (MRI). The deep learning method employs a series of generative adversarial networks (GANs), specifically CycleGAN, to conduct an unsupervised image-to-image translation. This process produces spatially aligned US and MRI volumes corresponding to their respective input volumes (MRI and US) of the thoracic spine anatomical region. In this preliminary proof-of-concept study, the focus was on the T9 vertebra. A clinical expert performed anatomical validation of randomly selected real and generated volumes of the T9 thoracic vertebra, giving each volume a score of 0 (conclusive anatomical structures present) or 1 (inconclusive anatomical structures present) to check whether the volumes are anatomically accurate. The Dice and overlap metrics show how accurate the shape of T9 is compared with real volumes and how consistent the shape of T9 is compared with other generated volumes. The average Dice, overlap, and accuracy for clearly labeling all the anatomical structures of the T9 vertebra are approximately 80% across the board.
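The Dice and overlap figures quoted above can in principle be reproduced from binary segmentation masks of the T9 vertebra. A minimal sketch of both metrics on synthetic masks (not the study's data) follows.

```python
# Minimal sketch: Dice coefficient and overlap (intersection over union) between
# two binary segmentation masks, e.g. the T9 vertebra in real vs. generated volumes.
import numpy as np

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def overlap_iou(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.random((32, 32, 32)) > 0.7            # placeholder "real" mask
    generated = real.copy()
    generated[:4] = rng.random((4, 32, 32)) > 0.7    # perturb a few slices
    print(f"Dice: {dice(real, generated):.2f}, IoU: {overlap_iou(real, generated):.2f}")
```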
Affiliation(s)
- Damjan Vukovic: School of Clinical Sciences, Queensland University of Technology, Gardens Point Campus, 2 George St, Brisbane, QLD, 4000, Australia; Centre for Biomedical Technologies (CBT), Queensland University of Technology, Brisbane, QLD, 4000, Australia.
- Igor Ruvinov: School of Clinical Sciences, Queensland University of Technology, Gardens Point Campus, 2 George St, Brisbane, QLD, 4000, Australia.
- Maria Antico: CSIRO Health and Biosecurity, The Australian eHealth Research Centre, Herston, QLD, 4029, Australia.
- Marian Steffens: School of Clinical Sciences, Queensland University of Technology, Gardens Point Campus, 2 George St, Brisbane, QLD, 4000, Australia.
- Davide Fontanarosa: School of Clinical Sciences, Queensland University of Technology, Gardens Point Campus, 2 George St, Brisbane, QLD, 4000, Australia; Centre for Biomedical Technologies (CBT), Queensland University of Technology, Brisbane, QLD, 4000, Australia.
5
Chen Z, Zhuo W, Wang T, Cheng J, Xue W, Ni D. Semi-Supervised Representation Learning for Segmentation on Medical Volumes and Sequences. IEEE Trans Med Imaging 2023; 42:3972-3986. PMID: 37756175; DOI: 10.1109/tmi.2023.3319973.
Abstract
Benefiting from massive labeled samples, deep learning-based segmentation methods have achieved great success on two-dimensional natural images. However, segmenting high-dimensional medical volumes and sequences remains a challenging task, due to the considerable clinical expertise required to make large-scale annotations. Self-/semi-supervised learning methods have been shown to improve performance by exploiting unlabeled data, but they still fall short in mining local semantic discrimination and in exploiting volume/sequence structures. In this work, we propose a semi-supervised representation learning method with two novel modules that enhance the features in the encoder and decoder, respectively. For the encoder, based on the continuity between slices/frames and the common spatial layout of organs across subjects, we propose an asymmetric network with an attention-guided predictor to enable prediction between feature maps of different slices of unlabeled data. For the decoder, based on the semantic consistency between labeled and unlabeled data, we introduce a novel semantic contrastive learning scheme to regularize the feature maps in the decoder. The two parts are trained jointly on both labeled and unlabeled volumes/sequences in a semi-supervised manner. When evaluated on three benchmark datasets of medical volumes and sequences, our model outperforms existing methods by a large margin of 7.3% DSC on ACDC, 6.5% on Prostate, and 3.2% on CAMUS when only a small amount of labeled data is available. Further, results on the M&M dataset show that the proposed method yields improvements without using any domain adaptation techniques for data from an unknown domain. Extensive evaluations reveal the effectiveness of the representation mining and the superior performance of our method. The code is available at https://github.com/CcchenzJ/BootstrapRepresentation.
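The semantic contrastive regularizer pulls decoder features of the same semantic class together and pushes different classes apart. The following is a hedged InfoNCE-style sketch of such a loss, written from the description above rather than from the released implementation.

```python
# Hedged InfoNCE-style sketch of a semantic contrastive regularizer on decoder features:
# features with the same (pseudo-)label are treated as positives, all others as negatives.
import torch
import torch.nn.functional as F

def semantic_contrastive_loss(features, labels, temperature=0.1):
    """features: (N, D) pixel/region embeddings; labels: (N,) semantic class ids."""
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature                  # pairwise similarities
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    mask_self = torch.eye(len(labels), dtype=torch.bool)
    positives = same_class & ~mask_self
    # log-probability of each pair, excluding self-similarity from the denominator
    log_prob = sim - torch.logsumexp(sim.masked_fill(mask_self, float("-inf")), dim=1, keepdim=True)
    denom = positives.sum(dim=1).clamp(min=1)
    return -(log_prob * positives).sum(dim=1).div(denom).mean()

if __name__ == "__main__":
    feats = torch.randn(16, 32)
    labs = torch.randint(0, 3, (16,))
    print(semantic_contrastive_loss(feats, labs).item())
```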
6
Li Y, Zhou T, He K, Zhou Y, Shen D. Multi-Scale Transformer Network With Edge-Aware Pre-Training for Cross-Modality MR Image Synthesis. IEEE Trans Med Imaging 2023; 42:3395-3407. PMID: 37339020; DOI: 10.1109/tmi.2023.3288001.
Abstract
Cross-modality magnetic resonance (MR) image synthesis can be used to generate missing modalities from given ones. Existing (supervised learning) methods often require a large number of paired multi-modal data to train an effective synthesis model. However, it is often challenging to obtain sufficient paired data for supervised training. In reality, we often have a small number of paired data while a large number of unpaired data. To take advantage of both paired and unpaired data, in this paper, we propose a Multi-scale Transformer Network (MT-Net) with edge-aware pre-training for cross-modality MR image synthesis. Specifically, an Edge-preserving Masked AutoEncoder (Edge-MAE) is first pre-trained in a self-supervised manner to simultaneously perform 1) image imputation for randomly masked patches in each image and 2) whole edge map estimation, which effectively learns both contextual and structural information. Besides, a novel patch-wise loss is proposed to enhance the performance of Edge-MAE by treating different masked patches differently according to the difficulties of their respective imputations. Based on this proposed pre-training, in the subsequent fine-tuning stage, a Dual-scale Selective Fusion (DSF) module is designed (in our MT-Net) to synthesize missing-modality images by integrating multi-scale features extracted from the encoder of the pre-trained Edge-MAE. Furthermore, this pre-trained encoder is also employed to extract high-level features from the synthesized image and corresponding ground-truth image, which are required to be similar (consistent) in the training. Experimental results show that our MT-Net achieves comparable performance to the competing methods even using 70% of all available paired data. Our code will be released at https://github.com/lyhkevin/MT-Net.
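The patch-wise loss described above weights each masked patch's reconstruction error according to how difficult it is to impute. A minimal, hedged sketch of one possible weighting of that kind follows; it is an illustration of the idea, not the released MT-Net code.

```python
# Hedged sketch of a patch-wise reconstruction loss: each masked patch's MSE is
# weighted by its (detached) relative difficulty, so harder patches contribute more.
import torch

def patchwise_weighted_loss(pred_patches, target_patches):
    """pred_patches, target_patches: (num_masked_patches, patch_dim)."""
    per_patch_mse = ((pred_patches - target_patches) ** 2).mean(dim=1)
    weights = per_patch_mse.detach()
    weights = weights / weights.sum().clamp(min=1e-8) * len(weights)   # mean weight stays 1
    return (weights * per_patch_mse).mean()

if __name__ == "__main__":
    pred = torch.randn(64, 256, requires_grad=True)
    target = torch.randn(64, 256)
    loss = patchwise_weighted_loss(pred, target)
    loss.backward()
    print(loss.item())
```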
7
Dorent R, Haouchine N, Kogl F, Joutard S, Juvekar P, Torio E, Golby A, Ourselin S, Frisken S, Vercauteren T, Kapur T, Wells WM. Unified Brain MR-Ultrasound Synthesis using Multi-Modal Hierarchical Representations. Med Image Comput Comput Assist Interv 2023; 2023:448-458. PMID: 38655383; PMCID: PMC7615858; DOI: 10.1007/978-3-031-43999-5_43.
Abstract
We introduce MHVAE, a deep hierarchical variational autoencoder (VAE) that synthesizes missing images from various modalities. Extending multi-modal VAEs with a hierarchical latent structure, we introduce a probabilistic formulation for fusing multi-modal images in a common latent representation while having the flexibility to handle incomplete image sets as input. Moreover, adversarial learning is employed to generate sharper images. Extensive experiments are performed on the challenging problem of joint intra-operative ultrasound (iUS) and Magnetic Resonance (MR) synthesis. Our model outperformed multi-modal VAEs, conditional GANs, and the current state-of-the-art unified method (ResViT) for synthesizing missing images, demonstrating the advantage of using a hierarchical latent representation and a principled probabilistic fusion operation. Our code is publicly available.
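A common way to fuse modality-specific Gaussian posteriors into a single latent representation, while tolerating missing modalities, is a product of experts. The sketch below shows that fusion step in isolation; it is a generic illustration and not necessarily MHVAE's exact formulation.

```python
# Hedged sketch of product-of-experts fusion of Gaussian posteriors from several
# modalities (e.g. MR and iUS encoders); modalities can be missing from the lists.
import torch

def product_of_experts(mus, logvars):
    """mus, logvars: lists of (B, D) tensors, one per available modality."""
    # include a standard-normal prior expert so the fusion is defined even with one modality
    prior_mu = torch.zeros_like(mus[0])
    prior_logvar = torch.zeros_like(logvars[0])
    all_mu = torch.stack([prior_mu] + list(mus))
    all_logvar = torch.stack([prior_logvar] + list(logvars))
    precision = torch.exp(-all_logvar)                       # 1 / sigma^2 per expert
    fused_var = 1.0 / precision.sum(dim=0)
    fused_mu = fused_var * (all_mu * precision).sum(dim=0)
    return fused_mu, torch.log(fused_var)

if __name__ == "__main__":
    mu_mr, lv_mr = torch.randn(4, 16), torch.zeros(4, 16)
    mu_us, lv_us = torch.randn(4, 16), torch.zeros(4, 16)
    fused_mu, fused_logvar = product_of_experts([mu_mr, mu_us], [lv_mr, lv_us])
    print(fused_mu.shape, fused_logvar.shape)
```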
Affiliation(s)
- Reuben Dorent: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA.
- Nazim Haouchine: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA.
- Fryderyk Kogl: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA.
- Parikshit Juvekar: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA.
- Erickson Torio: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA.
- Alexandra Golby: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA.
- Sarah Frisken: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA.
- Tina Kapur: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA.
- William M Wells: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA; Massachusetts Institute of Technology, Cambridge, MA, USA.
8
Gong C, Jing C, Chen X, Pun CM, Huang G, Saha A, Nieuwoudt M, Li HX, Hu Y, Wang S. Generative AI for brain image computing and brain network computing: a review. Front Neurosci 2023; 17:1203104. PMID: 37383107; PMCID: PMC10293625; DOI: 10.3389/fnins.2023.1203104.
Abstract
Recent years have witnessed a significant advancement in brain imaging techniques that offer a non-invasive approach to mapping the structure and function of the brain. Concurrently, generative artificial intelligence (AI), which uses existing data to create new content with an underlying pattern similar to real-world data, has experienced substantial growth. The integration of these two domains, generative AI in neuroimaging, presents a promising avenue for exploring various fields of brain imaging and brain network computing, particularly in the areas of extracting spatiotemporal brain features and reconstructing the topological connectivity of brain networks. Therefore, this study reviews the advanced models, tasks, challenges, and prospects of brain imaging and brain network computing techniques and intends to provide a comprehensive picture of current generative AI techniques in brain imaging. This review is focused on novel methodological approaches and applications of related new methods. It discusses the fundamental theories and algorithms of four classic generative models and provides a systematic survey and categorization of tasks, including co-registration, super-resolution, enhancement, classification, segmentation, cross-modality, brain network analysis, and brain decoding. This paper also highlights the challenges and future directions of the latest work, with the expectation that future research can benefit from it.
Affiliation(s)
- Changwei Gong: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China.
- Changhong Jing: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China.
- Xuhang Chen: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Computer and Information Science, University of Macau, Macau, China.
- Chi Man Pun: Department of Computer and Information Science, University of Macau, Macau, China.
- Guoli Huang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
- Ashirbani Saha: Department of Oncology and School of Biomedical Engineering, McMaster University, Hamilton, ON, Canada.
- Martin Nieuwoudt: Institute for Biomedical Engineering, Stellenbosch University, Stellenbosch, South Africa.
- Han-Xiong Li: Department of Systems Engineering, City University of Hong Kong, Hong Kong, China.
- Yong Hu: Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong, China.
- Shuqiang Wang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China.
9
van Tulder G, de Bruijne M. Unpaired, unsupervised domain adaptation assumes your domains are already similar. Med Image Anal 2023; 87:102825. PMID: 37116296; DOI: 10.1016/j.media.2023.102825.
Abstract
Unsupervised domain adaptation is a popular method in medical image analysis, but it can be tricky to make it work: without labels to link the domains, domains must be matched using feature distributions. If there is no additional information, this often leaves a choice between multiple possibilities to map the data that may be equally likely but not equally correct. In this paper we explore the fundamental problems that may arise in unsupervised domain adaptation, and discuss conditions that might still make it work. Focusing on medical image analysis, we argue that images from different domains may have similar class balance, similar intensities, similar spatial structure, or similar textures. We demonstrate how these implicit conditions can affect domain adaptation performance in experiments with synthetic data, MNIST digits, and medical images. We observe that practical success of unsupervised domain adaptation relies on existing similarities in the data, and is anything but guaranteed in the general case. Understanding these implicit assumptions is a key step in identifying potential problems in domain adaptation and improving the reliability of the results.
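In the unpaired setting discussed here, the only handle on alignment is the feature distributions themselves, for example through a maximum mean discrepancy (MMD) term between source and target features. The following hedged sketch shows such a distribution-matching term; the Gaussian kernel bandwidth and the toy feature sets are assumptions for illustration.

```python
# Hedged sketch of maximum mean discrepancy (MMD) with a Gaussian kernel, a typical
# feature-distribution matching term for unpaired, unsupervised domain adaptation.
import torch

def gaussian_mmd(source_feats, target_feats, sigma=1.0):
    def kernel(a, b):
        dists = torch.cdist(a, b) ** 2
        return torch.exp(-dists / (2.0 * sigma ** 2))
    return (kernel(source_feats, source_feats).mean()
            + kernel(target_feats, target_feats).mean()
            - 2.0 * kernel(source_feats, target_feats).mean())

if __name__ == "__main__":
    src = torch.randn(128, 32)
    tgt = torch.randn(128, 32) + 0.5      # shifted target domain
    print(f"MMD: {gaussian_mmd(src, tgt).item():.4f}")
```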
Affiliation(s)
- Gijs van Tulder: Data Science group, Faculty of Science, Radboud University, Postbus 9010, 6500 GL Nijmegen, The Netherlands; Biomedical Imaging Group, Erasmus MC, Postbus 2040, 3000 CA Rotterdam, The Netherlands.
- Marleen de Bruijne: Biomedical Imaging Group, Erasmus MC, Postbus 2040, 3000 CA Rotterdam, The Netherlands; Department of Computer Science, University of Copenhagen, Universitetsparken 1, 2100 Copenhagen, Denmark.
10
Barkat L, Freiman M, Azhari H. Image Translation of Breast Ultrasound to Pseudo Anatomical Display by CycleGAN. Bioengineering (Basel) 2023; 10:bioengineering10030388. PMID: 36978779; PMCID: PMC10045378; DOI: 10.3390/bioengineering10030388.
Abstract
Ultrasound imaging is cost effective, radiation-free, portable, and implemented routinely in clinical procedures. Nonetheless, image quality is characterized by a granulated appearance, a poor SNR, and speckle noise. For breast tumors specifically, the margins are commonly blurred and indistinct. Thus, there is a need for improving ultrasound image quality. We hypothesize that this can be achieved by translation into a more realistic display which mimics a pseudo anatomical cut through the tissue, using a cycle generative adversarial network (CycleGAN). In order to train CycleGAN for this translation, two datasets were used, "Breast Ultrasound Images" (BUSI) and a set of optical images of poultry breast tissues. The generated pseudo anatomical images provide improved visual discrimination of the lesions through clearer border definition and pronounced contrast. In order to evaluate the preservation of the anatomical features, the lesions in both datasets were segmented and compared. This comparison yielded median Dice scores of 0.91 and 0.70; median center errors of 0.58% and 3.27%; and median area errors of 0.40% and 4.34% for the benign and malignant lesions, respectively. In conclusion, the generated pseudo anatomical images provide a more intuitive display, enhance tissue anatomy, and preserve tumor geometry, and they can potentially improve diagnoses and clinical outcomes.
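CycleGAN training of this kind couples two generators and two discriminators through adversarial and cycle-consistency losses. Below is a condensed, hedged sketch of the generator objective; the one-layer networks and flattened 64x64 images are placeholders, not the study's models.

```python
# Condensed, hedged sketch of the CycleGAN generator objective for unpaired
# US -> pseudo-anatomical translation: adversarial terms plus cycle consistency.
import torch
import torch.nn as nn

def generator_loss(gen_us2anat, gen_anat2us, disc_anat, disc_us, us_batch, anat_batch, lambda_cyc=10.0):
    mse, l1 = nn.MSELoss(), nn.L1Loss()
    fake_anat = gen_us2anat(us_batch)
    fake_us = gen_anat2us(anat_batch)
    # adversarial: fool each discriminator (least-squares GAN formulation)
    adv = mse(disc_anat(fake_anat), torch.ones_like(disc_anat(fake_anat))) \
        + mse(disc_us(fake_us), torch.ones_like(disc_us(fake_us)))
    # cycle consistency: US -> anat -> US and anat -> US -> anat should return the input
    cyc = l1(gen_anat2us(fake_anat), us_batch) + l1(gen_us2anat(fake_us), anat_batch)
    return adv + lambda_cyc * cyc

if __name__ == "__main__":
    # placeholder one-layer "generators"/"discriminators" on flattened 64x64 images
    g1, g2 = nn.Linear(4096, 4096), nn.Linear(4096, 4096)
    d1, d2 = nn.Linear(4096, 1), nn.Linear(4096, 1)
    us, anat = torch.randn(2, 4096), torch.randn(2, 4096)
    print(generator_loss(g1, g2, d1, d2, us, anat).item())
```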
Affiliation(s)
- Lilach Barkat: Biomedical Engineering Faculty, Technion-Israel Institute of Technology, Haifa 3200001, Israel.
- Moti Freiman: Biomedical Engineering Faculty, Technion-Israel Institute of Technology, Haifa 3200001, Israel.
- Haim Azhari: Biomedical Engineering Faculty, Technion-Israel Institute of Technology, Haifa 3200001, Israel.
11
Pasquini L, Napolitano A, Pignatelli M, Tagliente E, Parrillo C, Nasta F, Romano A, Bozzao A, Di Napoli A. Synthetic Post-Contrast Imaging through Artificial Intelligence: Clinical Applications of Virtual and Augmented Contrast Media. Pharmaceutics 2022; 14:pharmaceutics14112378. PMID: 36365197; PMCID: PMC9695136; DOI: 10.3390/pharmaceutics14112378.
Abstract
Contrast media are widely used in biomedical imaging, due to their relevance in the diagnosis of numerous disorders. However, the risk of adverse reactions, the concern of potential damage to sensitive organs, and the recently described brain deposition of gadolinium salts limit the use of contrast media in clinical practice. In recent years, the application of artificial intelligence (AI) techniques to biomedical imaging has led to the development of 'virtual' and 'augmented' contrasts. The idea behind these applications is to generate synthetic post-contrast images through AI computational modeling starting from the information available in other images acquired during the same scan. In these AI models, non-contrast images (virtual contrast) or low-dose post-contrast images (augmented contrast) are used as input data to generate synthetic post-contrast images, which are often indistinguishable from the native ones. In this review, we discuss the most recent advances of AI applications to biomedical imaging relative to synthetic contrast media.
Affiliation(s)
- Luca Pasquini: Neuroradiology Unit, Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA; Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy.
- Antonio Napolitano: Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy.
- Matteo Pignatelli: Radiology Department, Castelli Hospital, Via Nettunense Km 11.5, 00040 Ariccia, Italy.
- Emanuela Tagliente: Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy.
- Chiara Parrillo: Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy.
- Francesco Nasta: Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy.
- Andrea Romano: Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy.
- Alessandro Bozzao: Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy.
- Alberto Di Napoli: Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy; Neuroimaging Lab, IRCCS Fondazione Santa Lucia, 00179 Rome, Italy.
12
Pang Y, Chen X, Huang Y, Yap PT, Lian J. Weakly Supervised MR-TRUS Image Synthesis for Brachytherapy of Prostate Cancer. Med Image Comput Comput Assist Interv 2022; 13436:485-494. PMID: 38863462; PMCID: PMC11165422; DOI: 10.1007/978-3-031-16446-0_46.
Abstract
Prostate magnetic resonance imaging (MRI) offers accurate details of structures and tumors for prostate cancer brachytherapy. However, it is unsuitable for routine treatment since MR images differ significantly from the trans-rectal ultrasound (TRUS) images conventionally used for radioactive seed implantation in brachytherapy. TRUS imaging is fast, convenient, and widely available in the operating room, but is known for its low soft-tissue contrast and limited tumor visualization in the prostate area. Conventionally, practitioners rely on prostate segmentation to fuse the two imaging modalities with non-rigid registration. However, prostate delineation is often not available on diagnostic MR images. Besides, the highly non-linear intensity relationship between the two imaging modalities poses a challenge to non-rigid registration. Hence, we propose a method to generate a TRUS-styled image from a prostate MR image to replace the role of the TRUS image in radiation therapy dose pre-planning. We propose a structural constraint to handle non-linear projections of anatomical structures between MR and TRUS images. We further include an adversarial mechanism to enforce the model to preserve anatomical features in an MR image (such as the prostate boundary and the dominant intraprostatic lesion (DIL)) while synthesizing the TRUS-styled counterpart image. The proposed method is compared with other state-of-the-art methods with real TRUS images as the reference. The results demonstrate that the TRUS images synthesized by our method can be used for brachytherapy treatment planning for prostate cancer.
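One way to impose a structural constraint of the kind described above is to penalize disagreement between edge maps of the input MR image and the synthesized TRUS-styled image. The following is a hedged Sobel-edge sketch of such a term, an illustration of the general idea rather than the paper's exact constraint.

```python
# Hedged sketch of a structural constraint: penalize mismatch between Sobel edge
# maps of the input MR image and the synthesized TRUS-styled image.
import torch
import torch.nn.functional as F

def sobel_edges(img):
    """img: (B, 1, H, W) tensor; returns the gradient magnitude map."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def structural_loss(mr_image, synthesized_trus):
    return F.l1_loss(sobel_edges(mr_image), sobel_edges(synthesized_trus))

if __name__ == "__main__":
    mr = torch.rand(2, 1, 64, 64)          # placeholder MR slices
    fake_trus = torch.rand(2, 1, 64, 64)   # placeholder synthesized TRUS slices
    print(structural_loss(mr, fake_trus).item())
```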
Affiliation(s)
- Yunkui Pang: University of North Carolina, Chapel Hill, NC 27599, USA.
- Xu Chen: College of Computer Science and Technology, Huaqiao University, Xiamen 361021, China.
- Yunzhi Huang: School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China.
- Pew-Thian Yap: University of North Carolina, Chapel Hill, NC 27599, USA.
- Jun Lian: University of North Carolina, Chapel Hill, NC 27599, USA.
13
Zhou Q, Zou H. A layer-wise fusion network incorporating self-supervised learning for multimodal MR image synthesis. Front Genet 2022; 13:937042. PMID: 36017492; PMCID: PMC9396279; DOI: 10.3389/fgene.2022.937042.
Abstract
Magnetic resonance (MR) imaging plays an important role in medical diagnosis and treatment; different modalities of MR images can provide rich and complementary information to improve the accuracy of diagnosis. However, due to the limitations of scanning time and medical conditions, certain modalities of MR may be unavailable or of low quality in clinical practice. In this study, we propose a new multimodal MR image synthesis network to generate missing MR images. The proposed model comprises three stages: feature extraction, feature fusion, and image generation. During feature extraction, 2D and 3D self-supervised pretext tasks are introduced to pre-train the backbone for better representations of each modality. Then, a channel attention mechanism is used when fusing features so that the network can adaptively weigh different fusion operations to learn common representations of all modalities. Finally, a generative adversarial network is considered as the basic framework to generate images, in which a feature-level edge information loss is combined with the pixel-wise loss to ensure consistency between the synthesized and real images in terms of anatomical characteristics. 2D and 3D self-supervised pre-training yields better feature extraction and retains more detail in the synthetic images. Moreover, the proposed multimodal attention feature fusion block (MAFFB) in the well-designed layer-wise fusion strategy can model both common and unique information in all modalities, consistent with the clinical analysis. We also perform an interpretability analysis to confirm the rationality and effectiveness of our method. The experimental results demonstrate that our method can be applied in both single-modal and multimodal synthesis with high robustness and outperforms other state-of-the-art approaches objectively and subjectively.
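The channel attention fusion described above re-weights channels of the combined modality features before they are passed on. Below is a hedged squeeze-and-excitation-style sketch of such a block; it illustrates the mechanism and is not the paper's MAFFB.

```python
# Hedged squeeze-and-excitation-style sketch of channel attention used to weigh
# fused multimodal feature maps (illustrative of the idea, not the paper's MAFFB).
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, modality_feats):
        """modality_feats: list of (B, C, H, W) feature maps from different modalities."""
        fused = torch.stack(modality_feats).sum(dim=0)           # simple additive fusion
        weights = self.fc(fused.mean(dim=(2, 3)))                # squeeze: global average pool
        return fused * weights.unsqueeze(-1).unsqueeze(-1)       # excite: channel re-weighting

if __name__ == "__main__":
    block = ChannelAttentionFusion(channels=32)
    t1 = torch.randn(2, 32, 16, 16)
    t2 = torch.randn(2, 32, 16, 16)
    print(block([t1, t2]).shape)
```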
14
Zhang X, Liu F, Wang X. Application of Ultrasound Combined with Magnetic Resonance Imaging in the Diagnosis and Grading of Patients with Prenatal Placenta Accreta. Scanning 2022; 2022:1199210. PMID: 35937669; PMCID: PMC9337953; DOI: 10.1155/2022/1199210.
Abstract
To study the clinical value of the diagnosis and grading of placenta accreta (PIA), the authors propose a method based on ultrasound combined with magnetic resonance imaging (MRI) for the diagnosis and grading of prenatal placenta accreta. Materials and methods: a retrospective analysis of the imaging and clinical data of 312 patients with a high clinical or ultrasonographic suspicion of placenta accreta who underwent placental MRI examination at the hospital between October 2019 and October 2021. The MRI data of all patients were jointly analyzed, with the following main observation indicators: (1) dark zones in the placenta, (2) disruption of the border of the myometrium, (3) disruption of the myometrium, (4) abnormal blood vessels in the placenta, (5) enlargement of the lower part of the uterus, and (6) local bulging of the bladder or invasion of tissues adjacent to the uterus. Results: for MRI combined with ultrasonography (P < 0.05), the specificity and accuracy of MRI combined with ultrasound for diagnosing PIA showed no statistically significant difference (P > 0.05). The comparison of graded diagnostic accuracy (ultrasound alone < MRI alone < MRI combined with ultrasound) showed statistically significant differences (P < 0.05). Ultrasound combined with MRI for the diagnosis of placenta accreta is in good agreement with the clinical and surgical pathological findings; MRI examination can be used as an important method for prenatal placenta accreta screening, and MRI can, to some extent, grade placenta accreta.
Affiliation(s)
- Xiaoyan Zhang: Department of Obstetrics and Gynecology, Women and Children's Hospital of Chongqing Medical University, China.
- Fengfeng Liu: Department of Obstetrics and Gynecology, Women and Children's Hospital of Chongqing Medical University, China.
- Xiaoyan Wang: Department of Obstetrics and Gynecology, Women and Children's Hospital of Chongqing Medical University, China.
15
Wang Z, Lim G, Ng WY, Keane PA, Campbell JP, Tan GSW, Schmetterer L, Wong TY, Liu Y, Ting DSW. Generative adversarial networks in ophthalmology: what are these and how can they be used? Curr Opin Ophthalmol 2021; 32:459-467. PMID: 34324454; PMCID: PMC10276657; DOI: 10.1097/icu.0000000000000794.
Abstract
PURPOSE OF REVIEW: The development of deep learning (DL) systems requires a large amount of data, which may be limited by costs, protection of patient information and low prevalence of some conditions. Recent developments in artificial intelligence techniques have provided an innovative alternative to this challenge via the synthesis of biomedical images within a DL framework known as generative adversarial networks (GANs). This paper aims to introduce how GANs can be deployed for image synthesis in ophthalmology and to discuss the potential applications of GANs-produced images.
RECENT FINDINGS: Image synthesis is the most relevant function of GANs to the medical field, and it has been widely used for generating 'new' medical images of various modalities. In ophthalmology, GANs have mainly been utilized for augmenting classification and predictive tasks, by synthesizing fundus images and optical coherence tomography images with and without pathologies such as age-related macular degeneration and diabetic retinopathy. Despite their ability to generate high-resolution images, the development of GANs remains data intensive, and there is a lack of consensus on how best to evaluate the outputs produced by GANs.
SUMMARY: Although the problem of artificial biomedical data generation is of great interest, image synthesis by GANs represents an innovation with yet unclear relevance for ophthalmology.
Affiliation(s)
- Zhaoran Wang: Duke-NUS Medical School, National University of Singapore.
- Gilbert Lim: Duke-NUS Medical School, National University of Singapore; Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore.
- Wei Yan Ng: Duke-NUS Medical School, National University of Singapore; Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore.
- Pearse A. Keane: Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore.
- J. Peter Campbell: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA.
- Gavin Siew Wei Tan: Duke-NUS Medical School, National University of Singapore; Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore.
- Leopold Schmetterer: Duke-NUS Medical School, National University of Singapore; Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE); School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore; Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland; Department of Clinical Pharmacology; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria.
- Tien Yin Wong: Duke-NUS Medical School, National University of Singapore; Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore.
- Yong Liu: Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore.
- Daniel Shu Wei Ting: Duke-NUS Medical School, National University of Singapore; Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore.