1. Mekki L, Ladra M, Acharya S, Lee J. Generative evidential synthesis with integrated segmentation framework for MR-only radiation therapy treatment planning. Med Phys 2025. PMID: 40219601. DOI: 10.1002/mp.17828.
Abstract
BACKGROUND Radiation therapy (RT) planning is a time-consuming process involving the contouring of target volumes and organs at risk, followed by treatment plan optimization. CT is typically used as the primary planning image modality because it provides the electron density information needed for dose calculation. MRI is widely used for contouring after registration to CT due to its high soft tissue contrast. However, registration introduces uncertainties that propagate through treatment planning as contouring errors and lead to dose inaccuracies. MR-only RT planning, in which CT is synthesized from MRI, has been proposed to eliminate the need for a CT scan and image registration. A challenge in deploying MR-only planning in the clinic is the lack of a method to estimate the reliability of a synthetic CT in the absence of ground truth. While sampling-based approaches have been used to estimate model uncertainty over multiple inferences, such methods suffer from long run times and are therefore inconvenient for clinical use. PURPOSE To develop a fast and robust method for the joint synthesis of CT from MRI, estimation of model uncertainty related to the synthesis accuracy, and segmentation of organs at risk (OARs), in a single model inference. METHODS In this work, deep evidential regression is applied to MR-only brain RT planning. The proposed framework uses a multi-task vision transformer combining a single joint nested encoder with two distinct convolutional decoder paths for synthesis and segmentation. An evidential layer was added at the end of the synthesis decoder to jointly estimate model uncertainty in a single inference. The framework was trained and tested on a dataset of 119 paired T1-weighted MRI and CT scans with OAR contours (80 for training, 9 for validation, and 30 for testing). RESULTS The proposed method achieved mean ± SD SSIM of 0.820 ± 0.039, MAE of 47.4 ± 8.49 HU, and PSNR of 23.4 ± 1.13 for the synthesis task, and Dice similarity coefficients of 0.799 ± 0.132 (lenses), 0.945 ± 0.020 (eyes), 0.834 ± 0.059 (optic nerves), 0.679 ± 0.148 (chiasm), 0.947 ± 0.014 (temporal lobes), 0.849 ± 0.027 (hippocampus), 0.953 ± 0.024 (brainstem), and 0.752 ± 0.228 (cochleae) for segmentation, in a total run time of 6.71 ± 0.25 s. Additionally, experiments on challenging test cases revealed that the proposed evidential uncertainty estimation highlighted the same uncertain regions as Monte Carlo-based epistemic uncertainty, underscoring the reliability of the proposed method. CONCLUSION A framework leveraging deep evidential regression to jointly synthesize CT from MRI, predict the related synthesis uncertainty, and segment OARs in a single model inference was developed. The proposed approach has the potential to streamline the planning process and provide clinicians with a measure of the reliability of a synthetic CT in the absence of ground truth.
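As a rough illustration of the evidential layer described above, the sketch below (PyTorch) implements a Normal-Inverse-Gamma evidential regression head and its negative log-likelihood in the style of Amini et al. (2020); the 2D layer shapes and function names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Predicts Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta) per pixel/voxel."""
    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 4, kernel_size=1)

    def forward(self, x):
        gamma, log_nu, log_alpha, log_beta = self.conv(x).chunk(4, dim=1)
        nu = F.softplus(log_nu)              # nu > 0
        alpha = F.softplus(log_alpha) + 1.0  # alpha > 1 keeps the predictive variance finite
        beta = F.softplus(log_beta)          # beta > 0
        return gamma, nu, alpha, beta

def evidential_nll(y, gamma, nu, alpha, beta, eps=1e-6):
    """Negative log-likelihood of the NIG evidential distribution (Amini et al., 2020)."""
    omega = 2.0 * beta * (1.0 + nu)
    nll = (0.5 * torch.log(torch.pi / (nu + eps))
           - alpha * torch.log(omega + eps)
           + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega + eps)
           + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))
    return nll.mean()

def predictive_uncertainty(nu, alpha, beta):
    """Aleatoric = E[sigma^2], epistemic = Var[mu]; both available from one forward pass."""
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return aleatoric, epistemic
```

Because both uncertainty maps come from the same single forward pass, this style of head avoids the repeated sampling that makes Monte Carlo approaches slow.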
Affiliation(s)
- Lina Mekki
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Matthew Ladra
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, Maryland, USA
- Sahaja Acharya
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, Maryland, USA
- Junghoon Lee
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, Maryland, USA
2. Cai Z, Xin J, You C, Shi P, Dong S, Dvornek NC, Zheng N, Duncan JS. Style mixup enhanced disentanglement learning for unsupervised domain adaptation in medical image segmentation. Med Image Anal 2025; 101:103440. PMID: 39764933. DOI: 10.1016/j.media.2024.103440.
Abstract
Unsupervised domain adaptation (UDA) has shown impressive performance in tackling the domain shift problem for cross-modality medical segmentation by improving model generalizability. However, most existing UDA approaches depend on high-quality image translation with diversity constraints to explicitly augment the potential data diversity, which makes it hard to ensure semantic consistency and capture domain-invariant representations. In this paper, without relying on image translation or diversity constraints, we propose a novel Style Mixup Enhanced Disentanglement Learning (SMEDL) method for UDA medical image segmentation to further improve domain generalization and enhance domain-invariant learning ability. Firstly, our method adopts disentangled style mixup to implicitly generate style-mixed domains with diverse styles in the feature space through a convex combination of disentangled style factors, which effectively improves model generalization. Meanwhile, we further introduce pixel-wise consistency regularization to ensure the effectiveness of the style-mixed domains and provide domain-consistency guidance. Secondly, we introduce dual-level domain-invariant learning, including intra-domain contrastive learning and inter-domain adversarial learning, to mine the underlying domain-invariant representation under both intra- and inter-domain variations. We have conducted comprehensive experiments to evaluate our method on two public cardiac datasets and one brain dataset. Experimental results demonstrate that our proposed method achieves superior performance compared to state-of-the-art methods for UDA medical image segmentation.
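The disentangled style mixup and pixel-wise consistency regularization described above can be sketched roughly as follows (PyTorch); the content/style encoder and decoder interfaces in the comments are assumed placeholders, not the SMEDL code.

```python
import torch
import torch.nn.functional as F

def style_mixup(style_src, style_tgt, alpha=0.4):
    """Convex combination of disentangled style factors from two domains."""
    lam = torch.distributions.Beta(alpha, alpha).sample().to(style_src.device)
    return lam * style_src + (1.0 - lam) * style_tgt

def pixelwise_consistency(logits_mixed, logits_ref):
    """Encourage the prediction on style-mixed features to match the reference prediction."""
    log_p_mixed = F.log_softmax(logits_mixed, dim=1)
    p_ref = F.softmax(logits_ref, dim=1).detach()
    return F.kl_div(log_p_mixed, p_ref, reduction="batchmean")

# Sketch of one training step (content/style encoders and decoder are assumed placeholders):
# content      = content_encoder(x_src)
# s_mixed      = style_mixup(style_encoder(x_src), style_encoder(x_tgt))
# logits_mixed = seg_decoder(content, s_mixed)
# logits_ref   = seg_decoder(content, style_encoder(x_src))
# loss = ce_loss(logits_ref, y_src) + pixelwise_consistency(logits_mixed, logits_ref)
```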
Affiliation(s)
- Zhuotong Cai
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi, China; Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
- Jingmin Xin
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Chenyu You
- Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Peiwen Shi
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Siyuan Dong
- Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Nicha C Dvornek
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Nanning Zheng
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- James S Duncan
- Department of Electrical Engineering, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
3. Wang Y, Meng C, Tang Z, Bai X, Ji P, Bai X. Unsupervised Domain Adaptation for Cross-Modality Cerebrovascular Segmentation. IEEE J Biomed Health Inform 2025; 29:2871-2884. PMID: 40030830. DOI: 10.1109/jbhi.2024.3523103.
Abstract
Cerebrovascular segmentation from time-of-flight magnetic resonance angiography (TOF-MRA) and computed tomography angiography (CTA) is essential for providing supportive information in the diagnosis and treatment planning of multiple intracranial vascular diseases. Different imaging modalities use distinct principles to visualize the cerebral vasculature, which leads to costly annotation requirements and to performance degradation when training and deploying deep learning models. In this paper, we propose an unsupervised domain adaptation framework, CereTS, to perform translation and segmentation of cross-modality unpaired cerebral angiography. Treating vascular structures as domain-invariant features and stylistic textures as domain-specific ones, CereTS adopts a multi-level domain alignment pattern that includes an image-level cyclic geometric consistency constraint, a patch-level masked contrastive constraint, and a feature-level semantic perception constraint to shrink the domain discrepancy while preserving the consistency of vascular structures. Experiments conducted on a publicly available TOF-MRA dataset and a private CTA dataset show that CereTS outperforms current state-of-the-art methods by a large margin.
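Of the three alignment constraints listed above, the image-level cyclic geometric consistency term is the simplest to illustrate; a minimal sketch is shown below, with hypothetical generator names (the patch-level masked contrastive and feature-level semantic perception terms are omitted).

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(G_mra2cta, G_cta2mra, x_mra, x_cta):
    """Translate each image to the other modality and back; penalize the reconstruction error.
    G_mra2cta and G_cta2mra are assumed image-to-image translation networks."""
    l1 = nn.L1Loss()
    rec_mra = G_cta2mra(G_mra2cta(x_mra))   # MRA -> CTA -> MRA
    rec_cta = G_mra2cta(G_cta2mra(x_cta))   # CTA -> MRA -> CTA
    return l1(rec_mra, x_mra) + l1(rec_cta, x_cta)
```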
4. Qian X, Shao HC, Li Y, Lu W, Zhang Y. Histogram matching-enhanced adversarial learning for unsupervised domain adaptation in medical image segmentation. Med Phys 2025. PMID: 40102198. DOI: 10.1002/mp.17757.
Abstract
BACKGROUND Unsupervised domain adaptation (UDA) seeks to mitigate the performance degradation of deep neural networks when applied to new, unlabeled domains by leveraging knowledge from source domains. In medical image segmentation, prevailing UDA techniques often utilize adversarial learning to address domain shifts for cross-modality adaptation. Current research on adversarial learning tends to adopt increasingly complex models and loss functions, making the training process intricate and less stable and robust. Furthermore, most methods focus primarily on segmentation accuracy while neglecting the associated confidence levels and uncertainties. PURPOSE To develop a simple yet effective UDA method based on histogram matching-enhanced adversarial learning (HMeAL-UDA), and to provide comprehensive uncertainty estimates of the model predictions. METHODS Aiming to bridge the domain gap while reducing model complexity, we developed a novel adversarial learning approach to align multi-modality features. The method, termed HMeAL-UDA, integrates a plug-and-play histogram matching strategy to mitigate domain-specific image style biases across modalities. We employed adversarial learning to constrain the model in the prediction space, enabling it to focus on domain-invariant features during segmentation. Moreover, we quantified the model's prediction confidence using Monte Carlo (MC) dropout to derive two voxel-level uncertainty estimates of the segmentation results, which were subsequently aggregated into a volume-level uncertainty score, providing an overall measure of the model's reliability. The proposed method was evaluated on three public datasets (Combined Healthy Abdominal Organ Segmentation [CHAOS], Beyond the Cranial Vault [BTCV], and Abdominal Multi-Organ Segmentation Challenge [AMOS]) and one in-house clinical dataset (UTSW). We used 30 MRI scans (20 from the CHAOS dataset and 10 from the in-house dataset) and 30 CT scans from the BTCV dataset for UDA-based, cross-modality liver segmentation. Additionally, 240 CT scans and 60 MRI scans from the AMOS dataset were utilized for cross-modality multi-organ segmentation. The training and testing sets for each modality were split with ratios of approximately 4:1 to 3:1. RESULTS Extensive experiments on cross-modality medical image segmentation demonstrated the superiority of HMeAL-UDA over two state-of-the-art approaches. HMeAL-UDA achieved a mean (± SD) Dice similarity coefficient (DSC) of 91.34% ± 1.23% and an HD95 of 6.18 ± 2.93 mm for cross-modality (CT to MRI) adaptation of abdominal multi-organ segmentation, and a DSC of 87.13% ± 3.67% with an HD95 of 2.48 ± 1.56 mm for segmentation adaptation in the opposite direction (MRI to CT). These results approach, and in some cases outperform, those of supervised methods trained with ground-truth labels in the target domain. In addition, we provide a comprehensive assessment of the model's uncertainty, which can help with understanding segmentation reliability to guide clinical decisions. CONCLUSION HMeAL-UDA provides a powerful segmentation tool for addressing cross-modality domain shifts, with the potential to generalize to other deep learning applications in medical imaging.
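The two ingredients named above, histogram matching and MC dropout, follow standard recipes; the sketch below (NumPy/PyTorch) is a generic illustration under assumed model interfaces, not the HMeAL-UDA code.

```python
import numpy as np
import torch

def match_histogram(source, reference):
    """Map source intensities so their empirical CDF matches the reference image's CDF."""
    s_values, s_idx, s_counts = np.unique(source.ravel(), return_inverse=True, return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_values)
    return matched[s_idx].reshape(source.shape)

def mc_dropout_uncertainty(model, x, n_samples=20):
    """Run several stochastic forward passes with dropout active; return the mean prediction
    and voxel-wise predictive entropy as an uncertainty estimate."""
    model.train()  # keeps dropout stochastic (batch-norm statistics should be frozen in practice)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean_prob = probs.mean(dim=0)
    entropy = -(mean_prob * mean_prob.clamp_min(1e-8).log()).sum(dim=1)
    return mean_prob, entropy
```

A volume-level score, as described above, could then be obtained by aggregating the voxel-wise entropy map, for example by averaging it over the predicted organ region.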
Affiliation(s)
- Xiaoxue Qian
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Hua-Chieh Shao
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Yunxiang Li
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Weiguo Lu
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- You Zhang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
5. Zhao S, Sun Q, Yang J, Yuan Y, Huang Y, Li Z. Structure preservation constraints for unsupervised domain adaptation intracranial vessel segmentation. Med Biol Eng Comput 2025; 63:609-627. PMID: 39432222. DOI: 10.1007/s11517-024-03195-9.
Abstract
Unsupervised domain adaptation (UDA) has received interest as a means to alleviate the burden of data annotation. Nevertheless, existing UDA segmentation methods exhibit performance degradation in fine intracranial vessel segmentation tasks due to structure mismatch in the image synthesis procedure. To improve image synthesis quality and segmentation performance, a novel UDA segmentation method with structure preservation approaches, named StruP-Net, is proposed. StruP-Net employs adversarial learning for image synthesis and utilizes two domain-specific segmentation networks to enhance the semantic consistency between real and synthesized images. Additionally, two distinct structure preservation approaches, feature-level structure preservation (F-SP) and image-level structure preservation (I-SP), are proposed to alleviate the structure mismatch problem in the image synthesis procedure. The F-SP, composed of two domain-specific graph convolutional networks (GCN), provides feature-level constraints to enhance the structural similarity between real and synthesized images. Meanwhile, the I-SP imposes image-level constraints on structural similarity based on a perceptual loss. Cross-modality experimental results from magnetic resonance angiography (MRA) images to computed tomography angiography (CTA) images indicate that StruP-Net achieves better segmentation performance compared with other state-of-the-art methods. Furthermore, its high inference efficiency demonstrates the clinical application potential of StruP-Net. The code is available at https://github.com/Mayoiuta/StruP-Net.
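The I-SP term is described as a perceptual loss; a generic sketch using VGG16 features from torchvision is shown below. The chosen layer depth, the ImageNet weights, and the channel handling are assumptions for illustration, not details taken from StruP-Net.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class PerceptualLoss(nn.Module):
    """L1 distance between deep VGG16 features of a real image and its synthesized counterpart."""
    def __init__(self, layer_index=16):
        super().__init__()
        vgg = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features[:layer_index].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, synthesized, real):
        # Grayscale medical images are repeated to 3 channels to fit the ImageNet-trained backbone.
        if synthesized.shape[1] == 1:
            synthesized = synthesized.repeat(1, 3, 1, 1)
        if real.shape[1] == 1:
            real = real.repeat(1, 3, 1, 1)
        return F.l1_loss(self.features(synthesized), self.features(real))
```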
Affiliation(s)
- Sizhe Zhao
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Qi Sun
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Yuliang Yuan
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Yan Huang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Zhiqing Li
- The First Affiliated Hospital of China Medical University, Shenyang, Liaoning, China
6. Chen Z, Bian Y, Shen E, Fan L, Zhu W, Shi F, Shao C, Chen X, Xiang D. Moment-Consistent Contrastive CycleGAN for Cross-Domain Pancreatic Image Segmentation. IEEE Trans Med Imaging 2025; 44:422-435. PMID: 39167524. DOI: 10.1109/tmi.2024.3447071.
Abstract
CT and MR are currently the most common imaging techniques for pancreatic cancer diagnosis. Accurate segmentation of the pancreas in CT and MR images can provide significant help in the diagnosis and treatment of pancreatic cancer. Traditional supervised segmentation methods require a large amount of labeled CT and MR training data, which is usually time-consuming and laborious to obtain. Meanwhile, due to domain shift, traditional segmentation networks are difficult to deploy on datasets of a different imaging modality. Cross-domain segmentation can utilize labeled source-domain data to assist unlabeled target domains in solving these problems. In this paper, a cross-domain pancreas segmentation algorithm is proposed based on Moment-Consistent Contrastive Cycle Generative Adversarial Networks (MC-CCycleGAN). MC-CCycleGAN is a style transfer network in which the encoder of its generator extracts features from real and style-transferred images, constrains feature extraction through a contrastive loss, and fully extracts the structural features of input images during style transfer while eliminating redundant style features. Multi-order central moments of the pancreas are proposed to describe its anatomy in high dimensions, and a contrastive loss is also proposed to constrain moment consistency, so as to maintain the consistency of pancreatic structure and shape before and after style transfer. A multi-teacher knowledge distillation framework is proposed to transfer knowledge from multiple teachers to a single student, improving the robustness and performance of the student network. The experimental results demonstrate the superiority of our framework over state-of-the-art domain adaptation methods.
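The multi-order central moment description of the pancreas can be illustrated roughly as below (PyTorch): spatial central moments of a soft mask are computed before and after style transfer and constrained to agree. The plain L2 consistency shown here is a simplified stand-in for the paper's contrastive moment loss.

```python
import torch

def central_moments(prob_mask, max_order=3, eps=1e-6):
    """Central spatial moments (orders 2..max_order) of a soft segmentation mask (B, H, W)."""
    b, h, w = prob_mask.shape
    ys = torch.arange(h, device=prob_mask.device, dtype=prob_mask.dtype).view(1, h, 1)
    xs = torch.arange(w, device=prob_mask.device, dtype=prob_mask.dtype).view(1, 1, w)
    mass = prob_mask.sum(dim=(1, 2)) + eps
    cy = (prob_mask * ys).sum(dim=(1, 2)) / mass   # centroid row
    cx = (prob_mask * xs).sum(dim=(1, 2)) / mass   # centroid column
    moments = []
    for p in range(2, max_order + 1):
        for q in range(0, p + 1):
            m = (prob_mask
                 * (ys - cy.view(b, 1, 1)) ** (p - q)
                 * (xs - cx.view(b, 1, 1)) ** q).sum(dim=(1, 2)) / mass
            moments.append(m)
    return torch.stack(moments, dim=1)  # (B, number_of_moments)

def moment_consistency_loss(mask_before, mask_after):
    """Penalize changes of the organ's moment signature caused by style transfer."""
    return torch.mean((central_moments(mask_before) - central_moments(mask_after)) ** 2)
```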
7. Salle G, Andrade-Miranda G, Conze PH, Boussion N, Bert J, Visvikis D, Jaouen V. Cross-Modal Tumor Segmentation Using Generative Blending Augmentation and Self-Training. IEEE Trans Biomed Eng 2025; 72:370-380. PMID: 38557627. DOI: 10.1109/tbme.2024.3384014.
Abstract
OBJECTIVES Data scarcity and domain shifts lead to biased training sets that do not accurately represent deployment conditions. A related practical problem is cross-modal image segmentation, where the objective is to segment unlabelled images using previously labelled datasets from other imaging modalities. METHODS We propose a cross-modal segmentation method based on conventional image synthesis boosted by a new data augmentation technique called Generative Blending Augmentation (GBA). GBA leverages a SinGAN model to learn representative generative features from a single training image and realistically diversify tumor appearances. In this way, we compensate for image synthesis errors, subsequently improving the generalization power of a downstream segmentation model. The proposed augmentation is further combined with an iterative self-training procedure leveraging pseudo labels at each pass. RESULTS The proposed solution ranked first for vestibular schwannoma (VS) segmentation during the validation and test phases of the MICCAI CrossMoDA 2022 challenge, with the best mean Dice similarity and average symmetric surface distance measures. CONCLUSION AND SIGNIFICANCE Local contrast alteration of tumor appearances and iterative self-training with pseudo labels are likely to lead to performance improvements in a variety of segmentation contexts.
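The iterative self-training with pseudo labels mentioned above follows a common pattern; a compact sketch is given below (PyTorch), assuming a dataloader that yields unlabeled target images and a confidence threshold chosen for illustration.

```python
import torch
import torch.nn.functional as F

def self_training_round(model, optimizer, target_loader, confidence_threshold=0.9):
    """One self-training pass: predict on unlabeled target images, keep confident voxels
    as pseudo labels, then fine-tune the segmentation model on them."""
    model.eval()
    pseudo_batches = []
    with torch.no_grad():
        for x_tgt in target_loader:                    # assumed to yield image tensors
            probs = torch.softmax(model(x_tgt), dim=1)
            conf, labels = probs.max(dim=1)
            mask = conf > confidence_threshold         # only trust confident voxels
            pseudo_batches.append((x_tgt, labels, mask))

    model.train()
    for x_tgt, labels, mask in pseudo_batches:
        logits = model(x_tgt)
        loss = F.cross_entropy(logits, labels, reduction="none")
        loss = (loss * mask.float()).sum() / mask.float().sum().clamp_min(1.0)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Repeating such rounds, with pseudo labels regenerated at each pass, is what the abstract refers to as iterative self-training.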
8. Wang S, Liu L, Wang J, Peng X, Liu B. MSR-UNet: enhancing multi-scale and long-range dependencies in medical image segmentation. PeerJ Comput Sci 2024; 10:e2563. PMID: 39650414. PMCID: PMC11623095. DOI: 10.7717/peerj-cs.2563.
Abstract
Transformer-based technology has attracted widespread attention in medical image segmentation. Due to the diversity of organs, effective modeling of multi-scale information and establishing long-range dependencies between pixels are crucial for successful medical image segmentation. However, most studies rely on a fixed single-scale window for modeling, which ignores the potential impact of window size on performance. This limitation can hinder window-based models' ability to fully explore multi-scale and long-range relationships within medical images. To address this issue, we propose a multi-scale reconfiguration self-attention (MSR-SA) module that accurately models multi-scale information and long-range dependencies in medical images. The MSR-SA module first divides the attention heads into multiple groups, each assigned an ascending dilation rate. These groups are then uniformly split into several non-overlapping local windows. Using dilated sampling, we gather the same number of keys to obtain both long-range and multi-scale information. Finally, dynamic information fusion is achieved by integrating features from the sampling points at corresponding positions across different windows. Based on the MSR-SA module, we propose a multi-scale reconfiguration U-Net (MSR-UNet) framework for medical image segmentation. Experiments on the Synapse and automated cardiac diagnosis challenge (ACDC) datasets show that MSR-UNet achieves satisfactory segmentation results. The code is available at https://github.com/davidsmithwj/MSR-UNet (DOI: 10.5281/zenodo.13969855).
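A heavily simplified sketch of the dilation-grouped sampling behind MSR-SA is shown below (PyTorch): attention heads are split into groups with ascending dilation rates, and each group attends over spatially dilated key/value samples. The non-overlapping local windows and the dynamic fusion step of the actual module are omitted, and all shapes are illustrative.

```python
import math
import torch

def dilation_grouped_attention(q, k, v, dilations=(1, 2, 4)):
    """q, k, v: (B, heads, H, W, dim). Heads are split into len(dilations) groups; each group
    attends over keys/values sampled with its own spatial dilation (stride)."""
    b, heads, h, w, d = q.shape
    group = heads // len(dilations)           # heads assumed divisible by the number of groups
    outputs = []
    for i, rate in enumerate(dilations):
        qi = q[:, i * group:(i + 1) * group].reshape(b, group, h * w, d)
        ki = k[:, i * group:(i + 1) * group, ::rate, ::rate].reshape(b, group, -1, d)
        vi = v[:, i * group:(i + 1) * group, ::rate, ::rate].reshape(b, group, -1, d)
        attn = torch.softmax(qi @ ki.transpose(-2, -1) / math.sqrt(d), dim=-1)
        outputs.append((attn @ vi).reshape(b, group, h, w, d))
    return torch.cat(outputs, dim=1)          # recombine the head groups
```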
Affiliation(s)
- Shuai Wang
- School of Computer Science and Technology, Huaibei Normal University, Huaibei, China
- Lei Liu
- School of Computer Science and Technology, Huaibei Normal University, Huaibei, China
- Huaibei Key Laboratory of Digital Multimedia Intelligent Information Processing, Huaibei, China
- Jun Wang
- College of Electronic and Information Engineering, Hebei University, Baoding, China
- Xinyue Peng
- School of Computer Science and Technology, Huaibei Normal University, Huaibei, China
- Baosen Liu
- Huaibei People's Hospital, Huaibei, China
9. Chen X, Pang Y, Yap PT, Lian J. Multi-scale anatomical regularization for domain-adaptive segmentation of pelvic CBCT images. Med Phys 2024; 51:8804-8813. PMID: 39225652. PMCID: PMC11672636. DOI: 10.1002/mp.17378.
Abstract
BACKGROUND Cone beam computed tomography (CBCT) image segmentation is crucial in prostate cancer radiotherapy, enabling precise delineation of the prostate gland for accurate treatment planning and delivery. However, the poor quality of CBCT images poses challenges in clinical practice, making annotation difficult due to factors such as image noise, low contrast, and organ deformation. PURPOSE The objective of this study is to create a segmentation model for the label-free target domain (CBCT), leveraging valuable insights derived from the label-rich source domain (CT). This goal is achieved by addressing the domain gap across diverse domains through a cross-modality medical image segmentation framework. METHODS Our approach introduces a multi-scale domain adaptive segmentation method that performs domain adaptation simultaneously at both the image and feature levels. The primary innovation lies in a novel multi-scale anatomical regularization approach, which (i) aligns the target domain feature space with the source domain feature space at multiple spatial scales simultaneously, and (ii) exchanges information across different scales to fuse knowledge from multi-scale perspectives. RESULTS Quantitative and qualitative experiments were conducted on pelvic CBCT segmentation tasks. The training dataset comprises 40 unpaired CBCT-CT images with only the CT images annotated. The validation and testing datasets consist of 5 and 10 CT images, respectively, all with annotations. The experimental results demonstrate the superior performance of our method compared to other state-of-the-art cross-modality medical image segmentation methods. The Dice similarity coefficient (DSC) for CBCT image segmentation is 74.6 ± 9.3%, and the average symmetric surface distance (ASSD) is 3.9 ± 1.8 mm. Statistical analysis confirms the statistical significance of the improvements achieved by our method. CONCLUSIONS Our method exhibits superiority in pelvic CBCT image segmentation compared to its counterparts.
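The feature-level part of the multi-scale alignment can be illustrated with the generic adversarial sketch below (PyTorch), with one small discriminator per spatial scale; the network sizes are assumptions, and the cross-scale information exchange of the actual method is not reproduced.

```python
import torch
import torch.nn as nn

class ScaleDiscriminator(nn.Module):
    """Tiny patch discriminator applied to feature maps of one spatial scale."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, feat):
        return self.net(feat)

def multiscale_alignment_losses(discriminators, feats_source, feats_target):
    """Discriminator loss (source vs. target) and adversarial loss pushing target-domain
    features toward the source feature distribution, summed over all scales."""
    bce = nn.BCEWithLogitsLoss()
    d_loss, adv_loss = 0.0, 0.0
    for disc, fs, ft in zip(discriminators, feats_source, feats_target):
        ps, pt = disc(fs.detach()), disc(ft.detach())
        d_loss += bce(ps, torch.ones_like(ps)) + bce(pt, torch.zeros_like(pt))
        adv_loss += bce(disc(ft), torch.ones_like(pt))  # segmentor tries to fool the discriminator
    return d_loss, adv_loss
```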
Affiliation(s)
- Xu Chen
- College of Computer Science and Technology, Huaqiao University, Xiamen, Fujian, China
- Key Laboratory of Computer Vision and Machine Learning (Huaqiao University), Fujian Province University, Xiamen, Fujian, China
- Xiamen Key Laboratory of Computer Vision and Pattern Recognition, Huaqiao University, Xiamen, Fujian, China
- Yunkui Pang
- Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, USA
- Pew-Thian Yap
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina, USA
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina, USA
- Jun Lian
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina, USA
10. de Boisredon d'Assier MA, Portafaix A, Vorontsov E, Le WT, Kadoury S. Image-level supervision and self-training for transformer-based cross-modality tumor segmentation. Med Image Anal 2024; 97:103287. PMID: 39111265. DOI: 10.1016/j.media.2024.103287.
Abstract
Deep neural networks are commonly used for automated medical image segmentation, but models frequently struggle to generalize across different imaging modalities. This issue is particularly problematic due to the limited availability of annotated data, in both the target and the source modality, making it difficult to deploy these models on a larger scale. To overcome these challenges, we propose a new semi-supervised training strategy called MoDATTS. Our approach is designed for accurate cross-modality 3D tumor segmentation on unpaired bi-modal datasets. An image-to-image translation strategy between modalities is used to produce synthetic but annotated images and labels in the desired modality and to improve generalization to the unannotated target modality. We also use powerful vision transformer architectures for both the image translation (TransUNet) and segmentation (Medformer) tasks and introduce an iterative self-training procedure in the latter task to further close the domain gap between modalities, thus also training on unlabeled images in the target modality. MoDATTS additionally makes it possible to exploit image-level labels with a semi-supervised objective that encourages the model to disentangle tumors from the background. This semi-supervised methodology helps in particular to maintain downstream segmentation performance when pixel-level label scarcity is also present in the source modality dataset, or when the source dataset contains healthy controls. The proposed model achieves superior performance compared to methods from other participating teams in the CrossMoDA 2022 vestibular schwannoma (VS) segmentation challenge, as evidenced by its reported top Dice score of 0.87 ± 0.04 for VS segmentation. MoDATTS also yields consistent improvements in Dice scores over baselines on a cross-modality adult brain glioma segmentation task composed of four different contrasts from the BraTS 2020 challenge dataset, where 95% of the performance of a target-supervised model is reached when no target modality annotations are available. We report that 99% and 100% of this maximum performance can be attained if 20% and 50% of the target data are additionally annotated, which further demonstrates that MoDATTS can be leveraged to reduce the annotation burden.
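The image-level supervision mentioned above can be sketched as a weak tumor-presence objective attached to the segmentation output (PyTorch); the global max-pooling formulation below is a generic stand-in for MoDATTS's semi-supervised objective, with assumed tensor shapes.

```python
import torch
import torch.nn.functional as F

def image_level_presence_loss(seg_logits, has_tumor):
    """Weakly supervise segmentation with image-level labels: the maximum tumor probability
    over the volume should be high when a tumor is present and low otherwise.
    seg_logits: (B, 2, D, H, W) with channel 1 = tumor; has_tumor: (B,) in {0, 1}."""
    tumor_prob = torch.softmax(seg_logits, dim=1)[:, 1]
    image_score = tumor_prob.flatten(1).max(dim=1).values   # global max pooling over the volume
    return F.binary_cross_entropy(image_score, has_tumor.float())
```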
Affiliation(s)
- Aloys Portafaix
- Polytechnique Montreal, Montreal, QC, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- William Trung Le
- Polytechnique Montreal, Montreal, QC, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Samuel Kadoury
- Polytechnique Montreal, Montreal, QC, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
11. Zheng B, Zhang R, Diao S, Zhu J, Yuan Y, Cai J, Shao L, Li S, Qin W. Dual domain distribution disruption with semantics preservation: Unsupervised domain adaptation for medical image segmentation. Med Image Anal 2024; 97:103275. PMID: 39032395. DOI: 10.1016/j.media.2024.103275.
Abstract
Recent unsupervised domain adaptation (UDA) methods in medical image segmentation commonly utilize Generative Adversarial Networks (GANs) for domain translation. However, the translated images often exhibit a distribution deviation from the ideal due to the inherent instability of GANs, leading to challenges such as visual inconsistency and incorrect style, and consequently causing the segmentation model to fall into fixed, incorrect patterns. To address this problem, we propose a novel UDA framework known as Dual Domain Distribution Disruption with Semantics Preservation (DDSP). Departing from the idea of generating images conforming to the target domain distribution in GAN-based UDA methods, we make the model domain-agnostic and focus on anatomical structural information by leveraging semantic information as constraints, guiding the model to adapt to images with disrupted distributions in both the source and target domains. Furthermore, we introduce inter-channel similarity feature alignment based on domain-invariant structural prior information, which facilitates the shared pixel-wise classifier in achieving robust performance on target domain features by aligning the source and target domain features across channels. Our method significantly outperforms existing state-of-the-art UDA methods on three public datasets (a heart dataset, a brain dataset, and a prostate dataset). The code is available at https://github.com/MIXAILAB/DDSPSeg.
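The inter-channel similarity feature alignment can be illustrated as follows (PyTorch): a channel-by-channel similarity matrix is computed for source and target features and the two are matched. This is a simplified sketch, not the exact DDSP formulation.

```python
import torch
import torch.nn.functional as F

def channel_similarity(features):
    """Normalized inter-channel similarity matrix of a feature map: (B, C, H, W) -> (B, C, C)."""
    b, c, h, w = features.shape
    flat = F.normalize(features.view(b, c, h * w), dim=2)
    return flat @ flat.transpose(1, 2)

def interchannel_alignment_loss(feat_source, feat_target):
    """Align the channel-wise similarity structure between source and target domain features."""
    return F.l1_loss(channel_similarity(feat_source), channel_similarity(feat_target))
```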
Affiliation(s)
- Boyun Zheng
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Ranran Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Songhui Diao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Jingke Zhu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Yixuan Yuan
- Department of Electronic Engineering, The Chinese University of Hong Kong, 999077, Hong Kong, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, 999077, Hong Kong, China
- Liang Shao
- Department of Cardiology, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang 330013, China
- Shuo Li
- Department of Biomedical Engineering, Department of Computer and Data Science, Case Western Reserve University, Cleveland, United States
- Wenjian Qin
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
12. Chen L, Bian Y, Zeng J, Meng Q, Zhu W, Shi F, Shao C, Chen X, Xiang D. Style Consistency Unsupervised Domain Adaptation Medical Image Segmentation. IEEE Trans Image Process 2024; 33:4882-4895. PMID: 39236126. DOI: 10.1109/tip.2024.3451934.
Abstract
Unsupervised domain adaptation medical image segmentation aims to segment unlabeled target domain images with the help of labeled source domain images. However, different medical imaging modalities lead to a large domain shift between their images, so that well-trained models from one imaging modality often fail to segment images from another imaging modality. In this paper, to mitigate the domain shift between the source and target domains, a style consistency unsupervised domain adaptation image segmentation method is proposed. First, a local phase-enhanced style fusion method is designed to mitigate domain shift and produce locally enhanced organs of interest. Second, a phase consistency discriminator is constructed to distinguish the phase consistency of domain-invariant features between the source and target domains, so as to enhance the disentanglement of the domain-invariant and style encoders and the removal of domain-specific features from the domain-invariant encoder. Third, a style consistency estimation method is proposed to obtain inconsistency maps from intermediate synthesized target domain images with different styles, to measure the difficult regions, mitigate the domain shift between synthesized and real target domain images, and improve the integrity of the organs of interest. Fourth, a style consistency entropy is defined for target domain images to further improve the integrity of the organs of interest by concentrating on the inconsistent regions. Comprehensive experiments have been performed on an in-house dataset and a publicly available dataset. The experimental results demonstrate the superiority of our framework over state-of-the-art methods.
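The style consistency estimation and style consistency entropy can be sketched roughly as below (PyTorch): predictions on differently styled copies of a target image give an inconsistency map, and prediction entropy is minimized with extra weight on inconsistent regions. The specific weighting is an illustrative assumption.

```python
import torch

def style_inconsistency_map(model, styled_versions):
    """Voxel-wise variance of predictions across differently styled copies of one target image."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for x in styled_versions])
    return probs.var(dim=0).mean(dim=1)                   # (B, H, W), high where styles disagree

def weighted_entropy_loss(logits, inconsistency, eps=1e-8):
    """Entropy of target-domain predictions, up-weighted on regions flagged as inconsistent."""
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * (probs + eps).log()).sum(dim=1)   # (B, H, W)
    weight = 1.0 + inconsistency / (inconsistency.max() + eps)
    return (weight * entropy).mean()
```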
13. Li H, Liu H, von Busch H, Grimm R, Huisman H, Tong A, Winkel D, Penzkofer T, Shabunin I, Choi MH, Yang Q, Szolar D, Shea S, Coakley F, Harisinghani M, Oguz I, Comaniciu D, Kamen A, Lou B. Deep Learning-based Unsupervised Domain Adaptation via a Unified Model for Prostate Lesion Detection Using Multisite Biparametric MRI Datasets. Radiol Artif Intell 2024; 6:e230521. PMID: 39166972. PMCID: PMC11449150. DOI: 10.1148/ryai.230521.
Abstract
Purpose To determine whether the unsupervised domain adaptation (UDA) method with generated images improves the performance of a supervised learning (SL) model for prostate cancer (PCa) detection using multisite biparametric (bp) MRI datasets. Materials and Methods This retrospective study included data from 5150 patients (14 191 samples) collected across nine different imaging centers. A novel UDA method using a unified generative model was developed for PCa detection using multisite bpMRI datasets. This method translates diffusion-weighted imaging (DWI) acquisitions, including apparent diffusion coefficient (ADC) and individual diffusion-weighted (DW) images acquired using various b values, to align with the style of images acquired using b values recommended by Prostate Imaging Reporting and Data System (PI-RADS) guidelines. The generated ADC and DW images replace the original images for PCa detection. An independent set of 1692 test cases (2393 samples) was used for evaluation. The area under the receiver operating characteristic curve (AUC) was used as the primary metric, and statistical analysis was performed via bootstrapping. Results For all test cases, the AUC values for baseline SL and UDA methods were 0.73 and 0.79 (P < .001), respectively, for PCa lesions with PI-RADS score of 3 or greater and 0.77 and 0.80 (P < .001) for lesions with PI-RADS scores of 4 or greater. In the 361 test cases under the most unfavorable image acquisition setting, the AUC values for baseline SL and UDA were 0.49 and 0.76 (P < .001) for lesions with PI-RADS scores of 3 or greater and 0.50 and 0.77 (P < .001) for lesions with PI-RADS scores of 4 or greater. Conclusion UDA with generated images improved the performance of SL methods in PCa lesion detection across multisite datasets with various b values, especially for images acquired with significant deviations from the PI-RADS-recommended DWI protocol (eg, with an extremely high b value). Keywords: Prostate Cancer Detection, Multisite, Unsupervised Domain Adaptation, Diffusion-weighted Imaging, b Value
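The bootstrapped statistical analysis referenced above follows a standard recipe; a small sketch using scikit-learn is shown below, with case-level resampling and an iteration count chosen for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc(y_true, y_score, n_boot=2000, seed=0):
    """Point estimate and 95% confidence interval of the AUC via case-level bootstrapping."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:     # a resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lower, upper = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_score), (lower, upper)
```

Comparing two methods (eg, baseline SL versus UDA) is typically done by bootstrapping the difference of their AUCs on the same resampled cases.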
Affiliation(s)
- Hao Li
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Han Liu
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Heinrich von Busch
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Robert Grimm
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Henkjan Huisman
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Angela Tong
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - David Winkel
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Tobias Penzkofer
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Ivan Shabunin
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Moon Hyung Choi
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Qingsong Yang
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Dieter Szolar
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Steven Shea
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Fergus Coakley
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Mukesh Harisinghani
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Ipek Oguz
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Dorin Comaniciu
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
| | - Ali Kamen
| | - Bin Lou
14
Chen H, Wang X, Li H, Wang L. 3D Vessel Segmentation With Limited Guidance of 2D Structure-Agnostic Vessel Annotations. IEEE J Biomed Health Inform 2024; 28:5410-5421. [PMID: 38833403 DOI: 10.1109/jbhi.2024.3409382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/06/2024]
Abstract
Delineating 3D blood vessels of various anatomical structures is essential for clinical diagnosis and treatment; however, it is challenging due to complex structure variations and varied imaging conditions. Although recent supervised deep learning models have demonstrated their superior capacity in automatic 3D vessel segmentation, the reliance on expensive 3D manual annotations and limited capacity for annotation reuse among different vascular structures hinder their clinical applications. To avoid the repetitive and costly annotating process for each vascular structure and make full use of existing annotations, this paper proposes a novel 3D shape-guided local discrimination (3D-SLD) model for 3D vascular segmentation under limited guidance from public 2D vessel annotations. The primary hypothesis is that 3D vessels are composed of semantically similar voxels and often exhibit tree-shaped morphology. Accordingly, the 3D region discrimination loss is first proposed to learn the discriminative representation measuring voxel-wise similarities and cluster semantically consistent voxels to form the candidate 3D vascular segmentation in unlabeled images. Secondly, the shape distribution from existing 2D structure-agnostic vessel annotations is introduced to guide the 3D vessels with the tree-shaped morphology by the adversarial shape constraint loss. Thirdly, to enhance training stability and prediction credibility, the highlighting-reviewing-summarizing (HRS) mechanism is proposed. This mechanism involves summarizing historical models to maintain temporal consistency and identifying credible pseudo labels as reliable supervision signals. Guided only by public 2D coronary artery annotations, our method achieves results comparable to SOTA barely-supervised methods in 3D cerebrovascular segmentation, and the best DSC in 3D hepatic vessel segmentation, demonstrating the effectiveness of our method.
15
Yang M, Wu Z, Zheng H, Huang L, Ding W, Pan L, Yin L. Cross-Modality Medical Image Segmentation via Enhanced Feature Alignment and Cross Pseudo Supervision Learning. Diagnostics (Basel) 2024; 14:1751. [PMID: 39202240 PMCID: PMC11353479 DOI: 10.3390/diagnostics14161751] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2024] [Revised: 08/08/2024] [Accepted: 08/10/2024] [Indexed: 09/03/2024] Open
Abstract
Given the diversity of medical images, traditional image segmentation models face the issue of domain shift. Unsupervised domain adaptation (UDA) methods have emerged as a pivotal strategy for cross modality analysis. These methods typically utilize generative adversarial networks (GANs) for both image-level and feature-level domain adaptation through the transformation and reconstruction of images, assuming the features between domains are well-aligned. However, this assumption falters with significant gaps between different medical image modalities, such as MRI and CT. These gaps hinder the effective training of segmentation networks with cross-modality images and can lead to misleading training guidance and instability. To address these challenges, this paper introduces a novel approach comprising a cross-modality feature alignment sub-network and a cross pseudo supervised dual-stream segmentation sub-network. These components work together to bridge domain discrepancies more effectively and ensure a stable training environment. The feature alignment sub-network is designed for the bidirectional alignment of features between the source and target domains, incorporating a self-attention module to aid in learning structurally consistent and relevant information. The segmentation sub-network leverages an enhanced cross-pseudo-supervised loss to harmonize the output of the two segmentation networks, assessing pseudo-distances between domains to improve the pseudo-label quality and thus enhancing the overall learning efficiency of the framework. This method's success is demonstrated by notable advancements in segmentation precision across target domains for abdomen and brain tasks.
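For readers who want a concrete picture of the cross-pseudo-supervision idea summarized above, a minimal PyTorch sketch is given below; it is not the authors' implementation, and the toy networks, tensor shapes, and equal loss weighting are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cross_pseudo_supervision_loss(logits_a, logits_b):
    """Each branch is supervised by the hard pseudo-labels of the other.

    logits_a, logits_b: (N, C, H, W) predictions of two parallel
    segmentation networks on the same unlabeled target-domain batch.
    """
    pseudo_a = logits_a.argmax(dim=1).detach()   # labels produced by branch A
    pseudo_b = logits_b.argmax(dim=1).detach()   # labels produced by branch B
    loss_a = F.cross_entropy(logits_a, pseudo_b)  # A learns from B's pseudo-labels
    loss_b = F.cross_entropy(logits_b, pseudo_a)  # B learns from A's pseudo-labels
    return 0.5 * (loss_a + loss_b)

if __name__ == "__main__":
    # Toy stand-ins for the two segmentation branches (placeholders only).
    net_a = nn.Conv2d(1, 4, kernel_size=3, padding=1)
    net_b = nn.Conv2d(1, 4, kernel_size=3, padding=1)
    target_images = torch.randn(2, 1, 64, 64)     # unlabeled target-domain batch
    loss = cross_pseudo_supervision_loss(net_a(target_images), net_b(target_images))
    loss.backward()
    print(float(loss))
```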
Affiliation(s)
- Mingjing Yang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China; (M.Y.); (Z.W.); (H.Z.); (L.H.)
| | - Zhicheng Wu
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China; (M.Y.); (Z.W.); (H.Z.); (L.H.)
| | - Hanyu Zheng
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China; (M.Y.); (Z.W.); (H.Z.); (L.H.)
| | - Liqin Huang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China; (M.Y.); (Z.W.); (H.Z.); (L.H.)
| | - Wangbin Ding
- School of Medical Imaging, Fujian Medical University, Fuzhou 350122, China;
| | - Lin Pan
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China; (M.Y.); (Z.W.); (H.Z.); (L.H.)
| | - Lei Yin
- The Departments of Radiology, Shengli Clinical Medical College of Fujian Medical University, Fuzhou 350001, China
- Fujian Provincial Hospital, Fuzhou 350001, China
- Fuzhou University Affiliated Provincial Hospital, Fuzhou 350001, China
16
Stan S, Rostami M. Unsupervised model adaptation for source-free segmentation of medical images. Med Image Anal 2024; 95:103179. [PMID: 38626666 DOI: 10.1016/j.media.2024.103179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2023] [Revised: 04/09/2024] [Accepted: 04/11/2024] [Indexed: 04/18/2024]
Abstract
The recent prevalence of deep neural networks has led semantic segmentation networks to achieve human-level performance in the medical field, provided they are given sufficient training data. However, these networks often fail to generalize when tasked with creating semantic maps for out-of-distribution images, necessitating re-training on new distributions. This labor-intensive process requires expert knowledge for generating training labels. In the medical field, distribution shifts can naturally occur due to the choice of imaging devices, such as MRI or CT scanners. To mitigate the need for labeling images in a target domain after successful model training in a fully annotated source domain with a different data distribution, unsupervised domain adaptation (UDA) can be employed. Most UDA approaches ensure target generalization by generating a shared source/target latent feature space, allowing a source-trained classifier to maintain performance in the target domain. However, such approaches necessitate joint source and target data access, potentially leading to privacy leaks with respect to patient information. We propose a UDA algorithm for medical image segmentation that does not require access to source data during adaptation, thereby preserving patient data privacy. Our method relies on approximating the source latent features at the time of adaptation and creates a joint source/target embedding space by minimizing a distributional distance metric based on optimal transport. We demonstrate that our approach is competitive with recent UDA medical segmentation works, even with the added requirement of privacy.
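The abstract above describes minimizing an optimal-transport-based distributional distance between approximated source features and target features. The following sketch shows one generic way such a distance can be computed, a Monte-Carlo sliced Wasserstein approximation; the feature dimensions, batch sizes, and number of projections are illustrative assumptions, not details from the paper.

```python
import torch

def sliced_wasserstein(feats_src, feats_tgt, n_proj=64):
    """Monte-Carlo sliced Wasserstein-2 distance between two feature clouds.

    feats_src: (N, D) approximated source-domain latent features
    feats_tgt: (N, D) target-domain latent features (equal batch size assumed)
    """
    d = feats_src.shape[1]
    # Random unit projection directions.
    proj = torch.randn(d, n_proj, device=feats_src.device)
    proj = proj / proj.norm(dim=0, keepdim=True)
    # Project both clouds to 1D, sort, and compare the sorted samples.
    src_1d, _ = torch.sort(feats_src @ proj, dim=0)
    tgt_1d, _ = torch.sort(feats_tgt @ proj, dim=0)
    return ((src_1d - tgt_1d) ** 2).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    source_like = torch.randn(256, 32)                       # stand-in for stored source statistics
    target = torch.randn(256, 32, requires_grad=True) + 1.0  # shifted target features
    dist = sliced_wasserstein(source_like, target)
    dist.backward()
    print(float(dist))
```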
Affiliation(s)
- Serban Stan
- University of Southern California, United States of America
17
Zhao Z, Cai R, Xu K, Liu Z, Yang X, Cheng J, Guan C. Multi-dataset Collaborative Learning for Liver Tumor Segmentation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2024; 2024:1-4. [PMID: 40031465 DOI: 10.1109/embc53108.2024.10781844] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/05/2025]
Abstract
Automatic segmentation of biomedical images has emerged due to its potential in improving real-world clinical processes and has achieved great success in recent years thanks to the development of deep learning. However, it is the limited availability of certain modalities of datasets and the scarcity of labels that still present challenges. In this work, we propose a workflow of MRI liver and tumor segmentation methods utilizing external publicly available datasets. By employing pseudo-labeling, unpaired image-to-image translation, and self-ensemble learning, we improve the task performance from the nnU-Net baseline model with an average Dice score of 95.7% and 72.2%, and an average symmetric surface of 1.23 mm and 15.6 mm for the whole liver and the tumor, respectively, resulting in more robust and efficient segmentation. Our results demonstrate that the utilization of external datasets can significantly enhance liver tumor segmentation performance.
18
Guo Z, Feng J, Lu W, Yin Y, Yang G, Zhou J. Cross-modality cerebrovascular segmentation based on pseudo-label generation via paired data. Comput Med Imaging Graph 2024; 115:102393. [PMID: 38704993 DOI: 10.1016/j.compmedimag.2024.102393] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2024] [Revised: 04/26/2024] [Accepted: 04/26/2024] [Indexed: 05/07/2024]
Abstract
Accurate segmentation of cerebrovascular structures from Computed Tomography Angiography (CTA), Magnetic Resonance Angiography (MRA), and Digital Subtraction Angiography (DSA) is crucial for clinical diagnosis of cranial vascular diseases. Recent advancements in deep Convolution Neural Network (CNN) have significantly improved the segmentation process. However, training segmentation networks for all modalities requires extensive data labeling for each modality, which is often expensive and time-consuming. To circumvent this limitation, we introduce an approach to train cross-modality cerebrovascular segmentation network based on paired data from source and target domains. Our approach involves training a universal vessel segmentation network with manually labeled source domain data, which automatically produces initial labels for target domain training images. We improve the initial labels of target domain training images by fusing paired images, which are then used to refine the target domain segmentation network. A series of experimental arrangements is presented to assess the efficacy of our method in various practical application scenarios. The experiments conducted on an MRA-CTA dataset and a DSA-CTA dataset demonstrate that the proposed method is effective for cross-modality cerebrovascular segmentation and achieves state-of-the-art performance.
Affiliation(s)
- Zhanqiang Guo
- Department of Automation, BNRist, Tsinghua University, Beijing, China
| | - Jianjiang Feng
- Department of Automation, BNRist, Tsinghua University, Beijing, China.
| | - Wangsheng Lu
- UnionStrong (Beijing) Technology Co.Ltd, Beijing, China
| | - Yin Yin
- UnionStrong (Beijing) Technology Co.Ltd, Beijing, China
| | | | - Jie Zhou
- Department of Automation, BNRist, Tsinghua University, Beijing, China
19
He Y, Kong J, Li J, Zheng C. Entropy and distance-guided super self-ensembling for optic disc and cup segmentation. BIOMEDICAL OPTICS EXPRESS 2024; 15:3975-3992. [PMID: 38867792 PMCID: PMC11166439 DOI: 10.1364/boe.521778] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/21/2024] [Revised: 04/14/2024] [Accepted: 05/06/2024] [Indexed: 06/14/2024]
Abstract
Segmenting the optic disc (OD) and optic cup (OC) is crucial to accurately detect changes in glaucoma progression in the elderly. Recently, various convolutional neural networks have emerged to deal with OD and OC segmentation. Due to the domain shift problem, achieving high-accuracy segmentation of OD and OC from different domain datasets remains highly challenging. Unsupervised domain adaptation has attracted extensive attention as a way to address this problem. In this work, we propose a novel unsupervised domain adaptation method, called entropy and distance-guided super self-ensembling (EDSS), to enhance the segmentation performance of OD and OC. EDSS comprises two self-ensembling models, and Gaussian noise is added to the weights of the whole network. Firstly, we design a super self-ensembling (SSE) framework, which combines two self-ensembling models to learn more discriminative information about images. Secondly, we propose a novel exponential moving average with Gaussian noise (G-EMA) to enhance the robustness of the self-ensembling framework. Thirdly, we propose an effective multi-information fusion strategy (MFS) to guide and improve the domain adaptation process. We evaluate the proposed EDSS on two public fundus image datasets, RIGA+ and REFUGE. Extensive experimental results demonstrate that the proposed EDSS outperforms state-of-the-art segmentation methods with unsupervised domain adaptation, e.g., the mean Dice scores on the three test sub-datasets of RIGA+ are 0.8442, 0.8772, and 0.9006, respectively, and the mean Dice score on the REFUGE dataset is 0.9154.
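A rough sketch of the G-EMA update described above is shown below: the teacher weights follow an exponential moving average of the student weights and are then perturbed with Gaussian noise. The decay and noise scale are placeholder values, and the sketch is not the released EDSS code.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def g_ema_update(student, teacher, decay=0.99, noise_std=1e-4):
    """Exponential moving average of student weights with added Gaussian noise."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)   # standard EMA step
        p_t.add_(torch.randn_like(p_t) * noise_std)    # Gaussian perturbation of the teacher

if __name__ == "__main__":
    student = nn.Conv2d(3, 2, kernel_size=3, padding=1)
    teacher = copy.deepcopy(student)   # the teacher starts as a copy of the student
    # ... after each optimizer step on the student:
    g_ema_update(student, teacher)
    print(sum(p.abs().sum().item() for p in teacher.parameters()))
```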
Affiliation(s)
- Yanlin He
- College of Information Sciences and Technology, Northeast Normal University, Changchun 130117, China
| | - Jun Kong
- College of Information Sciences and Technology, Northeast Normal University, Changchun 130117, China
| | - Juan Li
- Jilin Engineering Normal University, Changchun 130052, China
- Business School, Northeast Normal University, Changchun 130117, China
| | - Caixia Zheng
- College of Information Sciences and Technology, Northeast Normal University, Changchun 130117, China
- Key Laboratory of Applied Statistics of MOE, Northeast Normal University, Changchun 130024, China
20
Luu HM, Yoo GS, Park W, Park SH. CycleSeg: Simultaneous synthetic CT generation and unsupervised segmentation for MR-only radiotherapy treatment planning of prostate cancer. Med Phys 2024; 51:4365-4379. [PMID: 38323835 DOI: 10.1002/mp.16976] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2023] [Revised: 01/22/2024] [Accepted: 01/25/2024] [Indexed: 02/08/2024] Open
Abstract
BACKGROUND MR-only radiotherapy treatment planning is an attractive alternative to conventional workflow, reducing scan time and ionizing radiation. It is crucial to derive the electron density map or synthetic CT (sCT) from MR data to perform dose calculations to enable MR-only treatment planning. Automatic segmentation of relevant organs in MR images can accelerate the process by preventing the time-consuming manual contouring step. However, the segmentation label is available only for CT data in many cases. PURPOSE We propose CycleSeg, a unified framework that generates sCT and corresponding segmentation from MR images without access to MR segmentation labels. METHODS CycleSeg utilizes the CycleGAN formulation to perform unpaired synthesis of sCT and image alignment. To enable MR (sCT) segmentation, CycleSeg incorporates unsupervised domain adaptation by using a pseudo-labeling approach with feature alignment in semantic segmentation space. In contrast to previous approaches that perform segmentation on MR data, CycleSeg could perform segmentation on both MR and sCT. Experiments were performed with data from prostate cancer patients, with 78/7/10 subjects in the training/validation/test sets, respectively. RESULTS CycleSeg showed the best sCT generation results, with the lowest mean absolute error of 102.2 and the lowest Fréchet inception distance of 13.0. CycleSeg also performed best on MR segmentation, with the highest average Dice score of 81.0 and 81.1 for MR and sCT segmentation, respectively. Ablation experiments confirmed the contribution of the proposed components of CycleSeg. CONCLUSION CycleSeg effectively synthesized CT and performed segmentation on MR images of prostate cancer patients. Thus, CycleSeg has the potential to expedite MR-only radiotherapy treatment planning, reducing the prescribed scans and manual segmentation effort, and increasing throughput.
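Since CycleSeg builds on the CycleGAN formulation, the fragment below sketches only the cycle-consistency term for the MR→sCT→MR and CT→sMR→CT reconstruction paths; the tiny stand-in generators and the loss weight are assumptions, and the adversarial and segmentation branches are omitted.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(g_mr2ct, g_ct2mr, mr, ct, weight=10.0):
    """L1 reconstruction error after a full translation cycle in both directions."""
    rec_mr = g_ct2mr(g_mr2ct(mr))   # MR -> sCT -> reconstructed MR
    rec_ct = g_mr2ct(g_ct2mr(ct))   # CT -> sMR -> reconstructed CT
    l1 = nn.L1Loss()
    return weight * (l1(rec_mr, mr) + l1(rec_ct, ct))

if __name__ == "__main__":
    # Tiny stand-in generators; real CycleGAN generators are ResNet/U-Net style.
    g_mr2ct = nn.Conv2d(1, 1, kernel_size=3, padding=1)
    g_ct2mr = nn.Conv2d(1, 1, kernel_size=3, padding=1)
    mr = torch.randn(2, 1, 64, 64)   # unpaired MR batch
    ct = torch.randn(2, 1, 64, 64)   # unpaired CT batch
    loss = cycle_consistency_loss(g_mr2ct, g_ct2mr, mr, ct)
    loss.backward()
    print(float(loss))
```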
Affiliation(s)
- Huan Minh Luu
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
| | - Gyu Sang Yoo
- Department of Radiation Oncology, Chungbuk National University Hospital, Cheongju, Republic of Korea
| | - Won Park
- Department of Radiation Oncology, Samsung Medical Center, Seoul, Republic of Korea
| | - Sung-Hong Park
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
21
Bao S, Guo J, Lee HH, Deng R, Cui C, Remedios LW, Liu Q, Yang Q, Xu K, Yu X, Li J, Li Y, Roland JT, Liu Q, Lau KS, Wilson KT, Coburn LA, Landman BA, Huo Y. MITIGATING OVER-SATURATED FLUORESCENCE IMAGES THROUGH A SEMI-SUPERVISED GENERATIVE ADVERSARIAL NETWORK. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2024; 2024:10.1109/isbi56570.2024.10635687. [PMID: 39867569 PMCID: PMC11756911 DOI: 10.1109/isbi56570.2024.10635687] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/28/2025]
Abstract
Multiplex immunofluorescence (MxIF) imaging is a critical tool in biomedical research, offering detailed insights into cell composition and spatial context. As an example, DAPI staining identifies cell nuclei, while CD20 staining helps segment cell membranes in MxIF. However, a persistent challenge in MxIF is saturation artifacts, which hinder single-cell level analysis in areas with over-saturated pixels. Traditional gamma correction methods for fixing saturation are limited, often incorrectly assuming uniform distribution of saturation, which is rarely the case in practice. This paper introduces a novel approach to correct saturation artifacts from a data-driven perspective. We introduce a two-stage, high-resolution hybrid generative adversarial network (HDmixGAN), which merges unpaired (CycleGAN) and paired (pix2pixHD) network architectures. This approach is designed to capitalize on the available small-scale paired data and the more extensive unpaired data from costly MxIF data. Specifically, we generate pseudo-paired data from large-scale unpaired over-saturated datasets with a CycleGAN, and train a Pix2pixGAN using both small-scale real and large-scale synthetic data derived from multiple DAPI staining rounds in MxIF. This method was validated against various baselines in a downstream nuclei detection task, improving the F1 score by 6% over the baseline. This is, to our knowledge, the first focused effort to address multi-round saturation in MxIF images, offering a specialized solution for enhancing cell analysis accuracy through improved image quality. The source code and implementation of the proposed method are available at https://github.com/MASILab/DAPIArtifactRemoval.git.
Affiliation(s)
- Shunxing Bao
- Department of Electrical and Computer Engineering, Nashville, TN, USA
| | - Junlin Guo
- Department of Electrical and Computer Engineering, Nashville, TN, USA
| | - Ho Hin Lee
- Department of Computer Science, Nashville, TN, USA
| | - Ruining Deng
- Department of Computer Science, Nashville, TN, USA
| | - Can Cui
- Department of Computer Science, Nashville, TN, USA
| | | | - Quan Liu
- Department of Computer Science, Nashville, TN, USA
| | - Qi Yang
- Department of Computer Science, Nashville, TN, USA
| | - Kaiwen Xu
- Department of Computer Science, Nashville, TN, USA
| | - Xin Yu
- Department of Computer Science, Nashville, TN, USA
| | - Jia Li
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Yike Li
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Joseph T Roland
- Department of Surgery, Vanderbilt University, Medical Center, Nashville, TN, USA
| | - Qi Liu
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Ken S Lau
- Department of Cell and Developmental Biology, Vanderbilt University School of Medicine, Nashville, TN, USA
| | - Keith T Wilson
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN, USA
| | - Lori A Coburn
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN, USA
| | - Bennett A Landman
- Department of Electrical and Computer Engineering, Nashville, TN, USA
- Department of Computer Science, Nashville, TN, USA
| | - Yuankai Huo
- Department of Electrical and Computer Engineering, Nashville, TN, USA
- Department of Computer Science, Nashville, TN, USA
22
Kumari S, Singh P. Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives. Comput Biol Med 2024; 170:107912. [PMID: 38219643 DOI: 10.1016/j.compbiomed.2023.107912] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2023] [Revised: 11/02/2023] [Accepted: 12/24/2023] [Indexed: 01/16/2024]
Abstract
Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
Affiliation(s)
- Suruchi Kumari
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
| | - Pravendra Singh
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
23
Yang X, Chin BB, Silosky M, Wehrend J, Litwiller DV, Ghosh D, Xing F. Learning Without Real Data Annotations to Detect Hepatic Lesions in PET Images. IEEE Trans Biomed Eng 2024; 71:679-688. [PMID: 37708016 DOI: 10.1109/tbme.2023.3315268] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/16/2023]
Abstract
OBJECTIVE Deep neural networks have been recently applied to lesion identification in fluorodeoxyglucose (FDG) positron emission tomography (PET) images, but they typically rely on a large amount of well-annotated data for model training. This is extremely difficult to achieve for neuroendocrine tumors (NETs), because of the low incidence of NETs and expensive lesion annotation in PET images. The objective of this study is to design a novel, adaptable deep learning method, which uses no real lesion annotations but instead low-cost, list mode-simulated data, for hepatic lesion detection in real-world clinical NET PET images. METHODS We first propose a region-guided generative adversarial network (RG-GAN) for lesion-preserved image-to-image translation. Then, we design a specific data augmentation module for our list-mode simulated data and incorporate this module into the RG-GAN to improve model training. Finally, we combine the RG-GAN, the data augmentation module and a lesion detection neural network into a unified framework for joint-task learning to adaptively identify lesions in real-world PET data. RESULTS The proposed method outperforms recent state-of-the-art lesion detection methods in real clinical 68Ga-DOTATATE PET images, and produces very competitive performance with the target model that is trained with real lesion annotations. CONCLUSION With RG-GAN modeling and specific data augmentation, we can obtain good lesion detection performance without using any real data annotations. SIGNIFICANCE This study introduces an adaptable deep learning method for hepatic lesion identification in NETs, which can significantly reduce human effort for data annotation and improve model generalizability for lesion detection with PET imaging.
24
Bao S, Lee HH, Yang Q, Remedios LW, Deng R, Cui C, Cai LY, Xu K, Yu X, Chiron S, Li Y, Patterson NH, Wang Y, Li J, Liu Q, Lau KS, Roland JT, Coburn LA, Wilson KT, Landman BA, Huo Y. Alleviating tiling effect by random walk sliding window in high-resolution histological whole slide image synthesis. PROCEEDINGS OF MACHINE LEARNING RESEARCH 2024; 227:1406-1422. [PMID: 38993526 PMCID: PMC11238901] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 07/13/2024]
Abstract
Multiplex immunofluorescence (MxIF) is an advanced molecular imaging technique that can simultaneously provide biologists with multiple (i.e., more than 20) molecular markers on a single histological tissue section. Unfortunately, due to imaging restrictions, the more routinely used hematoxylin and eosin (H&E) stain is typically unavailable with MxIF on the same tissue section. As biological H&E staining is not feasible, previous efforts have been made to obtain H&E whole slide image (WSI) from MxIF via deep learning empowered virtual staining. However, the tiling effect is a long-lasting problem in high-resolution WSI-wise synthesis. The MxIF to H&E synthesis is no exception. Limited by computational resources, the cross-stain image synthesis is typically performed at the patch-level. Thus, discontinuous intensities might be visually identified along with the patch boundaries assembling all individual patches back to a WSI. In this work, we propose a deep learning based unpaired high-resolution image synthesis method to obtain virtual H&E WSIs from MxIF WSIs (each with 27 markers/stains) with reduced tiling effects. Briefly, we first extend the CycleGAN framework by adding simultaneous nuclei and mucin segmentation supervision as spatial constraints. Then, we introduce a random walk sliding window shifting strategy during the optimized inference stage, to alleviate the tiling effects. The validation results show that our spatially constrained synthesis method achieves a 56% performance gain for the downstream cell segmentation task. The proposed inference method reduces the tiling effects by using 50% fewer computation resources without compromising performance. The proposed random sliding window inference method is a plug-and-play module, which can be generalized for other high-resolution WSI image synthesis applications. The source code with our proposed model are available at https://github.com/MASILab/RandomWalkSlidingWindow.git.
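To illustrate why shifting the sliding-window grid between inference passes can suppress tiling seams, here is a simplified NumPy stand-in for the random-walk idea: patch origins are jittered on every pass and overlapping predictions are averaged. The image size, patch size, shift range, and identity "model" are all demonstration-only assumptions, not the paper's implementation.

```python
import numpy as np

def shifted_grid(length, patch, max_shift, rng):
    """Patch start positions along one axis, jittered by a random offset."""
    starts = np.arange(0, length - patch + 1, patch)
    starts = np.clip(starts + rng.integers(-max_shift, max_shift + 1), 0, length - patch)
    return starts

def tiled_inference(image, model, patch=64, passes=4, max_shift=16, seed=0):
    """Average patch-wise predictions over several randomly shifted tilings."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    acc = np.zeros_like(image, dtype=np.float64)
    cnt = np.zeros_like(image, dtype=np.float64)
    for _ in range(passes):
        for y in shifted_grid(h, patch, max_shift, rng):
            for x in shifted_grid(w, patch, max_shift, rng):
                tile = image[y:y + patch, x:x + patch]
                acc[y:y + patch, x:x + patch] += model(tile)
                cnt[y:y + patch, x:x + patch] += 1.0
    cnt[cnt == 0] = 1.0   # pixels never covered keep a zero prediction
    return acc / cnt

if __name__ == "__main__":
    image = np.random.rand(256, 256).astype(np.float32)
    identity_model = lambda tile: tile   # placeholder for the synthesis network
    out = tiled_inference(image, identity_model)
    print(out.shape, float(abs(out - image).mean()))
```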
Affiliation(s)
- Shunxing Bao
- Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
| | - Ho Hin Lee
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Qi Yang
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Lucas W Remedios
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Ruining Deng
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Can Cui
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Leon Y Cai
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
| | - Kaiwen Xu
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Xin Yu
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Sophie Chiron
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Yike Li
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
| | | | - Yaohong Wang
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Jia Li
- Dept. of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Qi Liu
- Dept. of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
- Center for Quantitative Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Ken S Lau
- Center for Quantitative Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Epithelial Biology Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Dept. of Cell and Developmental Biology, Vanderbilt University School of Medicine, Nashville, TN, USA
| | - Joseph T Roland
- Epithelial Biology Center, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Lori A Coburn
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Center for Mucosal Inflammation and Cancer, Nashville, TN, USA
- Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN, USA
| | - Keith T Wilson
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Center for Mucosal Inflammation and Cancer, Nashville, TN, USA
- Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN, USA
- Program in Cancer Biology, Vanderbilt University School of Medicine, Nashville, TN, USA
| | - Bennett A Landman
- Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
| | - Yuankai Huo
- Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
25
Tiwary P, Bhattacharyya K, A P P. Cycle consistent twin energy-based models for image-to-image translation. Med Image Anal 2024; 91:103031. [PMID: 37988920 DOI: 10.1016/j.media.2023.103031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 09/10/2023] [Accepted: 11/13/2023] [Indexed: 11/23/2023]
Abstract
Domain shift refers to change of distributional characteristics between the training (source) and the testing (target) datasets of a learning task, leading to performance drop. For tasks involving medical images, domain shift may be caused because of several factors such as change in underlying imaging modalities, measuring devices and staining mechanisms. Recent approaches address this issue via generative models based on the principles of adversarial learning albeit they suffer from issues such as difficulty in training and lack of diversity. Motivated by the aforementioned observations, we adapt an alternative class of deep generative models called the Energy-Based Models (EBMs) for the task of unpaired image-to-image translation of medical images. Specifically, we propose a novel method called the Cycle Consistent Twin EBMs (CCT-EBM) which employs a pair of EBMs in the latent space of an Auto-Encoder trained on the source data. While one of the EBMs translates the source to the target domain the other does vice-versa along with a novel consistency loss, ensuring translation symmetry and coupling between the domains. We theoretically analyze the proposed method and show that our design leads to better translation between the domains with reduced langevin mixing steps. We demonstrate the efficacy of our method through detailed quantitative and qualitative experiments on image segmentation tasks on three different datasets vis-a-vis state-of-the-art methods.
Affiliation(s)
- Piyush Tiwary
- Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore, Karnataka 560012, India.
| | - Kinjawl Bhattacharyya
- Department of Electrical Engineering, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal 721302, India
| | - Prathosh A P
- Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore, Karnataka 560012, India
26
Xie Q, Li Y, He N, Ning M, Ma K, Wang G, Lian Y, Zheng Y. Unsupervised Domain Adaptation for Medical Image Segmentation by Disentanglement Learning and Self-Training. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:4-14. [PMID: 35853072 DOI: 10.1109/tmi.2022.3192303] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Unsupervised domain adaption (UDA), which aims to enhance the segmentation performance of deep models on unlabeled data, has recently drawn much attention. In this paper, we propose a novel UDA method (namely DLaST) for medical image segmentation via disentanglement learning and self-training. Disentanglement learning factorizes an image into domain-invariant anatomy and domain-specific modality components. To make the best of disentanglement learning, we propose a novel shape constraint to boost the adaptation performance. The self-training strategy further adaptively improves the segmentation performance of the model for the target domain through adversarial learning and pseudo label, which implicitly facilitates feature alignment in the anatomy space. Experimental results demonstrate that the proposed method outperforms the state-of-the-art UDA methods for medical image segmentation on three public datasets, i.e., a cardiac dataset, an abdominal dataset and a brain dataset. The code will be released soon.
27
Baldeon-Calisto M, Lai-Yuen SK, Puente-Mejia B. StAC-DA: Structure aware cross-modality domain adaptation framework with image and feature-level adaptation for medical image segmentation. Digit Health 2024; 10:20552076241277440. [PMID: 39229464 PMCID: PMC11369866 DOI: 10.1177/20552076241277440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2024] [Accepted: 08/06/2024] [Indexed: 09/05/2024] Open
Abstract
Objective Convolutional neural networks (CNNs) have achieved state-of-the-art results in various medical image segmentation tasks. However, CNNs often assume that the source and target dataset follow the same probability distribution and when this assumption is not satisfied their performance degrades significantly. This poses a limitation in medical image analysis, where including information from different imaging modalities can bring large clinical benefits. In this work, we present an unsupervised Structure Aware Cross-modality Domain Adaptation (StAC-DA) framework for medical image segmentation. Methods StAC-DA implements an image- and feature-level adaptation in a sequential two-step approach. The first step performs an image-level alignment, where images from the source domain are translated to the target domain in pixel space by implementing a CycleGAN-based model. The latter model includes a structure-aware network that preserves the shape of the anatomical structure during translation. The second step consists of a feature-level alignment. A U-Net network with deep supervision is trained with the transformed source domain images and target domain images in an adversarial manner to produce probable segmentations for the target domain. Results The framework is evaluated on bidirectional cardiac substructure segmentation. StAC-DA outperforms leading unsupervised domain adaptation approaches, being ranked first in the segmentation of the ascending aorta when adapting from Magnetic Resonance Imaging (MRI) to Computed Tomography (CT) domain and from CT to MRI domain. Conclusions The presented framework overcomes the limitations posed by differing distributions in training and testing datasets. Moreover, the experimental results highlight its potential to improve the accuracy of medical image segmentation across diverse imaging modalities.
Affiliation(s)
- Maria Baldeon-Calisto
- Departamento de Ingeniería Industrial, Colegio de Ciencias e Ingeniería, Instituto de Innovación en Productividad y Logística CATENA-USFQ, Universidad San Francisco de Quito, Quito, Ecuador
| | - Susana K. Lai-Yuen
- Department of Industrial and Management Systems, University of South Florida, Tampa, FL, USA
| | - Bernardo Puente-Mejia
- Departamento de Ingeniería Industrial, Colegio de Ciencias e Ingeniería, Instituto de Innovación en Productividad y Logística CATENA-USFQ, Universidad San Francisco de Quito, Quito, Ecuador
28
Liu S, Yin S, Qu L, Wang M, Song Z. A Structure-Aware Framework of Unsupervised Cross-Modality Domain Adaptation via Frequency and Spatial Knowledge Distillation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3919-3931. [PMID: 37738201 DOI: 10.1109/tmi.2023.3318006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/24/2023]
Abstract
Unsupervised domain adaptation (UDA) aims to train a model on a labeled source domain and adapt it to an unlabeled target domain. In the medical image segmentation field, most existing UDA methods rely on adversarial learning to address the domain gap between different image modalities. However, this process is complicated and inefficient. In this paper, we propose a simple yet effective UDA method based on both frequency and spatial domain transfer under a multi-teacher distillation framework. In the frequency domain, we introduce non-subsampled contourlet transform for identifying domain-invariant and domain-variant frequency components (DIFs and DVFs) and replace the DVFs of the source domain images with those of the target domain images while keeping the DIFs unchanged to narrow the domain gap. In the spatial domain, we propose a batch momentum update-based histogram matching strategy to minimize the domain-variant image style bias. Additionally, we further propose a dual contrastive learning module at both image and pixel levels to learn structure-related information. Our proposed method outperforms state-of-the-art methods on two cross-modality medical image segmentation datasets (cardiac and abdominal). Code is available at https://github.com/slliuEric/FSUDA.
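The paper identifies and swaps domain-variant frequency components with a non-subsampled contourlet transform; the snippet below conveys the same intuition with a much simpler Fourier amplitude swap (in the spirit of Fourier domain adaptation), so the transform, band size, and random images are assumptions rather than the authors' method.

```python
import numpy as np

def swap_low_freq_amplitude(src, tgt, beta=0.1):
    """Give a source image the low-frequency amplitude spectrum of a target image.

    src, tgt: 2D float arrays of identical shape. beta controls the size of the
    centered low-frequency band that is replaced (0 < beta < 0.5).
    """
    fft_src = np.fft.fftshift(np.fft.fft2(src))
    fft_tgt = np.fft.fftshift(np.fft.fft2(tgt))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    h, w = src.shape
    bh, bw = int(h * beta), int(w * beta)
    cy, cx = h // 2, w // 2
    # Replace only the central (low-frequency) amplitude block, keep the source phase.
    amp_src[cy - bh:cy + bh, cx - bw:cx + bw] = amp_tgt[cy - bh:cy + bh, cx - bw:cx + bw]

    mixed = amp_src * np.exp(1j * pha_src)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source_slice = rng.random((128, 128))   # stand-in for a source-modality slice
    target_slice = rng.random((128, 128))   # stand-in for a target-modality slice
    stylized = swap_low_freq_amplitude(source_slice, target_slice)
    print(stylized.shape, stylized.dtype)
```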
29
Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023; 90:102969. [PMID: 37802010 DOI: 10.1016/j.media.2023.102969] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 08/16/2023] [Accepted: 09/11/2023] [Indexed: 10/08/2023]
Abstract
Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation, where (unlabeled) target training data is limited and previous work seldom delves into cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets, which are acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance when compared with the reference baseline, and it is superior to or on par with fully supervised models that are trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA.
| | - Xinyi Yang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
| | - Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
| | - Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
30
Chen H, Wang R, Wang X, Li J, Fang Q, Li H, Bai J, Peng Q, Meng D, Wang L. Unsupervised Local Discrimination for Medical Images. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2023; 45:15912-15929. [PMID: 37494162 DOI: 10.1109/tpami.2023.3299038] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/28/2023]
Abstract
Contrastive learning, which aims to capture general representation from unlabeled images to initialize the medical analysis models, has been proven effective in alleviating the high demand for expensive annotations. Current methods mainly focus on instance-wise comparisons to learn the global discriminative features, however, pretermitting the local details to distinguish tiny anatomical structures, lesions, and tissues. To address this challenge, in this paper, we propose a general unsupervised representation learning framework, named local discrimination (LD), to learn local discriminative features for medical images by closely embedding semantically similar pixels and identifying regions of similar structures across different images. Specifically, this model is equipped with an embedding module for pixel-wise embedding and a clustering module for generating segmentation. And these two modules are unified by optimizing our novel region discrimination loss function in a mutually beneficial mechanism, which enables our model to reflect structure information as well as measure pixel-wise and region-wise similarity. Furthermore, based on LD, we propose a center-sensitive one-shot landmark localization algorithm and a shape-guided cross-modality segmentation model to foster the generalizability of our model. When transferred to downstream tasks, the learned representation by our method shows a better generalization, outperforming representation from 18 state-of-the-art (SOTA) methods and winning 9 out of all 12 downstream tasks. Especially for the challenging lesion segmentation tasks, the proposed method achieves significantly better performance.
31
Li X, Wang L, Liu H, Ma B, Chu L, Dong X, Zeng D, Che T, Jiang X, Wang W, Hu J, Li S. Syn_SegNet: A Joint Deep Neural Network for Ultrahigh-Field 7T MRI Synthesis and Hippocampal Subfield Segmentation in Routine 3T MRI. IEEE J Biomed Health Inform 2023; 27:4866-4877. [PMID: 37581964 DOI: 10.1109/jbhi.2023.3305377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/17/2023]
Abstract
Precise delineation of hippocampus subfields is crucial for the identification and management of various neurological and psychiatric disorders. However, segmenting these subfields automatically in routine 3T MRI is challenging due to their complex morphology and small size, as well as the limited signal contrast and resolution of the 3T images. This research proposes Syn_SegNet, an end-to-end, multitask joint deep neural network that leverages ultrahigh-field 7T MRI synthesis to improve hippocampal subfield segmentation in 3T MRI. Our approach involves two key components. First, we employ a modified Pix2PixGAN as the synthesis model, incorporating self-attention modules, image and feature matching loss, and ROI loss to generate high-quality 7T-like MRI around the hippocampal region. Second, we utilize a variant of 3D-U-Net with multiscale deep supervision as the segmentation subnetwork, incorporating an anatomic weighted cross-entropy loss that capitalizes on prior anatomical knowledge. We evaluate our method on hippocampal subfield segmentation in paired 3T MRI and 7T MRI with seven different anatomical structures. The experimental findings demonstrate that Syn_SegNet's segmentation performance benefits from integrating synthetic 7T data in an online manner and is superior to competing methods. Furthermore, we assess the generalizability of the proposed approach using a publicly accessible 3T MRI dataset. The developed method would be an efficient tool for segmenting hippocampal subfields in routine clinical 3T MRI.
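One ingredient named above, an anatomically weighted cross-entropy that up-weights small hippocampal subfields, can be sketched as follows; the number of classes and the weight values are hypothetical placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical per-class weights: background gets a low weight, small subfields higher ones.
class_weights = torch.tensor([0.1, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0])

criterion = nn.CrossEntropyLoss(weight=class_weights)

if __name__ == "__main__":
    logits = torch.randn(2, 8, 32, 32, 32, requires_grad=True)   # (N, C, D, H, W) network output
    labels = torch.randint(0, 8, (2, 32, 32, 32))                # voxel-wise subfield labels
    loss = criterion(logits, labels)
    loss.backward()
    print(float(loss))
```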
32
Wang R, Zhou Q, Zheng G. EDRL: Entropy-guided disentangled representation learning for unsupervised domain adaptation in semantic segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 240:107729. [PMID: 37531690 DOI: 10.1016/j.cmpb.2023.107729] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Revised: 07/15/2023] [Accepted: 07/19/2023] [Indexed: 08/04/2023]
Abstract
BACKGROUND AND OBJECTIVE Deep learning-based approaches are excellent at learning from large amounts of data, but can be poor at generalizing the learned knowledge to testing datasets with domain shift, i.e., when there exists distribution discrepancy between the training dataset (source domain) and the testing dataset (target domain). In this paper, we investigate unsupervised domain adaptation (UDA) techniques to train a cross-domain segmentation method which is robust to domain shift, eliminating the requirement of any annotations on the target domain. METHODS To this end, we propose an Entropy-guided Disentangled Representation Learning, referred as EDRL, for UDA in semantic segmentation. Concretely, we synergistically integrate image alignment via disentangled representation learning with feature alignment via entropy-based adversarial learning into one network, which is trained end-to-end. We additionally introduce a dynamic feature selection mechanism via soft gating, which helps to further enhance the task-specific feature alignment. We validate the proposed method on two publicly available datasets: the CT-MR dataset and the multi-sequence cardiac MR (MS-CMR) dataset. RESULTS On both datasets, our method achieved better results than the state-of-the-art (SOTA) methods. Specifically, on the CT-MR dataset, our method achieved an average DSC of 84.8% when taking CT as the source domain and MR as the target domain, and an average DSC of 84.0% when taking MR as the source domain and CT as the target domain. CONCLUSIONS Results from comprehensive experiments demonstrate the efficacy of the proposed EDRL model for cross-domain medical image segmentation.
Affiliation(s)
- Runze Wang
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
| | - Qin Zhou
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
| | - Guoyan Zheng
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China.
33
Su Z, Yao K, Yang X, Wang Q, Yan Y, Sun J, Huang K. Mind the Gap: Alleviating Local Imbalance for Unsupervised Cross-Modality Medical Image Segmentation. IEEE J Biomed Health Inform 2023; 27:3396-3407. [PMID: 37134027 DOI: 10.1109/jbhi.2023.3270434] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Unsupervised cross-modality medical image adaptation aims to alleviate the severe domain gap between different imaging modalities without using the target domain label. A key in this campaign relies upon aligning the distributions of source and target domain. One common attempt is to enforce the global alignment between two domains, which, however, ignores the fatal local-imbalance domain gap problem, i.e., some local features with larger domain gap are harder to transfer. Recently, some methods conduct alignment focusing on local regions to improve the efficiency of model learning. While this operation may cause a deficiency of critical information from contexts. To tackle this limitation, we propose a novel strategy to alleviate the domain gap imbalance considering the characteristics of medical images, namely Global-Local Union Alignment. Specifically, a feature-disentanglement style-transfer module first synthesizes the target-like source images to reduce the global domain gap. Then, a local feature mask is integrated to reduce the 'inter-gap' for local features by prioritizing those discriminative features with larger domain gap. This combination of global and local alignment can precisely localize the crucial regions in segmentation target while preserving the overall semantic consistency. We conduct a series of experiments with two cross-modality adaptation tasks, i,e. cardiac substructure and abdominal multi-organ segmentation. Experimental results indicate that our method achieves state-of-the-art performance in both tasks.
34
Zhu J, Ye J, Dong L, Ma X, Tang N, Xu P, Jin W, Li R, Yang G, Lai X. Non-invasive prediction of overall survival time for glioblastoma multiforme patients based on multimodal MRI radiomics. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 2023; 33:1261-1274. [PMID: 38505467 PMCID: PMC10946632 DOI: 10.1002/ima.22869] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Revised: 02/08/2023] [Accepted: 02/23/2023] [Indexed: 03/21/2024]
Abstract
Glioblastoma multiforme (GBM) is the most common and deadly primary malignant brain tumor. As GBM tumor is aggressive and shows high biological heterogeneity, the overall survival (OS) time is extremely low even with the most aggressive treatment. If the OS time can be predicted before surgery, developing personalized treatment plans for GBM patients will be beneficial. Magnetic resonance imaging (MRI) is a commonly used diagnostic tool for brain tumors with high-resolution and sound imaging effects. However, in clinical practice, doctors mainly rely on manually segmenting the tumor regions in MRI and predicting the OS time of GBM patients, which is time-consuming, subjective and repetitive, limiting the effectiveness of clinical diagnosis and treatment. Therefore, it is crucial to segment the brain tumor regions in MRI, and an accurate pre-operative prediction of OS time for personalized treatment is highly desired. In this study, we present a multimodal MRI radiomics-based automatic framework for non-invasive prediction of the OS time for GBM patients. A modified 3D-UNet model is built to segment tumor subregions in MRI of GBM patients; then, the radiomic features in the tumor subregions are extracted and combined with the clinical features input into the Support Vector Regression (SVR) model to predict the OS time. In the experiments, the BraTS2020, BraTS2019 and BraTS2018 datasets are used to evaluate our framework. Our model achieves competitive OS time prediction accuracy compared to most typical approaches.
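The final regression stage of such a pipeline, predicting OS time from radiomic and clinical features with an SVR, can be sketched with scikit-learn as below; the synthetic feature matrix and hyperparameters are assumptions, so this is a schematic of the SVR step rather than the study's model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in: rows = patients, columns = radiomic + clinical features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = 300 + 40 * X[:, 0] - 25 * X[:, 1] + rng.normal(scale=20, size=200)   # OS time in days

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Standardize the features, then fit an RBF-kernel support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=5.0))
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("mean absolute error (days):", float(np.mean(np.abs(pred - y_test))))
```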
Affiliation(s)
- Jingyu Zhu
- Department of Urology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, Hangzhou, China
| | - Jianming Ye
- First Affiliated Hospital, Gannan Medical University, Ganzhou, China
| | - Leshui Dong
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
| | - Xiaofei Ma
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
| | - Na Tang
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
| | - Peng Xu
- The Third Affiliated Hospital, Zhejiang Chinese Medical University, Hangzhou, China
| | - Wei Jin
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
| | - Ruipeng Li
- Department of Urology, Hangzhou Third People's Hospital, Hangzhou, China
| | - Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, UK
- National Heart and Lung Institute, Imperial College London, London, UK
| | - Xiaobo Lai
- Department of Urology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, Hangzhou, China
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
| |
|
35
|
Yang Q, Yu X, Lee HH, Cai LY, Xu K, Bao S, Huo Y, Moore AZ, Makrogiannis S, Ferrucci L, Landman BA. Single slice thigh CT muscle group segmentation with domain adaptation and self-training. J Med Imaging (Bellingham) 2023; 10:044001. [PMID: 37448597 PMCID: PMC10336322 DOI: 10.1117/1.jmi.10.4.044001] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Revised: 06/09/2023] [Accepted: 06/20/2023] [Indexed: 07/15/2023] Open
Abstract
Purpose Thigh muscle group segmentation is important for assessing muscle anatomy, metabolic disease, and aging. Many efforts have been put into quantifying muscle tissues with magnetic resonance (MR) imaging, including manual annotation of individual muscles. However, leveraging publicly available annotations in MR images to achieve muscle group segmentation on single-slice computed tomography (CT) thigh images is challenging. Approach We propose an unsupervised domain adaptation pipeline with self-training to transfer labels from three-dimensional MR to single CT slices. First, we transform the image appearance from MR to CT with CycleGAN and simultaneously feed the synthesized CT images to a segmenter. Single CT slices are divided into hard and easy cohorts based on the entropy of the pseudo-labels predicted by the segmenter. After refining the easy-cohort pseudo-labels based on anatomical assumptions, self-training with the easy and hard splits is applied to fine-tune the segmenter. Results On 152 withheld single CT thigh images, the proposed pipeline achieved a mean Dice of 0.888 (0.041) across all muscle groups, including the gracilis, hamstrings, quadriceps femoris, and sartorius muscles. Conclusions To the best of our knowledge, this is the first pipeline to achieve domain adaptation from MR to CT for thigh images. The proposed pipeline effectively and robustly extracts muscle groups from two-dimensional single-slice CT thigh images. The container is publicly available in a GitHub repository at: https://github.com/MASILab/DA_CT_muscle_seg.
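A small sketch of the entropy-based easy/hard cohort split described in the Approach; the median-threshold rule here is an illustrative assumption rather than the published criterion.

import torch

def split_by_entropy(prob_maps):
    """prob_maps: (N, C, H, W) softmax outputs of the segmenter on target CT slices."""
    entropy = -(prob_maps * torch.log(prob_maps + 1e-8)).sum(dim=1)   # (N, H, W)
    per_image = entropy.mean(dim=(1, 2))                              # mean entropy per slice
    thresh = per_image.median()                                       # assumed split rule
    easy_idx = torch.where(per_image <= thresh)[0]
    hard_idx = torch.where(per_image > thresh)[0]
    return easy_idx, hard_idx

probs = torch.softmax(torch.randn(10, 5, 64, 64), dim=1)
easy, hard = split_by_entropy(probs)
print(len(easy), "easy slices,", len(hard), "hard slices")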
Affiliation(s)
- Qi Yang
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
| | - Xin Yu
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
| | - Ho Hin Lee
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
| | - Leon Y. Cai
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
| | - Kaiwen Xu
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
| | - Shunxing Bao
- Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
| | - Yuankai Huo
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
| | - Ann Zenobia Moore
- National Institute on Aging, NIH, Translational Gerontology Branch, Baltimore, Maryland, United States
| | | | - Luigi Ferrucci
- National Institute on Aging, NIH, Translational Gerontology Branch, Baltimore, Maryland, United States
| | - Bennett A. Landman
- Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
| |
|
36
|
Wang W, Yu X, Fang B, Zhao Y, Chen Y, Wei W, Chen J. Cross-Modality LGE-CMR Segmentation Using Image-to-Image Translation Based Data Augmentation. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:2367-2375. [PMID: 34982688 DOI: 10.1109/tcbb.2022.3140306] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Accurate segmentation of the ventricle and myocardium from late gadolinium enhancement (LGE) cardiac magnetic resonance (CMR) is an important tool for myocardial infarction (MI) analysis. However, the complex enhancement pattern of LGE-CMR and the lack of labeled samples make its automatic segmentation difficult to implement. In this paper, we propose an unsupervised LGE-CMR segmentation algorithm that uses multiple style transfer networks for data augmentation. It adopts two different style transfer networks to perform style transfer of the easily available, annotated balanced Steady-State Free Precession (bSSFP) CMR images. Multiple sets of synthetic LGE-CMR images are then generated by the style transfer networks and used as training data for an improved U-Net. The entire implementation of the algorithm does not require labeled LGE-CMR data. Validation experiments demonstrate the effectiveness and advantages of the proposed algorithm.
|
37
|
Chen Z, Pan Y, Xia Y. Reconstruction-Driven Dynamic Refinement Based Unsupervised Domain Adaptation for Joint Optic Disc and Cup Segmentation. IEEE J Biomed Health Inform 2023; 27:3537-3548. [PMID: 37043317 DOI: 10.1109/jbhi.2023.3266576] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/13/2023]
Abstract
Glaucoma is one of the leading causes of irreversible blindness. Segmentation of the optic disc (OD) and optic cup (OC) on fundus images is a crucial step in glaucoma screening. Although many deep learning models have been constructed for this task, it remains challenging to train an OD/OC segmentation model that can be deployed successfully across different healthcare centers. The difficulty mainly comes from the domain shift issue, i.e., the fundus images collected at these centers usually vary greatly in tone, contrast, and brightness. To address this issue, in this paper, we propose a novel unsupervised domain adaptation (UDA) method called Reconstruction-driven Dynamic Refinement Network (RDR-Net), where we employ a dual-path segmentation backbone for simultaneous edge detection and region prediction and design three modules to alleviate the domain gap. The reconstruction alignment (RA) module uses a variational auto-encoder (VAE) to reconstruct the input image and thus boosts the image representation ability of the network in a self-supervised way. It also uses a style-consistency constraint to force the network to retain more domain-invariant information. The low-level feature refinement (LFR) module employs input-specific dynamic convolutions to suppress the domain-variant information in the obtained low-level features. The prediction-map alignment (PMA) module uses entropy-driven adversarial learning to encourage the network to generate source-like boundaries and regions. We evaluated our RDR-Net against state-of-the-art solutions on four public fundus image datasets. Our results indicate that RDR-Net is superior to competing models in both segmentation performance and generalization ability.
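The reconstruction-alignment idea can be sketched as a small VAE head attached to a shared encoder, trained with a reconstruction plus KL term; the architecture sizes and weights below are illustrative assumptions, not the RDR-Net implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAEHead(nn.Module):
    def __init__(self, in_ch=64, z_dim=32):
        super().__init__()
        self.mu = nn.Conv2d(in_ch, z_dim, 1)
        self.logvar = nn.Conv2d(in_ch, z_dim, 1)
        self.dec = nn.Sequential(nn.Conv2d(z_dim, 64, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(64, 3, 3, padding=1))
    def forward(self, feat):
        mu, logvar = self.mu(feat), self.logvar(feat)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.dec(z), mu, logvar

def reconstruction_alignment_loss(vae_head, feat, image, beta=1e-3):
    recon, mu, logvar = vae_head(feat)
    recon = F.interpolate(recon, size=image.shape[-2:], mode="bilinear", align_corners=False)
    rec = F.l1_loss(recon, image)                                  # self-supervised reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp()) # standard VAE KL term
    return rec + beta * kl

head = TinyVAEHead()
feat = torch.randn(2, 64, 32, 32)    # stand-in encoder features
img = torch.randn(2, 3, 128, 128)    # stand-in fundus images
print(reconstruction_alignment_loss(head, feat, img))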
|
38
|
Billot B, Greve DN, Puonti O, Thielscher A, Van Leemput K, Fischl B, Dalca AV, Iglesias JE. SynthSeg: Segmentation of brain MRI scans of any contrast and resolution without retraining. Med Image Anal 2023; 86:102789. [PMID: 36857946 PMCID: PMC10154424 DOI: 10.1016/j.media.2023.102789] [Citation(s) in RCA: 154] [Impact Index Per Article: 77.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2022] [Revised: 01/20/2023] [Accepted: 02/22/2023] [Indexed: 03/03/2023]
Abstract
Despite advances in data augmentation and transfer learning, convolutional neural networks (CNNs) still generalise poorly to unseen domains. When segmenting brain scans, CNNs are highly sensitive to changes in resolution and contrast: even within the same MRI modality, performance can decrease across datasets. Here we introduce SynthSeg, the first segmentation CNN robust against changes in contrast and resolution. SynthSeg is trained with synthetic data sampled from a generative model conditioned on segmentations. Crucially, we adopt a domain randomisation strategy where we fully randomise the contrast and resolution of the synthetic training data. Consequently, SynthSeg can segment real scans from a wide range of target domains without retraining or fine-tuning, which enables straightforward analysis of huge amounts of heterogeneous clinical data. Because SynthSeg requires only segmentations for training (no images), it can learn from labels obtained by automated methods on diverse populations (e.g., ageing and diseased), thus achieving robustness to a wide range of morphological variability. We demonstrate SynthSeg on 5,000 scans of six modalities (including CT) and ten resolutions, where it exhibits unparalleled generalisation compared with supervised CNNs, state-of-the-art domain adaptation, and Bayesian segmentation. Finally, we demonstrate the generalisability of SynthSeg by applying it to cardiac MRI and CT scans.
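A toy version of the domain-randomised generation step (sample a random intensity per label, then random blur and resolution) might look as follows; the parameter ranges and the simple per-label Gaussian model are placeholders rather than the published SynthSeg settings.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def synth_image_from_labels(labels, rng):
    """labels: integer label map (H, W); returns one randomised synthetic image."""
    image = np.zeros(labels.shape, dtype=np.float32)
    for lab in np.unique(labels):
        mean, std = rng.uniform(0, 255), rng.uniform(1, 25)        # random intensity per label
        image[labels == lab] = rng.normal(mean, std, size=(labels == lab).sum())
    image = gaussian_filter(image, sigma=rng.uniform(0.5, 1.5))    # random blur (contrast/PSF proxy)
    factor = rng.uniform(0.25, 1.0)                                # random acquisition resolution
    low_res = zoom(image, factor, order=1)
    return zoom(low_res, np.array(image.shape) / np.array(low_res.shape), order=1)

rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=(96, 96))
print(synth_image_from_labels(labels, rng).shape)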
Affiliation(s)
- Benjamin Billot
- Centre for Medical Image Computing, University College London, UK.
| | - Douglas N Greve
- Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
| | - Oula Puonti
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital, Denmark
| | - Axel Thielscher
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital, Denmark; Department of Health Technology, Technical University of Denmark
| | - Koen Van Leemput
- Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA; Department of Health Technology, Technical University of Denmark
| | - Bruce Fischl
- Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA; Program in Health Sciences and Technology, Massachusetts Institute of Technology, USA
| | - Adrian V Dalca
- Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
| | - Juan Eugenio Iglesias
- Centre for Medical Image Computing, University College London, UK; Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
| |
|
39
|
Yu Z, Han X, Zhang S, Feng J, Peng T, Zhang XY. MouseGAN++: Unsupervised Disentanglement and Contrastive Representation for Multiple MRI Modalities Synthesis and Structural Segmentation of Mouse Brain. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1197-1209. [PMID: 36449589 DOI: 10.1109/tmi.2022.3225528] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Segmenting the fine structure of the mouse brain on magnetic resonance (MR) images is critical for delineating morphological regions, analyzing brain function, and understanding their relationships. Compared to a single MRI modality, multimodal MRI data provide complementary tissue features that can be exploited by deep learning models, resulting in better segmentation results. However, multimodal mouse brain MRI data are often lacking, making automatic segmentation of mouse brain fine structure a very challenging task. To address this issue, it is necessary to fuse multimodal MRI data to produce distinguishable contrasts in different brain structures. Hence, we propose a novel disentangled and contrastive GAN-based framework, named MouseGAN++, to synthesize multiple MR modalities from single ones in a structure-preserving manner, thus improving segmentation performance by imputing missing modalities and fusing multi-modality information. Our results demonstrate that the translation performance of our method outperforms state-of-the-art methods. Using the subsequently learned modality-invariant information as well as the modality-translated images, MouseGAN++ can segment fine brain structures with average Dice coefficients of 90.0% (T2w) and 87.9% (T1w), achieving around +10% performance improvement compared to state-of-the-art algorithms. Our results demonstrate that MouseGAN++, as a simultaneous image synthesis and segmentation method, can be used to fuse cross-modality information in an unpaired manner and yield more robust performance in the absence of multimodal data. We release our method as a mouse brain structural segmentation tool for free academic usage at https://github.com/yu02019.
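One ingredient, the contrastive representation of modality-invariant content, can be illustrated with an NT-Xent-style loss that pulls content codes of the same subject in different modalities together and pushes different subjects apart. This is a generic sketch under assumptions, not the MouseGAN++ code.

import torch
import torch.nn.functional as F

def nt_xent(content_t1, content_t2, temperature=0.1):
    """content_t1, content_t2: (B, D) content codes of the same subjects in T1w / T2w."""
    z = F.normalize(torch.cat([content_t1, content_t2], dim=0), dim=1)   # (2B, D)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))                                    # ignore self-similarity
    b = content_t1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])    # cross-modality positives
    return F.cross_entropy(sim, targets)

print(nt_xent(torch.randn(8, 128), torch.randn(8, 128)))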
|
40
|
Zhao Z, Zhou F, Xu K, Zeng Z, Guan C, Zhou SK. LE-UDA: Label-Efficient Unsupervised Domain Adaptation for Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:633-646. [PMID: 36227829 DOI: 10.1109/tmi.2022.3214766] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
While deep learning methods hitherto have achieved considerable success in medical image segmentation, they are still hampered by two limitations: (i) reliance on large-scale well-labeled datasets, which are difficult to curate due to the expert-driven and time-consuming nature of pixel-level annotations in clinical practices, and (ii) failure to generalize from one domain to another, especially when the target domain is a different modality with severe domain shifts. Recent unsupervised domain adaptation (UDA) techniques leverage abundant labeled source data together with unlabeled target data to reduce the domain gap, but these methods degrade significantly with limited source annotations. In this study, we address this underexplored UDA problem, investigating a challenging but valuable realistic scenario, where the source domain not only exhibits domain shift w.r.t. the target domain but also suffers from label scarcity. In this regard, we propose a novel and generic framework called "Label-Efficient Unsupervised Domain Adaptation" (LE-UDA). In LE-UDA, we construct self-ensembling consistency for knowledge transfer between both domains, as well as a self-ensembling adversarial learning module to achieve better feature alignment for UDA. To assess the effectiveness of our method, we conduct extensive experiments on two different tasks for cross-modality segmentation between MRI and CT images. Experimental results demonstrate that the proposed LE-UDA can efficiently leverage limited source labels to improve cross-domain segmentation performance, outperforming state-of-the-art UDA approaches in the literature.
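The self-ensembling consistency component can be sketched as a mean-teacher setup: an EMA teacher supplies targets for a consistency loss on unlabeled images alongside the supervised loss on the scarce source labels. The stand-in network, perturbation, and loss weights below are illustrative assumptions, not the LE-UDA implementation.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Conv2d(1, 2, 3, padding=1)          # stand-in for a segmentation network
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)
opt = torch.optim.SGD(student.parameters(), lr=1e-3)

def ema_update(teacher, student, momentum=0.99):
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.data.mul_(momentum).add_(ps.data, alpha=1 - momentum)

labeled_x = torch.randn(2, 1, 64, 64)
labels = torch.randint(0, 2, (2, 64, 64))
unlabeled_x = torch.randn(2, 1, 64, 64)

sup = F.cross_entropy(student(labeled_x), labels)                        # few labeled source images
with torch.no_grad():
    noisy = unlabeled_x + 0.05 * torch.randn_like(unlabeled_x)           # perturbed teacher input
    teacher_prob = torch.softmax(teacher(noisy), dim=1)
cons = F.mse_loss(torch.softmax(student(unlabeled_x), dim=1), teacher_prob)
loss = sup + 0.1 * cons
loss.backward()
opt.step()
ema_update(teacher, student)
print(float(loss))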
|
41
|
Chen J, Chen S, Wee L, Dekker A, Bermejo I. Deep learning based unpaired image-to-image translation applications for medical physics: a systematic review. Phys Med Biol 2023; 68. [PMID: 36753766 DOI: 10.1088/1361-6560/acba74] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 02/08/2023] [Indexed: 02/10/2023]
Abstract
Purpose. There is a growing number of publications on the application of unpaired image-to-image (I2I) translation in medical imaging. However, a systematic review covering the current state of this topic for medical physicists is lacking. The aim of this article is to provide a comprehensive review of current challenges and opportunities for medical physicists and engineers to apply I2I translation in practice. Methods and materials. The PubMed electronic database was searched using terms referring to unpaired (unsupervised) I2I translation and medical imaging. This review has been reported in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. From each full-text article, we extracted information regarding the technical and clinical applications of the methods, the Transparent Reporting for Individual Prognosis Or Diagnosis (TRIPOD) study type, algorithm performance, and the accessibility of source code and pre-trained models. Results. Among 461 unique records, 55 full-text articles were included in the review. The major technical applications described in the selected literature are segmentation (26 studies), unpaired domain adaptation (18 studies), and denoising (8 studies). In terms of clinical applications, unpaired I2I translation has been used for automatic contouring of regions of interest in MRI, CT, x-ray and ultrasound images, fast MRI or low-dose CT imaging, CT- or MRI-only based radiotherapy planning, etc. Only 5 studies validated their models using an independent test set and none were externally validated by independent researchers. Finally, 12 articles published their source code and only one study published their pre-trained models. Conclusion. I2I translation of medical images offers a range of valuable applications for medical physicists. However, the scarcity of external validation studies of I2I models and the shortage of publicly available pre-trained models limit the immediate applicability of the proposed methods in practice.
Affiliation(s)
- Junhua Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| | - Shenlun Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| | - Leonard Wee
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| | - Andre Dekker
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| | - Inigo Bermejo
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| |
|
42
|
Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599 DOI: 10.1016/j.compbiomed.2022.106496] [Citation(s) in RCA: 34] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 12/06/2022] [Accepted: 12/27/2022] [Indexed: 12/29/2022]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are constructed for a single specific task, multi-task deep learning (MTDL), which is capable of accomplishing at least two tasks simultaneously, has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits in improving performance, enhancing generalizability, and reducing the overall computational cost. This review focuses on the advanced applications of MTDL for medical image computing and analysis. We first summarize four popular MTDL network architectures (i.e., cascaded, parallel, interacted, and hybrid). Then, we review representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing has been flourishing and demonstrating outstanding performance in many tasks, performance gaps remain in some tasks, and accordingly we discuss the open challenges and prospective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the reported top Dice score of 0.51 and top recall of 0.55 achieved by the cascaded MTDL model indicate that further research efforts are in high demand to improve the performance of current models.
Affiliation(s)
- Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia.
| | - Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
| | - Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China.
| |
|
43
|
Dimitriadis A, Trivizakis E, Papanikolaou N, Tsiknakis M, Marias K. Enhancing cancer differentiation with synthetic MRI examinations via generative models: a systematic review. Insights Imaging 2022; 13:188. [PMID: 36503979 PMCID: PMC9742072 DOI: 10.1186/s13244-022-01315-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Accepted: 07/24/2022] [Indexed: 12/14/2022] Open
Abstract
Contemporary deep learning-based decision systems are well known for requiring high-volume datasets in order to produce generalized, reliable, and high-performing models. However, the collection of such datasets is challenging, requiring time-consuming processes that also involve expert clinicians with limited time. In addition, data collection often raises ethical and legal issues and depends on costly and invasive procedures. Deep generative models such as generative adversarial networks and variational autoencoders can capture the underlying distribution of the examined data, allowing them to create new and unique instances of samples. This study aims to shed light on generative data augmentation techniques and corresponding best practices. Through in-depth investigation, we underline the limitations and potential methodological pitfalls from a critical standpoint and aim to promote open science research by identifying publicly available open-source repositories and datasets.
Affiliation(s)
- Avtantil Dimitriadis
- Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
- Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71410 Heraklion, Greece
| | - Eleftherios Trivizakis
- Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
- Medical School, University of Crete, 71003 Heraklion, Greece
| | - Nikolaos Papanikolaou
- Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
- Computational Clinical Imaging Group, Centre of the Unknown, Champalimaud Foundation, 1400-038 Lisbon, Portugal
- The Royal Marsden NHS Foundation Trust, The Institute of Cancer Research, London, UK
| | - Manolis Tsiknakis
- Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
- Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71410 Heraklion, Greece
| | - Kostas Marias
- Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
- Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71410 Heraklion, Greece
| |
|
44
|
Yang H, Chen C, Jiang M, Liu Q, Cao J, Heng PA, Dou Q. DLTTA: Dynamic Learning Rate for Test-Time Adaptation on Cross-Domain Medical Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3575-3586. [PMID: 35839185 DOI: 10.1109/tmi.2022.3191535] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Test-time adaptation (TTA) has increasingly become an important topic for efficiently tackling the cross-domain distribution shift at test time for medical images from different institutions. Previous TTA methods share a common limitation of using a fixed learning rate for all test samples. Such a practice is sub-optimal for TTA, because test data may arrive sequentially, and therefore the scale of the distribution shift can change frequently. To address this problem, we propose a novel dynamic learning rate adjustment method for test-time adaptation, called DLTTA, which dynamically modulates the amount of weight update for each test image to account for the differences in their distribution shift. Specifically, our DLTTA is equipped with a memory bank based estimation scheme to effectively measure the discrepancy of a given test sample. Based on this estimated discrepancy, a dynamic learning rate adjustment strategy is then developed to achieve a suitable degree of adaptation for each test sample. The effectiveness and general applicability of our DLTTA are extensively demonstrated on three tasks including retinal optical coherence tomography (OCT) segmentation, histopathological image classification, and prostate 3D MRI segmentation. Our method achieves effective and fast test-time adaptation with consistent performance improvement over current state-of-the-art test-time adaptation methods. Code is available at https://github.com/med-air/DLTTA.
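A compact sketch of the dynamic learning-rate idea: scale the per-sample update by a discrepancy between the current test feature and a memory bank of recent test features. The cosine-based discrepancy and scaling rule below are assumptions, not the published DLTTA formulation.

import torch
import torch.nn.functional as F

memory_bank = []

def dynamic_lr(feat, base_lr=1e-3, max_scale=5.0):
    """feat: (D,) pooled feature of the current test image; returns a per-sample learning rate."""
    if not memory_bank:
        memory_bank.append(feat.detach())
        return base_lr
    bank = torch.stack(memory_bank)
    # Discrepancy = 1 - max cosine similarity to previously seen test features.
    disc = 1.0 - F.cosine_similarity(feat.unsqueeze(0), bank).max()
    memory_bank.append(feat.detach())
    return base_lr * float(1.0 + max_scale * disc)

for _ in range(3):
    print(dynamic_lr(torch.randn(256)))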
|
45
|
A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. FUTURE INTERNET 2022. [DOI: 10.3390/fi14120351] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving as a popular radiological tool in clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field besides augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized dummy images in brain disease diagnosis. Web of Science and Scopus databases were extensively searched to find relevant studies from the last 6 years to write this systematic literature review (SLR). Predefined inclusion and exclusion criteria helped in filtering the search results. Data extraction is based on related research questions (RQ). This SLR identifies various loss functions used in the above applications and software to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps choose the proper metric for an application. GAN-synthesized images will have a crucial role in the clinical sector in the coming years, and this paper gives a baseline for other researchers in the field.
|
46
|
Zhang F, Li S, Deng J. Unsupervised Domain Adaptation with Shape Constraint and Triple Attention for Joint Optic Disc and Cup Segmentation. SENSORS (BASEL, SWITZERLAND) 2022; 22:8748. [PMID: 36433345 PMCID: PMC9695107 DOI: 10.3390/s22228748] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Revised: 11/06/2022] [Accepted: 11/09/2022] [Indexed: 06/16/2023]
Abstract
Glaucoma has become an important cause of blindness. Although glaucoma cannot currently be cured, early treatment can prevent it from getting worse. A reliable way to detect glaucoma is to segment the optic disc and cup and then measure the cup-to-disc ratio (CDR). Many deep neural network models have been developed to autonomously segment the optic disc and the optic cup to aid diagnosis. However, their performance degrades when subjected to domain shift. While many domain-adaptation methods have been exploited to address this problem, they are apt to produce malformed segmentation results. In this study, it is suggested that the segmentation network be adjusted using a constrained formulation that embeds domain-invariant prior knowledge about the shape of the segmentation regions. Based on IOSUDA (i.e., Input and Output Space Unsupervised Domain Adaptation), a novel unsupervised joint optic disc and cup segmentation framework with shape constraints is proposed, called SCUDA (short for Shape-Constrained Unsupervised Domain Adaptation). A shape-constrained loss function is newly proposed that utilizes domain-invariant prior knowledge about the joint optic cup and optic disc region of fundus images to constrain the segmentation result during network training. In addition, a convolutional triple attention module is designed to improve the segmentation network, capturing cross-dimensional interactions and providing a rich feature representation to improve segmentation accuracy. Experiments on the RIM-ONE_r3 and Drishti-GS datasets demonstrate that the algorithm outperforms existing approaches for segmenting optic discs and cups.
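One simple, domain-invariant shape prior of the kind described above is that the optic cup lies inside the optic disc; a penalty on cup probability falling outside the predicted disc can be sketched as follows. This is an illustrative term, not the paper's exact shape-constrained loss.

import torch

def cup_inside_disc_penalty(disc_prob, cup_prob):
    """disc_prob, cup_prob: (B, 1, H, W) sigmoid outputs of the segmenter."""
    outside_disc = 1.0 - disc_prob
    # Cup probability mass placed outside the disc is penalized.
    return (cup_prob * outside_disc).mean()

disc = torch.sigmoid(torch.randn(2, 1, 128, 128))
cup = torch.sigmoid(torch.randn(2, 1, 128, 128))
print(cup_inside_disc_penalty(disc, cup))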
|
47
|
Xu L, Zhu S, Wen N. Deep reinforcement learning and its applications in medical imaging and radiation therapy: a survey. Phys Med Biol 2022; 67. [PMID: 36270582 DOI: 10.1088/1361-6560/ac9cb3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Accepted: 10/21/2022] [Indexed: 11/07/2022]
Abstract
Reinforcement learning takes sequential decision-making approaches by learning the policy through trial and error based on interaction with the environment. Combining deep learning and reinforcement learning can empower the agent to learn the interactions and the distribution of rewards from state-action pairs to achieve effective and efficient solutions in more complex and dynamic environments. Deep reinforcement learning (DRL) has demonstrated astonishing performance in surpassing the human-level performance in the game domain and many other simulated environments. This paper introduces the basics of reinforcement learning and reviews various categories of DRL algorithms and DRL models developed for medical image analysis and radiation treatment planning optimization. We will also discuss the current challenges of DRL and approaches proposed to make DRL more generalizable and robust in a real-world environment. DRL algorithms, by fostering the designs of the reward function, agents interactions and environment models, can resolve the challenges from scarce and heterogeneous annotated medical image data, which has been a major obstacle to implementing deep learning models in the clinic. DRL is an active research area with enormous potential to improve deep learning applications in medical imaging and radiation therapy planning.
Affiliation(s)
- Lanyu Xu
- Department of Computer Science and Engineering, Oakland University, Rochester, MI, United States of America
| | - Simeng Zhu
- Department of Radiation Oncology, Henry Ford Health Systems, Detroit, MI, United States of America
| | - Ning Wen
- Department of Radiology/The Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, People's Republic of China; The Global Institute of Future Technology, Shanghai Jiaotong University, Shanghai, People's Republic of China
| |
|
48
|
Li J, Qu Z, Yang Y, Zhang F, Li M, Hu S. TCGAN: a transformer-enhanced GAN for PET synthetic CT. BIOMEDICAL OPTICS EXPRESS 2022; 13:6003-6018. [PMID: 36733758 PMCID: PMC9872870 DOI: 10.1364/boe.467683] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 08/06/2022] [Accepted: 10/05/2022] [Indexed: 06/18/2023]
Abstract
Multimodal medical images can be used in a multifaceted approach to resolve a wide range of medical diagnostic problems. However, these images are generally difficult to obtain due to various limitations, such as the cost of capture and patient safety. Medical image synthesis is used in various tasks to obtain better results. Recently, various studies have attempted to use generative adversarial networks for missing-modality image synthesis, making good progress. In this study, we propose a generator based on a combination of a transformer network and a convolutional neural network (CNN). The proposed method can combine the advantages of transformers and CNNs to produce better detail. The network is designed for positron emission tomography (PET)-to-computed tomography (CT) synthesis, which can be used for PET attenuation correction. We also experimented on two datasets for magnetic resonance T1- to T2-weighted image synthesis. Based on qualitative and quantitative analyses, our proposed method outperforms existing methods.
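The hybrid generator idea (a CNN encoder/decoder with a transformer bottleneck over patch tokens, combining local detail with global context) can be sketched as below; the layer sizes are illustrative and this is not the TCGAN architecture itself.

import torch
import torch.nn as nn

class HybridGenerator(nn.Module):
    def __init__(self, ch=64, depth=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 4, 2, 1), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 4, 2, 1), nn.ReLU())
        layer = nn.TransformerEncoderLayer(d_model=ch, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.dec = nn.Sequential(nn.ConvTranspose2d(ch, ch, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(ch, 1, 4, 2, 1), nn.Tanh())
    def forward(self, x):
        f = self.enc(x)                               # (B, C, H/4, W/4) local CNN features
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)         # (B, HW, C) patch tokens
        tokens = self.transformer(tokens)             # global self-attention
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.dec(f)

pet = torch.randn(1, 1, 64, 64)                       # stand-in PET slice
print(HybridGenerator()(pet).shape)                   # synthetic CT, (1, 1, 64, 64)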
Affiliation(s)
- Jitao Li
- College of Information Science and Engineering, Linyi University, Linyi, 276000, China
- College of Chemistry and Chemical Engineering, Linyi University, Linyi, 276000, China
- These authors contributed equally
| | - Zongjin Qu
- College of Chemistry and Chemical Engineering, Linyi University, Linyi, 276000, China
- These authors contributed equally
| | - Yue Yang
- College of Information Science and Engineering, Linyi University, Linyi, 276000, China
| | - Fuchun Zhang
- College of Information Science and Engineering, Linyi University, Linyi, 276000, China
| | - Meng Li
- College of Information Science and Engineering, Linyi University, Linyi, 276000, China
| | - Shunbo Hu
- College of Information Science and Engineering, Linyi University, Linyi, 276000, China
| |
|
49
|
Chen X, Kuang T, Deng H, Fung SH, Gateno J, Xia JJ, Yap PT. Dual Adversarial Attention Mechanism for Unsupervised Domain Adaptive Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3445-3453. [PMID: 35759585 PMCID: PMC9748599 DOI: 10.1109/tmi.2022.3186698] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Domain adaptation techniques have been demonstrated to be effective in addressing label deficiency challenges in medical image segmentation. However, conventional domain adaptation based approaches often concentrate on matching global marginal distributions between different domains in a class-agnostic fashion. In this paper, we present a dual-attention domain-adaptive segmentation network (DADASeg-Net) for cross-modality medical image segmentation. The key contribution of DADASeg-Net is a novel dual adversarial attention mechanism, which regularizes the domain adaptation module with two attention maps, from the spatial and class perspectives respectively. Specifically, the spatial attention map guides the domain adaptation module to focus on regions that are challenging to align during adaptation. The class attention map encourages the domain adaptation module to capture class-specific rather than class-agnostic knowledge for distribution alignment. DADASeg-Net shows superior performance in two challenging medical image segmentation tasks.
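A hedged sketch of attention-weighted adversarial alignment: here a spatial attention map derived from prediction entropy re-weights a patch discriminator's per-pixel loss, which is one plausible instantiation of the spatial-attention component, not the authors' exact mechanism.

import torch
import torch.nn.functional as F

def weighted_adversarial_loss(disc_logits, seg_probs, target_is_source=True):
    """disc_logits: (B, 1, H, W) patch-discriminator output on target predictions.
    seg_probs: (B, C, H, W) softmax segmentation of the same target images."""
    entropy = -(seg_probs * torch.log(seg_probs + 1e-8)).sum(dim=1, keepdim=True)
    attn = entropy / (entropy.amax(dim=(2, 3), keepdim=True) + 1e-8)      # spatial attention in [0, 1]
    labels = torch.full_like(disc_logits, 1.0 if target_is_source else 0.0)
    per_pixel = F.binary_cross_entropy_with_logits(disc_logits, labels, reduction="none")
    return (attn * per_pixel).mean()                                      # hard regions weigh more

logits = torch.randn(2, 1, 32, 32)
probs = torch.softmax(torch.randn(2, 4, 32, 32), dim=1)
print(weighted_adversarial_loss(logits, probs))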
|
50
|
Jafari M, Francis S, Garibaldi JM, Chen X. LMISA: A lightweight multi-modality image segmentation network via domain adaptation using gradient magnitude and shape constraint. Med Image Anal 2022; 81:102536. [PMID: 35870297 DOI: 10.1016/j.media.2022.102536] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 04/26/2022] [Accepted: 07/11/2022] [Indexed: 11/20/2022]
Abstract
In medical image segmentation, supervised machine learning models trained using one image modality (e.g. computed tomography (CT)) are often prone to failure when applied to another image modality (e.g. magnetic resonance imaging (MRI)) even for the same organ. This is due to the significant intensity variations of different image modalities. In this paper, we propose a novel end-to-end deep neural network to achieve multi-modality image segmentation, where image labels of only one modality (source domain) are available for model training and the image labels for the other modality (target domain) are not available. In our method, a multi-resolution locally normalized gradient magnitude approach is firstly applied to images of both domains for minimizing the intensity discrepancy. Subsequently, a dual task encoder-decoder network including image segmentation and reconstruction is utilized to effectively adapt a segmentation network to the unlabeled target domain. Additionally, a shape constraint is imposed by leveraging adversarial learning. Finally, images from the target domain are segmented, as the network learns a consistent latent feature representation with shape awareness from both domains. We implement both 2D and 3D versions of our method, in which we evaluate CT and MRI images for kidney and cardiac tissue segmentation. For kidney, a public CT dataset (KiTS19, MICCAI 2019) and a local MRI dataset were utilized. The cardiac dataset was from the Multi-Modality Whole Heart Segmentation (MMWHS) challenge 2017. Experimental results reveal that our proposed method achieves significantly higher performance with a much lower model complexity in comparison with other state-of-the-art methods. More importantly, our method is also capable of producing superior segmentation results than other methods for images of an unseen target domain without model retraining. The code is available at GitHub (https://github.com/MinaJf/LMISA) to encourage method comparison and further research.
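The locally normalized gradient-magnitude preprocessing can be approximated as follows: Sobel gradients give a magnitude map, which is divided by a local average so CT and MRI slices become more comparable in appearance. The window size and normalization details are assumptions, and a single resolution is shown rather than the multi-resolution scheme.

import torch
import torch.nn.functional as F

def local_norm_grad_mag(img, win=15, eps=1e-6):
    """img: (B, 1, H, W) tensor of either CT or MRI slices."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)                                   # Sobel kernels in x and y
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2 + eps)                 # gradient magnitude
    local_mean = F.avg_pool2d(mag, win, stride=1, padding=win // 2)
    return mag / (local_mean + eps)                           # local normalization

slice_ = torch.randn(1, 1, 128, 128)
print(local_norm_grad_mag(slice_).shape)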
Affiliation(s)
- Mina Jafari
- Intelligent Modeling and Analysis Group, School of Computer Science, University of Nottingham, UK.
| | - Susan Francis
- The Sir Peter Mansfield Imaging Centre, University of Nottingham, UK
| | - Jonathan M Garibaldi
- Intelligent Modeling and Analysis Group, School of Computer Science, University of Nottingham, UK
| | - Xin Chen
- Intelligent Modeling and Analysis Group, School of Computer Science, University of Nottingham, UK.
| |
|