1. Zhao S, Sun Q, Yang J, Yuan Y, Huang Y, Li Z. Structure preservation constraints for unsupervised domain adaptation intracranial vessel segmentation. Med Biol Eng Comput 2025; 63:609-627. [PMID: 39432222] [DOI: 10.1007/s11517-024-03195-9]
Abstract
Unsupervised domain adaptation (UDA) has received interest as a means of alleviating the burden of data annotation. Nevertheless, existing UDA segmentation methods suffer performance degradation on fine intracranial vessel segmentation tasks because of structure mismatch in the image synthesis procedure. To improve image synthesis quality and segmentation performance, a novel UDA segmentation method with structure preservation, named StruP-Net, is proposed. StruP-Net employs adversarial learning for image synthesis and uses two domain-specific segmentation networks to enhance the semantic consistency between real and synthesized images. Additionally, two distinct structure preservation approaches are proposed to alleviate structure mismatch during image synthesis: feature-level structure preservation (F-SP) and image-level structure preservation (I-SP). The F-SP, composed of two domain-specific graph convolutional networks (GCNs), provides feature-level constraints that enhance the structural similarity between real and synthesized images, while the I-SP imposes structure-similarity constraints based on a perceptual loss. Cross-modality experiments from magnetic resonance angiography (MRA) images to computed tomography angiography (CTA) images indicate that StruP-Net achieves better segmentation performance than other state-of-the-art methods. Furthermore, its high inference efficiency demonstrates the clinical application potential of StruP-Net. The code is available at https://github.com/Mayoiuta/StruP-Net.
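The I-SP term constrains structural similarity through a perceptual loss, i.e., a distance computed between feature maps of the real and synthesized images rather than between raw pixels. A minimal numpy sketch of that idea, substituting a fixed Sobel edge-filter bank for the pretrained feature extractor a perceptual loss would normally use (an illustrative stand-in, not the paper's implementation — see the linked repository for the actual code):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2-D valid cross-correlation for a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Fixed edge filters standing in for pretrained network features (illustrative).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def structure_loss(real, synth):
    """Mean squared distance between edge-feature maps of two images."""
    loss = 0.0
    for k in (SOBEL_X, SOBEL_Y):
        loss += np.mean((conv2d_valid(real, k) - conv2d_valid(synth, k)) ** 2)
    return loss / 2.0
```

In practice the feature maps would come from a pretrained encoder, but the loss has the same shape: extract features from both images and penalize their distance, so structural (edge) disagreement is punished even when pixel intensities differ.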
Affiliation(s)
- Sizhe Zhao
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Qi Sun
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Yuliang Yuan
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Yan Huang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Zhiqing Li
- The First Affiliated Hospital of China Medical University, Shenyang, Liaoning, China
2. Chen W, Ye Q, Guo L, Wu Q. Unsupervised cross-modality domain adaptation via source-domain labels guided contrastive learning for medical image segmentation. Med Biol Eng Comput 2025:10.1007/s11517-025-03312-2. [PMID: 39939403] [DOI: 10.1007/s11517-025-03312-2]
Abstract
Unsupervised domain adaptation (UDA) offers a promising way to improve discriminative performance on a target domain: models leverage knowledge from the labeled source domain to adjust to the feature distribution of the target domain. This paper proposes a unified domain adaptation framework that performs cross-modality medical image segmentation from two perspectives: image and feature. For image alignment, the loss function of Fourier-based Contrastive Style Augmentation (FCSA) is fine-tuned to increase the impact of style changes and thereby improve robustness. For feature alignment, a module called Source-domain Labels Guided Contrastive Learning (SLGCL) is designed to encourage the target domain to align the features of each class with the corresponding source-domain features. In addition, a generative adversarial network is incorporated to keep the spatial layout and local context consistent in the generated image space. To the best of our knowledge, this is the first attempt to use source-domain class intensity information to guide target-domain class intensity information for feature alignment in an unsupervised domain adaptation setting. Extensive experiments on a public whole-heart image segmentation task demonstrate that the proposed method outperforms state-of-the-art UDA methods for medical image segmentation.
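FCSA builds on Fourier-domain style manipulation: in the frequency domain, the low-frequency amplitude spectrum largely carries image style while the phase carries structure. A minimal sketch of that underlying operation, assuming single-channel images of equal size (the `beta` window size is an illustrative parameter, not taken from the paper):

```python
import numpy as np

def fourier_style_mix(content_img, style_img, beta=0.1):
    """Replace the low-frequency amplitude of `content_img` with that of
    `style_img` while keeping the phase (structure) of `content_img`.

    `beta` sets the half-width of the swapped low-frequency square as a
    fraction of the image size (illustrative choice).
    """
    f_c = np.fft.fftshift(np.fft.fft2(content_img))
    f_s = np.fft.fftshift(np.fft.fft2(style_img))
    amp_c, phase_c = np.abs(f_c), np.angle(f_c)
    amp_s = np.abs(f_s)

    h, w = content_img.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    # Swap only the centred low-frequency block, which carries image style.
    amp_c[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1] = \
        amp_s[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1]

    mixed = np.fft.ifft2(np.fft.ifftshift(amp_c * np.exp(1j * phase_c)))
    return np.real(mixed)
```

Augmenting source images with target-like styles this way exposes the segmentation network to the target appearance without altering anatomical structure, which is what makes it useful as a style augmentation for UDA.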
Affiliation(s)
- Wenshuang Chen
- School of Electronic and Information Engineering, South China University of Technology, Wushan Road 381, Guangzhou, Guangdong, 510641, China
- Qi Ye
- School of Electronic and Information Engineering, South China University of Technology, Wushan Road 381, Guangzhou, Guangdong, 510641, China
- Lihua Guo
- School of Electronic and Information Engineering, South China University of Technology, Wushan Road 381, Guangzhou, Guangdong, 510641, China
- Qi Wu
- School of Electronic and Information Engineering, South China University of Technology, Wushan Road 381, Guangzhou, Guangdong, 510641, China
3. Kumari S, Singh P. Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives. Comput Biol Med 2024; 170:107912. [PMID: 38219643] [DOI: 10.1016/j.compbiomed.2023.107912]
Abstract
Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
Affiliation(s)
- Suruchi Kumari
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India
- Pravendra Singh
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India
4. Liu S, Yin S, Qu L, Wang M, Song Z. A Structure-Aware Framework of Unsupervised Cross-Modality Domain Adaptation via Frequency and Spatial Knowledge Distillation. IEEE Trans Med Imaging 2023; 42:3919-3931. [PMID: 37738201] [DOI: 10.1109/tmi.2023.3318006]
Abstract
Unsupervised domain adaptation (UDA) aims to train a model on a labeled source domain and adapt it to an unlabeled target domain. In the medical image segmentation field, most existing UDA methods rely on adversarial learning to address the domain gap between different image modalities. However, this process is complicated and inefficient. In this paper, we propose a simple yet effective UDA method based on both frequency- and spatial-domain transfer under a multi-teacher distillation framework. In the frequency domain, we introduce the non-subsampled contourlet transform to identify domain-invariant and domain-variant frequency components (DIFs and DVFs), and we replace the DVFs of the source-domain images with those of the target-domain images while keeping the DIFs unchanged to narrow the domain gap. In the spatial domain, we propose a batch momentum update-based histogram matching strategy to minimize the domain-variant image style bias. Additionally, we propose a dual contrastive learning module at both the image and pixel levels to learn structure-related information. Our proposed method outperforms state-of-the-art methods on two cross-modality medical image segmentation datasets (cardiac and abdominal). Code is available at https://github.com/slliuEric/FSUDA.
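The spatial-domain transfer rests on classic histogram matching; the paper's contribution is the batch momentum update layered on top of it. A sketch of the plain matching step that such a strategy refines (illustrative, not the authors' implementation):

```python
import numpy as np

def histogram_match(source, reference):
    """Map `source` intensities so their empirical distribution matches
    that of `reference`.

    Each source pixel is ranked, and its value is replaced by the
    reference value at the corresponding quantile.
    """
    src = source.ravel()
    ref = np.sort(reference.ravel())
    order = np.argsort(src)                      # rank of each source pixel
    quantiles = np.linspace(0, len(ref) - 1, len(src)).astype(int)
    matched = np.empty_like(src, dtype=float)
    matched[order] = ref[quantiles]              # rank -> reference quantile
    return matched.reshape(source.shape)
```

After matching, the source image keeps its spatial layout but takes on the target domain's intensity statistics, which reduces the style component of the domain gap before segmentation training.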
5. Su Z, Yao K, Yang X, Wang Q, Yan Y, Sun J, Huang K. Mind the Gap: Alleviating Local Imbalance for Unsupervised Cross-Modality Medical Image Segmentation. IEEE J Biomed Health Inform 2023; 27:3396-3407. [PMID: 37134027] [DOI: 10.1109/jbhi.2023.3270434]
Abstract
Unsupervised cross-modality medical image adaptation aims to alleviate the severe domain gap between different imaging modalities without using target-domain labels. A key step is aligning the distributions of the source and target domains. One common approach enforces global alignment between the two domains, but this ignores the problem of local imbalance in the domain gap: local features with a larger domain gap are harder to transfer. Recently, some methods have focused alignment on local regions to improve learning efficiency, but this can discard critical contextual information. To address this limitation, we propose a novel strategy, Global-Local Union Alignment, that alleviates the domain gap imbalance while accounting for the characteristics of medical images. Specifically, a feature-disentanglement style-transfer module first synthesizes target-like source images to reduce the global domain gap. A local feature mask is then integrated to reduce the 'inter-gap' for local features by prioritizing discriminative features with a larger domain gap. This combination of global and local alignment precisely localizes the crucial regions of the segmentation target while preserving overall semantic consistency. We conduct experiments on two cross-modality adaptation tasks, i.e., cardiac substructure and abdominal multi-organ segmentation. Experimental results indicate that our method achieves state-of-the-art performance on both tasks.
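One way to read the "prioritize local features with a larger domain gap" idea is as a spatial weighting of the alignment loss by per-location feature discrepancy. The sketch below is an illustrative interpretation of that reading, not the paper's exact mask formulation:

```python
import numpy as np

def local_gap_mask(feat_src, feat_tgt, eps=1e-8):
    """Per-location weights proportional to the source/target feature
    discrepancy, so alignment can focus on harder local regions.

    Features are (C, H, W); the returned mask is (H, W) and sums to ~1.
    """
    gap = np.linalg.norm(feat_src - feat_tgt, axis=0)  # L2 gap per location
    return gap / (gap.sum() + eps)

def weighted_alignment_loss(feat_src, feat_tgt):
    """Alignment loss in which larger-gap locations receive larger weight."""
    mask = local_gap_mask(feat_src, feat_tgt)
    sq = np.sum((feat_src - feat_tgt) ** 2, axis=0)
    return float(np.sum(mask * sq))
```

The design choice is the one the abstract argues for: rather than spending alignment capacity uniformly, locations where the two domains already agree contribute little, and the hard-to-transfer regions dominate the loss.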
6. Chen J, Chen S, Wee L, Dekker A, Bermejo I. Deep learning based unpaired image-to-image translation applications for medical physics: a systematic review. Phys Med Biol 2023; 68. [PMID: 36753766] [DOI: 10.1088/1361-6560/acba74]
Abstract
Purpose. There is a growing number of publications on the application of unpaired image-to-image (I2I) translation in medical imaging. However, a systematic review covering the current state of this topic for medical physicists is lacking. The aim of this article is to provide a comprehensive review of current challenges and opportunities for medical physicists and engineers to apply I2I translation in practice. Methods and materials. The PubMed electronic database was searched using terms referring to unpaired (unsupervised) I2I translation and medical imaging. This review has been reported in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. From each full-text article, we extracted information regarding technical and clinical applications of the methods, Transparent Reporting for Individual Prognosis Or Diagnosis (TRIPOD) study type, algorithm performance, and the accessibility of source code and pre-trained models. Results. Among 461 unique records, 55 full-text articles were included in the review. The major technical applications described in the selected literature are segmentation (26 studies), unpaired domain adaptation (18 studies), and denoising (8 studies). In terms of clinical applications, unpaired I2I translation has been used for automatic contouring of regions of interest in MRI, CT, x-ray, and ultrasound images; fast MRI or low-dose CT imaging; CT- or MRI-only radiotherapy planning; etc. Only 5 studies validated their models using an independent test set, and none were externally validated by independent researchers. Finally, 12 articles published their source code and only one study published its pre-trained models. Conclusion. I2I translation of medical images offers a range of valuable applications for medical physicists. However, the scarcity of external validation studies of I2I models and the shortage of publicly available pre-trained models limit the immediate applicability of the proposed methods in practice.
Affiliation(s)
- Junhua Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Shenlun Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Leonard Wee
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Andre Dekker
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Inigo Bermejo
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
7. Improving myocardial pathology segmentation with U-Net++ and EfficientSeg from multi-sequence cardiac magnetic resonance images. Comput Biol Med 2022; 151:106218. [PMID: 36308898] [DOI: 10.1016/j.compbiomed.2022.106218]
Abstract
BACKGROUND Myocardial pathology segmentation plays a crucial role in the diagnosis and treatment of myocardial infarction (MI). However, manual segmentation is time-consuming and labor-intensive, and requires substantial professional knowledge and clinical experience. METHODS In this work, we develop an automatic and accurate coarse-to-fine myocardial pathology segmentation framework based on the U-Net++ and EfficientSeg models. The U-Net++ network with deep supervision is first applied to delineate the cardiac structures from multi-sequence cardiac magnetic resonance (CMR) images and generate a coarse segmentation map. The coarse segmentation map, together with the three-sequence CMR data, is then sent to EfficientSeg-B1 for fine segmentation, i.e., further segmentation of the myocardial scar and edema areas. In addition, we replace the original cross-entropy term with the Focal loss to encourage the model to pay more attention to the pathological areas. RESULTS The proposed segmentation approach is tested on the public Myocardial Pathology Segmentation Challenge (MyoPS 2020) dataset. Experimental results demonstrate that our solution achieves an average Dice score of 0.7148 ± 0.2213 for scar, an average Dice score of 0.7439 ± 0.1011 for edema + scar, and a final average score of 0.7294 on the 20 test sets, all of which outperform the first-place method in the competition. Moreover, extensive ablation experiments show that the two-stage strategy with Focal loss greatly improves the segmentation quality of pathological areas. CONCLUSION Given its effectiveness and superiority, our method can further facilitate myocardial pathology segmentation in medical practice.
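The Focal loss substitution described above is a standard technique: it multiplies the cross-entropy by a modulating factor (1 - p_t)^gamma so that well-classified (easy) pixels contribute less and training concentrates on hard pathological regions. A minimal binary sketch (gamma = 2 is a common default; the abstract does not state the paper's exact hyperparameters):

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, eps=1e-7):
    """Binary focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t).

    `probs` are predicted foreground probabilities, `targets` are 0/1
    labels. With gamma = 0 this reduces to ordinary cross-entropy.
    """
    probs = np.clip(probs, eps, 1 - eps)          # numerical stability
    p_t = np.where(targets == 1, probs, 1 - probs)  # prob of the true class
    return float(np.mean(-((1 - p_t) ** gamma) * np.log(p_t)))
```

Because the modulating factor shrinks toward zero as p_t approaches 1, confident correct predictions on the dominant background barely move the loss, which is why it suits the small, hard scar and edema regions.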