1
Liu Y, Wang L, Ning X, Gao Y, Wang D. Enhancing unsupervised learning in medical image registration through scale-aware context aggregation. iScience 2025;28:111734. PMID: 39898031; PMCID: PMC11787544; DOI: 10.1016/j.isci.2024.111734.
Abstract
Deformable image registration (DIR) is essential for medical image analysis, as it establishes dense correspondences between images so that complex deformations can be analyzed. Traditional registration algorithms often require significant computational resources due to iterative optimization, while deep learning approaches struggle to handle diverse deformation complexities and task requirements. We introduce ScaMorph, an unsupervised learning model for DIR that employs scale-aware context aggregation, integrating multiscale mixed convolution with lightweight multiscale context fusion. The model effectively combines convolutional networks and vision transformers and addresses a variety of registration tasks. We also present diffeomorphic variants of ScaMorph that preserve the topology of the deformation field. Extensive experiments on 3D medical images across five applications (atlas-to-patient and inter-patient brain magnetic resonance imaging (MRI) registration, inter-modal brain MRI registration, inter-patient liver computed tomography (CT) registration, and inter-modal abdomen MRI-CT registration) demonstrate that our model significantly outperforms existing methods, highlighting its effectiveness and its broader implications for medical image registration.
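The core operation every DIR model shares, this one included, is resampling the moving image through a predicted dense displacement field. A minimal sketch of that warping step (NumPy/SciPy; the field layout and interpolation order are illustrative assumptions, not details of ScaMorph):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, displacement):
    """Resample a 3D moving image at positions shifted by a dense
    displacement field (one voxel-offset volume per spatial axis)."""
    grid = np.indices(moving.shape, dtype=float)  # identity sampling grid
    coords = grid + displacement                  # deformed sample positions
    # Trilinear interpolation at the deformed positions
    return map_coordinates(moving, coords, order=1, mode="nearest")

# Sanity check: a zero field must return the image unchanged
img = np.random.default_rng(1).random((8, 8, 8))
zero_field = np.zeros((3, 8, 8, 8))
assert np.allclose(warp(img, zero_field), img)
```

A registration network would predict `displacement` and be trained so that `warp(moving, displacement)` matches the fixed image under a similarity loss.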
Affiliation(s)
- Yuchen Liu
  - School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191, China
- Ling Wang
  - Institute of Large-Scale Scientific Facility and Centre for Zero Magnetic Field Science, Beihang University, Beijing 100191, China
- Xiaolin Ning
  - School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191, China
  - Institute of Large-Scale Scientific Facility and Centre for Zero Magnetic Field Science, Beihang University, Beijing 100191, China
  - Hefei National Laboratory, Hefei 230000, China
- Yang Gao
  - School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191, China
  - Institute of Large-Scale Scientific Facility and Centre for Zero Magnetic Field Science, Beihang University, Beijing 100191, China
  - Hefei National Laboratory, Hefei 230000, China
- Defeng Wang
  - School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191, China
2
Bierbrier J, Gueziri HE, Collins DL. Estimating medical image registration error and confidence: a taxonomy and scoping review. Med Image Anal 2022;81:102531. PMID: 35858506; DOI: 10.1016/j.media.2022.102531.
Abstract
Given that image registration is a fundamental and ubiquitous task in both clinical and research domains of medicine, registration errors can have serious consequences. Because such errors can mislead clinicians during image-guided therapies or bias the results of downstream analyses, methods that estimate registration error are becoming more popular. To give structure to this new, heterogeneous field, we developed a taxonomy and performed a scoping review of methods that automatically provide a dense, quantitative estimate of registration error. The taxonomy breaks error-estimation methods down into Approach (image- or transformation-based), Framework (machine learning or direct), and Measurement (error or confidence) components. Following the PRISMA guidelines for scoping reviews, the 570 records found were reduced to twenty studies that met the inclusion criteria, which were then reviewed according to the proposed taxonomy. Trends in the field, advantages and disadvantages of the methods, and potential sources of bias are also discussed. We provide suggestions for best practices and identify areas for future research.
Affiliation(s)
- Joshua Bierbrier
  - Department of Biomedical Engineering, McGill University, Montreal, QC, Canada
  - McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada
- Houssem-Eddine Gueziri
  - McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada
- D. Louis Collins
  - Department of Biomedical Engineering, McGill University, Montreal, QC, Canada
  - McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada
  - Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
3
Lee EJ, Plishker W, Hata N, Shyn PB, Silverman SG, Bhattacharyya SS, Shekhar R. Rapid quality assessment of nonrigid image registration based on supervised learning. J Digit Imaging 2021;34:1376-1386. PMID: 34647199; PMCID: PMC8669090; DOI: 10.1007/s10278-021-00523-5.
Abstract
When preprocedural images are overlaid on intraprocedural images, interventional procedures benefit because structures that are poorly visible in intraprocedural imaging are revealed. However, image artifacts, respiratory motion, and other challenging conditions can limit the accuracy of the multimodality image registration required before such an overlay. Ensuring registration accuracy during interventional procedures is therefore critically important. The goal of this study was to develop a framework that can assess the quality (i.e., accuracy) of nonrigid multimodality image registration in near real time. We constructed a solution using registration quality metrics that can be computed rapidly and combined into a single binary assessment of registration quality as either successful or poor. Using expert-generated quality labels as ground truth, we trained and tested this system on existing clinical data with a supervised learning method. The trained quality classifier identified successful image registration cases with an accuracy of 81.5%. The current implementation produced the classification result in 5.5 s, fast enough for typical interventional radiology procedures. These results show that the described framework could give a clinician confirmation of, or a caution about, registration results during clinical procedures.
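The pipeline described, rapid per-case metrics fed into a supervised binary classifier, can be sketched with a tiny logistic-regression trainer. Everything below (the number of metrics, the labels, the learning rate) is a synthetic stand-in; the paper's actual metrics and learner are not reproduced here:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=500):
    """Gradient-descent logistic regression mapping per-case quality
    metrics X to a binary label y (1 = successful, 0 = poor)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted success probability
        w -= lr * X.T @ (p - y) / len(y)        # average gradient step
        b -= lr * float(np.mean(p - y))
    return w, b

def classify(X, w, b):
    """Binary decision: 1 = successful registration, 0 = poor."""
    return (X @ w + b > 0).astype(int)

# Synthetic stand-in data: 3 quality metrics, label driven by the first
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
w, b = fit_logistic(X, y)
accuracy = float((classify(X, w, b) == y).mean())
```

In a real deployment, `X` would hold the rapidly computable registration-quality metrics and `y` the expert-generated success/poor labels used as ground truth.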
Affiliation(s)
- Eung-Joo Lee
  - Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA
- William Plishker
  - Institute for Advanced Computer Studies, University of Maryland, College Park, MD, USA
- Shuvra S. Bhattacharyya
  - Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA
  - Department of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, MA, USA
- Raj Shekhar
  - Institute for Advanced Computer Studies, University of Maryland, College Park, MD, USA
  - Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC, USA
4
Schlachter M, Preim B, Bühler K, Raidou RG. Principles of visualization in radiation oncology. Oncology 2020;98:412-422. PMID: 31940605; DOI: 10.1159/000504940.
Abstract
Background: Medical visualization employs elements of computer graphics to create meaningful, interactive visual representations of medical data, and it has become an influential field of research for advanced applications such as radiation oncology. Visual representations draw on the user's cognitive capabilities to support and accelerate diagnostic, planning, and quality assurance workflows based on the patient data involved. Summary: This article discusses the basic principles of visualization in the application domain of radiation oncology. The main visualization strategies, such as slice-based representations and surface and volume rendering, are presented. Interaction topics, i.e., the combination of visualization with automated analysis methods, are also discussed. Key messages: Slice-based representations are a common approach in radiation oncology, while volume visualization also has a long-standing history in the field. Perception of both representations can benefit further from advanced approaches such as image fusion and multivolume or hybrid rendering. While traditional slice-based and volume representations keep evolving, the dimensionality and complexity of medical data are also increasing. To address this, visual analytics strategies are valuable, particularly for cohort or uncertainty visualization. Interactive visual analytics approaches offer a new opportunity to integrate knowledgeable experts and their cognitive abilities into exploratory processes that cannot be carried out by automated methods alone.
Affiliation(s)
- Bernhard Preim
  - University of Magdeburg, Magdeburg, Germany
  - Research Campus STIMULATE, Magdeburg, Germany
5
Sun L, Shao W, Wang M, Zhang D, Liu M. High-order feature learning for multi-atlas based label fusion: application to brain segmentation with MRI. IEEE Trans Image Process 2019;29:2702-2713. PMID: 31725379; DOI: 10.1109/TIP.2019.2952079.
Abstract
Multi-atlas based segmentation methods have shown their effectiveness for segmenting brain regions of interest (ROIs) by propagating labels from multiple atlases to a target image based on the similarity between patches in the target image and in the atlas images. Most existing multi-atlas based methods use image intensity features to calculate the similarity between a pair of image patches for label fusion. Such low-level intensity features alone, however, cannot adequately characterize the complex appearance patterns of brain magnetic resonance (MR) images (e.g., the high-order relationships between voxels within a patch). To address this issue, this paper develops a high-order feature learning framework for multi-atlas based label fusion, in which high-order features of image patches are extracted and fused to segment ROIs of structural brain MR images. Specifically, an unsupervised feature learning method (the mean-covariance restricted Boltzmann machine, mcRBM) is employed to learn high-order features (i.e., mean and covariance features) of patches in brain MR images. A group-fused sparsity dictionary learning method is then proposed to jointly calculate the voting weights for label fusion, based on the learned high-order features and the original image intensity features. The proposed method is compared with several state-of-the-art label fusion methods on the ADNI, NIREP and LONI-LPBA40 datasets. The Dice ratios achieved by our method are 88.30%, 88.83%, 79.54% and 81.02% on the left and right hippocampus on the ADNI, NIREP and LONI-LPBA40 datasets, respectively, while the best Dice ratios yielded by the other methods are 86.51%, 87.39%, 78.48% and 79.65%, respectively.
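Two building blocks of any multi-atlas pipeline, fusing propagated labels by voting and scoring the result with the Dice ratio, look roughly like this. The sketch uses unweighted majority voting as the baseline; the paper's similarity-weighted, dictionary-learned voting is not reproduced:

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse binary label maps propagated from several atlases by
    per-voxel majority vote (the unweighted baseline that
    similarity-weighted fusion refines)."""
    votes = np.mean(np.stack(atlas_labels, axis=0), axis=0)
    return (votes > 0.5).astype(int)

def dice(a, b):
    """Dice overlap between two binary segmentations."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

labels = [np.array([1, 1, 0, 0]),
          np.array([1, 0, 0, 0]),
          np.array([1, 1, 1, 0])]
fused = majority_vote(labels)                 # -> [1, 1, 0, 0]
score = dice(fused, np.array([1, 1, 0, 0]))  # -> 1.0
```

Weighted fusion replaces the uniform mean in `majority_vote` with per-atlas, per-voxel weights derived from patch similarity, which is where the learned high-order features enter.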
6
Sokooti H, Saygili G, Glocker B, Lelieveldt BPF, Staring M. Quantitative error prediction of medical image registration using regression forests. Med Image Anal 2019;56:110-121. PMID: 31226661; DOI: 10.1016/j.media.2019.05.005.
Abstract
Predicting registration error can be useful for evaluating registration procedures, which is important for the adoption of registration techniques in the clinic. In addition, quantitative error prediction can help improve registration quality. Predicting registration error is demanding due to the lack of ground truth in medical images. This paper proposes a new automatic method to predict registration error quantitatively, applied to chest CT scans. A random regression forest predicts the registration error locally. The forest is built with features related to the transformation model and features related to the dissimilarity after registration. It is trained and tested using manually annotated corresponding points between pairs of chest CT scans in two experiments: SPREAD (trained and tested on SPREAD) and inter-database (spanning the SPREAD, DIR-Lab-4DCT and DIR-Lab-COPDgene databases). The mean absolute regression errors are 1.07 ± 1.86 mm and 1.76 ± 2.59 mm for the SPREAD and inter-database experiments, respectively. The overall accuracy of classification into three classes (correct, poor and wrong registration) is 90.7% and 75.4% for SPREAD and inter-database, respectively. The good performance of the proposed method enables important applications such as automatic quality control in large-scale image analysis.
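The core of the method, a random regression forest mapping per-point features to a local error, can be sketched with scikit-learn. The features, the error model, and the class cut-offs below are synthetic, illustrative stand-ins for the paper's transformation- and dissimilarity-based feature pools, not its actual setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-point features (transformation-model and
# post-registration dissimilarity measures)
X = rng.normal(size=(300, 5))
# Synthetic "local registration error" in mm, driven by one feature
y = np.abs(X[:, 0]) + 0.1 * rng.normal(size=300)

forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
pred = forest.predict(X)

# Discretize the regressed error into three quality classes
# (0 = correct, 1 = poor, 2 = wrong); the mm cut-offs are assumptions
quality = np.digitize(pred, bins=[1.0, 2.0])
```

Regressing a continuous error and thresholding it afterwards, as above, is what lets one method report both the millimetre-level prediction and the three-class quality assessment.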
Affiliation(s)
- Hessam Sokooti
  - Leiden University Medical Center, Leiden, the Netherlands
- Gorkem Saygili
  - Leiden University Medical Center, Leiden, the Netherlands
- Boudewijn P. F. Lelieveldt
  - Leiden University Medical Center, Leiden, the Netherlands
  - Delft University of Technology, Delft, the Netherlands
- Marius Staring
  - Leiden University Medical Center, Leiden, the Netherlands
  - Delft University of Technology, Delft, the Netherlands
7
Sun L, Zu C, Shao W, Guang J, Zhang D, Liu M. Reliability-based robust multi-atlas label fusion for brain MRI segmentation. Artif Intell Med 2019;96:12-24. DOI: 10.1016/j.artmed.2019.03.004.
8
Paganelli C, Meschini G, Molinelli S, Riboldi M, Baroni G. Patient-specific validation of deformable image registration in radiation therapy: overview and caveats. Med Phys 2018;45:e908-e922. DOI: 10.1002/mp.13162.
Affiliation(s)
- Chiara Paganelli
  - Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133, Italy
- Giorgia Meschini
  - Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133, Italy
- Marco Riboldi
  - Department of Medical Physics, Ludwig-Maximilians-Universität München, Munich 80539, Germany
- Guido Baroni
  - Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133, Italy
  - Centro Nazionale di Adroterapia Oncologica, Pavia 27100, Italy