1. Zheng H, Li H, Fan Y. SurfNet: Reconstruction of Cortical Surfaces via Coupled Diffeomorphic Deformations. bioRxiv 2025:2025.01.30.635814. PMID: 39974917; PMCID: PMC11838468; DOI: 10.1101/2025.01.30.635814.
Abstract
To achieve fast and accurate cortical surface reconstruction from brain magnetic resonance images (MRIs), we develop a method that jointly reconstructs the inner (white-gray matter interface), outer (pial), and midthickness surfaces, regularized by their interdependence. Rather than reconstructing these surfaces separately and ignoring their interdependence, as most existing methods do, our method jointly learns three diffeomorphic deformations that optimize the midthickness surface to lie halfway between the inner and outer cortical surfaces while simultaneously deforming it inward and outward toward the inner and outer cortical surfaces, respectively. The surfaces are encouraged to have a spherical topology by regularization terms enforcing non-negativity of the cortical thickness and symmetric cycle-consistency of the coupled surface deformations. The coupled reconstruction also facilitates accurate estimation of cortical thickness from the diffeomorphic deformation trajectory of each vertex on the surfaces. Validation experiments have demonstrated that our method achieves state-of-the-art cortical surface reconstruction performance in terms of accuracy and surface topological correctness on large-scale MRI datasets, including ADNI, HCP, and OASIS.
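As an illustrative aside (not the authors' implementation), the sketch below shows how per-vertex cortical thickness can be read off from coupled deformation trajectories, and how a midthickness surface can be encouraged to sit halfway between the inner and outer surfaces; the tensor shapes and function names are hypothetical.

```python
import torch

def trajectory_length(path):
    # path: (T+1, N, 3) vertex positions along a diffeomorphic deformation trajectory
    steps = path[1:] - path[:-1]              # (T, N, 3) per-step displacements
    return steps.norm(dim=-1).sum(dim=0)      # (N,) arc length travelled by each vertex

def coupled_thickness(mid_to_inner, mid_to_outer):
    """Per-vertex cortical thickness as the sum of the inward and outward trajectory
    lengths, plus a penalty that keeps the midthickness surface halfway in between."""
    d_in = trajectory_length(mid_to_inner)    # midthickness -> white-gray interface
    d_out = trajectory_length(mid_to_outer)   # midthickness -> pial surface
    thickness = d_in + d_out
    halfway_penalty = ((d_in - d_out) ** 2).mean()
    return thickness, halfway_penalty
```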
Affiliation(s)
- Hao Zheng
- Center for Biomedical Image Computing and Analytics, Philadelphia, PA 19104, USA
- School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70503, USA
- Hongming Li
- Center for Biomedical Image Computing and Analytics, Philadelphia, PA 19104, USA
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Yong Fan
- Center for Biomedical Image Computing and Analytics, Philadelphia, PA 19104, USA
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA

2. Kim S, Park H, Park SH. A review of deep learning-based reconstruction methods for accelerated MRI using spatiotemporal and multi-contrast redundancies. Biomed Eng Lett 2024;14:1221-1242. PMID: 39465106; PMCID: PMC11502678; DOI: 10.1007/s13534-024-00425-9.
Abstract
Accelerated magnetic resonance imaging (MRI) has played an essential role in reducing data acquisition time. Acceleration is achieved by acquiring fewer data points in k-space, which introduces various artifacts in the image domain. Conventional reconstruction methods resolve these artifacts by exploiting multi-coil information, but with limited robustness. Recently, numerous deep learning-based reconstruction methods have been developed, enabling outstanding reconstruction performance at higher acceleration factors; advances in hardware and the development of specialized network architectures have made these achievements possible. In addition, MRI signals contain various redundancies, including multi-coil, multi-contrast, and spatiotemporal redundancy. Exploiting this redundant information in combination with deep learning allows not only higher acceleration but also well-preserved detail in the reconstructed images. This review therefore introduces the basic concepts of deep learning and conventional accelerated MRI reconstruction methods, followed by a review of recent deep learning-based reconstruction methods that exploit various redundancies. Lastly, the paper concludes by discussing the challenges, limitations, and potential directions of future developments.
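To make the k-space argument concrete, here is a minimal NumPy sketch of retrospective Cartesian undersampling followed by zero-filled reconstruction; the acceleration factor, center fraction, and function name are illustrative choices, not from the review.

```python
import numpy as np

def zero_filled_recon(image, accel=4, center_frac=0.08, seed=0):
    """Retrospectively undersample a 2D image's k-space along the phase-encode axis
    and reconstruct by zero-filled inverse FFT, exposing aliasing artifacts."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))
    ny = k.shape[0]
    mask = rng.random(ny) < 1.0 / accel                  # randomly kept phase-encode lines
    n_center = max(int(center_frac * ny), 1)
    mask[ny // 2 - n_center // 2: ny // 2 + n_center // 2 + 1] = True  # keep low frequencies
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask[:, None])))
    return recon, mask
```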
Affiliation(s)
- Seonghyuk Kim
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- HyunWook Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Sung-Hong Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea

3. Chang Q, Wang Y, Zhang J. Independently Trained Multi-Scale Registration Network Based on Image Pyramid. J Imaging Inform Med 2024;37:1557-1566. PMID: 38441699; PMCID: PMC11300729; DOI: 10.1007/s10278-024-01019-8.
Abstract
Image registration is a fundamental task in many medical image analysis applications and plays a crucial role in auxiliary diagnosis, treatment, and surgical navigation. However, cardiac image registration is challenging because of the heart's large non-rigid deformation and complex anatomical structure. To address this challenge, this paper proposes an independently trained multi-scale registration network based on an image pyramid. By down-sampling the original input image multiple times, image pyramid pairs are constructed and used as training sets for a multi-scale registration network, in which each registration network is trained independently on image pairs of one resolution to extract features at that scale. During the testing stage, large-deformation registration is decomposed into a multi-scale registration process: the deformation fields of different resolutions are fused by a step-by-step deformation method, thereby avoiding the difficulty of directly handling large deformations. Experiments were conducted on the open cardiac dataset ACDC (Automated Cardiac Diagnosis Challenge), where the proposed method achieved an average Dice score of 0.828. Comparative experiments demonstrated that the proposed method effectively addresses the challenge of cardiac image registration and achieves superior registration results.
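A rough sketch of the step-by-step, coarse-to-fine idea is given below, assuming per-level networks `nets[l]` that each return a displacement field at their own resolution; the additive fusion of fields and all names are simplifications for illustration, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp a (B, C, H, W) image with a (B, 2, H, W) displacement field in pixels."""
    B, _, H, W = image.shape
    ys, xs = torch.meshgrid(torch.arange(H, device=image.device),
                            torch.arange(W, device=image.device), indexing="ij")
    coords = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow
    grid = torch.stack((2 * coords[:, 0] / (W - 1) - 1,        # normalize to [-1, 1],
                        2 * coords[:, 1] / (H - 1) - 1), -1)   # (x, y) order for grid_sample
    return F.grid_sample(image, grid, align_corners=True)

def coarse_to_fine(moving, fixed, nets):
    """Step-by-step fusion of deformation fields predicted by independently trained
    per-level networks; nets[0] is the coarsest level."""
    levels = len(nets)
    flow = torch.zeros(moving.shape[0], 2, *moving.shape[2:], device=moving.device)
    for l, net in enumerate(nets):
        scale = 2 ** (levels - 1 - l)
        warped = warp(moving, flow)                            # apply the deformation so far
        mov_l = F.avg_pool2d(warped, scale) if scale > 1 else warped
        fix_l = F.avg_pool2d(fixed, scale) if scale > 1 else fixed
        res = net(mov_l, fix_l)                                # residual field at this level
        res_up = F.interpolate(res, size=moving.shape[2:], mode="bilinear",
                               align_corners=True) * scale     # rescale displacements
        flow = flow + res_up
    return warp(moving, flow), flow
```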
Affiliation(s)
- Qing Chang
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China.
- Yaqi Wang
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Jieming Zhang
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China

4. Chang Q, Wang Y. Structure-aware independently trained multi-scale registration network for cardiac images. Med Biol Eng Comput 2024;62:1795-1808. PMID: 38381202; DOI: 10.1007/s11517-024-03039-6.
Abstract
Image registration is a primary task in many medical image analysis applications. However, cardiac image registration is difficult because of the heart's large non-rigid deformation and complex anatomical structure. This paper proposes a structure-aware independently trained multi-scale registration network (SIMReg) to address this challenge. Each registration network is trained independently on image pairs of one resolution to extract features of large-deformation image pairs at that scale. In the testing stage, large-deformation registration is decomposed into a multi-scale registration process, and the deformation fields of different resolutions are fused by a step-by-step deformation method, thus avoiding the difficulty of directly processing large deformations. Meanwhile, MIND (modality independent neighborhood descriptor) structural features are introduced to guide network training, enhancing the registration of cardiac structural contours and improving the registration of local details. Experiments were carried out on the open cardiac dataset ACDC (Automated Cardiac Diagnosis Challenge), where the proposed method achieved an average Dice score of 0.833. Comparative experiments showed that SIMReg better addresses the problem of cardiac image registration and achieves a better registration effect on cardiac images.
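The sketch below illustrates a simplified MIND-style self-similarity descriptor and a corresponding structural loss; the offsets, smoothing, and normalization are generic choices and not necessarily those used in SIMReg.

```python
import torch
import torch.nn.functional as F

def mind_features(img, radius=2):
    """Simplified 2D MIND-style self-similarity descriptor.
    img: (B, 1, H, W); returns (B, 4, H, W), one channel per neighbour offset."""
    offsets = [(-radius, 0), (radius, 0), (0, -radius), (0, radius)]
    dists = []
    for dy, dx in offsets:
        shifted = torch.roll(img, shifts=(dy, dx), dims=(2, 3))
        # patch distance approximated by locally averaged squared differences
        dists.append(F.avg_pool2d((img - shifted) ** 2, 3, stride=1, padding=1))
    d = torch.cat(dists, dim=1)
    variance = d.mean(dim=1, keepdim=True) + 1e-6            # local variance estimate
    mind = torch.exp(-d / variance)
    return mind / mind.amax(dim=1, keepdim=True)             # normalize per location

def mind_loss(moving_warped, fixed):
    """Structural similarity term: mean absolute difference of MIND features."""
    return torch.mean(torch.abs(mind_features(moving_warped) - mind_features(fixed)))
```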
Affiliation(s)
- Qing Chang
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Yaqi Wang
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China

5. Wu J, Li H, Fan Y. Diffeomorphic image registration with bijective consistency. Proc SPIE Int Soc Opt Eng 2024;12926:129262V. PMID: 40041684; PMCID: PMC11877456; DOI: 10.1117/12.3006871.
Abstract
Recent image registration methods built upon unsupervised learning have achieved promising diffeomorphic image registration performance. However, the bijective consistency of spatial transformations is not sufficiently investigated in existing image registration studies. In this study, we develop a multi-level image registration framework to achieve diffeomorphic image registration in a coarse-to-fine manner. A novel stationary velocity field computation method is proposed to integrate forward and inverse stationary velocity fields so that the image registration result is invariant to the order of input images to be registered. Moreover, a new bijective consistency regularization is adopted to enforce the bijective consistency of forward and inverse transformations at different time points along the stationary velocity integration paths. Validation experiments have been conducted on two T1-weighted magnetic resonance imaging (MRI) brain datasets with manually annotated anatomical structures. Compared with four state-of-the-art representative diffeomorphic registration methods, including two traditional diffeomorphic registration algorithms and two unsupervised learning-based diffeomorphic registration approaches, our method has achieved better image registration accuracy with superior topology preserving performance.
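For orientation, the following 2D sketch shows the standard scaling-and-squaring exponentiation of a stationary velocity field and a generic bijective (inverse-consistency) penalty that compares the composed forward and inverse transforms against the identity; it is not the paper's multi-level formulation, and the names are assumptions.

```python
import torch
import torch.nn.functional as F

def compose(disp_a, disp_b):
    """Resample displacement field disp_a at positions displaced by disp_b (2D, pixels)."""
    B, _, H, W = disp_a.shape
    ys, xs = torch.meshgrid(torch.arange(H, device=disp_a.device),
                            torch.arange(W, device=disp_a.device), indexing="ij")
    coords = torch.stack((xs, ys), 0).float().unsqueeze(0) + disp_b
    grid = torch.stack((2 * coords[:, 0] / (W - 1) - 1,
                        2 * coords[:, 1] / (H - 1) - 1), -1)
    return F.grid_sample(disp_a, grid, align_corners=True)

def integrate_svf(velocity, steps=7):
    """Scaling and squaring: exponentiate a stationary velocity field to a displacement."""
    disp = velocity / (2 ** steps)
    for _ in range(steps):
        disp = disp + compose(disp, disp)
    return disp

def bijective_consistency_loss(velocity):
    """Penalize deviation of the composed forward and inverse transforms from identity."""
    fwd = integrate_svf(velocity)        # phi = exp(v)
    inv = integrate_svf(-velocity)       # phi^{-1} = exp(-v)
    residual = fwd + compose(inv, fwd)   # (phi^{-1} o phi)(x) - x should vanish
    return (residual ** 2).mean()
```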
Affiliation(s)
- Jiong Wu
- Center for AI and Data Science for Integrated Diagnostics, Center for Biomedical Image Computing and Analytics, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Hongming Li
- Center for AI and Data Science for Integrated Diagnostics, Center for Biomedical Image Computing and Analytics, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Yong Fan
- Center for AI and Data Science for Integrated Diagnostics, Center for Biomedical Image Computing and Analytics, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA

6. Liu Y, Wang W, Li Y, Lai H, Huang S, Yang X. Geometry-Consistent Adversarial Registration Model for Unsupervised Multi-Modal Medical Image Registration. IEEE J Biomed Health Inform 2023;27:3455-3466. PMID: 37099474; DOI: 10.1109/jbhi.2023.3270199.
Abstract
Deformable multi-modal medical image registration aligns the anatomical structures of different modalities to the same coordinate system through a spatial transformation. Because ground-truth registration labels are difficult to collect, existing methods often adopt an unsupervised multi-modal image registration setting. However, it is hard to design satisfactory metrics to measure the similarity of multi-modal images, which heavily limits multi-modal registration performance. Moreover, owing to the contrast differences of the same organ across modalities, it is difficult to extract and fuse the representations of images from different modalities. To address these issues, we propose a novel unsupervised multi-modal adversarial registration framework that uses image-to-image translation to translate a medical image from one modality to another, so that well-defined uni-modal metrics can be used to train the models. Within this framework, we propose two improvements to promote accurate registration. First, to prevent the translation network from learning spatial deformation, we propose a geometry-consistent training scheme that encourages the translation network to learn the modality mapping only. Second, we propose a novel semi-shared multi-scale registration network that extracts features of multi-modal images effectively and predicts multi-scale registration fields in a coarse-to-fine manner to accurately register large deformation areas. Extensive experiments on brain and pelvic datasets demonstrate the superiority of the proposed method over existing methods and reveal that our framework has great potential in clinical application.
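A toy version of the geometry-consistent training idea, with a horizontal flip standing in for the random spatial transform, might look as follows; the function and argument names are hypothetical, and in practice richer affine or elastic transforms would be sampled.

```python
import torch

def geometry_consistency_loss(translator, image):
    """Encourage the modality-translation network to learn appearance only:
    translating a spatially transformed image should equal spatially transforming
    the translated image."""
    flip = lambda x: torch.flip(x, dims=[-1])    # a simple, fixed spatial transform
    return torch.mean(torch.abs(translator(flip(image)) - flip(translator(image))))
```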

7. Zheng H, Li H, Fan Y. SurfNN: Joint Reconstruction of Multiple Cortical Surfaces from Magnetic Resonance Images. Proc IEEE Int Symp Biomed Imaging 2023. PMID: 37790882; PMCID: PMC10544794; DOI: 10.1109/isbi53787.2023.10230488.
Abstract
To achieve fast, robust, and accurate reconstruction of the human cortical surfaces from 3D magnetic resonance images (MRIs), we develop a novel deep learning-based framework, referred to as SurfNN, to reconstruct simultaneously both inner (between white matter and gray matter) and outer (pial) surfaces from MRIs. Different from existing deep learning-based cortical surface reconstruction methods that either reconstruct the cortical surfaces separately or neglect the interdependence between the inner and outer surfaces, SurfNN reconstructs both the inner and outer cortical surfaces jointly by training a single network to predict a midthickness surface that lies at the center of the inner and outer cortical surfaces. The input of SurfNN consists of a 3D MRI and an initialization of the midthickness surface that is represented both implicitly as a 3D distance map and explicitly as a triangular mesh with spherical topology, and its output includes both the inner and outer cortical surfaces, as well as the midthickness surface. The method has been evaluated on a large-scale MRI dataset and demonstrated competitive cortical surface reconstruction performance.
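As a small illustration of moving between the implicit and explicit surface representations mentioned above, the sketch below extracts a triangular mesh from a 3D distance map with marching cubes (scikit-image); the toy sphere is only an example input, not SurfNN's initialization procedure.

```python
import numpy as np
from skimage import measure

def mesh_from_distance_map(distance_map, level=0.0):
    """Extract an explicit triangular mesh from an implicit surface representation
    (a 3D signed distance map) using marching cubes."""
    verts, faces, normals, _ = measure.marching_cubes(distance_map, level=level)
    return verts, faces

# toy usage: the zero level set of a signed distance map of a sphere
x, y, z = np.mgrid[-32:32, -32:32, -32:32]
sdf = np.sqrt(x**2 + y**2 + z**2) - 20.0
verts, faces = mesh_from_distance_map(sdf)
```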
Affiliation(s)
- Hao Zheng
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA
- Hongming Li
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA
- Yong Fan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA

8. Wu J, Fan Y. HNAS-Reg: Hierarchical Neural Architecture Search for Deformable Medical Image Registration. Proc IEEE Int Symp Biomed Imaging 2023. PMID: 37790881; PMCID: PMC10544790; DOI: 10.1109/isbi53787.2023.10230534.
Abstract
Convolutional neural networks (CNNs) have been widely used to build deep learning models for medical image registration, but manually designed network architectures are not necessarily optimal. This paper presents a hierarchical NAS framework (HNAS-Reg), consisting of both convolutional operation search and network topology search, to identify the optimal network architecture for deformable medical image registration. To mitigate the computational overhead and memory constraints, a partial channel strategy is utilized without losing optimization quality. Experiments on three datasets, consisting of 636 T1-weighted magnetic resonance images (MRIs), have demonstrated that the proposed method can build a deep learning model with improved image registration accuracy and reduced model size, compared with state-of-the-art image registration approaches, including one representative traditional approach and two unsupervised learning-based approaches.
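The abstract's partial channel strategy is reminiscent of partial-channel connections in differentiable NAS; the following hypothetical sketch shows a DARTS-style mixed operation in which only a fraction of the channels is routed through the weighted candidate operations. The candidate operations and names are assumptions, not the ones searched in HNAS-Reg.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialChannelMixedOp(nn.Module):
    """Mixed operation with a partial-channel strategy: only 1/k of the channels pass
    through the architecture-weighted candidate operations; the rest are bypassed."""
    def __init__(self, channels, k=4):
        super().__init__()
        self.k = k
        c = channels // k
        self.candidates = nn.ModuleList([
            nn.Conv3d(c, c, kernel_size=3, padding=1),
            nn.Conv3d(c, c, kernel_size=5, padding=2),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.candidates)))  # architecture weights

    def forward(self, x):
        c = x.shape[1] // self.k
        x_search, x_skip = x[:, :c], x[:, c:]
        weights = F.softmax(self.alpha, dim=0)
        mixed = sum(w * op(x_search) for w, op in zip(weights, self.candidates))
        return torch.cat([mixed, x_skip], dim=1)
```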
Affiliation(s)
- Jiong Wu
- Center for Biomedical Image Computing and Analytics, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Yong Fan
- Center for Biomedical Image Computing and Analytics, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA

9. Fu J, Tzortzakakis A, Barroso J, Westman E, Ferreira D, Moreno R. Fast three-dimensional image generation for healthy brain aging using diffeomorphic registration. Hum Brain Mapp 2023;44:1289-1308. PMID: 36468536; PMCID: PMC9921328; DOI: 10.1002/hbm.26165.
Abstract
Predicting brain aging can help in the early detection and prognosis of neurodegenerative diseases. Longitudinal cohorts of healthy subjects scanned with magnetic resonance imaging (MRI) have been essential for understanding the structural brain changes due to aging. However, these cohorts suffer from missing data due to logistic issues in the recruitment of subjects. This paper proposes a methodology for filling in missing data in longitudinal cohorts with anatomically plausible images that capture the subject-specific aging process. The proposed methodology is developed within the framework of diffeomorphic registration. First, two novel modules are introduced within SynthMorph, a fast, state-of-the-art deep learning-based diffeomorphic registration method, to simulate the three-dimensional (3D) aging process between the first and last available MRI scan of each subject; the use of image registration also makes the generated images plausible by construction. Second, six image similarity measurements are used to order the generated images within the target age range. Finally, the age of every generated image is estimated under the assumption of linear brain decay in healthy subjects. The methodology was evaluated on 2662 T1-weighted MRI scans from 796 healthy participants in three longitudinal cohorts: the Alzheimer's Disease Neuroimaging Initiative, the Open Access Series of Imaging Studies-3, and the Group of Neuropsychological Studies of the Canary Islands (GENIC). In total, 7548 images were generated to simulate the availability of one scan per subject every 6 months in these cohorts. The quality of the synthetic images was evaluated using six quantitative measurements and a qualitative assessment by an experienced neuroradiologist, with state-of-the-art results. The assumption of linear brain decay was accurate in these cohorts (R² ∈ [0.924, 0.940]). The experimental results show that the proposed methodology can produce anatomically plausible aging predictions that can be used to enhance longitudinal datasets. Compared with deep learning-based generative methods, diffeomorphic registration is more likely to preserve the anatomy of the different brain structures, making it more appropriate for clinical applications. In summary, the proposed methodology efficiently simulates anatomically plausible 3D MRI scans of healthy brain aging from two images acquired at different time points.
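As a toy illustration of the linear brain decay assumption used to date the generated images, the sketch below assigns ages by linear interpolation between the first and last real scans based on a global volume measure; the function name, volumes, and ages are made up for illustration.

```python
import numpy as np

def assign_ages(brain_volumes, age_first, age_last):
    """Assign an age to each generated image under a linear brain decay assumption:
    brain volume is assumed to change linearly between the first and last real scans.
    brain_volumes must start and end with the volumes of the two real scans."""
    v = np.asarray(brain_volumes, dtype=float)
    frac = (v - v[0]) / (v[-1] - v[0])            # 0 at the first scan, 1 at the last
    return age_first + frac * (age_last - age_first)

# toy usage with hypothetical volumes (ml) for one subject
print(assign_ages([1180.0, 1172.5, 1165.0, 1150.0], age_first=70.0, age_last=74.0))
```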
Affiliation(s)
- Jingru Fu
- Division of Biomedical Imaging, Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden
- Antonios Tzortzakakis
- Division of Radiology, Department for Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden
- Medical Radiation Physics and Nuclear Medicine, Functional Unit of Nuclear Medicine, Karolinska University Hospital, Huddinge, Stockholm, Sweden
- José Barroso
- Department of Psychology, Faculty of Health Sciences, University Fernando Pessoa Canarias, Las Palmas, Spain
- Eric Westman
- Division of Clinical Geriatrics, Centre for Alzheimer Research, Department of Neurobiology, Care Sciences, and Society (NVS), Karolinska Institutet, Stockholm, Sweden
- Department of Neuroimaging, Centre for Neuroimaging Sciences, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, United Kingdom
- Daniel Ferreira
- Division of Clinical Geriatrics, Centre for Alzheimer Research, Department of Neurobiology, Care Sciences, and Society (NVS), Karolinska Institutet, Stockholm, Sweden
- Rodrigo Moreno
- Division of Biomedical Imaging, Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden

10. Berg A, Vandersmissen E, Wimmer M, Major D, Neubauer T, Lenis D, Cant J, Snoeckx A, Bühler K. Employing similarity to highlight differences: On the impact of anatomical assumptions in chest X-ray registration methods. Comput Biol Med 2023;154:106543. PMID: 36682179; DOI: 10.1016/j.compbiomed.2023.106543.
Abstract
To facilitate both the detection and the interpretation of findings in chest X-rays, comparison with a previous image of the same patient is very valuable to radiologists. Today, the most common deep learning approach to automatic chest X-ray inspection disregards the patient history and classifies only single images as normal or abnormal. Several methods for assisting this comparison task through image registration have been proposed in the past; however, as we illustrate, they tend to miss specific types of pathological changes such as cardiomegaly and effusion. Because of their assumptions about fixed anatomical structures or the way they measure registration quality, they produce unnaturally deformed warp fields that impair visualization of the differences between the moving and fixed images. We aim to overcome these limitations through a new paradigm based on individual rib-pair segmentation for anatomy-penalized registration. Our method proves to be a natural way to limit the folding percentage of the warp field to one sixth of that of the state of the art while increasing the overlap of ribs by more than 25%, yielding difference images that reveal pathological changes overlooked by other methods. We develop an anatomically penalized convolutional multi-stage solution on the National Institutes of Health (NIH) data set, starting from fewer than 25 fully and 50 partly labeled training images, employing sequential instance memory segmentation with hole dropout, weak labeling, coarse-to-fine refinement, and Gaussian mixture model histogram matching. We statistically evaluate the benefits of our method and highlight the limits of the metrics currently used for registration of chest X-rays.
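The folding percentage referred to above is commonly computed as the fraction of positions where the Jacobian determinant of the warp is non-positive; a minimal 2D version might look like this (the displacement-field layout is an assumption).

```python
import numpy as np

def folding_percentage(disp):
    """Percentage of pixels where the warp's Jacobian determinant is non-positive.
    disp: (2, H, W) displacement field (dx, dy) in pixels; the warp is x + disp(x)."""
    dx, dy = disp
    j11 = 1.0 + np.gradient(dx, axis=1)   # d(phi_x)/dx
    j12 = np.gradient(dx, axis=0)         # d(phi_x)/dy
    j21 = np.gradient(dy, axis=1)         # d(phi_y)/dx
    j22 = 1.0 + np.gradient(dy, axis=0)   # d(phi_y)/dy
    jac_det = j11 * j22 - j12 * j21
    return 100.0 * np.mean(jac_det <= 0)
```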
Affiliation(s)
- Astrid Berg
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- Eva Vandersmissen
- Agfa NV, Radiology Solutions R&D, Septestraat 27, 2640 Mortsel, Belgium.
- Maria Wimmer
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- David Major
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- Theresa Neubauer
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- Dimitrios Lenis
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- Jeroen Cant
- Agfa NV, Radiology Solutions R&D, Septestraat 27, 2640 Mortsel, Belgium.
- Annemiek Snoeckx
- Department of Radiology, Antwerp University Hospital, Drie Eikenstraat 655, 2650 Edegem, Belgium; Faculty of Medicine and Health Sciences, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk, Belgium.
- Katja Bühler
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.

11. Dual attention network for unsupervised medical image registration based on VoxelMorph. Sci Rep 2022;12:16250. PMID: 36171468; PMCID: PMC9519746; DOI: 10.1038/s41598-022-20589-7.
Abstract
Accurate medical image registration is crucial in a variety of neuroscience and clinical studies. In this paper, we propose a new unsupervised learning network, DAVoxelMorph, to improve the accuracy of 3D deformable medical image registration. Based on the VoxelMorph model, our network introduces two modifications. The first is a dual attention architecture: semantic correlations are modeled along the spatial and coordinate dimensions respectively, with a location attention module that selectively aggregates the features at each position by weighting the features of all positions, and a coordinate attention module that further embeds location information into the channel attention. The second is a bending penalty used as a regularization term in the loss function to penalize bending in the deformation field. Experimental results show that DAVoxelMorph achieved better registration performance, including average Dice score (0.714) and percentage of locations with non-positive Jacobian determinant (0.345), compared with VoxelMorph (0.703, 0.355), CycleMorph (0.705, 0.133), ANTs SyN (0.707, 0.137), and NiftyReg (0.694, 0.549). Our model increases both model sensitivity and registration accuracy.
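A generic 2D bending-energy regularizer of the kind described, based on second-order finite differences of the displacement field, could be sketched as follows; it is not necessarily the exact penalty used in DAVoxelMorph.

```python
import torch

def bending_penalty(disp):
    """Bending-energy regularizer: mean squared second-order finite differences of a
    (B, 2, H, W) displacement field, penalizing sharp bends in the deformation."""
    d2_xx = disp[:, :, :, 2:] - 2 * disp[:, :, :, 1:-1] + disp[:, :, :, :-2]
    d2_yy = disp[:, :, 2:, :] - 2 * disp[:, :, 1:-1, :] + disp[:, :, :-2, :]
    d2_xy = (disp[:, :, 1:, 1:] - disp[:, :, 1:, :-1]
             - disp[:, :, :-1, 1:] + disp[:, :, :-1, :-1])
    return (d2_xx ** 2).mean() + (d2_yy ** 2).mean() + 2 * (d2_xy ** 2).mean()
```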