51
Bao L, Chen K, Kong D, Ying S, Zeng T. Time multiscale regularization for nonlinear image registration. Comput Med Imaging Graph 2024;112:102331. PMID: 38199126; DOI: 10.1016/j.compmedimag.2024.102331.
Abstract
Regularization-based methods are widely used for image registration. However, a fixed regularizer limits the ability to capture details and to describe the dynamic registration process. To address this issue, we propose a time multiscale framework for nonlinear image registration. Our approach replaces the fixed regularizer with a monotonically decreasing sequence and iteratively uses the residual of the previous step as the input for the next registration. Specifically, we first introduce a dynamically varying regularization strategy that updates the regularizer at each iteration and incorporates it into a multiscale framework; this guarantees an overall smooth deformation field in the initial stage of registration and fine-tunes local details as the images become more similar. We then derive a convergence analysis under certain conditions on the regularizers and parameters. Further, we introduce a TV-like regularizer to demonstrate the efficiency of our method. Finally, we compare the proposed multiscale algorithm with existing methods on both synthetic images and pulmonary computed tomography (CT) images. The experimental results show that our algorithm outperforms the compared methods, especially in preserving details when registering images with sharp structures.
Affiliation(s)
- Lili Bao
- Department of Mathematics, Shanghai University, Shanghai 200444, PR China
- Ke Chen
- Department of Mathematics and Statistics, University of Strathclyde, Glasgow, UK
- Dexing Kong
- School of Mathematical Science, Zhejiang University, Hangzhou 310027, PR China
- Shihui Ying
- Department of Mathematics, Shanghai University, Shanghai 200444, PR China
- Tieyong Zeng
- Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong
52
Zhang J, Qing C, Li Y, Wang Y. BCSwinReg: A cross-modal attention network for CBCT-to-CT multimodal image registration. Comput Biol Med 2024;171:107990. PMID: 38377717; DOI: 10.1016/j.compbiomed.2024.107990.
Abstract
Computed tomography (CT) and cone beam computed tomography (CBCT) registration plays an important role in radiotherapy, but the poor image quality of CBCT makes CBCT-CT multimodal registration challenging. Effective feature fusion and mapping often lead to better results in multimodal registration. We therefore propose a new backbone network, BCSwinReg, and a cross-modal attention module, CrossSwin. CrossSwin is designed to promote multimodal feature fusion and to map the individual modality domains to a common domain, thereby helping the network learn the correspondence between images. BCSwinReg discovers correspondences through cross-attention information exchange, obtains multi-level semantic information through a multi-resolution strategy, and finally integrates the multi-resolution deformations with a divide-and-conquer cascade. Experiments on the publicly available 4D-Lung dataset demonstrate the effectiveness of CrossSwin and BCSwinReg: compared with VoxelMorph, BCSwinReg improves the Dice similarity coefficient (DSC) by 3.3% and the average 95% Hausdorff distance (HD95) by 0.19.
Affiliation(s)
- Jieming Zhang
- The East China University of Science and Technology, Shanghai, 200237, China
- Chang Qing
- The East China University of Science and Technology, Shanghai, 200237, China
- Yu Li
- The East China University of Science and Technology, Shanghai, 200237, China
- Yaqi Wang
- The East China University of Science and Technology, Shanghai, 200237, China
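As a side note, the Dice similarity coefficient (DSC) used as the headline metric in the entry above has a simple standard definition. The following is a minimal NumPy sketch of that definition for binary masks; it is illustrative only, not code from the paper:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1.
    """
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two toy 2D "segmentations" that half-overlap
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # 8 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True  # 8 voxels, 4 shared
print(dice_coefficient(a, b))  # 2*4/(8+8) = 0.5
```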
53
Huang S, Zhong L, Shi Y. Automated Mapping of Residual Distortion Severity in Diffusion MRI. Computational Diffusion MRI: MICCAI Workshop 2024;14328:58-69. PMID: 38500569; PMCID: PMC10948104; DOI: 10.1007/978-3-031-47292-3_6.
Abstract
Susceptibility-induced distortion is a common artifact in diffusion MRI (dMRI) that deforms the images locally and poses significant challenges for connectivity analysis. Although various correction methods have been proposed, residual distortions often persist to varying degrees across brain regions and subjects. A voxel-level map of residual distortion severity can therefore better inform downstream connectivity analysis. To fill this gap in dMRI analysis, we propose a supervised deep-learning network that predicts a severity map of residual distortion. Training is supervised by the structural similarity index measure (SSIM) of the fiber orientation distribution (FOD) computed in two opposite phase encoding (PE) directions; at test time, only b0 images and related outputs of the distortion correction methods are needed as inputs. The proposed method is applicable to large-scale datasets such as the UK Biobank, the Adolescent Brain Cognitive Development (ABCD) study, and other emerging studies that acquire complete dMRI data in only one PE direction but b0 images in both. In our experiments, we trained the model on the Lifespan Human Connectome Project Aging (HCP-Aging) dataset (n = 662) and applied it to data (n = 1330) from the UK Biobank. We observe low training, validation, and test errors, and the severity map correlates strongly with an FOD integrity measure in both HCP-Aging and UK Biobank data. The method is also highly efficient, generating the severity map in around 1 second per subject.
Affiliation(s)
- Shuo Huang
- Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA 90033, USA
- Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California (USC), Los Angeles, CA 90089, USA
- Lujia Zhong
- Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA 90033, USA
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California (USC), Los Angeles, CA 90089, USA
- Yonggang Shi
- Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA 90033, USA
- Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California (USC), Los Angeles, CA 90089, USA
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California (USC), Los Angeles, CA 90089, USA
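The SSIM that supervises training in the entry above combines luminance, contrast, and structure terms. As an illustration only (the full SSIM, and presumably the paper's, uses a sliding Gaussian window), a global single-window variant can be sketched in NumPy as:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Global (single-window) SSIM between two images.

    Computes the standard SSIM luminance/contrast/structure terms over
    the whole image instead of per-window; identical images score 1.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
print(round(global_ssim(img, img), 6))   # identical images -> 1.0
print(global_ssim(img, 1.0 - img) < 1.0) # dissimilar images score lower
```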
54
Meyer S, Alam S, Kuo L, Hu YC, Liu Y, Lu W, Yorke E, Li A, Cervino L, Zhang P. Creating patient-specific digital phantoms with a longitudinal atlas for evaluating deformable CT-CBCT registration in adaptive lung radiotherapy. Med Phys 2024;51:1405-1414. PMID: 37449537; PMCID: PMC10787815; DOI: 10.1002/mp.16606.
Abstract
BACKGROUND: Quality assurance of deformable image registration (DIR) is challenging because the ground truth is often unavailable. In addition, current approaches that rely on artificial transformations do not adequately resemble the clinical scenarios encountered in adaptive radiotherapy.
PURPOSE: We developed an atlas-based method to create a variety of patient-specific serial digital phantoms with CBCT-like image quality for assessing DIR performance on longitudinal CBCT imaging data in adaptive lung radiotherapy.
METHODS: A library of deformations was created by extracting the longitudinal changes observed between a planning CT and weekly CBCTs from an atlas of lung radiotherapy patients. The planning CT of an inquiry patient was first deformed by mapping the deformation pattern of a matched atlas patient and subsequently appended with CBCT artifacts to imitate a weekly CBCT. Finally, a group of digital phantoms around an inquiry patient was produced to simulate a series of possible evolutions of the tumor and adjacent normal structures. We validated the generated deformation vector fields (DVFs) to ensure numerically and physiologically realistic transformations. The proposed framework was applied to evaluate the DIR algorithm implemented in the commercial Eclipse treatment planning system in a retrospective study of eight inquiry patients.
RESULTS: The generated DVFs were inverse consistent to within 3 mm and did not exhibit unrealistic folding. The deformation patterns adequately mimicked the observed longitudinal anatomical changes of the matched atlas patients. Eclipse DVF accuracy was worse in regions of low image contrast or artifacts. The fraction of structure volume with a DVF error magnitude of 2 mm or more ranged from 24.5% (spinal cord) to 69.2% (heart), and the maximum DVF error exceeded 5 mm for all structures except the spinal cord. Contour-based evaluations showed a high degree of alignment, with Dice similarity coefficients above 0.8 in all cases, which underestimated the DVF errors within the structures.
CONCLUSIONS: It is feasible to create and augment digital phantoms for a particular patient of interest using multiple series of deformation patterns from matched atlas patients. This provides a semi-automated procedure to complement the quality assurance of CT-CBCT DIR and to facilitate the clinical implementation of image-guided and adaptive radiotherapy involving longitudinal CBCT imaging.
Affiliation(s)
- Sebastian Meyer
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Sadegh Alam
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- LiCheng Kuo
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Yu-Chi Hu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Yilin Liu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Wei Lu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Ellen Yorke
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Anyi Li
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Laura Cervino
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Pengpeng Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
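The inverse-consistency check reported in the entry above composes the forward and backward deformation vector fields and measures the residual displacement; a perfectly consistent pair composes to the identity. A minimal illustrative sketch (nearest-neighbour sampling for simplicity, not the authors' implementation):

```python
import numpy as np

def inverse_consistency_error(fwd, bwd):
    """Pointwise inverse-consistency error of a 2D deformation field pair.

    fwd, bwd: arrays of shape (H, W, 2) holding displacement vectors in
    voxels. For a perfectly inverse-consistent pair,
        fwd(x) + bwd(x + fwd(x)) == 0  for every voxel x.
    Displaced positions are sampled with nearest-neighbour lookup and
    clamped at the image border (sufficient for a sketch).
    """
    h, w, _ = fwd.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # positions after applying the forward field, rounded to nearest voxel
    py = np.clip(np.rint(ys + fwd[..., 0]).astype(int), 0, h - 1)
    px = np.clip(np.rint(xs + fwd[..., 1]).astype(int), 0, w - 1)
    residual = fwd + bwd[py, px]              # fwd(x) + bwd(x + fwd(x))
    return np.linalg.norm(residual, axis=-1)  # per-voxel error magnitude

# A uniform translation and its exact inverse are perfectly consistent:
fwd = np.zeros((8, 8, 2)); fwd[..., 1] = 2.0   # shift 2 voxels right
bwd = -fwd
print(inverse_consistency_error(fwd, bwd).max())  # 0.0
```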
55
Neelakantan S, Mukherjee T, Smith BJ, Myers K, Rizi R, Avazmohammadi R. In-silico CT lung phantom generated from finite-element mesh. Proceedings of SPIE--The International Society for Optical Engineering 2024;12928:1292829. PMID: 39055486; PMCID: PMC11270049; DOI: 10.1117/12.3006973.
Abstract
Several lung diseases, including ventilator- and radiation-induced lung injuries, lead to alterations in regional lung mechanics. Such alterations can cause localized underventilation of the affected areas and overdistension of the surrounding healthy regions, so there has been growing interest in quantifying the dynamics of the lung parenchyma using regional biomechanical markers. Image registration applied to dynamic imaging has emerged as a powerful tool to assess the kinematics and deformation of the lung parenchyma during respiration. However, the difficulty of validating registration-based estimates of lung deformation, primarily due to the lack of ground-truth deformation data, has limited their use in clinical settings. To address this barrier, we developed a method to convert a finite-element (FE) mesh of the lung into a phantom computed tomography (CT) image, which advantageously carries the ground-truth deformation information of the FE model. The phantom CT images generated from the FE mesh replicated the geometry of the lung and the large airways included in the FE model. Using the spatial frequency response, we investigated the effect of "imaging parameters" such as voxel size (resolution) and proximity threshold on image quality. A series of high-quality phantom images generated from the FE model over a simulated respiratory cycle will allow validation and evaluation of registration-based estimates of lung deformation. In addition, the present method could generate the synthetic data needed to train machine-learning models that estimate kinematic biomarkers from medical images, which could serve as important diagnostic tools for assessing heterogeneous lung injuries.
Affiliation(s)
- Sunder Neelakantan
- Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
- Tanmay Mukherjee
- Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
- Bradford J Smith
- Department of Bioengineering, University of Colorado Denver | Anschutz Medical Campus, Aurora, CO, USA
- Department of Pediatric Pulmonary and Sleep Medicine, School of Medicine, University of Colorado, Aurora, CO, USA
- Kyle Myers
- Hagler Institute for Advanced Study, Texas A&M University, College Station, TX, USA
- Rahim Rizi
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Reza Avazmohammadi
- Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
- J. Mike Walker '66 Department of Mechanical Engineering, Texas A&M University, College Station, TX, USA
- Department of Cardiovascular Sciences, Houston Methodist Academic Institute, Houston, TX, USA
56
Gazula H, Tregidgo HFJ, Billot B, Balbastre Y, William-Ramirez J, Herisse R, Deden-Binder LJ, Casamitjana A, Melief EJ, Latimer CS, Kilgore MD, Montine M, Robinson E, Blackburn E, Marshall MS, Connors TR, Oakley DH, Frosch MP, Young SI, Van Leemput K, Dalca AV, Fischl B, Mac Donald CL, Keene CD, Hyman BT, Iglesias JE. Machine learning of dissection photographs and surface scanning for quantitative 3D neuropathology. bioRxiv [preprint] 2024:2023.06.08.544050. PMID: 37333251; PMCID: PMC10274889; DOI: 10.1101/2023.06.08.544050.
Abstract
We present open-source tools for 3D analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (i) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (ii) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated with those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widely used neuroimaging suite "FreeSurfer" ( https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools ).
57
Zappalá S, Keenan BE, Marshall D, Wu J, Evans SL, Al-Dirini RMA. In vivo strain measurements in the human buttock during sitting using MR-based digital volume correlation. J Biomech 2024;163:111913. PMID: 38181575; DOI: 10.1016/j.jbiomech.2023.111913.
Abstract
Advancements in systems for the prevention and management of pressure ulcers require a more detailed understanding of the complex response of soft tissues to compressive loads. This study aimed to quantify the progressive deformation of the buttock based on 3D measurements of soft tissue displacements from MR scans of 10 healthy subjects in a semi-recumbent position. Measurements were obtained using digital volume correlation (DVC) and released as a public dataset. A first parametric optimisation of the global registration step, aimed at aligning skeletal elements, showed acceptable Dice coefficient values (around 80%). A second parametric optimisation of the deformable registration method showed errors of 0.99 mm and 1.78 mm against two simulated fields with magnitudes of 7.30 ± 3.15 mm and 19.37 ± 9.58 mm, respectively, generated with a finite element model of the buttock under sitting loads. The measurements quantified the slide of the gluteus maximus away from the ischial tuberosity (IT, average 13.74 mm), which had previously been identified only qualitatively in the literature, highlighting the importance of the ischial bursa in allowing sliding. The spatial evolution of the maximum shear strain along a path from the IT to the seating interface showed a peak of compression in the fat, close to the interface with the muscle. The peak values obtained were above damage thresholds proposed in the literature. These results show the complexity of soft tissue deformation in the buttock and the need for further investigations aimed at isolating factors such as tissue geometry, duration and extent of load, sitting posture, and tissue properties.
Affiliation(s)
- Stefano Zappalá
- School of Computer Science and Informatics, Cardiff University, Cardiff, UK; Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, UK
- David Marshall
- School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Jing Wu
- School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Sam L Evans
- School of Engineering, Cardiff University, Cardiff, UK
- Rami M A Al-Dirini
- College of Science and Engineering, Flinders University of South Australia, Adelaide, Australia
58
Tan Z, Shi F, Zhou Y, Wang J, Wang M, Peng Y, Xu K, Liu M, Chen X. A Multi-Scale Fusion and Transformer Based Registration Guided Speckle Noise Reduction for OCT Images. IEEE Transactions on Medical Imaging 2024;43:473-488. PMID: 37643098; DOI: 10.1109/tmi.2023.3309813.
Abstract
Optical coherence tomography (OCT) images are inevitably affected by speckle noise because OCT is based on low-coherence interferometry. Multi-frame averaging is an effective way to reduce speckle noise, but the misalignment between frames must be corrected before averaging. In this paper, to reduce the misalignment introduced during acquisition, a novel multi-scale fusion and Transformer-based method (MsFTMorph) is proposed for deformable retinal OCT image registration. The proposed method captures global connectivity and locality with a convolutional vision transformer and incorporates a multi-resolution fusion strategy for learning the global affine transformation. Comparative experiments with other state-of-the-art registration methods demonstrate that the proposed method achieves higher registration accuracy. Guided by the registration, subsequent multi-frame averaging reduces speckle noise better: the noise is suppressed while edges are preserved. In addition, the proposed method generalizes well across domains and can be applied directly to images acquired by different scanners under different modes.
59
Alvarez P, El Mouss M, Calka M, Belme A, Berillon G, Brige P, Payan Y, Perrier P, Vialet A. Predicting primate tongue morphology based on geometrical skull matching. A first step towards an application on fossil hominins. PLoS Comput Biol 2024;20:e1011808. PMID: 38252664; PMCID: PMC10833839; DOI: 10.1371/journal.pcbi.1011808.
Abstract
As part of a long-term research project aiming at generating a biomechanical model of a fossil human tongue from a carefully designed 3D Finite Element mesh of a living human tongue, we present a computer-based method that optimally registers 3D CT images of the head and neck of the living human into similar images of another primate. We quantitatively evaluate the method on a baboon. The method generates a geometric deformation field which is used to build up a 3D Finite Element mesh of the baboon tongue. In order to assess the method's ability to generate a realistic tongue from bony structure information alone, as would be the case for fossil humans, its performance is evaluated and compared under two conditions in which different anatomical information is available: (1) combined information from soft-tissue and bony structures; (2) information from bony structures alone. An Uncertainty Quantification method is used to evaluate the sensitivity of the transformation to two crucial parameters, namely the resolution of the transformation grid and the weight of a smoothness constraint applied to the transformation, and to determine the best possible meshes. In both conditions the baboon tongue morphology is realistically predicted, evidencing that bony structures alone provide enough relevant information to generate soft tissue.
Affiliation(s)
- Pablo Alvarez
- Sorbonne Université, Institut des Sciences du Calcul et des Données, Paris, France
- Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC, Grenoble, France
- Marouane El Mouss
- Sorbonne Université, Institut des Sciences du Calcul et des Données, Paris, France
- Maxime Calka
- Sorbonne Université, Institut des Sciences du Calcul et des Données, Paris, France
- Anca Belme
- Sorbonne Université, Institut des Sciences du Calcul et des Données, Paris, France
- Sorbonne Université, Institut Jean Le Rond d’Alembert, UMR 7190, Paris, France
- Gilles Berillon
- Muséum national d’Histoire naturelle, UMR 7194 - Histoire naturelle de l’Homme préhistorique, Paris, France
- Pauline Brige
- Laboratoire d’Imagerie Interventionnelle Expérimentale, CERIMED, Marseille, France
- Yohan Payan
- Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC, Grenoble, France
- Pascal Perrier
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Amélie Vialet
- Sorbonne Université, Institut des Sciences du Calcul et des Données, Paris, France
- Muséum national d’Histoire naturelle, UMR 7194 - Histoire naturelle de l’Homme préhistorique, Paris, France
|
60
Zheng JQ, Wang Z, Huang B, Lim NH, Papież BW. Residual Aligner-based Network (RAN): Motion-separable structure for coarse-to-fine discontinuous deformable registration. Med Image Anal 2024;91:103038. PMID: 38000258; DOI: 10.1016/j.media.2023.103038.
Abstract
Deformable image registration, the estimation of the spatial transformation between images, is an important task in medical imaging. Deep learning techniques have been shown to perform 3D image registration efficiently. However, current registration strategies often focus only on deformation smoothness, ignoring complicated motion patterns (e.g., separation or sliding), especially at the interfaces of organs. Performance on the discontinuous motion of multiple nearby objects is therefore limited, causing undesired outcomes in clinical use, such as misidentification and mislocalization of lesions or other abnormalities. To address this issue, we propose a novel registration method: a Motion-Separable backbone captures separate motions, with a theoretical analysis providing an upper bound on the discontinuity of the motions, and a novel Residual Aligner module disentangles and refines the predicted motions across multiple neighboring objects/organs. We evaluate our method, the Residual Aligner-based Network (RAN), on abdominal computed tomography (CT) scans, where it achieves among the most accurate unsupervised inter-subject registrations for the 9 organs, with the highest-ranked registration of the veins (Dice similarity coefficient (%) / average surface distance (mm): 62%/4.9 mm for the vena cava and 34%/7.9 mm for the portal and splenic vein), with a smaller model and less computation than state-of-the-art methods. On lung CT, the RAN achieves results comparable to the best-ranked networks (94%/3.0 mm), again with fewer parameters and less computation.
Collapse
Affiliation(s)
- Jian-Qing Zheng
- The Kennedy Institute of Rheumatology, University of Oxford, UK.
| | - Ziyang Wang
- Department of Computer Science, University of Oxford, Oxford, UK
| | - Baoru Huang
- The Hamlyn Centre for Robotic Surgery, Imperial College, London, UK
| | - Ngee Han Lim
- The Kennedy Institute of Rheumatology, University of Oxford, UK
| | | |
Collapse
|
61
Chen Z, Zheng Y, Gee JC. TransMatch: A Transformer-Based Multilevel Dual-Stream Feature Matching Network for Unsupervised Deformable Image Registration. IEEE Transactions on Medical Imaging 2024;43:15-27. PMID: 37342954; DOI: 10.1109/tmi.2023.3288136.
Abstract
Feature matching, establishing the correspondence of regions (usually voxel features) between two images, is a crucial prerequisite of feature-based registration. For deformable image registration, traditional feature-based methods typically use an iterative matching strategy in which feature selection and matching are explicit; however, the feature selection schemes are often tailored to specific applications, and each registration can take several minutes. In the past few years, learning-based methods such as VoxelMorph and TransMorph have proven feasible, with performance competitive with traditional methods. These methods are usually single-stream: the two images to be registered are concatenated into a 2-channel input, and the deformation field is output directly, so the transformation of image features into inter-image matching relationships remains implicit. In this paper, we propose a novel end-to-end dual-stream unsupervised framework, named TransMatch, in which each image is fed into a separate branch that performs feature extraction independently. We then implement explicit multilevel feature matching between image pairs via the query-key matching idea of the self-attention mechanism in the Transformer model. Comprehensive experiments on three 3D brain MR datasets (LPBA40, IXI, and OASIS) show that the proposed method achieves state-of-the-art performance on several evaluation metrics compared to commonly used registration methods, including SyN, NiftyReg, VoxelMorph, CycleMorph, ViT-V-Net, and TransMorph, demonstrating the effectiveness of our model for deformable medical image registration.
62
Nenoff L, Amstutz F, Murr M, Archibald-Heeren B, Fusella M, Hussein M, Lechner W, Zhang Y, Sharp G, Vasquez Osorio E. Review and recommendations on deformable image registration uncertainties for radiotherapy applications. Phys Med Biol 2023;68:24TR01. PMID: 37972540; PMCID: PMC10725576; DOI: 10.1088/1361-6560/ad0d8a.
Abstract
Deformable image registration (DIR) is a versatile tool used in many applications in radiotherapy (RT). DIR algorithms have been implemented in many commercial treatment planning systems providing accessible and easy-to-use solutions. However, the geometric uncertainty of DIR can be large and difficult to quantify, resulting in barriers to clinical practice. Currently, there is no agreement in the RT community on how to quantify these uncertainties and determine thresholds that distinguish a good DIR result from a poor one. This review summarises the current literature on sources of DIR uncertainties and their impact on RT applications. Recommendations are provided on how to handle these uncertainties for patient-specific use, commissioning, and research. Recommendations are also provided for developers and vendors to help users to understand DIR uncertainties and make the application of DIR in RT safer and more reliable.
Collapse
Affiliation(s)
- Lena Nenoff
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
- OncoRay—National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden—Rossendorf, Dresden, Germany
- Helmholtz-Zentrum Dresden—Rossendorf, Institute of Radiooncology—OncoRay, Dresden, Germany
- Florian Amstutz
- Department of Physics, ETH Zurich, Switzerland
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Martina Murr
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tübingen, Germany
- Marco Fusella
- Department of Radiation Oncology, Abano Terme Hospital, Italy
- Mohammad Hussein
- Metrology for Medical Physics, National Physical Laboratory, Teddington, United Kingdom
- Wolfgang Lechner
- Department of Radiation Oncology, Medical University of Vienna, Austria
- Ye Zhang
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Greg Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
- Eliana Vasquez Osorio
- Division of Cancer Sciences, The University of Manchester, Manchester, United Kingdom
63
Smolders A, Lomax A, Weber DC, Albertini F. Deep learning based uncertainty prediction of deformable image registration for contour propagation and dose accumulation in online adaptive radiotherapy. Phys Med Biol 2023; 68:245027. [PMID: 37820691] [DOI: 10.1088/1361-6560/ad0282]
Abstract
Objective. Online adaptive radiotherapy aims to fully leverage the advantages of highly conformal therapy by reducing anatomical and set-up uncertainty, thereby alleviating the need for robust treatments. This requires extensive automation, among which is the use of deformable image registration (DIR) for contour propagation and dose accumulation. However, inconsistencies in DIR solutions between different algorithms have caused distrust, hampering its direct clinical use. This work aims to enable the clinical use of DIR by developing deep learning methods to predict DIR uncertainty and propagating it into clinically usable metrics. Approach. Supervised and unsupervised neural networks were trained to predict the Gaussian uncertainty of a given deformable vector field (DVF). Since the two methods rely on different assumptions, their predictions differ and were further merged into a combined model. The resulting normally distributed DVFs can be directly sampled to propagate the uncertainty into contour and accumulated dose uncertainty. Main results. The unsupervised and combined models can accurately predict the uncertainty in the manually annotated landmarks on the DIRLAB dataset. Furthermore, for 5 patients with lung cancer, the propagation of the predicted DVF uncertainty into contour uncertainty yielded for both methods an expected calibration error of less than 3%. Additionally, the probabilistically accumulated dose-volume histograms (DVHs) encompass well the accumulated proton therapy doses computed with 5 different DIR algorithms. It was additionally shown that the unsupervised model can be used for different DIR algorithms without the need for retraining. Significance. Our work presents first-of-a-kind deep learning methods to predict the uncertainty of the DIR process. The methods are fast, yield high-quality uncertainty estimates, and are usable for different algorithms and applications. This allows clinics to use DIR uncertainty in their workflows without the need to change their DIR implementation.
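The sampling step can be sketched in a few lines of numpy: draw deformation vector fields from the predicted per-voxel Gaussian, warp a binary structure mask with each draw, and average the warped masks into a voxel-wise probability map. The toy mask, the constant one-voxel standard deviation, and the nearest-neighbour warp below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sample_contour_probability(mask, dvf_mean, dvf_std, n_samples=200, seed=0):
    """Warp a binary contour mask with DVFs drawn from a per-voxel Gaussian
    N(dvf_mean, dvf_std^2) and return the voxel-wise probability of
    belonging to the structure."""
    rng = np.random.default_rng(seed)
    h, w = mask.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    acc = np.zeros_like(mask, dtype=float)
    for _ in range(n_samples):
        dvf = rng.normal(dvf_mean, dvf_std)          # one sampled DVF, shape (2, h, w)
        # nearest-neighbour pull-back of the mask along the sampled DVF
        yy = np.clip(np.rint(ys + dvf[0]).astype(int), 0, h - 1)
        xx = np.clip(np.rint(xs + dvf[1]).astype(int), 0, w - 1)
        acc += mask[yy, xx]
    return acc / n_samples

mask = np.zeros((16, 16)); mask[4:12, 4:12] = 1.0    # toy square "structure"
mean = np.zeros((2, 16, 16))                          # unbiased mean DVF
std = np.full((2, 16, 16), 1.0)                       # one voxel of uncertainty
prob = sample_contour_probability(mask, mean, std)
# deep inside the structure the probability stays near 1;
# at the boundary it falls off smoothly
```

Thresholding the probability map at different levels then yields confidence bands around the propagated contour.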
Affiliation(s)
- A Smolders
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
- Department of Physics, ETH Zurich, Switzerland
- A Lomax
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
- Department of Physics, ETH Zurich, Switzerland
- D C Weber
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
- Department of Radiation Oncology, University Hospital Zurich, Switzerland
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- F Albertini
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
64
Chrisochoides N, Liu Y, Drakopoulos F, Kot A, Foteinos P, Tsolakis C, Billias E, Clatz O, Ayache N, Fedorov A, Golby A, Black P, Kikinis R. Comparison of physics-based deformable registration methods for image-guided neurosurgery. Front Digit Health 2023; 5:1283726. [PMID: 38144260] [PMCID: PMC10740151] [DOI: 10.3389/fdgth.2023.1283726]
Abstract
This paper compares three finite element-based methods used in a physics-based non-rigid registration approach and reports on the progress made over the last 15 years. Large brain shifts caused by brain tumor removal affect registration accuracy by creating point and element outliers. A combination of approximation- and geometry-based point and element outlier rejection improves on the rigid registration error by 2.5 mm and meets the real-time constraint (4 min). In addition, the paper raises several questions and presents two open problems for the robust estimation and improvement of registration error in the presence of outliers due to sparse, noisy, and incomplete data. It concludes with preliminary results on leveraging quantum computing, a promising new technology for computationally intensive problems such as feature detection and block matching, in addition to the finite element solver; together, these three account for 75% of the computing time in deformable registration.
Collapse
Affiliation(s)
- Nikos Chrisochoides
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Yixun Liu
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Fotis Drakopoulos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Andriy Kot
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Panos Foteinos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Christos Tsolakis
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Emmanuel Billias
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Olivier Clatz
- Inria, French Research Institute for Digital Science, Sophia Antipolis, Valbonne, France
- Nicholas Ayache
- Inria, French Research Institute for Digital Science, Sophia Antipolis, Valbonne, France
- Andrey Fedorov
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA, United States
- Alex Golby
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA, United States
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA, United States
- Peter Black
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA, United States
- Ron Kikinis
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA, United States
65
Liang X, Lin S, Liu F, Schreiber D, Yip M. ORRN: An ODE-Based Recursive Registration Network for Deformable Respiratory Motion Estimation With Lung 4DCT Images. IEEE Trans Biomed Eng 2023; 70:3265-3276. [PMID: 37279120] [DOI: 10.1109/tbme.2023.3280463]
Abstract
OBJECTIVE Deformable Image Registration (DIR) plays a significant role in quantifying deformation in medical data. Recent deep learning methods have shown promising accuracy and speedup for registering a pair of medical images. However, in 4D (3D + time) medical data, organ motion, such as respiratory motion and heart beating, cannot be effectively modeled by pair-wise methods, because they are optimized for image pairs and do not capture the organ motion patterns present in 4D data. METHODS This article presents ORRN, an Ordinary Differential Equation (ODE)-based recursive image registration network. Our network learns to estimate time-varying voxel velocities for an ODE that models deformation in 4D image data. It adopts a recursive registration strategy to progressively estimate a deformation field through ODE integration of voxel velocities. RESULTS We evaluate the proposed method on two publicly available lung 4DCT datasets, DIRLab and CREATIS, for two tasks: 1) registering all images to the extreme inhale image for 3D+t deformation tracking and 2) registering extreme exhale to extreme inhale phase images. Our method outperforms other learning-based methods in both tasks, producing the smallest target registration errors of 1.24 mm and 1.26 mm, respectively. Additionally, it produces less than 0.001% unrealistic image folding, and the computation takes less than 1 s for each CT volume. CONCLUSION ORRN demonstrates promising registration accuracy, deformation plausibility, and computational efficiency on group-wise and pair-wise registration tasks. SIGNIFICANCE It has significant implications in enabling fast and accurate respiratory motion estimation for treatment planning in radiation therapy or robot motion planning in thoracic needle insertion.
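The central mechanism — building the deformation by integrating estimated voxel velocities through an ODE — can be sketched in 1D with plain forward-Euler steps. The decaying drift velocity below is a made-up stand-in for the network's predicted velocity field:

```python
import numpy as np

def integrate_deformation(x0, velocity, t0=0.0, t1=1.0, n_steps=50):
    """Forward-Euler integration of dphi/dt = v(phi, t), starting from the
    identity positions x0. Returns the final mapped positions."""
    phi = x0.astype(float).copy()
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        phi += dt * velocity(phi, t)   # one Euler step along the velocity
        t += dt
    return phi

# toy time-varying velocity: a uniform drift that decays over time
velocity = lambda x, t: np.full_like(x, 2.0 * (1.0 - t))
x0 = np.linspace(0.0, 10.0, 11)
phi = integrate_deformation(x0, velocity)
# the exact displacement is the integral of 2(1-t) over [0, 1], i.e. 1,
# so phi is approximately x0 + 1 up to Euler discretization error
```

ORRN itself integrates learned 3D velocity fields recursively across the 4DCT phases; the sketch only shows the numerical integration step.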
66
Wang AQ, Yu EM, Dalca AV, Sabuncu MR. A robust and interpretable deep learning framework for multi-modal registration via keypoints. Med Image Anal 2023; 90:102962. [PMID: 37769550] [PMCID: PMC10591968] [DOI: 10.1016/j.media.2023.102962]
Abstract
We present KeyMorph, a deep learning-based image registration framework that relies on automatically detecting corresponding keypoints. State-of-the-art deep learning methods for registration are often not robust to large misalignments, are not interpretable, and do not incorporate the symmetries of the problem. In addition, most models produce only a single prediction at test time. Our core insight, which addresses these shortcomings, is that corresponding keypoints between images can be used to obtain the optimal transformation via a differentiable closed-form expression. We use this observation to drive the end-to-end learning of keypoints tailored for the registration task, without knowledge of ground-truth keypoints. This framework not only leads to substantially more robust registration but also yields better interpretability, since the keypoints reveal which parts of the image are driving the final alignment. Moreover, KeyMorph can be designed to be equivariant under image translations and/or symmetric with respect to the input image ordering. Finally, we show how multiple deformation fields can be computed efficiently and in closed form at test time, corresponding to different transformation variants. We demonstrate the proposed framework in solving 3D affine and spline-based registration of multi-modal brain MRI scans. In particular, we show registration accuracy that surpasses current state-of-the-art methods, especially in the context of large displacements. Our code is available at https://github.com/alanqrwang/keymorph.
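The closed-form step the abstract refers to is, in the affine case, an ordinary least-squares fit: given matched keypoints, the transform that best aligns them has an analytic solution. A minimal numpy sketch under that assumption, not the authors' implementation:

```python
import numpy as np

def affine_from_keypoints(src, dst):
    """Least-squares affine transform A ((d+1) x (d+1), homogeneous) mapping
    src keypoints (n, d) onto dst keypoints (n, d) in closed form."""
    n, d = src.shape
    src_h = np.hstack([src, np.ones((n, 1))])     # homogeneous coordinates
    # solve src_h @ M = dst for M of shape (d+1, d) in the least-squares sense
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    A = np.eye(d + 1)
    A[:d, :] = M.T                                # [linear part | translation]
    return A

# matched keypoints related by a known rotation + translation
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 2))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + np.array([1.0, -2.0])
A = affine_from_keypoints(src, dst)
# the recovered linear part matches R and the translation matches (1, -2)
```

Because the solve is differentiable in the keypoint coordinates, gradients can flow through it to train the keypoint detector end to end, which is the core of the framework.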
Affiliation(s)
- Alan Q Wang
- School of Electrical and Computer Engineering, Cornell University and Cornell Tech, New York, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York, NY 10065, USA
- Evan M Yu
- Iterative Scopes, Cambridge, MA 02139, USA
- Adrian V Dalca
- Computer Science and Artificial Intelligence Lab at the Massachusetts Institute of Technology, Cambridge, MA 02139, USA; A.A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital, Charlestown, MA 02129, USA
- Mert R Sabuncu
- School of Electrical and Computer Engineering, Cornell University and Cornell Tech, New York, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York, NY 10065, USA
67
Lotz J, Weiss N, van der Laak J, Heldmann S. Comparison of consecutive and restained sections for image registration in histopathology. J Med Imaging (Bellingham) 2023; 10:067501. [PMID: 38074626] [PMCID: PMC10704256] [DOI: 10.1117/1.jmi.10.6.067501]
Abstract
Significance Although the registration of restained sections allows nucleus-level alignment that enables a direct analysis of interacting biomarkers, consecutive sections only allow the transfer of region-level annotations. The latter can be achieved at low computational cost using coarser image resolutions. Purpose In digital histopathology, virtual multistaining is important for diagnosis and biomarker research. Additionally, it provides accurate ground truth for various deep-learning tasks. Virtual multistaining can be obtained using different stains for consecutive sections or by restaining the same section. Both approaches require image registration to compensate for tissue deformations, but little attention has been devoted to comparing their accuracy. Approach We compared affine and deformable variational image registration of consecutive and restained sections and analyzed the effect of the image resolution, which influences both accuracy and the required computational resources. The registration was applied to the automatic nonrigid histological image registration (ANHIR) challenge data (230 consecutive slide pairs) and the hyperparameters were determined. Then, without changing the parameters, the registration was applied to a newly published hybrid dataset of restained and consecutive sections (HyReCo, 86 slide pairs, 5404 landmarks). Results We obtain a median landmark error after registration of 6.5 μm (HyReCo) and 24.1 μm (ANHIR) between consecutive sections. Between restained sections, the median registration error is 2.2 and 0.9 μm in the two subsets of the HyReCo dataset. We observe that deformable registration leads to lower landmark errors than affine registration in both cases (p < 0.001), though the effect is smaller in restained sections. Conclusion Deformable registration of consecutive and restained sections is a valuable tool for the joint analysis of different stains.
Affiliation(s)
- Johannes Lotz
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Nick Weiss
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Jeroen van der Laak
- Radboud University Medical Center, Department of Pathology, Nijmegen, The Netherlands
- Linköping University, Center for Medical Image Science and Visualization, Linköping, Sweden
- Stefan Heldmann
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
68
Abbasi S, Mehdizadeh A, Boveiri HR, Mosleh Shirazi MA, Javidan R, Khayami R, Tavakoli M. Unsupervised deep learning registration model for multimodal brain images. J Appl Clin Med Phys 2023; 24:e14177. [PMID: 37823748] [PMCID: PMC10647957] [DOI: 10.1002/acm2.14177]
Abstract
Multimodal image registration is key for many clinical image-guided interventions. However, it is a challenging task because of the complicated and unknown relationships between different modalities. Currently, deep supervised learning is the state-of-the-art approach, in which registration is conducted end-to-end in one shot. However, a large amount of ground-truth data is required to improve the results of deep neural networks for registration, and supervised methods may yield models biased towards annotated structures. An alternative approach that addresses these challenges is unsupervised learning. In this study, we designed a novel deep unsupervised Convolutional Neural Network (CNN)-based model for affine computed tomography/magnetic resonance (CT/MR) co-registration of brain images. For this purpose, we created a dataset consisting of 1100 pairs of CT/MR slices from the brains of 110 neuropsychiatric patients with and without tumors. Next, 12 landmarks were selected and annotated on each slice by an experienced radiologist, enabling computation of a series of evaluation metrics: target registration error (TRE), Dice similarity, Hausdorff distance, and Jaccard coefficient. The proposed method registered the multimodal images with a TRE of 9.89, Dice similarity of 0.79, Hausdorff distance of 7.15, and Jaccard coefficient of 0.75, values acceptable for clinical applications. Moreover, the approach registered the images in 203 ms, making it suitable for clinical use given the short registration time and high accuracy. The results illustrate that the proposed method achieves competitive performance against related approaches in both computation time and evaluation metrics.
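Target registration error, the headline metric reported above, is simply the distance between corresponding landmarks after applying the estimated transform. A minimal sketch (the landmark values and the shift transform are hypothetical, and the units follow the landmark coordinates):

```python
import numpy as np

def target_registration_error(moving_pts, fixed_pts, transform):
    """Mean Euclidean distance between transformed moving landmarks and
    their corresponding fixed landmarks."""
    warped = transform(moving_pts)
    return float(np.mean(np.linalg.norm(warped - fixed_pts, axis=1)))

# 12 hypothetical landmark pairs related by a known shift of 3 along x
fixed = np.arange(24, dtype=float).reshape(12, 2)
moving = fixed - np.array([3.0, 0.0])
identity = lambda p: p
shift = lambda p: p + np.array([3.0, 0.0])
tre_before = target_registration_error(moving, fixed, identity)  # 3.0
tre_after = target_registration_error(moving, fixed, shift)      # 0.0
```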
Affiliation(s)
- Samaneh Abbasi
- Department of Medical Physics and Engineering, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Alireza Mehdizadeh
- Research Center for Neuromodulation and Pain, Shiraz University of Medical Sciences, Shiraz, Iran
- Hamid Reza Boveiri
- Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Mohammad Amin Mosleh Shirazi
- Ionizing and Non-Ionizing Radiation Protection Research Center, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Reza Javidan
- Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Raouf Khayami
- Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Meysam Tavakoli
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
69
Windolf C, Yu H, Paulk AC, Meszéna D, Muñoz W, Boussard J, Hardstone R, Caprara I, Jamali M, Kfir Y, Xu D, Chung JE, Sellers KK, Ye Z, Shaker J, Lebedeva A, Raghavan M, Trautmann E, Melin M, Couto J, Garcia S, Coughlin B, Horváth C, Fiáth R, Ulbert I, Movshon JA, Shadlen MN, Churchland MM, Churchland AK, Steinmetz NA, Chang EF, Schweitzer JS, Williams ZM, Cash SS, Paninski L, Varol E. DREDge: robust motion correction for high-density extracellular recordings across species. bioRxiv 2023:2023.10.24.563768. [PMID: 37961359] [PMCID: PMC10634799] [DOI: 10.1101/2023.10.24.563768]
Abstract
High-density microelectrode arrays (MEAs) have opened new possibilities for systems neuroscience in human and non-human animals, but brain tissue motion relative to the array poses a challenge for downstream analyses, particularly in human recordings. We introduce DREDge (Decentralized Registration of Electrophysiology Data), a robust algorithm well suited to the registration of noisy, nonstationary extracellular electrophysiology recordings. In addition to estimating motion from spikes in the action potential (AP) frequency band, DREDge enables automated tracking of motion at high temporal resolution in the local field potential (LFP) frequency band. In human intraoperative recordings, which often feature fast (period <1 s) motion, DREDge correction in the LFP band enabled reliable recovery of evoked potentials and significantly reduced single-unit spike shape variability and spike sorting error. Applying DREDge to recordings made during deep probe insertions in nonhuman primates demonstrated the possibility of tracking probe motion over centimeters across several brain regions while simultaneously mapping single-unit electrophysiological features. DREDge reliably delivered improved motion correction in acute mouse recordings, especially in those made with a recent ultra-high-density probe. We also implemented a procedure for applying DREDge to recordings made across tens of days in chronic implantations in mice, reliably yielding stable motion tracking despite changes in neural activity across experimental sessions. Together, these advances enable automated, scalable registration of electrophysiological data across multiple species, probe types, and drift cases, providing a stable foundation for downstream scientific analyses of these rich datasets.
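The "decentralized" idea is to estimate a displacement between every pair of time bins and then solve a single least-squares problem for the per-bin motion trace, so that errors in individual pairwise estimates average out. A toy numpy sketch (DREDge itself derives the pairwise shifts by correlating activity rasters; here they are given directly):

```python
import numpy as np

def decentralized_positions(pairwise_disp):
    """Given a (T, T) matrix D with D[i, j] approximating p[i] - p[j],
    recover positions p by least squares over all pairs. The global offset
    is unobservable, so the solution is pinned to zero mean."""
    # antisymmetrize the noisy estimates; under the zero-mean gauge the
    # row mean of the antisymmetric part is the least-squares solution
    antisym = 0.5 * (pairwise_disp - pairwise_disp.T)
    return antisym.mean(axis=1)

true = np.array([0.0, 1.0, 3.0, 2.0, 0.5])       # toy motion trace
D = true[:, None] - true[None, :]                # ideal pairwise shifts
rng = np.random.default_rng(1)
D_noisy = D + rng.normal(scale=0.05, size=D.shape)
est = decentralized_positions(D_noisy)
# est recovers the centred true trace from noisy pairwise estimates
```

Pinning the mean is the natural gauge choice, since pairwise differences carry no information about an overall shift of the whole trace.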
Affiliation(s)
- Charlie Windolf
- Department of Statistics, Columbia University
- Zuckerman Institute, Columbia University
- Han Yu
- Zuckerman Institute, Columbia University
- Department of Electrical Engineering, Columbia University
- Angelique C Paulk
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School
- Domokos Meszéna
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- William Muñoz
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School
- Julien Boussard
- Department of Statistics, Columbia University
- Zuckerman Institute, Columbia University
- Richard Hardstone
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School
- Irene Caprara
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School
- Mohsen Jamali
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School
- Yoav Kfir
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School
- Duo Xu
- Weill Institute for Neurosciences, University of California San Francisco
- Department of Neurological Surgery, University of California San Francisco
- Jason E Chung
- Department of Neurological Surgery, University of California San Francisco
- Kristin K Sellers
- Weill Institute for Neurosciences, University of California San Francisco
- Department of Neurological Surgery, University of California San Francisco
- Zhiwen Ye
- Department of Biological Structure, University of Washington
- Jordan Shaker
- Department of Biological Structure, University of Washington
- Eric Trautmann
- Department of Neuroscience, Columbia University Medical Center
- Zuckerman Institute, Columbia University
- Grossman Center for the Statistics of Mind, Columbia University
- Max Melin
- David Geffen School of Medicine, University of California Los Angeles
- João Couto
- David Geffen School of Medicine, University of California Los Angeles
- Samuel Garcia
- Centre National de la Recherche Scientifique, Centre de Recherche en Neurosciences de Lyon
- Brian Coughlin
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School
- Csaba Horváth
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- Richárd Fiáth
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- István Ulbert
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- Michael N Shadlen
- Zuckerman Institute, Columbia University
- Howard Hughes Medical Institute
- Anne K Churchland
- David Geffen School of Medicine, University of California Los Angeles
- Edward F Chang
- Weill Institute for Neurosciences, University of California San Francisco
- Department of Neurological Surgery, University of California San Francisco
- Jeffrey S Schweitzer
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School
- Ziv M Williams
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School
- Sydney S Cash
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School
- Liam Paninski
- Department of Statistics, Columbia University
- Zuckerman Institute, Columbia University
- Department of Neuroscience, Columbia University Medical Center
- Grossman Center for the Statistics of Mind, Columbia University
- Erdem Varol
- Department of Statistics, Columbia University
- Zuckerman Institute, Columbia University
- Department of Computer Science & Engineering, New York University
70
Joshi A, Hong Y. R2Net: Efficient and flexible diffeomorphic image registration using Lipschitz continuous residual networks. Med Image Anal 2023; 89:102917. [PMID: 37598607] [DOI: 10.1016/j.media.2023.102917]
Abstract
Classical diffeomorphic image registration methods, while accurate, face the challenge of high computational cost. Deep learning based approaches provide a fast alternative; however, most existing deep solutions either lose the good property of diffeomorphism or have limited flexibility to capture large deformations, under the assumption that deformations are driven by stationary velocity fields (SVFs). Also, the adopted scaling-and-squaring technique for integrating SVFs is time- and memory-consuming, hindering deep methods from handling large image volumes. In this paper, we present an unsupervised diffeomorphic image registration framework that uses deep residual networks (ResNets) as numerical approximations of the underlying continuous diffeomorphic setting governed by ordinary differential equations, parameterized by either SVFs or time-varying (non-stationary) velocity fields. This flexible parameterization in our Residual Registration Network (R2Net) not only gives the model the ability to capture large deformations but also reduces the time and memory cost of integrating velocity fields for deformation generation. We also introduce a Lipschitz continuity constraint into the ResNet block to help achieve diffeomorphic deformations. To enhance the ability of our model to handle images with large volume sizes, we employ a hierarchical extension with a multi-phase learning strategy to solve the image registration task in a coarse-to-fine fashion. We demonstrate our models on four 3D image registration tasks with a wide range of anatomies, including brain MRIs, cine cardiac MRIs, and lung CT scans. Compared to the classical methods SyN and diffeomorphic VoxelMorph, our models achieve comparable or better registration accuracy with much smoother deformations. Our source code is available online at https://github.com/ankitajoshi15/R2Net.
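The scaling-and-squaring technique mentioned above integrates a stationary velocity field by halving it n times and then composing the resulting small displacement with itself n times. A 1D numpy sketch (the linear velocity field is a toy choice so the exact flow is known):

```python
import numpy as np

def scaling_and_squaring(x, v, n=6):
    """Integrate a stationary velocity field v(x) for unit time by scaling
    and squaring: start from the small displacement v / 2**n, then compose
    the map with itself n times, approximating phi = exp(v)."""
    disp = v / (2 ** n)                      # scaled-down initial displacement
    for _ in range(n):
        # self-composition phi(phi(x)): interpolate the displacement field
        disp = disp + np.interp(x + disp, x, disp)
    return x + disp

x = np.linspace(0.0, 1.0, 201)
v = 0.5 * x                                  # linear SVF: dphi/dt = 0.5 * phi
phi = scaling_and_squaring(x, v)
# the exact unit-time flow of x' = 0.5 x is x * exp(0.5), which the
# composition approximates away from the domain boundary
```

Each squaring doubles the integration time but requires interpolating the full displacement field, which is what makes the technique memory-hungry on large volumes — the cost R2Net aims to reduce.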
Affiliation(s)
- Ankita Joshi
- School of Computing, University of Georgia, Athens, 30602, USA
- Yi Hong
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
71
Gan Z, Sun W, Liao K, Yang X. Probabilistic Modeling for Image Registration Using Radial Basis Functions: Application to Cardiac Motion Estimation. IEEE Trans Neural Netw Learn Syst 2023; 34:7324-7338. [PMID: 35073271] [DOI: 10.1109/tnnls.2022.3141119]
Abstract
Cardiovascular diseases (CVDs) are the leading cause of death, affecting cardiac dynamics over the cardiac cycle. Estimation of cardiac motion plays an essential role in many clinical tasks. This article proposes a probabilistic framework for image registration using compact-support radial basis functions (CSRBFs) to estimate cardiac motion. A variational inference-based generative model with convolutional neural networks (CNNs) is proposed to learn the probabilistic coefficients of the CSRBFs used in image deformation. We designed two networks to estimate the deformation coefficients of the CSRBFs: the first solves the spatial transformation using given control points, and the second models the transformation using drifting control points. The given-point-based network estimates the probabilistic coefficients of the control points, whereas the drifting-point-based model predicts the probabilistic coefficients and the spatial distribution of the control points simultaneously. To regularize these coefficients, we derive the bending energy (BE) in the variational bound by defining the covariance of the coefficients. The proposed framework has been evaluated on cardiac motion estimation and the calculation of myocardial strain. In the experiments, 1409 slice pairs of the end-diastolic (ED) and end-systolic (ES) phases in 4D cardiac magnetic resonance (MR) images, selected from three public datasets, are employed to evaluate our networks. The experimental results show that our framework outperforms state-of-the-art registration methods in terms of deformation smoothness and registration accuracy.
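The deformation model itself is easy to sketch: a compactly supported RBF contributes a bump that vanishes outside its support radius, so each control point deforms the image only locally. A numpy sketch using the Wendland C2 function, one common CSRBF choice (the fixed coefficients below stand in for the network's probabilistic predictions):

```python
import numpy as np

def wendland_c2(r):
    """Wendland C2 compactly supported RBF: (1-r)^4 (4r+1) for r < 1, else 0."""
    r = np.asarray(r)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def csrbf_deformation(points, centers, coeffs, support):
    """Displace `points` (n, 2) by sum_k coeffs[k] * psi(|x - c_k| / support),
    with centers (k, 2) and coefficient vectors coeffs (k, 2)."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    basis = wendland_c2(d / support)          # (n, k) basis matrix
    return points + basis @ coeffs

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
centers = np.array([[0.0, 0.0]])
coeffs = np.array([[1.0, 0.5]])               # one control point, one bump
warped = csrbf_deformation(pts, centers, coeffs, support=1.0)
# the far point (5, 5) lies outside the support and is left untouched
```

The compact support is what keeps the basis matrix sparse in practice and what makes the deformation local to each control point.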
72
Chrisochoides N, Fedorov A, Liu Y, Kot A, Foteinos P, Drakopoulos F, Tsolakis C, Billias E, Clatz O, Ayache N, Golby A, Black P, Kikinis R. Real-Time Dynamic Data Driven Deformable Registration for Image-Guided Neurosurgery: Computational Aspects. arXiv 2023:arXiv:2309.03336v1. [PMID: 37731651] [PMCID: PMC10508827]
Abstract
Current neurosurgical procedures utilize medical images of various modalities to enable precise localization of tumors and critical brain structures for planning accurate brain tumor resection. The difficulty of using preoperative images during surgery is caused by the intra-operative deformation of the brain tissue (brain shift), which introduces discrepancies with respect to the preoperative configuration. Intra-operative imaging allows tracking of such deformations but cannot fully substitute for the quality of the pre-operative data. Dynamic Data Driven Deformable Non-Rigid Registration (D4NRR) is a complex and time-consuming image processing operation that allows the dynamic adjustment of the pre-operative image data to account for intra-operative brain shift during surgery. This paper summarizes the computational aspects of a specific adaptive numerical approximation method and its variations for registering brain MRIs. It outlines its evolution over the last 15 years and identifies new directions for the computational aspects of the technique.
Affiliation(s)
- Nikos Chrisochoides
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
| | - Andrey Fedorov
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA
| | - Yixun Liu
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
| | - Andriy Kot
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
| | - Panos Foteinos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
| | - Fotis Drakopoulos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
| | - Christos Tsolakis
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
| | - Emmanuel Billias
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
| | - Olivier Clatz
- Inria, French Research Institute for Digital Science, Sophia Antipolis, France
| | - Nicholas Ayache
- Inria, French Research Institute for Digital Science, Sophia Antipolis, France
| | - Alex Golby
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA
| | - Peter Black
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA
| | - Ron Kikinis
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA
| |
|
73
|
Kimberly WT, Sorby-Adams AJ, Webb AG, Wu EX, Beekman R, Bowry R, Schiff SJ, de Havenon A, Shen FX, Sze G, Schaefer P, Iglesias JE, Rosen MS, Sheth KN. Brain imaging with portable low-field MRI. NATURE REVIEWS BIOENGINEERING 2023; 1:617-630. [PMID: 37705717 PMCID: PMC10497072 DOI: 10.1038/s44222-023-00086-w] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 06/06/2023] [Indexed: 09/15/2023]
Abstract
The advent of portable, low-field MRI (LF-MRI) heralds new opportunities in neuroimaging. Low power requirements and transportability have enabled scanning outside the controlled environment of a conventional MRI suite, enhancing access to neuroimaging for indications that are not well suited to existing technologies. Maximizing the information extracted from the reduced signal-to-noise ratio of LF-MRI is crucial to developing clinically useful diagnostic images. Progress in electromagnetic noise cancellation, machine-learning reconstruction from sparse k-space data, and new approaches to image enhancement has enabled these advances. Coupling technological innovation with bedside imaging creates new prospects in visualizing the healthy brain and detecting acute and chronic pathological changes. Ongoing development of hardware, improvements in pulse sequences and image reconstruction, and validation of clinical utility will continue to accelerate this field. As further innovation occurs, portable LF-MRI will facilitate the democratization of MRI and create new applications not previously feasible with conventional systems.
Affiliation(s)
- W Taylor Kimberly
- Department of Neurology and the Center for Genomic Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Annabel J Sorby-Adams
- Department of Neurology and the Center for Genomic Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Andrew G Webb
- Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
| | - Ed X Wu
- Laboratory of Biomedical Imaging and Signal Processing, Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
| | - Rachel Beekman
- Division of Neurocritical Care and Emergency Neurology, Department of Neurology, Yale New Haven Hospital and Yale School of Medicine, Yale Center for Brain & Mind Health, New Haven, CT, USA
| | - Ritvij Bowry
- Departments of Neurosurgery and Neurology, McGovern Medical School, University of Texas Health Neurosciences, Houston, TX, USA
| | - Steven J Schiff
- Department of Neurosurgery, Yale School of Medicine, New Haven, CT, USA
| | - Adam de Havenon
- Division of Vascular Neurology, Department of Neurology, Yale New Haven Hospital and Yale School of Medicine, New Haven, CT, USA
| | - Francis X Shen
- Harvard Medical School Center for Bioethics, Harvard Law School, Boston, MA, USA
- Department of Psychiatry, Massachusetts General Hospital, Boston, MA, USA
| | - Gordon Sze
- Department of Radiology, Yale New Haven Hospital and Yale School of Medicine, New Haven, CT, USA
| | - Pamela Schaefer
- Division of Neuroradiology, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Juan Eugenio Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Centre for Medical Image Computing, University College London, London, UK
- Computer Science and AI Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Matthew S Rosen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Kevin N Sheth
- Division of Neurocritical Care and Emergency Neurology, Department of Neurology, Yale New Haven Hospital and Yale School of Medicine, Yale Center for Brain & Mind Health, New Haven, CT, USA
| |
|
74
|
Fan X, Li Z, Li Z, Wang X, Liu R, Luo Z, Huang H. Automated Learning for Deformable Medical Image Registration by Jointly Optimizing Network Architectures and Objective Functions. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2023; 32:4880-4892. [PMID: 37624710 DOI: 10.1109/tip.2023.3307215] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/27/2023]
Abstract
Deformable image registration plays a critical role in many tasks of medical image analysis. A successful registration algorithm, whether derived from conventional energy optimization or from deep networks, requires tremendous effort from computer experts to design the registration energy well or to carefully tune network architectures for the medical data available in a given registration task or scenario. This paper proposes an automated learning registration algorithm (AutoReg) that cooperatively optimizes both architectures and their corresponding training objectives, enabling non-computer experts to conveniently find off-the-shelf registration algorithms for various registration scenarios. Specifically, we establish a triple-level framework that embraces the search for both network architectures and objectives with a cooperating optimization. Extensive experiments on multiple volumetric datasets and various registration scenarios demonstrate that AutoReg can automatically learn an optimal deep registration network for given volumes and achieve state-of-the-art performance. The automatically learned network also improves computational efficiency over the mainstream UNet architecture, from 0.558 to 0.270 seconds for a volume pair on the same configuration.
|
75
|
Alscher T, Erleben K, Darkner S. Collision-constrained deformable image registration framework for discontinuity management. PLoS One 2023; 18:e0290243. [PMID: 37594943 PMCID: PMC10437794 DOI: 10.1371/journal.pone.0290243] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Accepted: 08/03/2023] [Indexed: 08/20/2023] Open
Abstract
Topological changes such as sliding motion, sources, and sinks are a significant challenge in image registration. This work proposes the alternating direction method of multipliers as a general framework for constraining the registration of separate objects, each with its own deformation field, so that they do not overlap. The constraint is enforced by introducing a collision detection algorithm from the field of computer graphics, which results in a robust divide-and-conquer optimization strategy using Free-Form Deformations. A series of experiments demonstrates that the proposed framework performs better with regard to the combination of intersection prevention and image registration, including on synthetic examples containing complex displacement patterns. The results show compliance with the non-intersection constraints while preventing a decrease in registration accuracy. Furthermore, applying the proposed algorithm to the DIR-Lab data set demonstrates that the framework generalizes to real data, validated on a lung registration problem.
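The ADMM-plus-projection idea behind the collision constraint can be illustrated in one dimension, where "non-intersection" reduces to a minimum-separation constraint. This is a hedged toy sketch: the paper's Free-Form Deformations and graphics-based collision detector are omitted, and the data term, step size, and values below are invented for illustration.

```python
def project_separation(z1, z2, gap):
    """Project (z1, z2) onto the feasible set {z2 - z1 >= gap}."""
    if z2 - z1 >= gap:
        return z1, z2
    shift = (gap - (z2 - z1)) / 2.0
    return z1 - shift, z2 + shift

def admm_fit(targets, gap, rho=1.0, iters=500):
    """Fit two 1-D landmarks to targets under a minimum-separation
    constraint with scaled ADMM: data term (1/2)||x - t||^2, constraint
    enforced by projection in the z-update."""
    t1, t2 = targets
    x1 = x2 = z1 = z2 = u1 = u2 = 0.0
    for _ in range(iters):
        # x-update: proximal step on the quadratic data term
        x1 = (t1 + rho * (z1 - u1)) / (1.0 + rho)
        x2 = (t2 + rho * (z2 - u2)) / (1.0 + rho)
        # z-update: projection onto the collision-free set
        z1, z2 = project_separation(x1 + u1, x2 + u2, gap)
        # dual (u) update
        u1 += x1 - z1
        u2 += x2 - z2
    return z1, z2

# targets overlap (separation 0.5 < gap 1.0); ADMM pushes them apart
pos = admm_fit((0.0, 0.5), gap=1.0)
```

The z-iterate is feasible at every step, which mirrors how the framework guarantees non-intersection while the data term pulls the registration toward image similarity.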
Affiliation(s)
- Thomas Alscher
- Department of Computer Science, University of Copenhagen, Copenhagen, Region Hovedstaden, Denmark
| | - Kenny Erleben
- Department of Computer Science, University of Copenhagen, Copenhagen, Region Hovedstaden, Denmark
| | - Sune Darkner
- Department of Computer Science, University of Copenhagen, Copenhagen, Region Hovedstaden, Denmark
| |
|
76
|
Abstract
The aim of this review is to provide a comprehensive survey of statistical challenges in neuroimaging data analysis, from neuroimaging techniques to large-scale neuroimaging studies and statistical learning methods. We briefly review eight popular neuroimaging techniques and their potential applications in neuroscience research and clinical translation. We delineate four themes of neuroimaging data and review major image processing analysis methods for processing neuroimaging data at the individual level. We briefly review four large-scale neuroimaging-related studies and a consortium on imaging genomics and discuss four themes of neuroimaging data analysis at the population level. We review nine major population-based statistical analysis methods and their associated statistical challenges and present recent progress in statistical methodology to address these challenges.
Affiliation(s)
- Hongtu Zhu
- Department of Biostatistics, Department of Statistics, Department of Genetics, and Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, USA;
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina, USA
| | - Tengfei Li
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina, USA
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina, USA
| | - Bingxin Zhao
- Department of Statistics and Data Science, University of Pennsylvania, Philadelphia, Pennsylvania, USA
| |
|
77
|
Shang J, Huang P, Zhang K, Dai J, Yan H. On-board MRI image compression using video encoder for MR-guided radiotherapy. Quant Imaging Med Surg 2023; 13:5207-5217. [PMID: 37581063 PMCID: PMC10423359 DOI: 10.21037/qims-22-1378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Accepted: 06/01/2023] [Indexed: 08/16/2023]
Abstract
Background Magnetic resonance imaging (MRI) is currently used for online target monitoring and plan adaptation in modern image-guided radiotherapy. However, storing the large amount of data accumulated during patient treatment becomes an issue. In this study, the feasibility of compressing MRI images accumulated in MR-guided radiotherapy using video encoders was investigated. Methods Two sorting algorithms were employed to reorder the slices in multiple MRI sets for the input sequence of the video encoder. Three cropping algorithms were used to auto-segment regions of interest for separate data storage. Four video encoders were investigated: motion-JPEG (M-JPEG), MPEG-4 (MP4), Advanced Video Coding (AVC or H.264), and High Efficiency Video Coding (HEVC or H.265). The compression performance of the video encoders was evaluated by compression ratio and time, while their restoration accuracy was evaluated by mean square error (MSE), peak signal-to-noise ratio (PSNR), and the video quality metric (VQM). The performances of all combinations of video encoders, sorting methods, and cropping algorithms were investigated and their effects statistically analyzed. Results The compression ratios of MP4, H.264, and H.265 with both sorting methods were improved by 26% and 5%, 42% and 27%, and 72% and 43%, respectively, compared with those of M-JPEG. The slice-prioritized sorting method showed a higher compression ratio than the location-prioritized sorting method for MP4 (P=0.00000), H.264 (P=0.00012), and H.265 (P=0.00000). The compression ratios of H.265 improved significantly with the application of the morphology algorithm (P=0.01890 and P=0.00530), the flood-fill algorithm (P=0.00510 and P=0.00020), and the level-set algorithm (P=0.02800 and P=0.00830) for both sorting methods. Among the four video encoders, H.265 showed the best compression ratio and restoration accuracy.
Conclusions The compression ratio and restoration accuracy of video encoders using inter-frame coding (MP4, H.264, and H.265) were higher than those of the video encoder using intra-frame coding (M-JPEG). It is feasible to implement video encoders using inter-frame coding for high-performance MRI data storage in MR-guided radiotherapy.
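The MSE and PSNR restoration metrics used in this evaluation are straightforward to reproduce; a minimal sketch follows (the pixel values and grey-level range are illustrative, not taken from the study).

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means better restoration."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * math.log10(max_val ** 2 / m)

original = [10, 20, 30, 40]
restored = [26, 36, 46, 56]   # uniform error of 16 grey levels -> MSE = 256
quality = psnr(original, restored)
```

Lossless restoration (MSE of zero) drives PSNR to infinity, which is why PSNR is reported alongside MSE only for lossy codecs such as the inter-frame encoders compared here.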
Affiliation(s)
- Jiawen Shang
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Peng Huang
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Ke Zhang
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Jianrong Dai
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Hui Yan
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| |
|
78
|
Deng L, Zhang Y, Wang J, Huang S, Yang X. Improving performance of medical image alignment through super-resolution. Biomed Eng Lett 2023; 13:397-406. [PMID: 37519883 PMCID: PMC10382383 DOI: 10.1007/s13534-023-00268-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Revised: 01/29/2023] [Accepted: 02/01/2023] [Indexed: 02/21/2023] Open
Abstract
Medical image alignment is an important tool for tracking patient conditions, but the quality of alignment is influenced by the effectiveness of low-dose cone-beam CT (CBCT) imaging and by patient characteristics. To address these two issues, we propose an unsupervised alignment method that incorporates a super-resolution preprocessing step. We constructed the model on a private clinical dataset and validated the enhancement that super-resolution brings to alignment using clinical and public data. Across all three experiments, we demonstrate that higher-resolution data yields better results in the alignment process. To fully constrain similarity and structure, a new loss function is proposed: the Pearson correlation coefficient combined with regional mutual information. In all test samples, the newly proposed loss function achieves better results than the common loss function and improves alignment accuracy. Subsequent experiments verified that, combined with the newly proposed loss function, the super-resolution-processed data boosts alignment accuracy by up to 9.58%. Moreover, this boost is not limited to a single model but is effective across different alignment models. These experiments demonstrate that the unsupervised alignment method with super-resolution preprocessing proposed in this study effectively improves alignment and plays an important role in tracking different patient conditions over time.
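The Pearson-correlation half of the proposed loss can be sketched as follows; the regional mutual information term is omitted, and the `eps` numerical guard is an illustrative assumption rather than the paper's formulation.

```python
import numpy as np

def pearson_loss(x, y, eps=1e-8):
    """1 - Pearson correlation between two flattened images: near 0 for
    perfectly correlated inputs, near 2 for perfectly anti-correlated."""
    xc = x - x.mean()
    yc = y - y.mean()
    r = (xc * yc).sum() / (np.sqrt((xc ** 2).sum() * (yc ** 2).sum()) + eps)
    return 1.0 - r

a = np.array([1.0, 2.0, 3.0, 4.0])
loss_same = pearson_loss(a, a)          # near 0: identical images
loss_anti = pearson_loss(a, a[::-1])    # near 2: reversed intensities
```

Because Pearson correlation is invariant to affine intensity changes, such a term tolerates the global intensity shifts common between CBCT and planning CT, while the mutual-information term handles more local statistics.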
Affiliation(s)
- Liwei Deng
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080 Heilongjiang China
| | - Yuanzhi Zhang
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080 Heilongjiang China
| | - Jing Wang
- Faculty of Rehabilitation Medicine, Biofeedback Laboratory, Guangzhou Xinhua University, Guangzhou, 510520 Guangdong China
| | - Sijuan Huang
- Department of Radiation Oncology State Key Laboratory of Oncology in South China Collaborative Innovation Center for Cancer Medicine Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060 Guangdong China
| | - Xin Yang
- Department of Radiation Oncology State Key Laboratory of Oncology in South China Collaborative Innovation Center for Cancer Medicine Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060 Guangdong China
| |
|
79
|
Rivas-Villar D, Motschi AR, Pircher M, Hitzenberger CK, Schranz M, Roberts PK, Schmidt-Erfurth U, Bogunović H. Automated inter-device 3D OCT image registration using deep learning and retinal layer segmentation. BIOMEDICAL OPTICS EXPRESS 2023; 14:3726-3747. [PMID: 37497506 PMCID: PMC10368062 DOI: 10.1364/boe.493047] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Revised: 05/18/2023] [Accepted: 05/26/2023] [Indexed: 07/28/2023]
Abstract
Optical coherence tomography (OCT) is the most widely used imaging modality in ophthalmology. There are multiple variations of OCT imaging capable of producing complementary information. Thus, registering these complementary volumes is desirable in order to combine their information. In this work, we propose a novel automated pipeline to register OCT images produced by different devices. This pipeline is based on two steps: a multi-modal 2D en-face registration based on deep learning, and a Z-axis (axial) registration based on the retinal layer segmentation. We evaluate our method using data from a Heidelberg Spectralis and an experimental PS-OCT device. The empirical results demonstrate high-quality registrations, with mean errors of approximately 46 µm for the 2D registration and 9.59 µm for the Z-axis registration. These registrations may help in multiple clinical applications, such as the validation of layer segmentations, among others.
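The segmentation-based Z-axis step can be caricatured as estimating a single axial offset from the depths of the same segmented retinal layer surface in the two volumes. This is a drastic simplification of the actual per-volume registration, and the depth values below are invented for illustration.

```python
def z_shift(depths_a, depths_b):
    """Axial (Z) offset estimated as the mean depth difference of the same
    segmented layer surface observed in two OCT volumes (in pixels or µm,
    whatever unit the segmentations share)."""
    assert len(depths_a) == len(depths_b)
    return sum(b - a for a, b in zip(depths_a, depths_b)) / len(depths_a)

# toy layer surface sampled at three A-scan positions in each device
shift = z_shift([120.0, 122.0, 121.0], [129.0, 131.5, 130.5])
```

Averaging over many A-scan positions makes the estimate robust to per-position segmentation noise, which is the appeal of anchoring the axial registration on layer segmentations rather than raw intensities.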
Affiliation(s)
- David Rivas-Villar
- Centro de investigacion CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
| | - Alice R Motschi
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Michael Pircher
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Christoph K Hitzenberger
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Markus Schranz
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Philipp K Roberts
- Medical University of Vienna, Department of Ophthalmology and Optometry, Vienna, Austria
| | - Ursula Schmidt-Erfurth
- Medical University of Vienna, Department of Ophthalmology and Optometry, Vienna, Austria
| | - Hrvoje Bogunović
- Medical University of Vienna, Department of Ophthalmology and Optometry, Christian Doppler Lab for Artificial Intelligence in Retina, Vienna, Austria
| |
|
80
|
Yang G, Xu M, Chen W, Qiao X, Shi H, Hu Y. A brain CT-based approach for predicting and analyzing stroke-associated pneumonia from intracerebral hemorrhage. Front Neurol 2023; 14:1139048. [PMID: 37332986 PMCID: PMC10272424 DOI: 10.3389/fneur.2023.1139048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 05/08/2023] [Indexed: 06/20/2023] Open
Abstract
Introduction Stroke-associated pneumonia (SAP) is a common complication of stroke that can increase patient mortality and the burden on families. In contrast to prior clinical scoring models that rely on baseline data, we propose constructing models based on brain CT scans because of their accessibility and clinical universality. Methods To explore the mechanism behind the relationship between the distribution and lesion areas of intracerebral hemorrhage (ICH) and pneumonia, we utilized an MRI atlas that represents brain structures, together with a registration method, to extract features that may capture this relationship. We developed three machine learning models to predict the occurrence of SAP using these features. Ten-fold cross-validation was applied to evaluate the performance of the models. Additionally, we constructed a probability map through statistical analysis that displays which brain regions are more frequently impacted by hematoma in patients with SAP, based on four types of pneumonia. Results Our study included a cohort of 244 patients, and we extracted 35 features that captured the invasion of ICH into different brain regions for model development. We evaluated the performance of three machine learning models, namely logistic regression, support vector machine, and random forest, in predicting SAP; the AUCs for these models ranged from 0.77 to 0.82. The probability map revealed that the distribution of ICH varied between the left and right brain hemispheres in patients with moderate and severe SAP, and we identified several brain structures, including the left-choroid-plexus, right-choroid-plexus, right-hippocampus, and left-hippocampus, that were more closely related to SAP based on feature selection. Additionally, we observed that some statistical indicators of ICH volume, such as the mean and maximum values, were proportional to the severity of SAP.
Discussion Our findings suggest that our method is effective in classifying the development of pneumonia based on brain CT scans. Furthermore, we identified distinct characteristics, such as volume and distribution, of ICH in four different types of SAP.
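One plausible way to encode the "invasion of ICH into different brain regions" as per-region features is the fraction of each atlas region covered by the registered hematoma mask. The paper's exact feature definition is not given, so this sketch is an assumption; the tiny 2x2 atlas and mask are invented for illustration.

```python
import numpy as np

def region_overlap_features(lesion_mask, atlas, n_regions):
    """For each atlas label 1..n_regions, return the fraction of that
    region's voxels covered by the (boolean) hematoma mask."""
    feats = []
    for label in range(1, n_regions + 1):
        region = atlas == label
        # guard against empty regions to avoid division by zero
        feats.append((lesion_mask & region).sum() / max(region.sum(), 1))
    return np.array(feats)

atlas = np.array([[1, 1], [2, 2]])                 # two labeled regions
lesion = np.array([[True, False], [True, True]])   # hematoma mask
features = region_overlap_features(lesion, atlas, n_regions=2)
```

A vector like this, one entry per atlas structure, is the kind of fixed-length representation that can be fed directly to logistic regression, an SVM, or a random forest as described above.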
Affiliation(s)
- Guangtong Yang
- School of Control Science and Engineering, Shandong University, Jinan, China
| | - Min Xu
- Neurointensive Care Unit, Shengli Oilfield Central Hospital, Dongying, China
| | - Wei Chen
- Department of Radiology, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
| | - Xu Qiao
- School of Control Science and Engineering, Shandong University, Jinan, China
| | - Hongfeng Shi
- Neurointensive Care Unit, Shengli Oilfield Central Hospital, Dongying, China
| | - Yongmei Hu
- School of Control Science and Engineering, Shandong University, Jinan, China
| |
|
81
|
Song L, Ma M, Liu G. TS-Net: Two-stage deformable medical image registration network based on new smooth constraints. Magn Reson Imaging 2023; 99:26-33. [PMID: 36709011 DOI: 10.1016/j.mri.2023.01.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Revised: 05/27/2022] [Accepted: 01/14/2023] [Indexed: 01/27/2023]
Abstract
Medical image registration establishes the spatial correspondence of anatomical structures between different medical images, which is important in medical image analysis. In recent years, with the rapid development of deep learning, registration methods based on deep learning have greatly improved the speed, accuracy, and robustness of registration. However, these methods typically do not handle large and complex deformations in the image well, and they neglect to preserve the topological properties of the image during deformation. To address these problems, we propose a new network, TS-Net, which learns deformation from coarse to fine and transmits information across scales between its two stages. Learning deformation from coarse to fine over two stages allows the network to gradually capture large and complex deformations in images. In the second stage, the feature maps downsampled in the first stage are passed through skip connections, which expands the local receptive field and provides more local information. Previously used smoothness constraints impose the same restriction globally, which is not targeted. In this paper, we propose a new smoothness constraint applied to each voxel's deformation, which better ensures the smoothness of the transformation and maintains the topological properties of the image. Experiments on brain datasets with complex deformations and heart datasets with large deformations show that our proposed method achieves better results while maintaining the topological properties of the deformations, compared with existing deep learning-based registration methods.
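The global smoothness penalty that the paper's per-voxel constraint refines looks like this in a minimal 2-D sketch: a mean squared finite-difference gradient of the displacement field. The paper's specific per-voxel weighting is not reproduced here, and the toy fields are invented for illustration.

```python
import numpy as np

def smoothness_loss(field):
    """Mean squared spatial gradient of a 2-D displacement field of shape
    (H, W, 2), computed with forward finite differences. A constant
    (rigid-shift) field scores exactly zero."""
    dy = field[1:, :, :] - field[:-1, :, :]   # differences along rows
    dx = field[:, 1:, :] - field[:, :-1, :]   # differences along columns
    return (dy ** 2).mean() + (dx ** 2).mean()

H = W = 4
rigid = np.ones((H, W, 2))               # uniform shift: perfectly smooth
ramp = np.zeros((H, W, 2))
ramp[..., 0] = np.arange(H)[:, None]     # displacement grows along rows
```

Because the penalty is a mean over all voxels, it restricts every voxel identically, which is exactly the "not targeted" behavior the abstract criticizes; a per-voxel constraint would weight each difference term individually instead.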
Affiliation(s)
- Lei Song
- College of Computer Science and Technology, Jilin University, Changchun 130012, Jilin, PR China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, Jilin, PR China.
| | - Mingrui Ma
- College of Computer Science and Technology, Jilin University, Changchun 130012, Jilin, PR China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, Jilin, PR China.
| | - Guixia Liu
- College of Computer Science and Technology, Jilin University, Changchun 130012, Jilin, PR China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, Jilin, PR China.
| |
|
82
|
Tian L, Greer H, Vialard FX, Kwitt R, Estépar RSJ, Rushmore RJ, Makris N, Bouix S, Niethammer M. GradICON: Approximate Diffeomorphisms via Gradient Inverse Consistency. PROCEEDINGS. IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION 2023; 2023:18084-18094. [PMID: 39247628 PMCID: PMC11378329 DOI: 10.1109/cvpr52729.2023.01734] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/10/2024]
Abstract
We present an approach to learning regular spatial transformations between image pairs in the context of medical image registration. Contrary to optimization-based registration techniques and many modern learning-based methods, we do not directly penalize transformation irregularities but instead promote transformation regularity via an inverse consistency penalty. We use a neural network to predict a map between a source and a target image as well as the map when swapping the source and target images. Different from existing approaches, we compose these two resulting maps and regularize deviations of the Jacobian of this composition from the identity matrix. This regularizer - GradICON - results in much better convergence when training registration models compared to promoting inverse consistency of the composition of maps directly while retaining the desirable implicit regularization effects of the latter. We achieve state-of-the-art registration performance on a variety of real-world medical image datasets using a single set of hyperparameters and a single non-dataset-specific training protocol. Code is available at https://github.com/uncbiag/ICON.
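The gradient inverse consistency idea, composing the forward and backward maps and penalizing the deviation of the composition's Jacobian from the identity, can be sketched in one dimension, where the identity Jacobian is simply 1. The toy maps below stand in for the paper's networks and are purely illustrative.

```python
import numpy as np

def gradicon_loss(phi_ab, phi_ba, xs):
    """Penalize the deviation of d/dx of the composed map phi_ba(phi_ab(x))
    from 1 (the 1-D identity Jacobian), via finite differences on `xs`."""
    comp = phi_ba(phi_ab(xs))          # compose forward and backward maps
    jac = np.gradient(comp, xs)        # numerical Jacobian of composition
    return ((jac - 1.0) ** 2).mean()

xs = np.linspace(0.0, 1.0, 50)
# inverse-consistent pair: shift by +0.3 and shift back by -0.3
loss_good = gradicon_loss(lambda x: x + 0.3, lambda x: x - 0.3, xs)
# inconsistent pair: a scaling that the backward map does not undo
loss_bad = gradicon_loss(lambda x: 1.5 * x, lambda x: x - 0.3, xs)
```

Note that the inconsistent pair is penalized only through the Jacobian of the composition, not through the residual displacement itself; this is the distinction the abstract draws between GradICON and penalizing inverse consistency of the composed maps directly.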
|
83
|
Salido J, Vallez N, González-López L, Deniz O, Bueno G. Comparison of deep learning models for digital H&E staining from unpaired label-free multispectral microscopy images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 235:107528. [PMID: 37040684 DOI: 10.1016/j.cmpb.2023.107528] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Revised: 03/27/2023] [Accepted: 04/03/2023] [Indexed: 05/08/2023]
Abstract
BACKGROUND AND OBJECTIVE This paper presents a quantitative comparison of three generative models for digital staining, also known as virtual staining, in the H&E (hematoxylin and eosin) modality, applied to five types of breast tissue. A qualitative evaluation of the results achieved with the best model was also carried out. The process is based on images of unstained samples captured by a multispectral microscope, with prior dimensional reduction to three channels in the RGB range. METHODS The models compared are based on a conditional GAN (pix2pix), which uses aligned stained/unstained image pairs, and two models that do not require image alignment: Cycle GAN (cycleGAN) and a contrastive learning-based model (CUT). These models are compared on the structural similarity and chromatic discrepancy between chemically stained samples and their digitally stained counterparts. The correspondence between images is achieved by subjecting the chemically stained images to digital unstaining, using a model obtained to guarantee the cyclic consistency of the generative models. RESULTS The comparison of the three models corroborates the visual evaluation of the results, showing the superiority of cycleGAN in both its larger structural similarity with respect to chemical staining (mean SSIM ∼ 0.95) and its lower chromatic discrepancy (10%). To this end, quantization and calculation of the EMD (Earth Mover's Distance) between clusters are used. In addition, a quality evaluation through subjective psychophysical tests with three experts was carried out for the best model (cycleGAN). CONCLUSIONS The results can be satisfactorily evaluated by metrics that use a chemically stained sample as the reference image, together with the digitally stained images of the reference sample after prior digital unstaining.
These metrics demonstrate that generative staining models that guarantee cyclic consistency provide the closest results to chemical H&E staining, which is also consistent with the qualitative evaluation by experts.
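The SSIM figure reported above can be approximated with a single global window; the standard metric averages this quantity over local sliding windows, and the constants follow the usual K1=0.01, K2=0.03 convention, so this is a simplified sketch rather than the study's exact implementation.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window (global) SSIM between two images in [0, data_range]."""
    c1 = (0.01 * data_range) ** 2   # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2   # stabilizes the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
perfect = global_ssim(img, img)         # identical images -> 1.0
degraded = global_ssim(img, img * 0.5)  # halved intensities score lower
```

In practice one would use a windowed implementation (e.g. scikit-image's `structural_similarity`) so that local structural differences are not averaged away, which matters for judging stain fidelity region by region.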
Affiliation(s)
- Jesus Salido
- IEEAC Dept. (ESI-UCLM), P de la Universidad 4, Ciudad Real, 13071, Spain.
- Noelia Vallez
- IEEAC Dept. (ETSII-UCLM), Avda. Camilo José Cela s/n, Ciudad Real, 13071, Spain
- Lucía González-López
- Hospital Gral. Universitario de C.Real (HGUCR), C. Obispo Rafael Torija s/n, Ciudad Real, 13005, Spain
- Oscar Deniz
- IEEAC Dept. (ETSII-UCLM), Avda. Camilo José Cela s/n, Ciudad Real, 13071, Spain
- Gloria Bueno
- IEEAC Dept. (ETSII-UCLM), Avda. Camilo José Cela s/n, Ciudad Real, 13071, Spain
84
Giacopelli G, Migliore M, Tegolo D. NeuronAlg: An Innovative Neuronal Computational Model for Immunofluorescence Image Segmentation. SENSORS (BASEL, SWITZERLAND) 2023; 23:4598. [PMID: 37430509 DOI: 10.3390/s23104598] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2023] [Revised: 04/24/2023] [Accepted: 05/03/2023] [Indexed: 07/12/2023]
Abstract
Background: Image analysis applications in digital pathology include various methods for segmenting regions of interest. Their identification is one of the most complex steps and is therefore of great interest for the study of robust methods that do not necessarily rely on a machine learning (ML) approach. Method: A fully automatic and optimized segmentation process for different datasets is a prerequisite for classifying and diagnosing indirect immunofluorescence (IIF) raw data. This study describes a deterministic computational neuroscience approach for identifying cells and nuclei. It is very different from conventional neural network approaches but has equivalent quantitative and qualitative performance, and it is also robust against adversarial noise. The method is based on formally correct functions and does not need to be tuned on specific datasets. Results: This work demonstrates the robustness of the method against variability of parameters such as image size, mode, and signal-to-noise ratio. We validated the method on three datasets (Neuroblastoma, NucleusSegData, and the ISBI 2009 dataset) using images annotated by independent medical doctors. Conclusions: Defining deterministic and formally correct methods, from a functional and structural point of view, guarantees optimized and functionally correct results. The excellent performance of our deterministic method (NeuronalAlg) in segmenting cells and nuclei from fluorescence images was measured with quantitative indicators and compared with that of three published ML approaches.
Affiliation(s)
- Michele Migliore
- National Research Council, Institute of Biophysics, 90153 Palermo, Italy
- Domenico Tegolo
- National Research Council, Institute of Biophysics, 90153 Palermo, Italy
- Dipartimento Matematica e Informatica, Università degli Studi di Palermo, 90123 Palermo, Italy
85
Zhang R, Wang J, Chen C. Automatic implant shape design for minimally invasive repair of pectus excavatum using deep learning and shape registration. Comput Biol Med 2023; 158:106806. [PMID: 37019009 DOI: 10.1016/j.compbiomed.2023.106806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 03/05/2023] [Accepted: 03/20/2023] [Indexed: 04/05/2023]
Abstract
Minimally invasive repair of pectus excavatum (MIRPE) is an effective method for correcting pectus excavatum (PE), a congenital chest wall deformity characterized by a concave depression of the sternum. In MIRPE, a long, thin, curved stainless steel plate (implant) is placed across the thoracic cage to correct the deformity. However, the implant curvature is difficult to determine accurately during the procedure: the implant shape depends on the surgeon's expert knowledge and experience, lacks objective criteria, and requires tedious manual input to estimate. In this study, a novel three-step end-to-end automatic framework is proposed to determine the implant shape during preoperative planning: (1) the deepest depression point (DDP) in the sagittal plane of the patient's CT volume is automatically located using Sparse R-CNN-R101, and the axial slice containing the point is extracted; (2) Cascade Mask R-CNN-X101 segments the anterior intercostal gristle of the pectus, the sternum and the ribs in the axial slice, and the contour is extracted to generate the PE point set; (3) robust shape registration matches the PE shape to a healthy thoracic cage, which is then used to generate the implant shape. The framework was evaluated on a CT dataset of 90 PE patients and 30 healthy children. The experimental results show that the average error of DDP extraction was 5.83 mm. The end-to-end output of the framework was compared with the surgical outcomes of professional surgeons to clinically validate the method; the root mean square error (RMSE) between the midline of the real implant and the framework output was less than 2 mm.
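Comparing a designed implant midline against a reference curve by RMSE presupposes that the two point sets are aligned. One standard way to do this, sketched below, is the Kabsch algorithm, which finds the optimal rigid rotation and translation before computing the RMSE; this is an illustrative sketch assuming corresponding points, not the paper's registration method:

```python
import numpy as np

def kabsch_rmse(P, Q):
    """RMSE between point sets P and Q (n x d arrays, corresponding
    rows) after optimally rotating and translating P onto Q (Kabsch)."""
    Pc = P - P.mean(axis=0)                 # remove translation
    Qc = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)     # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = U @ D @ Vt                          # optimal proper rotation
    aligned = Pc @ R
    return float(np.sqrt(((aligned - Qc) ** 2).sum(axis=1).mean()))
```

A rigidly transformed copy of a point set yields an RMSE of (numerically) zero, while any residual shape difference survives the alignment.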
86
Delaby N, Barateau A, Chiavassa S, Biston MC, Chartier P, Graulières E, Guinement L, Huger S, Lacornerie T, Millardet-Martin C, Sottiaux A, Caron J, Gensanne D, Pointreau Y, Coutte A, Biau J, Serre AA, Castelli J, Tomsej M, Garcia R, Khamphan C, Badey A. Practical and technical key challenges in head and neck adaptive radiotherapy: The GORTEC point of view. Phys Med 2023; 109:102568. [PMID: 37015168 DOI: 10.1016/j.ejmp.2023.102568] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 02/15/2023] [Accepted: 03/18/2023] [Indexed: 04/05/2023] Open
Abstract
Anatomical variations occur during head and neck (H&N) radiotherapy (RT) treatment. These variations may result in underdosage of the target volume or overdosage of the organs at risk, and replanning during the treatment course can be triggered to overcome this issue. Owing to technological, methodological and clinical evolutions, tools for adaptive RT (ART) are becoming increasingly sophisticated. The aim of this paper is to give an overview of the key steps and tools of an H&N ART workflow from the point of view of a group of French-speaking medical physicists and physicians (from GORTEC). The focus is on image registration, segmentation, estimation of the dose delivered on the day of treatment, workflow, and quality assurance for the implementation of offline and online H&N ART. Practical recommendations are given to assist physicians and medical physicists in a clinical workflow.
87
Iglesias JE. A ready-to-use machine learning tool for symmetric multi-modality registration of brain MRI. Sci Rep 2023; 13:6657. [PMID: 37095168 PMCID: PMC10126156 DOI: 10.1038/s41598-023-33781-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2023] [Accepted: 04/19/2023] [Indexed: 04/26/2023] Open
Abstract
Volumetric registration of brain MRI is routinely used in human neuroimaging, e.g., to align different MRI modalities, to measure change in longitudinal analysis, to map an individual to a template, or in registration-based segmentation. Classical registration techniques based on numerical optimization have been very successful in this domain and are implemented in widespread software suites like ANTs, Elastix, NiftyReg, or DARTEL. Over the last 7-8 years, learning-based techniques have emerged with a number of advantages, like high computational efficiency, potential for higher accuracy, easy integration of supervision, and the ability to be part of meta-architectures. However, their adoption in neuroimaging pipelines has so far been almost nonexistent. Reasons include: lack of robustness to changes in MRI modality and resolution; lack of robust affine registration modules; lack of (guaranteed) symmetry; and, at a more practical level, the requirement of deep learning expertise that may be lacking at neuroimaging research sites. Here, we present EasyReg, an open-source, learning-based registration tool that can be easily used from the command line without any deep learning expertise or specific hardware. EasyReg combines the features of classical registration tools, the capabilities of modern deep learning methods, and the robustness to changes in MRI modality and resolution provided by our recent work in domain randomization. As a result, EasyReg is fast; symmetric; diffeomorphic (and thus invertible); agnostic to MRI modality and resolution; compatible with affine and nonlinear registration; and requires no preprocessing or parameter tuning. We present results on challenging registration tasks, showing that EasyReg is as accurate as classical methods when registering 1 mm isotropic scans within MRI modality, but much more accurate across modalities and resolutions.
EasyReg is publicly available as part of FreeSurfer; see https://surfer.nmr.mgh.harvard.edu/fswiki/EasyReg .
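Learning-based registration tools commonly obtain diffeomorphic (and thus invertible) warps by integrating a stationary velocity field with scaling and squaring. The 2D sketch below illustrates that idea only; it is an assumption-laden toy, not EasyReg's implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def integrate_velocity(v, steps=6):
    """Integrate a stationary velocity field v (2 x H x W, in pixels)
    into a displacement field by scaling and squaring: phi = exp(v)."""
    H, W = v.shape[1:]
    grid = np.mgrid[0:H, 0:W].astype(np.float64)
    disp = v / (2.0 ** steps)          # scale: start from a tiny step
    for _ in range(steps):             # square: phi <- phi o phi
        coords = grid + disp           # where each pixel currently maps
        # compose the displacement with itself by resampling it at the
        # displaced coordinates (linear interpolation)
        disp = disp + np.stack([
            map_coordinates(disp[c], coords, order=1, mode='nearest')
            for c in range(2)
        ])
    return disp
```

Because the final map is a composition of many near-identity maps, it stays invertible; a zero velocity field integrates to the identity, and a constant velocity field integrates to the corresponding pure translation.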
Affiliation(s)
- Juan Eugenio Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02129, USA.
- Department of Medical Physics and Biomedical Engineering, University College London, London, WC1V 6LJ, UK.
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, 02139, USA.
88
Ramakrishnan V, Schönmehl R, Artinger A, Winter L, Böck H, Schreml S, Gürtler F, Daza J, Schmitt VH, Mamilos A, Arbelaez P, Teufel A, Niedermair T, Topolcan O, Karlíková M, Sossalla S, Wiedenroth CB, Rupp M, Brochhausen C. 3D Visualization, Skeletonization and Branching Analysis of Blood Vessels in Angiogenesis. Int J Mol Sci 2023; 24:ijms24097714. [PMID: 37175421 PMCID: PMC10178731 DOI: 10.3390/ijms24097714] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Revised: 04/20/2023] [Accepted: 04/21/2023] [Indexed: 05/15/2023] Open
Abstract
Angiogenesis is the process of new blood vessels growing from existing vasculature. Visualizing them as a three-dimensional (3D) model is a challenging, yet relevant, task as it would be of great help to researchers, pathologists, and medical doctors. A branching analysis on the 3D model would further facilitate research and diagnostic purposes. In this paper, a pipeline of computer vision algorithms is presented to visualize and analyze blood vessels in 3D from formalin-fixed paraffin-embedded (FFPE) granulation tissue sections with two different staining methods. First, a U-net neural network is used to segment blood vessels from the tissues. Second, image registration is used to align the consecutive images: coarse registration using an image-intensity optimization technique, followed by fine-tuning using a neural network based on Spatial Transformers, results in an excellent alignment of images. Lastly, the corresponding segmented masks depicting the blood vessels are aligned and interpolated using the results of the image registration, resulting in a visualized 3D model. Additionally, a skeletonization algorithm is used to analyze the branching characteristics of the 3D vascular model. In summary, computer vision and deep learning are used to reconstruct, visualize and analyze a 3D vascular model from a set of parallel tissue samples. Our technique opens innovative perspectives on the pathophysiological understanding of vascular morphogenesis under different pathophysiological conditions and its potential diagnostic role.
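Once a vascular skeleton exists, its branching characteristics can be read off by counting skeleton neighbours per pixel. The sketch below does this in 2D with 4-connectivity for simplicity (real skeleton analysis is usually 3D and 8- or 26-connected); it illustrates the idea only and is not the paper's algorithm:

```python
import numpy as np
from scipy.ndimage import convolve

def branch_points(skel):
    """Branch points of a 2D binary skeleton: skeleton pixels with
    three or more skeleton neighbours (4-connectivity, for simplicity).
    Endpoints have one neighbour, ordinary segment pixels have two."""
    skel = skel.astype(bool)
    kernel = np.array([[0, 1, 0],
                       [1, 0, 1],
                       [0, 1, 0]])          # 4-neighbourhood
    neighbours = convolve(skel.astype(int), kernel, mode='constant')
    return skel & (neighbours >= 3)
```

On a plus-shaped skeleton, only the central pixel, where four arms meet, is flagged.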
Affiliation(s)
- Vignesh Ramakrishnan
- Institute of Pathology, University of Regensburg, 93053 Regensburg, Germany
- Central Biobank Regensburg, University and University Hospital Regensburg, 93053 Regensburg, Germany
- Rebecca Schönmehl
- Institute of Pathology, University Medical Centre Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Annalena Artinger
- Institute of Pathology, University Medical Centre Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Lina Winter
- Institute of Pathology, University Medical Centre Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Hendrik Böck
- Institute of Pathology, University Medical Centre Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Stephan Schreml
- Department of Dermatology, University Medical Centre Regensburg, 93053 Regensburg, Germany
- Florian Gürtler
- Institute of Pathology, University of Regensburg, 93053 Regensburg, Germany
- Central Biobank Regensburg, University and University Hospital Regensburg, 93053 Regensburg, Germany
- Jimmy Daza
- Department of Internal Medicine II, Division of Hepatology, Medical Faculty Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Volker H Schmitt
- Department of Cardiology, University Medical Centre, Johannes Gutenberg University of Mainz, 55131 Mainz, Germany
- Andreas Mamilos
- Institute of Pathology, University of Regensburg, 93053 Regensburg, Germany
- Central Biobank Regensburg, University and University Hospital Regensburg, 93053 Regensburg, Germany
- Pablo Arbelaez
- Center for Research and Formation in Artificial Intelligence (CinfonIA), Universidad de Los Andes, 111711 Bogota, Colombia
- Andreas Teufel
- Department of Internal Medicine II, Division of Hepatology, Medical Faculty Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Tanja Niedermair
- Institute of Pathology, University of Regensburg, 93053 Regensburg, Germany
- Central Biobank Regensburg, University and University Hospital Regensburg, 93053 Regensburg, Germany
- Ondrej Topolcan
- Biomedical Center, Faculty of Medicine in Pilsen, Charles University, 32300 Pilsen, Czech Republic
- Marie Karlíková
- Biomedical Center, Faculty of Medicine in Pilsen, Charles University, 32300 Pilsen, Czech Republic
- Samuel Sossalla
- Department of Internal Medicine II, University Hospital Regensburg, 93053 Regensburg, Germany
- Markus Rupp
- Department of Trauma Surgery, University Medical Centre Regensburg, 93053 Regensburg, Germany
- Christoph Brochhausen
- Institute of Pathology, University of Regensburg, 93053 Regensburg, Germany
- Institute of Pathology, University Medical Centre Mannheim, Heidelberg University, 68167 Mannheim, Germany
89
Aganj I, Fischl B. Intermediate Deformable Image Registration via Windowed Cross-Correlation. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2023; 2023:10.1109/isbi53787.2023.10230715. [PMID: 37691967 PMCID: PMC10485808 DOI: 10.1109/isbi53787.2023.10230715] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/12/2023]
Abstract
In population and longitudinal imaging studies that employ deformable image registration, more accurate results can be achieved by initializing deformable registration with the results of affine registration where global misalignments have been considerably reduced. Such affine registration, however, is limited to linear transformations and it cannot account for large nonlinear anatomical variations, such as those between pre- and post-operative images or across different subject anatomies. In this work, we introduce a new intermediate deformable image registration (IDIR) technique that recovers large deformations via windowed cross-correlation, and provide an efficient implementation based on the fast Fourier transform. We evaluate our method on 2D X-ray and 3D magnetic resonance images, demonstrating its ability to align substantial nonlinear anatomical variations within a few iterations.
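The core operation, locating a translation as the peak of a cross-correlation computed with the FFT, can be sketched as below. This is a global (whole-image, circular) correlation rather than the authors' windowed formulation, and the function name is illustrative:

```python
import numpy as np

def cc_shift(fixed, moving):
    """Estimate the integer translation that best aligns `moving` to
    `fixed` by locating the peak of their circular cross-correlation,
    computed via the FFT (O(N log N) instead of O(N^2))."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cc = np.fft.ifft2(F * np.conj(M)).real   # correlation theorem
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # wrap circular peak positions into the range [-n/2, n/2)
    return tuple(int(p if p < n // 2 else p - n)
                 for p, n in zip(peak, cc.shape))
```

Applying the returned shift to `moving` (e.g., with `np.roll`) recovers `fixed` when the misalignment is a pure circular translation.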
Affiliation(s)
- Iman Aganj
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
- Bruce Fischl
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
90
Naser MA, Wahid KA, Ahmed S, Salama V, Dede C, Edwards BW, Lin R, McDonald B, Salzillo TC, He R, Ding Y, Abdelaal MA, Thill D, O'Connell N, Willcut V, Christodouleas JP, Lai SY, Fuller CD, Mohamed ASR. Quality assurance assessment of intra-acquisition diffusion-weighted and T2-weighted magnetic resonance imaging registration and contour propagation for head and neck cancer radiotherapy. Med Phys 2023; 50:2089-2099. [PMID: 36519973 PMCID: PMC10121748 DOI: 10.1002/mp.16128] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Revised: 11/10/2022] [Accepted: 11/13/2022] [Indexed: 12/23/2022] Open
Abstract
BACKGROUND/PURPOSE Adequate image registration of anatomical and functional magnetic resonance imaging (MRI) scans is necessary for MR-guided head and neck cancer (HNC) adaptive radiotherapy planning. Despite the quantitative capabilities of diffusion-weighted imaging (DWI) MRI for treatment plan adaptation, geometric distortion remains a considerable limitation. Therefore, we systematically investigated various deformable image registration (DIR) methods to co-register DWI and T2-weighted (T2W) images. MATERIALS/METHODS We compared three commercial (ADMIRE, Velocity, Raystation) and three open-source (Elastix with default settings [Elastix Default], Elastix with parameter set 23 [Elastix 23], Demons) post-acquisition DIR methods applied to T2W and DWI MRI images acquired during the same imaging session in twenty immobilized HNC patients. In addition, we used the non-registered images (None) as a control comparator. Ground-truth segmentations of radiotherapy structures (tumour and organs at risk) were generated by a physician expert on both image sequences. For each registration approach, structures were propagated from T2W to DWI images. These propagated structures were then compared with ground-truth DWI structures using the Dice similarity coefficient and mean surface distance. RESULTS 19 left submandibular glands, 18 right submandibular glands, 20 left parotid glands, 20 right parotid glands, 20 spinal cords, and 12 tumours were delineated. Most DIR methods took <30 s per case, with the exception of Elastix 23, which took ∼458 s per case. ADMIRE and Elastix 23 demonstrated improved performance over None for all metrics and structures (Bonferroni-corrected p < 0.05), while the other methods did not. Moreover, ADMIRE and Elastix 23 significantly improved performance in individual and pooled analysis compared to all other methods.
CONCLUSIONS The ADMIRE DIR method offers improved geometric performance with reasonable execution time, so it should be favoured for registering T2W and DWI images acquired during the same scan session in HNC patients. These results are important for ensuring the appropriate selection of registration strategies for MR-guided radiotherapy.
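The two evaluation metrics named above, the Dice similarity coefficient and the mean surface distance, can be sketched for 2D binary masks as follows (illustrative definitions; clinical tools compute them on 3D structures, and conventions for border extraction vary):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

def mean_surface_distance(a, b):
    """Symmetric mean distance (in pixels) between the borders of two
    binary masks, using Euclidean distance transforms."""
    def border(m):
        m = m.astype(bool)
        return m & ~binary_erosion(m)   # pixels lost by one erosion
    sa, sb = border(a), border(b)
    da = distance_transform_edt(~sb)[sa]   # a-border -> nearest b-border
    db = distance_transform_edt(~sa)[sb]   # b-border -> nearest a-border
    return (da.sum() + db.sum()) / (len(da) + len(db))
```

Identical masks score a Dice of 1.0 and a mean surface distance of 0; shifting a mask lowers the Dice in proportion to the lost overlap.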
Affiliation(s)
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Sara Ahmed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Vivian Salama
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Cem Dede
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Benjamin W Edwards
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Ruitao Lin
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Brigid McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Travis C Salzillo
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Yao Ding
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Moamen Abobakr Abdelaal
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Stephen Y Lai
- Department of Head and Neck Surgery, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Abdallah S R Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
91
Montez DF, Van AN, Miller RL, Seider NA, Marek S, Zheng A, Newbold DJ, Scheidter K, Feczko E, Perrone AJ, Miranda-Dominguez O, Earl EA, Kay BP, Jha AK, Sotiras A, Laumann TO, Greene DJ, Gordon EM, Tisdall MD, van der Kouwe A, Fair DA, Dosenbach NUF. Using synthetic MR images for distortion correction. Dev Cogn Neurosci 2023; 60:101234. [PMID: 37023632 PMCID: PMC10106483 DOI: 10.1016/j.dcn.2023.101234] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Revised: 03/07/2023] [Accepted: 03/16/2023] [Indexed: 04/07/2023] Open
Abstract
Functional MRI (fMRI) data acquired using echo-planar imaging (EPI) are highly distorted by magnetic field inhomogeneities. Distortion, and differences in image contrast between EPI and T1-weighted and T2-weighted (T1w/T2w) images, make their alignment a challenge. Typically, field map data are used to correct EPI distortions, but the alignments achieved with field maps vary greatly and depend on the quality of the field map data. Moreover, many public datasets lack field map data entirely, and reliable field map data are often difficult to acquire in high-motion pediatric or developmental cohorts. To address this, we developed Synth, a software package for distortion correction and cross-modal image registration that does not require field map data. Synth combines information from T1w and T2w anatomical images to construct an idealized, undistorted synthetic image with contrast properties similar to EPI data. This synthetic image acts as an effective reference for individual-specific distortion correction. Using pediatric (ABCD: Adolescent Brain Cognitive Development) and adult (MSC: Midnight Scan Club; HCP: Human Connectome Project) data, we demonstrate that Synth performs comparably to field map distortion correction approaches, and often outperforms them. Field-map-less distortion correction with Synth allows accurate and precise registration of fMRI data with missing or corrupted field map information.
Affiliation(s)
- David F Montez
- Department of Neurology, Washington University School of Medicine, St. Louis, MO 63110, United States of America; Department of Psychiatry, Washington University School of Medicine, St. Louis, MO 63110, United States of America.
- Andrew N Van
- Department of Neurology, Washington University School of Medicine, St. Louis, MO 63110, United States of America; Department of Biomedical Engineering, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Ryland L Miller
- Department of Neurology, Washington University School of Medicine, St. Louis, MO 63110, United States of America; Department of Psychiatry, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Nicole A Seider
- Department of Neurology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Scott Marek
- Department of Psychiatry, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Annie Zheng
- Department of Neurology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Dillan J Newbold
- Department of Neurology, Washington University School of Medicine, St. Louis, MO 63110, United States of America; Department of Neurology, New York University Langone Medical Center, New York, NY 10016, United States of America
- Kristen Scheidter
- Department of Neurology, Washington University School of Medicine, St. Louis, MO 63110, United States of America; Department of Psychiatry, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Eric Feczko
- Masonic Institute for the Developing Brain, University of Minnesota Medical School, Minneapolis, MN 55455, United States of America; Department of Pediatrics, University of Minnesota Medical School, Minneapolis, MN 55455, United States of America
- Anders J Perrone
- Masonic Institute for the Developing Brain, University of Minnesota Medical School, Minneapolis, MN 55455, United States of America; Department of Psychiatry, Oregon Health and Science University, Portland, OR 97239, United States of America
- Oscar Miranda-Dominguez
- Masonic Institute for the Developing Brain, University of Minnesota Medical School, Minneapolis, MN 55455, United States of America; Department of Pediatrics, University of Minnesota Medical School, Minneapolis, MN 55455, United States of America
- Eric A Earl
- Masonic Institute for the Developing Brain, University of Minnesota Medical School, Minneapolis, MN 55455, United States of America; Department of Psychiatry, Oregon Health and Science University, Portland, OR 97239, United States of America
- Benjamin P Kay
- Department of Neurology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Abhinav K Jha
- Department of Biomedical Engineering, Washington University School of Medicine, St. Louis, MO 63110, United States of America; Department of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Aristeidis Sotiras
- Department of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America; Institute for Informatics, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Timothy O Laumann
- Department of Psychiatry, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Deanna J Greene
- Department of Cognitive Science, University of California, San Diego, La Jolla CA 92093, United States of America
- Evan M Gordon
- Department of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- M Dylan Tisdall
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Andre van der Kouwe
- Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, United States of America; Department of Radiology, Harvard Medical School, Boston, MA 02115, United States of America
- Damien A Fair
- Masonic Institute for the Developing Brain, University of Minnesota Medical School, Minneapolis, MN 55455, United States of America; Department of Pediatrics, University of Minnesota Medical School, Minneapolis, MN 55455, United States of America; Institute of Child Development, University of Minnesota Medical School, Minneapolis, MN 55455, United States of America
- Nico U F Dosenbach
- Department of Neurology, Washington University School of Medicine, St. Louis, MO 63110, United States of America; Department of Biomedical Engineering, Washington University School of Medicine, St. Louis, MO 63110, United States of America; Department of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America; Department of Pediatrics, Washington University School of Medicine, St. Louis, MO 63110, United States of America
92
Wang F, Xu X, Yang D, Chen RC, Royce TJ, Wang A, Lian J, Lian C. Dynamic Cross-Task Representation Adaptation for Clinical Targets Co-Segmentation in CT Image-Guided Post-Prostatectomy Radiotherapy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1046-1055. [PMID: 36399586 PMCID: PMC10209913 DOI: 10.1109/tmi.2022.3223405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Adjuvant and salvage radiotherapy after radical prostatectomy requires precise delineations of prostate bed (PB), i.e., the clinical target volume, and surrounding organs at risk (OARs) to optimize radiotherapy planning. Segmenting PB is particularly challenging even for clinicians, e.g., from the planning computed tomography (CT) images, as it is an invisible/virtual target after the operative removal of the cancerous prostate gland. Very recently, a few deep learning-based methods have been proposed to automatically contour non-contrast PB by leveraging its spatial reliance on adjacent OARs (i.e., the bladder and rectum) with much more clear boundaries, mimicking the clinical workflow of experienced clinicians. Although achieving state-of-the-art results from both the clinical and technical aspects, these existing methods improperly ignore the gap between the hierarchical feature representations needed for segmenting those fundamentally different clinical targets (i.e., PB and OARs), which in turn limits their delineation accuracy. This paper proposes an asymmetric multi-task network integrating dynamic cross-task representation adaptation (i.e., DyAdapt) for accurate and efficient co-segmentation of PB and OARs in one-pass from CT images. In the learning-to-learn framework, the DyAdapt modules adaptively transfer the hierarchical feature representations from the source task of OARs segmentation to match up with the target (and more challenging) task of PB segmentation, conditioned on the dynamic inter-task associations learned from the learning states of the feed-forward path. On a real-patient dataset, our method led to state-of-the-art results of PB and OARs co-segmentation. Code is available at https://github.com/ladderlab-xjtu/DyAdapt.
93
Che T, Wang X, Zhao K, Zhao Y, Zeng D, Li Q, Zheng Y, Yang N, Wang J, Li S. AMNet: Adaptive multi-level network for deformable registration of 3D brain MR images. Med Image Anal 2023; 85:102740. [PMID: 36682155 DOI: 10.1016/j.media.2023.102740] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Revised: 11/20/2022] [Accepted: 01/03/2023] [Indexed: 01/15/2023]
Abstract
Three-dimensional (3D) deformable image registration is a fundamental technique in medical image analysis. Although it has been extensively investigated, current deep-learning-based registration models may struggle with deformations of varying degrees of complexity. This paper proposes an adaptive multi-level registration network (AMNet) to retain the continuity of the deformation field and to achieve high-performance registration of 3D brain MR images. First, we design a lightweight registration network with an adaptive growth strategy that learns the deformation field from multi-level wavelet sub-bands, which facilitates both global and local optimization and achieves high-performance registration. Second, AMNet performs image-wise registration, adapting the local importance of a region to the complexity of its deformation, which improves registration efficiency and maintains the continuity of the deformation field. Experimental results on five publicly available brain MR datasets and a synthetic brain MR dataset show that our method achieves superior performance against state-of-the-art medical image registration approaches.
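The multi-level wavelet sub-bands that AMNet learns from can be illustrated with a single-level 2D Haar decomposition, sketched below in plain numpy. This is an assumption-laden toy (Haar wavelet, averaging normalization so the LL band keeps the input's intensity range); the paper's wavelet family and number of levels are not reproduced here:

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform. Returns the
    approximation (LL) and detail (LH, HL, HH) sub-bands, each half
    the input size. Input sides must be even."""
    a = img.astype(np.float64)
    # along rows: average / difference of adjacent pixel pairs
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # along columns: repeat on both intermediate results
    ll = (lo[0::2] + lo[1::2]) / 2.0   # coarse approximation
    lh = (lo[0::2] - lo[1::2]) / 2.0   # vertical detail
    hl = (hi[0::2] + hi[1::2]) / 2.0   # horizontal detail
    hh = (hi[0::2] - hi[1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh
```

Applying `haar2d` recursively to the LL band yields the multi-level pyramid: a smooth global view for coarse alignment plus detail bands that localize the fine structure.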
Collapse
Affiliation(s)
- Tongtong Che
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
| | - Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, Australia.
| | - Kun Zhao
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
| | - Yan Zhao
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
| | - Debin Zeng
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
| | - Qiongling Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
| | - Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
| | - Ning Yang
- Department of Neurosurgery, Qilu Hospital of Shandong University and Brain Science Research Institute, Shandong University, Jinan, 250012, China
| | - Jian Wang
- Department of Neurosurgery, Qilu Hospital of Shandong University and Brain Science Research Institute, Shandong University, Jinan, 250012, China; Department of Biomedicine, University of Bergen, Jonas Lies Vei 91, 5009 Bergen, Norway
| | - Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China.
| |
Collapse
|
94
|
Xiao X, Dong S, Yu Y, Li Y, Yang G, Qiu Z. MAE-TransRNet: An improved transformer-ConvNet architecture with masked autoencoder for cardiac MRI registration. Front Med (Lausanne) 2023; 10:1114571. [PMID: 36968818 PMCID: PMC10033952 DOI: 10.3389/fmed.2023.1114571] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2022] [Accepted: 02/14/2023] [Indexed: 03/11/2023] Open
Abstract
The heart is a relatively complex, non-rigidly moving organ in the human body. Quantitative motion analysis of the heart is critically important in helping doctors reach accurate diagnoses and treatment decisions, and cardiovascular magnetic resonance imaging (CMRI) enables a more detailed quantitative evaluation for cardiac diagnosis. Deformable image registration (DIR) has become a vital task in biomedical image analysis because tissue structures vary across medical images. Recently, models based on the masked autoencoder (MAE) have been shown to be effective in computer vision tasks: exploiting the context aggregation ability of the Vision Transformer, a low proportion of visible image patches is used to predict the masked patches and restore the semantic information of the original image regions. This study proposes a novel Transformer-ConvNet architecture based on MAE for medical image registration. The core of the Transformer is designed as a masked autoencoder with a lightweight decoder, and feature extraction before the downstream registration task is cast as a self-supervised learning task. This study also rethinks the multi-head self-attention mechanism in the Transformer encoder: we improve the query-key-value dot-product attention by introducing depthwise separable convolution (DWSC) and squeeze-and-excitation (SE) modules into the self-attention module, reducing parameter computation while highlighting image details and maintaining high-spatial-resolution image features. In addition, a concurrent spatial and channel squeeze-and-excitation (scSE) module is embedded into the CNN structure, which also proves effective for extracting robust feature representations. The proposed method, called MAE-TransRNet, generalizes better.
The proposed model is evaluated on the cardiac short-axis public dataset (with images and labels) of the 2017 Automated Cardiac Diagnosis Challenge (ACDC). The qualitative and quantitative results (e.g., Dice scores and Hausdorff distances) show that the proposed model outperforms state-of-the-art methods, demonstrating that MAE and the improved self-attention are effective and promising for medical image registration tasks. Code and models are available at https://github.com/XinXiao101/MAE-TransRNet.
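The Dice coefficient and Hausdorff distance used for evaluation here are standard overlap and boundary metrics and can be computed directly; the following numpy/scipy sketch uses illustrative function names, not the authors' evaluation code:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground point sets
    of two binary masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Two 4x4 squares offset by one pixel in each direction.
m1 = np.zeros((10, 10), dtype=bool); m1[2:6, 2:6] = True
m2 = np.zeros((10, 10), dtype=bool); m2[3:7, 3:7] = True
```

For the toy masks above, `dice(m1, m2)` is 0.5625 (9 overlapping pixels out of 16 + 16) and the symmetric Hausdorff distance is sqrt(2), the corner-to-corner offset.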
Collapse
Affiliation(s)
- Xin Xiao
- College of Information and Computer Engineering, Northeast Forestry University, Harbin, China
| | - Suyu Dong
- College of Information and Computer Engineering, Northeast Forestry University, Harbin, China
- *Correspondence: Suyu Dong
| | - Yang Yu
- Department of Cardiovascular Surgery, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
| | - Yan Li
- College of Information and Computer Engineering, Northeast Forestry University, Harbin, China
| | - Guangyuan Yang
- First Affiliated Hospital, Jiamusi University, Jiamusi, China
| | - Zhaowen Qiu
- College of Information and Computer Engineering, Northeast Forestry University, Harbin, China
| |
Collapse
|
95
|
Hering A, Hansen L, Mok TCW, Chung ACS, Siebert H, Hager S, Lange A, Kuckertz S, Heldmann S, Shao W, Vesal S, Rusu M, Sonn G, Estienne T, Vakalopoulou M, Han L, Huang Y, Yap PT, Brudfors M, Balbastre Y, Joutard S, Modat M, Lifshitz G, Raviv D, Lv J, Li Q, Jaouen V, Visvikis D, Fourcade C, Rubeaux M, Pan W, Xu Z, Jian B, De Benetti F, Wodzinski M, Gunnarsson N, Sjolund J, Grzech D, Qiu H, Li Z, Thorley A, Duan J, Grosbrohmer C, Hoopes A, Reinertsen I, Xiao Y, Landman B, Huo Y, Murphy K, Lessmann N, van Ginneken B, Dalca AV, Heinrich MP. Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:697-712. [PMID: 36264729 DOI: 10.1109/tmi.2022.3213983] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and fair benchmarking across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration dataset for comprehensive characterisation of deformable registration algorithms. Continuous evaluation is possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art of medical image registration. This paper describes the datasets, tasks, evaluation methods and results of the challenge, as well as further analyses of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push medical image registration to a new state of the art. Furthermore, we dispelled the common belief that conventional registration methods must be much slower than deep-learning-based methods.
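A common plausibility metric in registration benchmarks of this kind is the fraction of the deformation field where the Jacobian determinant is non-positive, i.e. where the transformation folds. The 2D numpy sketch below illustrates that idea; the exact metric definitions used by Learn2Reg may differ:

```python
import numpy as np

def folding_fraction(disp):
    """Fraction of pixels where the map x -> x + disp(x) has a
    non-positive Jacobian determinant (i.e. the deformation folds).
    `disp` has shape (2, H, W): displacement along axis 0 and axis 1."""
    d0_0, d0_1 = np.gradient(disp[0])  # derivatives of component 0
    d1_0, d1_1 = np.gradient(disp[1])  # derivatives of component 1
    det = (1.0 + d0_0) * (1.0 + d1_1) - d0_1 * d1_0
    return float((det <= 0).mean())

flat = np.zeros((2, 16, 16))               # identity map: no folding
fold = np.zeros((2, 16, 16))
fold[0] = -2.0 * np.arange(16.0)[:, None]  # flips axis 0: folds everywhere
```

The identity (1 + ...) terms on the diagonal account for the fact that the field stores displacements, not absolute coordinates.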
Collapse
|
96
|
Zhu X, Ding M, Zhang X. Free form deformation and symmetry constraint‐based multi‐modal brain image registration using generative adversarial nets. CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY 2023. [DOI: 10.1049/cit2.12159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/13/2023] Open
Affiliation(s)
- Xingxing Zhu
- Department of Biomedical Engineering School of Life Science and Technology Ministry of Education Key Laboratory of Molecular Biophysics Huazhong University of Science and Technology Wuhan China
| | - Mingyue Ding
- Department of Biomedical Engineering School of Life Science and Technology Ministry of Education Key Laboratory of Molecular Biophysics Huazhong University of Science and Technology Wuhan China
| | - Xuming Zhang
- Department of Biomedical Engineering School of Life Science and Technology Ministry of Education Key Laboratory of Molecular Biophysics Huazhong University of Science and Technology Wuhan China
| |
Collapse
|
97
|
Iglesias JE, Billot B, Balbastre Y, Magdamo C, Arnold SE, Das S, Edlow BL, Alexander DC, Golland P, Fischl B. SynthSR: A public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry. SCIENCE ADVANCES 2023; 9:eadd3607. [PMID: 36724222 PMCID: PMC9891693 DOI: 10.1126/sciadv.add3607] [Citation(s) in RCA: 41] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 01/04/2023] [Indexed: 05/10/2023]
Abstract
Every year, millions of brain magnetic resonance imaging (MRI) scans are acquired in hospitals across the world. These have the potential to revolutionize our understanding of many neurological diseases, but their morphometric analysis has not yet been possible due to their anisotropic resolution. We present an artificial intelligence technique, "SynthSR," that takes clinical brain MRI scans with any MR contrast (T1, T2, etc.), orientation (axial/coronal/sagittal), and resolution and turns them into high-resolution T1 scans that are usable by virtually all existing human neuroimaging tools. We present results on segmentation, registration, and atlasing of >10,000 scans of controls and patients with brain tumors, strokes, and Alzheimer's disease. SynthSR yields morphometric results that are very highly correlated with what one would have obtained with high-resolution T1 scans. SynthSR allows sample sizes that have the potential to overcome the power limitations of prospective research studies and shed new light on the healthy and diseased human brain.
Collapse
Affiliation(s)
- Juan E. Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Benjamin Billot
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
| | - Yaël Balbastre
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Colin Magdamo
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Steven E. Arnold
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Sudeshna Das
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Brian L. Edlow
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA, USA
| | - Daniel C. Alexander
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
| | - Polina Golland
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
| |
Collapse
|
98
|
Li L, Mazomenos E, Chandler JH, Obstein KL, Valdastri P, Stoyanov D, Vasconcelos F. Robust endoscopic image mosaicking via fusion of multimodal estimation. Med Image Anal 2023; 84:102709. [PMID: 36549045 PMCID: PMC10636739 DOI: 10.1016/j.media.2022.102709] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Revised: 08/15/2022] [Accepted: 11/29/2022] [Indexed: 12/23/2022]
Abstract
We propose an endoscopic image mosaicking algorithm that is robust to lighting condition changes, specular reflections, and feature-less scenes. These conditions are especially common in minimally invasive surgery, where the light source moves with the camera to dynamically illuminate close-range scenes. This makes it difficult for a single image registration method to robustly track camera motion and then generate consistent mosaics of the expanded surgical scene across different and heterogeneous environments. Instead of relying on one specialised feature extractor or image registration method, we propose to fuse different image registration algorithms according to their uncertainties, formulating the problem as affine pose graph optimisation. This allows landmarks, dense intensity registration, and learning-based approaches to be combined in a single framework. To demonstrate our application we consider deep learning-based optical flow, hand-crafted features, and intensity-based registration; however, the framework is general and could take other sources of motion estimation as input, including other sensor modalities. We validate our approach on three datasets with very different characteristics to highlight its generalisability, demonstrating the advantages of the proposed fusion framework. While each individual registration algorithm eventually fails drastically on certain surgical scenes, the fusion approach flexibly determines which algorithms to use, and in what proportion, to obtain consistent mosaics more robustly.
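The paper fuses the estimators via affine pose graph optimisation; as a much-simplified illustration of the underlying uncertainty weighting, one can combine per-method motion estimates by inverse-variance averaging. `fuse_estimates` below is a hypothetical helper, not the authors' formulation:

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance weighted fusion of per-method motion estimates
    (e.g. translation or affine parameter vectors): less certain
    estimators contribute less to the fused result."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()
    return np.einsum('i,ij->j', w, np.asarray(estimates, dtype=float))

# A confident translation estimate and an uncertain outlier: the fused
# estimate stays close to the confident one.
fused = fuse_estimates([[1.0, 0.0], [5.0, 4.0]], [0.1, 10.0])
```

A pose graph generalises this closed-form average to chains of relative transforms, but the same principle applies: each measurement is weighted by its confidence.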
Collapse
Affiliation(s)
- Liang Li
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences(WEISS) and Department of Computer Science, University College London, London, UK; College of Control Science and Engineering, Zhejiang University, Hangzhou, 310027, China.
| | - Evangelos Mazomenos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences(WEISS) and Department of Computer Science, University College London, London, UK.
| | - James H Chandler
- Storm Lab UK, School of Electronic, and Electrical Engineering, University of Leeds, Leeds LS2 9JT, UK.
| | - Keith L Obstein
- Division of Gastroenterology, Hepatology, and Nutrition, Vanderbilt University Medical Center, Nashville, TN 37232, USA; STORM Lab, Department of Mechanical Engineering, Vanderbilt University, Nashville, TN 37235, USA.
| | - Pietro Valdastri
- Storm Lab UK, School of Electronic, and Electrical Engineering, University of Leeds, Leeds LS2 9JT, UK.
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences(WEISS) and Department of Computer Science, University College London, London, UK.
| | - Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences(WEISS) and Department of Computer Science, University College London, London, UK.
| |
Collapse
|
99
|
Zhang Y, Nie R, Cao J, Ma C. Self-Supervised Fusion for Multi-Modal Medical Images via Contrastive Auto-Encoding and Convolutional Information Exchange. IEEE COMPUT INTELL M 2023. [DOI: 10.1109/mci.2022.3223487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
|
100
|
Ruthven M, Miquel ME, King AP. A segmentation-informed deep learning framework to register dynamic two-dimensional magnetic resonance images of the vocal tract during speech. Biomed Signal Process Control 2023; 80:104290. [PMID: 36743699 PMCID: PMC9746295 DOI: 10.1016/j.bspc.2022.104290] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Revised: 09/29/2022] [Accepted: 10/08/2022] [Indexed: 11/06/2022]
Abstract
Objective: Dynamic magnetic resonance (MR) imaging enables visualisation of articulators during speech. There is growing interest in quantifying articulator motion in two-dimensional MR images of the vocal tract, to better understand speech production and potentially inform patient management decisions. Image registration is an established way to achieve this quantification. Recently, segmentation-informed deformable registration frameworks have been developed and have achieved state-of-the-art accuracy. This work aims to adapt such a framework and optimise it for estimating displacement fields between dynamic two-dimensional MR images of the vocal tract during speech. Methods: A deep-learning-based registration framework was developed and compared with current state-of-the-art registration methods and frameworks (two traditional methods and three deep-learning-based frameworks, two of which are segmentation informed). Accuracy was evaluated using the Dice coefficient (DSC), average surface distance (ASD) and a metric based on velopharyngeal closure, which assessed whether the displacement fields captured a clinically relevant and quantifiable aspect of articulator motion. Results: The segmentation-informed frameworks achieved higher DSCs and lower ASDs, and captured more velopharyngeal closures, than the traditional methods and the framework that was not segmentation informed. All segmentation-informed frameworks achieved similar DSCs and ASDs; however, the proposed framework captured the most velopharyngeal closures. Conclusions: A framework was successfully developed and found to estimate articulator motion more accurately than five current state-of-the-art methods and frameworks. Significance: This is the first deep-learning-based framework specifically for registering dynamic two-dimensional MR images of the vocal tract during speech.
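The average surface distance (ASD) used in this evaluation can be computed from distance transforms of the mask boundaries; below is a numpy/scipy sketch with illustrative function names, not the authors' evaluation code:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask):
    """Boundary voxels of a binary mask."""
    return mask & ~binary_erosion(mask)

def average_surface_distance(a, b):
    """Symmetric average surface distance: mean distance from each
    boundary voxel of one mask to the nearest boundary voxel of the
    other, averaged over both directions."""
    sa, sb = surface(a), surface(b)
    da = distance_transform_edt(~sb)[sa]  # a's surface -> b's surface
    db = distance_transform_edt(~sa)[sb]  # b's surface -> a's surface
    return (da.sum() + db.sum()) / (len(da) + len(db))

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.roll(a, 1, axis=0)  # the same square shifted by one row
```

Identical masks yield an ASD of zero, and the metric grows with boundary disagreement, which is why it complements the overlap-based DSC.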
Collapse
Affiliation(s)
- Matthieu Ruthven
- Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; School of Biomedical Engineering & Imaging Sciences, King’s College London, King’s Health Partners, St Thomas’ Hospital, London SE1 7EH, United Kingdom; Corresponding author at: Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom.
| | - Marc E. Miquel
- Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; Digital Environment Research Institute (DERI), Empire House, 67-75 New Road, Queen Mary University of London, London E1 1HH, United Kingdom; Advanced Cardiovascular Imaging, Barts NIHR BRC, Queen Mary University of London, London EC1M 6BQ, United Kingdom
| | - Andrew P. King
- School of Biomedical Engineering & Imaging Sciences, King’s College London, King’s Health Partners, St Thomas’ Hospital, London SE1 7EH, United Kingdom
| |
Collapse
|