1
A Fast Subpixel Registration Algorithm Based on Single-Step DFT Combined with Phase Correlation Constraint in Multimodality Brain Image. Comput Math Methods Med 2020; 2020:9343461. PMID: 32454887; PMCID: PMC7229540; DOI: 10.1155/2020/9343461.
Abstract
Multimodality brain image registration is a key determinant of the accuracy and speed of brain diagnosis and treatment. To achieve high-precision registration, this paper proposes a fast subpixel registration algorithm for multimodality brain images based on a single-step DFT combined with a phase correlation constraint. First, coarse pixel-level localization is obtained with a downsampled cross-correlation model, which reduces the Fourier transform dimension of the cross-correlation matrix and the number of discrete Fourier transform matrix multiplications, thereby speeding up coarse registration. Then, an improved matrix-multiplication DFT is applied in the neighborhood of the coarse point, and fast subpixel localization is achieved with a bidirectional search strategy. Qualitative and quantitative simulation results show that, compared with the reference registration algorithms, the proposed algorithm greatly reduces space and time complexity without losing accuracy.
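As a rough illustration of the coarse-then-refine, matrix-multiply ("single-step") DFT idea this abstract builds on, the sketch below uses scikit-image's phase_cross_correlation, an off-the-shelf implementation of upsampled-DFT subpixel registration rather than the authors' algorithm; the synthetic images, shift values, and upsample factors are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

# Synthetic example: a reference image and a copy displaced by a known
# subpixel amount (values chosen arbitrarily for illustration).
rng = np.random.default_rng(0)
reference = rng.random((128, 128))
true_shift = (3.37, -1.82)
moving = nd_shift(reference, true_shift, order=3, mode="wrap")

# Coarse step: upsample_factor=1 gives whole-pixel registration from the
# full cross-correlation. Fine step: a large upsample_factor refines the
# peak in a small neighborhood with a matrix-multiply DFT instead of
# zero-padding the whole spectrum.
coarse = phase_cross_correlation(reference, moving, upsample_factor=1)[0]
fine = phase_cross_correlation(reference, moving, upsample_factor=100)[0]

print("coarse (pixel-level) shift:", coarse)
print("refined (subpixel) shift:  ", fine)  # matches the known offset up to sign convention
```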
2
Machado I, Toews M, George E, Unadkat P, Essayed W, Luo J, Teodoro P, Carvalho H, Martins J, Golland P, Pieper S, Frisken S, Golby A, Wells W III, Ou Y. Deformable MRI-Ultrasound registration using correlation-based attribute matching for brain shift correction: Accuracy and generality in multi-site data. Neuroimage 2019; 202:116094. PMID: 31446127; PMCID: PMC6819249; DOI: 10.1016/j.neuroimage.2019.116094.
Abstract
Intraoperative tissue deformation, known as brain shift, decreases the benefit of using preoperative images to guide neurosurgery. Non-rigid registration of preoperative magnetic resonance (MR) images to intraoperative ultrasound (iUS) has been proposed as a means of compensating for brain shift. We focus on the initial registration from MR to pre-durotomy iUS. We present a method that builds on previous work to address the need for accuracy and generality of MR-iUS registration algorithms in multi-site clinical data. High-dimensional texture attributes were used instead of image intensities for registration, and the standard difference-based attribute matching was replaced with correlation-based attribute matching. A strategy that deals explicitly with the large field-of-view mismatch between MR and iUS images was proposed. Key parameters were optimized across independent MR-iUS brain tumor datasets acquired at three institutions, with a total of 43 tumor patients and 758 reference landmarks for evaluating the accuracy of the proposed algorithm. Despite differences in imaging protocols, patient demographics, and landmark distributions, the algorithm reduces the pre-registration landmark errors in the three datasets (5.37±4.27, 4.18±1.97, and 6.18±3.38 mm, respectively) to a consistently low level (2.28±0.71, 2.08±0.37, and 2.24±0.78 mm, respectively). The algorithm was tested against 15 other algorithms and is competitive with the state of the art on multiple datasets. We show that it has among the lowest errors in all datasets (accuracy) while using a fixed set of parameters for multi-site data (generality). In contrast, other algorithms or tools of similar performance require per-dataset parameter tuning (high accuracy but lower generality), and those that keep fixed parameters have larger errors or inconsistent performance (generality but not top accuracy). Landmark errors were further characterized by brain region and tumor type, a topic so far missing from the literature.
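For intuition on why correlation-based attribute matching is preferred over difference-based matching across modalities, here is a minimal NumPy sketch. The toy texture_attributes descriptor and the simulated MR/iUS patches are assumptions standing in for the paper's high-dimensional texture attributes; only the matching-criterion contrast is illustrated.

```python
import numpy as np

def texture_attributes(patch):
    """Toy high-dimensional attribute vector: raw intensities plus simple
    gradient components. The real method uses richer texture descriptors."""
    gy, gx = np.gradient(patch.astype(float))
    return np.concatenate([patch.ravel(), gx.ravel(), gy.ravel()])

def ssd_match(a, b):
    """Difference-based matching: sensitive to the modality-dependent
    scaling/offset between MR and iUS attribute responses."""
    return -np.sum((a - b) ** 2)

def correlation_match(a, b):
    """Correlation-based matching: invariant to linear rescaling of the
    attributes, which is the motivation for preferring it on MR-iUS data."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(1)
mr_patch = rng.random((7, 7))
# Simulate an ultrasound-like view of the same anatomy: linearly rescaled
# intensities plus noise.
ius_patch = 0.4 * mr_patch + 0.3 + 0.02 * rng.standard_normal((7, 7))

a, b = texture_attributes(mr_patch), texture_attributes(ius_patch)
print("SSD score (scale-sensitive):        ", ssd_match(a, b))
print("Correlation score (scale-invariant):", correlation_match(a, b))
```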
Affiliation(s)
- Inês Machado: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Matthew Toews: Department of Systems Engineering, École de Technologie Supérieure, Montreal, Canada
- Elizabeth George: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Prashin Unadkat: Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Walid Essayed: Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Jie Luo: Graduate School of Frontier Sciences, University of Tokyo, Tokyo, Japan
- Pedro Teodoro: Escola Superior Náutica Infante D. Henrique, Lisbon, Portugal
- Herculano Carvalho: Department of Neurosurgery, Hospital de Santa Maria, CHLN, Lisbon, Portugal
- Jorge Martins: Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Polina Golland: Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Steve Pieper: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Isomics, Inc., Cambridge, MA, USA
- Sarah Frisken: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Alexandra Golby: Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- William Wells III: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Yangming Ou: Department of Pediatrics and Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
4
Gu S, Meng X, Sciurba FC, Ma H, Leader J, Kaminski N, Gur D, Pu J. Bidirectional elastic image registration using B-spline affine transformation. Comput Med Imaging Graph 2014; 38:306-314. PMID: 24530210; DOI: 10.1016/j.compmedimag.2014.01.002.
Abstract
A registration scheme termed B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation, instead of the traditional translation, at each control point. Mathematically, BSAT is a generalized form of both the affine transformation and the traditional B-spline transformation (BST). To improve the performance of the iterative closest point (ICP) method when registering two homologous shapes with large deformation, a bidirectional rather than the traditional unidirectional objective/cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a subdivision strategy is used to achieve reasonable registration efficiency. The performance of the developed scheme was assessed using both a two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model obtains reasonable registration accuracy.
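A minimal sketch of the BSAT idea, assuming a uniform control grid with cubic B-spline blending: each control point carries a local affine matrix rather than a translation, so setting every matrix to the identity recovers the identity mapping, and setting the linear parts to zero recovers the translation-only B-spline FFD. Grid layout, boundary handling, and the bidirectional ICP cost are simplified away.

```python
import numpy as np

def bspline_basis(u):
    """Cubic B-spline basis weights for a fractional offset u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ])

def bsat_transform(x, control_affines, spacing):
    """Evaluate a B-spline affine transformation at a 2D point x.

    control_affines[i, j] is a 2x3 matrix [A | t] attached to control point
    (i, j); the transform blends the local affine maps with B-spline weights.
    """
    x = np.asarray(x, dtype=float)
    gx, gy = x / spacing
    ix, iy = int(np.floor(gx)) - 1, int(np.floor(gy)) - 1
    wu, wv = bspline_basis(gx - np.floor(gx)), bspline_basis(gy - np.floor(gy))
    xh = np.append(x, 1.0)  # homogeneous coordinate
    out = np.zeros(2)
    for a in range(4):
        for b in range(4):
            M = control_affines[ix + a, iy + b]  # local 2x3 affine
            out += wu[a] * wv[b] * (M @ xh)
    return out

# Identity example: every control point carries the identity affine, so the
# blended transform returns the input point unchanged (weights sum to one).
grid = np.zeros((12, 12, 2, 3))
grid[..., :, :2] = np.eye(2)
print(bsat_transform([20.0, 25.0], grid, spacing=10.0))  # ~ [20, 25]
```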
Affiliation(s)
- Suicheng Gu: Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, United States
- Xin Meng: Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, United States
- Frank C Sciurba: Department of Medicine, University of Pittsburgh, Pittsburgh, PA 15213, United States
- Hongxia Ma: Department of Radiology, First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, P.R. China
- Joseph Leader: Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, United States
- Naftali Kaminski: Department of Medicine, University of Pittsburgh, Pittsburgh, PA 15213, United States
- David Gur: Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, United States
- Jiantao Pu: Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, United States; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, United States
5
Sotiras A, Davatzikos C, Paragios N. Deformable medical image registration: a survey. IEEE Trans Med Imaging 2013; 32:1153-1190. PMID: 23739795; PMCID: PMC3745275; DOI: 10.1109/tmi.2013.2265603.
Abstract
Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: 1) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; 2) longitudinal studies, where temporal structural or anatomical changes are investigated; and 3) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently. The most recent techniques are presented in a systematic fashion. The contribution of this paper is to provide an extensive account of registration techniques in a systematic manner.
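To make the survey's decomposition concrete, the toy sketch below wires the three components it studies independently into one generic example: a dense displacement field as the deformation model, SSD as the matching criterion, and a demons-like gradient step with Gaussian smoothing as the optimizer and regularizer. It is not any particular method from the survey; the parameters and synthetic images are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter

def register_ssd_demons(fixed, moving, iters=50, step=0.5, sigma=1.0):
    """Generic intensity-based deformable registration: warp, compare, update."""
    u = np.zeros((2, *fixed.shape))  # deformation model: dense displacement field
    grid = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    for _ in range(iters):
        warped = map_coordinates(moving, grid + u, order=1, mode="nearest")
        diff = warped - fixed                     # matching criterion: SSD residual
        gy, gx = np.gradient(warped)
        u[0] -= step * diff * gy                  # optimizer: gradient descent step
        u[1] -= step * diff * gx
        u = gaussian_filter(u, sigma=(0, sigma, sigma))  # regularization: smoothness
    return u

# Tiny usage example on synthetic data (a smoothed noise image and a shifted copy).
rng = np.random.default_rng(2)
fixed = gaussian_filter(rng.random((64, 64)), 3)
moving = np.roll(fixed, (2, -3), axis=(0, 1))
u = register_ssd_demons(fixed, moving)
print("mean recovered displacement:", u.mean(axis=(1, 2)))
```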
Affiliation(s)
- Aristeidis Sotiras: Section of Biomedical Image Analysis, Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Christos Davatzikos: Section of Biomedical Image Analysis, Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Nikos Paragios: Center for Visual Computing, Department of Applied Mathematics, Ecole Centrale de Paris, Chatenay-Malabry 92295, France; Equipe Galen, INRIA Saclay - Ile-de-France, Orsay 91893, France; Universite Paris-Est, LIGM (UMR CNRS), Center for Visual Computing, Ecole des Ponts ParisTech, Champs-sur-Marne 77455, France
6
Liao S, Gao Y, Lian J, Shen D. Sparse patch-based label propagation for accurate prostate localization in CT images. IEEE Trans Med Imaging 2013; 32:419-434. PMID: 23204280; PMCID: PMC3845245; DOI: 10.1109/tmi.2012.2230018.
Abstract
In this paper, we propose a new prostate computed tomography (CT) segmentation method for image-guided radiation therapy. The main contributions of our method are as follows. 1) Instead of using voxel intensity information alone, a patch-based representation in a discriminative feature space selected with logistic sparse LASSO is used as the anatomical signature to deal with the low-contrast problem in prostate CT images. 2) Based on the proposed patch-based signature, a new multi-atlas label fusion method formulated under a sparse representation framework is designed to segment the prostate in new treatment images, with guidance from the previously segmented images of the same patient. This method estimates the prostate likelihood of each voxel in the new treatment image from its nearby candidate voxels in the previously segmented images, based on the nonlocal means principle and a sparsity constraint. 3) A hierarchical labeling strategy is further designed to perform label fusion, where voxels with high confidence are labeled first to provide useful contextual information for labeling the remaining voxels in the same image. 4) An online update mechanism is adopted to progressively collect more patient-specific information from newly segmented treatment images of the same patient, for adaptive and more accurate segmentation. The proposed method has been extensively evaluated on a prostate CT image database of 24 patients, each with more than 10 treatment images, and compared with several state-of-the-art prostate CT segmentation algorithms using various evaluation metrics. Experimental results demonstrate that the proposed method consistently achieves higher segmentation accuracy than the other methods under comparison.
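A simplified sketch of the sparse patch-based label propagation step, assuming scikit-learn's Lasso as the sparse solver: the target patch is coded against a dictionary of candidate patches from previously segmented images, and the resulting sparse weights propagate the candidates' labels into a prostate likelihood. The discriminative feature selection, hierarchical labeling, and online update of the paper are omitted, and the toy patches and labels are made up.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_label_fusion(target_patch, atlas_patches, atlas_labels, alpha=0.01):
    """Estimate a prostate likelihood for one voxel from candidate patches."""
    D = np.stack([p.ravel() for p in atlas_patches], axis=1)  # patch dictionary
    y = target_patch.ravel()
    lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)  # sparse, non-negative coding
    lasso.fit(D, y)
    w = lasso.coef_
    if w.sum() <= 0:
        return 0.0
    return float(w @ atlas_labels / w.sum())  # weighted vote of candidate labels in [0, 1]

# Toy usage: 30 candidate patches with binary labels; the target resembles one of them.
rng = np.random.default_rng(3)
atlas_patches = [rng.random((5, 5)) for _ in range(30)]
atlas_labels = rng.integers(0, 2, size=30)  # 1 = prostate, 0 = background
target = atlas_patches[4] + 0.01 * rng.standard_normal((5, 5))
print("estimated prostate likelihood:",
      sparse_label_fusion(target, atlas_patches, atlas_labels))
```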
Affiliation(s)
- Shu Liao: Department of Radiology and Biomedical Research Imaging Center (BRIC), Chapel Hill, NC 27599, USA