1
Im JE, Khalifa M, Gregory AV, Erickson BJ, Kline TL. A Systematic Review on the Use of Registration-Based Change Tracking Methods in Longitudinal Radiological Images. J Imaging Inform Med 2024. [PMID: 39578321] [DOI: 10.1007/s10278-024-01333-1]
Abstract
Registration is the process of spatially and/or temporally aligning different images. It is a critical tool that can facilitate the automatic tracking of pathological changes detected in radiological images and align images captured by different imaging systems and/or those acquired using different acquisition parameters. The longitudinal analysis of clinical changes has a significant role in helping clinicians evaluate disease progression and determine the most suitable course of treatment for patients. This study provides a comprehensive review of the role registration-based approaches play in automated change tracking in radiological imaging and explores the three types of registration approaches which include rigid, affine, and nonrigid registration, as well as methods of detecting and quantifying changes in registered longitudinal images: the intensity-based approach and the deformation-based approach. After providing an overview and background, we highlight the clinical applications of these methods, specifically focusing on computed tomography (CT) and magnetic resonance imaging (MRI) in tumors and multiple sclerosis (MS), two of the most heavily studied areas in automated change tracking. We conclude with a discussion and recommendation for future directions.
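For readers who want a concrete starting point, the sketch below illustrates, in minimal NumPy, the two change-quantification strategies the review contrasts: an intensity-based difference map between co-registered scans and a deformation-based measure via the Jacobian determinant of the displacement field. Array names and shapes are illustrative assumptions, not code from the review.

```python
import numpy as np

def intensity_change_map(baseline, followup):
    """Voxel-wise intensity difference between two co-registered volumes."""
    return followup.astype(np.float32) - baseline.astype(np.float32)

def jacobian_determinant(displacement):
    """Local volume change encoded by a dense displacement field.

    displacement: array of shape (3, Z, Y, X) holding voxel displacements
    from a nonrigid registration. det(J) > 1 indicates local expansion,
    det(J) < 1 local shrinkage.
    """
    # Spatial gradients of each displacement component along z, y, x.
    grads = [np.gradient(displacement[i]) for i in range(3)]
    jac = np.empty(displacement.shape[1:] + (3, 3), dtype=np.float32)
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j]
    # The deformation is identity + displacement, so add I before the determinant.
    jac += np.eye(3, dtype=np.float32)
    return np.linalg.det(jac)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.random((32, 32, 32))
    followup = baseline + 0.05 * rng.random((32, 32, 32))
    disp = 0.1 * rng.random((3, 32, 32, 32))
    print(intensity_change_map(baseline, followup).mean())
    print(jacobian_determinant(disp).mean())
```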
Affiliation(s)
- Jeeho E Im
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA
- Muhammed Khalifa
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA
- Adriana V Gregory
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA
- Bradley J Erickson
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA
- Timothy L Kline
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA.
2
Bopp MHA, Grote A, Gjorgjevski M, Pojskic M, Saß B, Nimsky C. Enabling Navigation and Augmented Reality in the Sitting Position in Posterior Fossa Surgery Using Intraoperative Ultrasound. Cancers (Basel) 2024; 16:1985. [PMID: 38893106] [PMCID: PMC11171013] [DOI: 10.3390/cancers16111985]
Abstract
Despite its broad use in cranial and spinal surgery, navigation support and microscope-based augmented reality (AR) have not yet found their way into posterior fossa surgery in the sitting position. While this position offers surgical benefits, navigation accuracy, and therefore the usefulness of navigation itself, appears limited. Intraoperative ultrasound (iUS) can be applied at any time during surgery, delivering real-time images that can be used for accuracy verification and navigation updates. Within this study, its applicability in the sitting position was assessed. Data from 15 patients with lesions within the posterior fossa who underwent magnetic resonance imaging (MRI)-based navigation-supported surgery in the sitting position were retrospectively analyzed using the standard reference array and a new rigid image-based MRI-iUS co-registration. The navigation accuracy was evaluated based on the spatial overlap of the outlined lesions and the distance between corresponding landmarks in both data sets. Image-based co-registration significantly improved (p < 0.001) the spatial overlap of the outlined lesion (0.42 ± 0.30 vs. 0.65 ± 0.23) and significantly reduced (p < 0.001) the distance between corresponding landmarks (8.69 ± 6.23 mm vs. 3.19 ± 2.73 mm), allowing for the sufficient use of navigation and AR support. Navigated iUS can therefore serve as an easy-to-use tool to enable navigation support for posterior fossa surgery in the sitting position.
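A minimal NumPy sketch of the two agreement measures reported above (Dice overlap of the lesion outlines and mean landmark distance), assuming binary masks on a common grid and corresponding landmark arrays in millimeters; the masks and points below are toy values.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Spatial overlap of two binary lesion masks (1 = perfect overlap)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def mean_landmark_distance(points_a, points_b):
    """Mean Euclidean distance between corresponding landmarks (N x 3, in mm)."""
    return np.linalg.norm(points_a - points_b, axis=1).mean()

# Toy example with made-up masks and landmarks.
mask_mri = np.zeros((64, 64, 64), bool); mask_mri[20:40, 20:40, 20:40] = True
mask_ius = np.zeros((64, 64, 64), bool); mask_ius[24:44, 22:42, 20:40] = True
print(dice_coefficient(mask_mri, mask_ius))

mri_pts = np.array([[10.0, 12.0, 30.0], [40.0, 41.0, 22.0]])
ius_pts = mri_pts + np.array([[1.0, -2.0, 0.5], [0.0, 1.5, -1.0]])
print(mean_landmark_distance(mri_pts, ius_pts))
```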
Affiliation(s)
- Miriam H. A. Bopp
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Alexander Grote
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Marko Gjorgjevski
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Mirza Pojskic
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Benjamin Saß
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Christopher Nimsky
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
3
Pirhadi A, Salari S, Ahmad MO, Rivaz H, Xiao Y. Robust landmark-based brain shift correction with a Siamese neural network in ultrasound-guided brain tumor resection. Int J Comput Assist Radiol Surg 2023; 18:501-508. [PMID: 36306056] [DOI: 10.1007/s11548-022-02770-5]
Abstract
PURPOSE In brain tumor surgery, tissue shift (called brain shift) can move the surgical target and invalidate the surgical plan. As a cost-effective and flexible tool, intra-operative ultrasound (iUS) combined with robust image registration algorithms can effectively track brain shift to help ensure surgical outcomes and safety. METHODS We propose to employ a Siamese neural network, first trained on natural images and fine-tuned with domain-specific data, to automatically detect matching anatomical landmarks in iUS scans at different surgical stages. An efficient 2.5D approach and an iterative re-weighted least squares algorithm are utilized to perform landmark-based registration for brain shift correction. The proposed method is validated and compared against the state-of-the-art methods using the public BITE and RESECT datasets. RESULTS Registrations of pre-resection iUS scans to during- and post-resection iUS images were performed. The results show that the proposed method significantly improves on the initial misalignment ([Formula: see text]) and is comparable to the state-of-the-art methods validated on the same datasets. CONCLUSIONS We have proposed a robust technique to efficiently detect matching landmarks in iUS and perform brain shift correction with excellent performance. It has the potential to improve the accuracy and safety of neurosurgery.
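As a rough illustration of the registration step described in the methods, the following NumPy sketch combines a weighted rigid (Kabsch) fit with iteratively re-weighted least squares so that outlier landmark matches are down-weighted; it is a generic IRLS formulation under assumed inputs, not the authors' implementation.

```python
import numpy as np

def weighted_rigid_fit(src, dst, w):
    """Weighted least-squares rigid fit (Kabsch) mapping src points to dst points."""
    w = w / w.sum()
    src_c = src - (w[:, None] * src).sum(0)
    dst_c = dst - (w[:, None] * dst).sum(0)
    H = (w[:, None] * src_c).T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = (w[:, None] * dst).sum(0) - R @ (w[:, None] * src).sum(0)
    return R, t

def irls_rigid_registration(src, dst, iters=20, eps=1e-6):
    """Iteratively re-weighted least squares: down-weights outlier matches."""
    w = np.ones(len(src))
    for _ in range(iters):
        R, t = weighted_rigid_fit(src, dst, w)
        residuals = np.linalg.norm(src @ R.T + t - dst, axis=1)
        w = 1.0 / (residuals + eps)   # simple robust (L1-like) weighting
    return R, t

# Toy check: recover a known rotation/translation despite a few gross outliers.
rng = np.random.default_rng(1)
src = rng.random((30, 3)) * 50
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([2.0, -1.0, 3.0])
dst[:3] += 20.0
R, t = irls_rigid_registration(src, dst)
print(np.round(t, 2))
```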
Affiliation(s)
- Amir Pirhadi
- Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada.
- Soorena Salari
- Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada
- M Omair Ahmad
- Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada
- Hassan Rivaz
- Department of Electrical and Computer Engineering and PERFORM Centre, Concordia University, Montreal, Canada
- Yiming Xiao
- Department of Computer Science and Software Engineering and PERFORM Centre, Concordia University, Montreal, Canada
4
Tran MQ, Do T, Tran H, Tjiputra E, Tran QD, Nguyen A. Light-Weight Deformable Registration Using Adversarial Learning With Distilling Knowledge. IEEE Trans Med Imaging 2022; 41:1443-1453. [PMID: 34990354] [DOI: 10.1109/tmi.2022.3141013]
Abstract
Deformable registration is a crucial step in many medical procedures such as image-guided surgery and radiation therapy. Most recent learning-based methods focus on improving accuracy by optimizing the non-linear spatial correspondence between the input images. As a result, these methods are computationally expensive and require modern graphic cards for real-time deployment. In this paper, we introduce a new Light-weight Deformable Registration network that significantly reduces the computational cost while achieving competitive accuracy. In particular, we propose a new adversarial learning with distilling knowledge algorithm that successfully transfers meaningful information from the effective but expensive teacher network to the student network. We design the student network such that it is light-weight and well suited for deployment on a typical CPU. Extensive experimental results on different public datasets show that our proposed method achieves state-of-the-art accuracy while being significantly faster than recent methods. We further show that the use of our adversarial learning algorithm is essential for a time-efficient deformable registration method. Finally, our source code and trained models are available at https://github.com/aioz-ai/LDR_ALDK.
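A hedged PyTorch sketch of the distillation idea only: the student's displacement field is supervised by the teacher's field in addition to an image-similarity term. The adversarial discriminator of the paper is omitted, and all tensor shapes and the loss weighting are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def student_training_loss(student_flow, teacher_flow, moved_student, fixed, alpha=0.5):
    """Toy loss for a light-weight student registration network.

    student_flow / teacher_flow: dense displacement fields, shape (B, 3, D, H, W)
    moved_student: moving image warped by the student's field, shape (B, 1, D, H, W)
    fixed: fixed image, shape (B, 1, D, H, W)
    The adversarial term of the original method (a discriminator judging
    student vs. teacher deformations) is intentionally left out here.
    """
    similarity = F.mse_loss(moved_student, fixed)          # image matching term
    distillation = F.mse_loss(student_flow, teacher_flow)  # mimic the teacher
    return similarity + alpha * distillation
```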
5
Sánchez-Margallo JA, Tas L, Moelker A, van den Dobbelsteen JJ, Sánchez-Margallo FM, Langø T, van Walsum T, van de Berg NJ. Block-matching-based registration to evaluate ultrasound visibility of percutaneous needles in liver-mimicking phantoms. Med Phys 2021; 48:7602-7612. [PMID: 34665885] [PMCID: PMC9298012] [DOI: 10.1002/mp.15305]
Abstract
Purpose To present a novel methodical approach to compare visibility of percutaneous needles in ultrasound images. Methods A motor‐driven rotation platform was used to gradually change the needle angle while capturing image data. Data analysis was automated using block‐matching‐based registration, with a tracking and refinement step. Every 25 frames, a Hough transform was used to improve needle alignments after large rotations. The method was demonstrated by comparing three commercial needles (14G radiofrequency ablation, RFA; 18G Trocar; 22G Chiba) and six prototype needles with different sizes, materials, and surface conditions (polished, sand‐blasted, and kerfed), within polyvinyl alcohol phantom tissue and ex vivo bovine liver models. For each needle and angle, a contrast‐to‐noise ratio (CNR) was determined to quantify visibility. CNR values are presented as a function of needle type and insertion angle. In addition, the normalized area under the (CNR‐angle) curve was used as a summary metric to compare needles. Results In phantom tissue, the first kerfed needle design had the largest normalized area of visibility and the polished 1 mm diameter stainless steel needle the smallest (0.704 ± 0.199 vs. 0.154 ± 0.027, p < 0.01). In the ex vivo model, the second kerfed needle design had the largest normalized area of visibility, and the sand‐blasted stainless steel needle the smallest (0.470 ± 0.190 vs. 0.127 ± 0.047, p < 0.001). As expected, the analysis showed needle visibility peaks at orthogonal insertion angles. For acute or obtuse angles, needle visibility was similar or reduced. Overall, the variability in needle visibility was considerably higher in livers. Conclusion The best overall visibility was found with kerfed needles and the commercial RFA needle. The presented methodical approach to quantify ultrasound visibility allows comparisons of (echogenic) needles, as well as other technological innovations aiming to improve ultrasound visibility of percutaneous needles, such as coatings, material treatments, and beam steering approaches.
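The visibility metrics above can be sketched as follows, using one common CNR definition and a simple normalization of the area under the CNR-angle curve; the reference CNR and the exact normalization used in the paper are assumptions.

```python
import numpy as np

def contrast_to_noise_ratio(image, needle_mask, background_mask):
    """One common CNR definition: absolute mean difference over background noise."""
    sig = image[needle_mask].mean()
    bkg = image[background_mask].mean()
    noise = image[background_mask].std()
    return abs(sig - bkg) / noise

def normalized_visibility_area(angles_deg, cnr_values, cnr_ref=10.0):
    """Area under the CNR-vs-angle curve, normalized by an assumed reference CNR
    and by the swept angular range so needles can be compared on one scale."""
    angles_deg = np.asarray(angles_deg, float)
    cnr_values = np.asarray(cnr_values, float)
    # Trapezoidal integration over the insertion-angle sweep.
    area = np.sum(0.5 * (cnr_values[1:] + cnr_values[:-1]) * np.diff(angles_deg))
    return area / (cnr_ref * (angles_deg[-1] - angles_deg[0]))
```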
Affiliation(s)
- Juan A Sánchez-Margallo
- Bioengineering and Health Technologies Unit, Jesús Usón Minimally Invasive Surgery Centre, Cáceres, Spain
- Lisette Tas
- Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands
- Adriaan Moelker
- Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Theo van Walsum
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, The Netherlands
- Nick J van de Berg
- Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
6
Mao Z, Zhao L, Huang S, Fan Y, Pui-Wai Lee A. Direct 3D ultrasound fusion for transesophageal echocardiography. Comput Biol Med 2021; 134:104502. [PMID: 34130220] [DOI: 10.1016/j.compbiomed.2021.104502]
Abstract
BACKGROUND Real-time three-dimensional transesophageal echocardiography (3D TEE) has been increasingly used in the clinic for fast 3D analysis of cardiac anatomy and function. However, 3D TEE still suffers from a limited field of view (FoV). It is challenging to apply conventional multi-view fusion methods to 3D TEE images because feature-based registration methods tend to fail in the ultrasound scenario, and conventional intensity-based methods have poor convergence properties and require an iterative coarse-to-fine strategy. METHODS A novel multi-view registration and fusion method is proposed to enlarge the FoV of 3D TEE images efficiently. A direct method is proposed to solve the registration problem in the Lie algebra space. Fast implementation is realized by searching voxels on three orthogonal planes between two volumes. In addition, a weighted-average 3D fusion method is proposed to fuse the aligned images seamlessly; a sequence of 3D TEE images is fused incrementally. RESULTS Qualitative and quantitative results of in-vivo experiments indicate that the proposed registration algorithm outperforms a state-of-the-art PCA-based registration method in terms of accuracy and efficiency. Image registration and fusion performed on 76 in-vivo 3D TEE volumes from nine patients show a clear enlargement of the FoV (around two-fold) in the obtained fused images. CONCLUSIONS The proposed methods can fuse 3D TEE images efficiently and accurately so that the whole Region of Interest (ROI) can be seen in a single frame. This research shows good potential to assist clinical diagnosis, preoperative planning, and future intraoperative guidance with 3D TEE.
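A minimal sketch of weighted-average compounding of already co-registered volumes, using distance-to-border feathering as one plausible choice of weights; the paper's exact weighting scheme and its Lie-algebra registration step are not reproduced here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fuse_volumes(volumes, masks):
    """Weighted-average compounding of co-registered volumes.

    volumes: list of arrays resampled onto a common grid
    masks:   list of boolean arrays marking where each volume has data
    Weights taper toward each volume's border (feathering), which keeps
    the seams between overlapping acquisitions smooth.
    """
    num = np.zeros(volumes[0].shape, np.float32)
    den = np.zeros(volumes[0].shape, np.float32)
    for vol, mask in zip(volumes, masks):
        w = distance_transform_edt(mask).astype(np.float32)
        num += w * vol
        den += w
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```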
Affiliation(s)
- Zhehua Mao
- Centre for Autonomous Systems, Faculty of Engineering and Information Technology, University of Technology, Sydney, Australia.
- Liang Zhao
- Centre for Autonomous Systems, Faculty of Engineering and Information Technology, University of Technology, Sydney, Australia
- Shoudong Huang
- Centre for Autonomous Systems, Faculty of Engineering and Information Technology, University of Technology, Sydney, Australia
- Yiting Fan
- Department of Cardiology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Alex Pui-Wai Lee
- Division of Cardiology, Department of Medicine and Therapeutics, Prince of Wales Hospital and Laboratory of Cardiac Imaging and 3D Printing, Li Ka Shing Institute of Health Science, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, China
7
Navigated 3D Ultrasound in Brain Metastasis Surgery: Analyzing the Differences in Object Appearances in Ultrasound and Magnetic Resonance Imaging. Appl Sci (Basel) 2020. [DOI: 10.3390/app10217798]
Abstract
Background: Implementation of intraoperative 3D ultrasound (i3D US) into modern neuronavigational systems offers the possibility of live imaging and subsequent imaging updates. However, different modalities, image acquisition strategies, and timing of imaging influence object appearances. We analyzed the differences in object appearances in ultrasound (US) and magnetic resonance imaging (MRI) in 35 cases of brain metastasis operated on in a multimodal navigational setup after intraoperative computed tomography (iCT)-based registration. Method: Registration accuracy was determined using the target registration error (TRE). Lesions segmented in preoperative magnetic resonance imaging (preMRI) and i3D US were compared focusing on object size, location, and similarity. Results: The mean and standard deviation (SD) of the TRE was 0.84 ± 0.36 mm. Objects were similar in size (mean ± SD in preMRI: 13.6 ± 16.0 cm3 vs. i3D US: 13.5 ± 16.0 cm3). The Dice coefficient was 0.68 ± 0.22 (mean ± SD), the Hausdorff distance 8.1 ± 2.9 mm (mean ± SD), and the Euclidean distance of the centers of gravity 3.7 ± 2.5 mm (mean ± SD). Conclusion: i3D US clearly delineates tumor boundaries and allows live updating of imaging for compensation of brain shift, which can already be identified to a significant extent before dural opening.
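The Hausdorff and center-of-gravity distances reported above can be computed roughly as in this SciPy sketch, assuming binary segmentations on a shared voxel grid and a known voxel spacing in millimeters (for large masks, boundary voxels are usually extracted first for speed).

```python
import numpy as np
from scipy.ndimage import center_of_mass
from scipy.spatial.distance import directed_hausdorff

def mask_agreement(mask_mri, mask_us, spacing=(1.0, 1.0, 1.0)):
    """Hausdorff distance and centroid (center-of-gravity) distance, in mm,
    between two binary segmentations defined on the same grid."""
    spacing = np.asarray(spacing)
    pts_a = np.argwhere(mask_mri) * spacing
    pts_b = np.argwhere(mask_us) * spacing
    hausdorff = max(directed_hausdorff(pts_a, pts_b)[0],
                    directed_hausdorff(pts_b, pts_a)[0])
    cog_a = np.array(center_of_mass(mask_mri)) * spacing
    cog_b = np.array(center_of_mass(mask_us)) * spacing
    return hausdorff, np.linalg.norm(cog_a - cog_b)
```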
8
Dong S, Luo G, Tam C, Wang W, Wang K, Cao S, Chen B, Zhang H, Li S. Deep Atlas Network for Efficient 3D Left Ventricle Segmentation on Echocardiography. Med Image Anal 2020; 61:101638. [DOI: 10.1016/j.media.2020.101638]
9
Cheung W, Stevenson GN, de Melo Tavares Ferreira AEG, Alphonse J, Welsh AW. Feasibility of image registration and fusion for evaluation of structure and perfusion of the entire second trimester placenta by three-dimensional power Doppler ultrasound. Placenta 2020; 94:13-19. [PMID: 32217266] [DOI: 10.1016/j.placenta.2020.03.005]
Abstract
BACKGROUND Placental perfusion can be evaluated by 3D power Doppler ultrasound (3D PD-US), particularly using the validated tool 3D Fractional Moving Blood Volume (3D-FMBV); however regional variability and size limitations beyond the first trimester mean that multiple 3D PD-US volumes are required to evaluate the whole organ. PURPOSE We assessed the feasibility of manual offline stitching of second trimester 3D PD-US volumes of the placenta to assess whole organ perfusion using 3D-FMBV. MATERIALS AND METHODS This was a single-centre, prospective, observational cohort study of 36 normal second trimester singleton pregnancies with anterior placentas. 3D PD-US placental volumes were manually segmented offline and stitched together by rigid registration using manually selected, pair-wise coordinates. Data acquisition and offline volume segmentation and stitching were triplicated by a single observer with Dice similarity coefficient (DSC) and Hausdorff distance used to assess consistency. Intraclass correlation coefficient (ICC) was used to assess intra-observer repeatability of 3D-FMBV and placental volume. RESULTS Acquisition and stitching success were 94% and 88%, respectively. Median time for acquisition, segmentation and stitching were 13 min, 40 min and 95 min, respectively. Median intra-observer DSCs were 0.94 and 0.88, and Hausdorff distances were 11.85 mm and 36.6 mm, for segmentations and stitching, respectively. CONCLUSION 3D-ultrasound volume stitching of the placenta is technically feasible. Intra-observer repeatability was good to excellent for all measured parameters. This work demonstrates technical feasibility; further studies may provide the basis of an in-vivo assessment tool to measure the placenta in mid-to late pregnancy.
Affiliation(s)
- Winnie Cheung
- School of Women's and Children's Health, University of New South Wales, Randwick, New South Wales, Australia
- Gordon N Stevenson
- School of Women's and Children's Health, University of New South Wales, Randwick, New South Wales, Australia
- Jennifer Alphonse
- School of Women's and Children's Health, University of New South Wales, Randwick, New South Wales, Australia
- Alec W Welsh
- School of Women's and Children's Health, University of New South Wales, Randwick, New South Wales, Australia; Department of Maternal-Fetal Medicine, Royal Hospital for Women, Randwick, New South Wales, Australia.
10
Xing Q, Chitnis P, Sikdar S, Alshiek J, Shobeiri SA, Wei Q. M3VR-A multi-stage, multi-resolution, and multi-volumes-of-interest volume registration method applied to 3D endovaginal ultrasound. PLoS One 2019; 14:e0224583. [PMID: 31751356] [PMCID: PMC6872108] [DOI: 10.1371/journal.pone.0224583]
Abstract
Heterogeneity of echo-texture and lack of sharply delineated tissue boundaries in diagnostic ultrasound images make three-dimensional (3D) registration challenging, especially when the volumes to be registered are considerably different due to local changes. We implemented a novel computational method that optimally registers volumetric ultrasound image data containing significant, localized anatomical differences: M3VR, a Multi-stage, Multi-resolution, and Multi-volumes-of-interest Volume Registration method. A single-region registration is optimized first for a close initial alignment to avoid convergence to a locally optimal solution. Multiple sub-volumes of interest can then be selected as target alignment regions to achieve confident consistency across the volume. Finally, a multi-resolution rigid registration is performed on these sub-volumes associated with different weights in the cost function. We applied the method to 3D endovaginal ultrasound image data acquired from patients during a biopsy procedure of the pelvic floor muscle. Systematic assessment of our proposed method through cross validation demonstrated its accuracy and robustness. The algorithm can also be applied to medical imaging data of other modalities for which traditional rigid registration methods would fail.
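A toy sketch of the core idea of weighting several sub-volumes of interest within a single similarity cost; the slice selections, weights, and the use of normalized cross-correlation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two image blocks of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def multi_voi_cost(fixed, moving_resampled, vois, weights):
    """Weighted similarity over several sub-volumes of interest.

    vois:    list of (zslice, yslice, xslice) tuples selecting each VOI
    weights: relative importance of each VOI in the overall cost
    Returns a value to be maximized by the rigid transform optimizer.
    """
    total = 0.0
    for sl, w in zip(vois, weights):
        total += w * ncc(fixed[sl], moving_resampled[sl])
    return total / sum(weights)
```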
Affiliation(s)
- Qi Xing
- Department of Computer Science, George Mason University, Fairfax, Virginia, United States of America
- The School of Information Science and Technology, Southwest Jiaotong University, Sichuan, China
- Parag Chitnis
- Department of Bioengineering, George Mason University, Fairfax, Virginia, United States of America
- Siddhartha Sikdar
- Department of Bioengineering, George Mason University, Fairfax, Virginia, United States of America
- Jonia Alshiek
- Department of Obstetrics & Gynecology, INOVA Health System, Falls Church, Virginia, United States of America
- S. Abbas Shobeiri
- Department of Bioengineering, George Mason University, Fairfax, Virginia, United States of America
- Department of Obstetrics & Gynecology, INOVA Health System, Falls Church, Virginia, United States of America
- Qi Wei
- Department of Bioengineering, George Mason University, Fairfax, Virginia, United States of America
11
Terentjev AB, Perrin DP, Settlemier SH, Zurakowski D, Smirnov PO, del Nido PJ, Shturts IV, Vasilyev NV. Temporal enhancement of 2D color Doppler echocardiography sequences by fragment-based frame reordering and refinement. Int J Comput Assist Radiol Surg 2019; 14:577-586. [DOI: 10.1007/s11548-019-01926-0]
12
Automatic and efficient MRI-US segmentations for improving intraoperative image fusion in image-guided neurosurgery. Neuroimage Clin 2019; 22:101766. [PMID: 30901714] [PMCID: PMC6425116] [DOI: 10.1016/j.nicl.2019.101766]
Abstract
Knowledge of the exact tumor location and of the structures at risk in its vicinity is crucial for neurosurgical interventions. Neuronavigation systems support navigation within the patient's brain, based on preoperative MRI (preMRI). However, increasing tissue deformation during the course of tumor resection reduces navigation accuracy based on preMRI. Intraoperative ultrasound (iUS) is therefore used as real-time intraoperative imaging. Registration of preMRI and iUS remains a challenge due to different or varying contrasts in iUS and preMRI. Here, we present an automatic and efficient segmentation of B-mode US images to support the registration process. The falx cerebri and the tentorium cerebelli were identified as examples of central cerebral structures, and their segmentations can serve as a guiding frame for multi-modal image registration. Segmentations of the falx and tentorium were performed with an average Dice coefficient of 0.74 and an average Hausdorff distance of 12.2 mm. The subsequent registration incorporates these segmentations and increases accuracy, robustness and speed of the overall registration process compared to purely intensity-based registration. For validation, an expert manually located corresponding landmarks. Our approach reduces the initial mean Target Registration Error from 16.9 mm to 3.8 mm using our intensity-based registration and to 2.2 mm with our combined segmentation and registration approach. The intensity-based registration reduced the maximum initial TRE from 19.4 mm to 5.6 mm; with the approach incorporating segmentations, this is reduced to 3.0 mm. Mean volumetric intensity-based registration of preMRI and iUS took 40.5 s, whereas the approach incorporating segmentations took 12.0 s. We demonstrate that our segmentation-based registration increases accuracy, robustness, and speed of multi-modal image registration of preoperative MRI and intraoperative ultrasound images for improving intraoperative image-guided neurosurgery. For this, we provide a fast and efficient segmentation of central anatomical structures of the perifalcine region on ultrasound images. We demonstrate the advantages of our method by comparing the results of our segmentation-based registration with the initial registration provided by the navigation system and with an intensity-based registration approach.
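The target registration error used above is simply the distance between corresponding landmarks after mapping one set through the estimated transform; a small sketch with made-up landmark values follows.

```python
import numpy as np

def mean_tre(landmarks_fixed, landmarks_moving, transform=None):
    """Mean target registration error (mm) over corresponding landmarks.

    transform: optional callable mapping an (N, 3) array of moving-space
    landmarks into fixed space; if None, the initial alignment is evaluated.
    """
    moved = landmarks_moving if transform is None else transform(landmarks_moving)
    return np.linalg.norm(moved - landmarks_fixed, axis=1).mean()

# Illustrative values only: a rigid transform expressed as (R, t).
R = np.eye(3)
t = np.array([1.5, -0.8, 0.3])
fixed_pts = np.array([[12.0, 30.0, 44.0], [25.0, 18.0, 40.0]])
moving_pts = fixed_pts - t
print(mean_tre(fixed_pts, moving_pts))                          # before registration
print(mean_tre(fixed_pts, moving_pts, lambda p: p @ R.T + t))   # after registration
```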
13
Machado I, Toews M, Luo J, Unadkat P, Essayed W, George E, Teodoro P, Carvalho H, Martins J, Golland P, Pieper S, Frisken S, Golby A, Wells W. Non-rigid registration of 3D ultrasound for neurosurgery using automatic feature detection and matching. Int J Comput Assist Radiol Surg 2018; 13:1525-1538. [PMID: 29869321] [PMCID: PMC6151276] [DOI: 10.1007/s11548-018-1786-7]
Abstract
PURPOSE The brain undergoes significant structural change over the course of neurosurgery, including highly nonlinear deformation and resection. It can be informative to recover the spatial mapping between structures identified in preoperative surgical planning and the intraoperative state of the brain. We present a novel feature-based method for achieving robust, fully automatic deformable registration of intraoperative neurosurgical ultrasound images. METHODS A sparse set of local image feature correspondences is first estimated between ultrasound image pairs, after which rigid, affine and thin-plate spline models are used to estimate dense mappings throughout the image. Correspondences are derived from 3D features, distinctive generic image patterns that are automatically extracted from 3D ultrasound images and characterized in terms of their geometry (i.e., location, scale, and orientation) and a descriptor of local image appearance. Feature correspondences between ultrasound images are achieved based on a nearest-neighbor descriptor matching and probabilistic voting model similar to the Hough transform. RESULTS Experiments demonstrate our method on intraoperative ultrasound images acquired before and after opening of the dura mater, during resection and after resection in nine clinical cases. A total of 1620 automatically extracted 3D feature correspondences were manually validated by eleven experts and used to guide the registration. Then, using manually labeled corresponding landmarks in the pre- and post-resection ultrasound images, we show that our feature-based registration reduces the mean target registration error from an initial value of 3.3 to 1.5 mm. CONCLUSIONS This result demonstrates that the 3D features promise to offer a robust and accurate solution for 3D ultrasound registration and to correct for brain shift in image-guided neurosurgery.
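The descriptor-matching step can be sketched as a nearest-neighbor search with Lowe's ratio test; the descriptors, the ratio threshold, and the omission of the downstream voting/model-fitting step are assumptions relative to the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def match_descriptors(desc_pre, desc_post, ratio=0.8):
    """Nearest-neighbor descriptor matching with Lowe's ratio test.

    desc_pre, desc_post: (N, D) and (M, D) arrays of feature descriptors
    extracted from the pre- and post-resection ultrasound volumes.
    Returns index pairs (i, j) of putative correspondences; a voting or
    RANSAC step would normally follow to reject remaining outliers.
    """
    dist = cdist(desc_pre, desc_post)            # all pairwise descriptor distances
    order = np.argsort(dist, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_pre))
    keep = dist[rows, best] < ratio * dist[rows, second]
    return [(i, int(best[i])) for i in np.where(keep)[0]]
```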
Affiliation(s)
- Inês Machado
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA.
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001, Lisbon, Portugal.
- Matthew Toews
- École de Technologie Superieure, 1100 Notre-Dame St W, Montreal, QC, H3C 1K3, Canada
- Jie Luo
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Graduate School of Frontier Sciences, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba, Japan
- Prashin Unadkat
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Walid Essayed
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Elizabeth George
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Pedro Teodoro
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001, Lisbon, Portugal
- Herculano Carvalho
- Department of Neurosurgery, CHLN, Hospital de Santa Maria, Avenida Professor Egas Moniz, 1649-035, Lisbon, Portugal
- Jorge Martins
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001, Lisbon, Portugal
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
- Steve Pieper
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Isomics, Inc., 55 Kirkland St, Cambridge, MA, 02138, USA
- Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Alexandra Golby
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- William Wells
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
14
González SJ, Mooney B, Lin HY, Zhao X, Kiluk JV, Khakpour N, Laronga C, Lee MC. 2-D and 3-D Ultrasound for Tumor Volume Analysis: A Prospective Study. Ultrasound Med Biol 2017; 43:775-781. [PMID: 28187928] [DOI: 10.1016/j.ultrasmedbio.2016.12.009]
Abstract
Ultrasound (US) allows real-time tumor assessment. We evaluated the volumetric limits of 2-D and 3-D US, compared with magnetic resonance imaging (MRI), with a prospective institutional review board-approved clinical evaluation of US-to-MRI volumetric correlation. US images of pre- and post-neoadjuvant breast cancers were obtained. Volume discrepancy was evaluated with the non-parametric Wilcoxon signed-rank test. Expected inter-observer variability <14% was evaluated as relative paired difference (RPD); clinical relevance was gauged with the volumetric standard error of the mean (SEM). For 42 patients, 133 of 170 US examinations were evaluable. For tumors ≤20 cm3, both highly correlated to MRI with RPD within inter-observer variability and Pearson's correlation up to 0.86 (0.80 before and 0.86 after neoadjuvant chemotherapy, respectively). Lesions 20-40 cm3 had US-to-MRI discrepancy within inter-observer variability for 2-D (RPD: 13%), but not 3-D (RPD: 27%) US (SEM: 1.47 cm3 for 2-D, SEM: 2.28 cm3 for 3-D), suggesting clinical utility. Tumors >40 cm3 correlated poorly. Tumor volumes ≤20 cm3 exhibited a good correlation to MRI. Studies of clinical applications are warranted.
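A short SciPy sketch of the statistics mentioned above (Wilcoxon signed-rank test, a relative paired difference, and the standard error of the mean of the paired differences); the volumes and the exact RPD definition below are illustrative assumptions, not data or formulas from the study.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative paired tumor volumes (cm^3); not data from the study.
us_vol = np.array([4.1, 8.7, 15.2, 19.8, 25.3, 33.0])
mri_vol = np.array([4.4, 9.0, 14.6, 21.0, 24.1, 36.2])

# Non-parametric test of the paired volume discrepancy.
stat, p = wilcoxon(us_vol, mri_vol)

# One common form of a relative paired difference: the absolute difference
# normalized by the pair mean (the study's exact definition may differ).
rpd = np.abs(us_vol - mri_vol) / ((us_vol + mri_vol) / 2.0)

# Standard error of the mean of the paired volume differences.
diff = us_vol - mri_vol
sem = diff.std(ddof=1) / np.sqrt(len(diff))
print(p, rpd.mean(), sem)
```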
Affiliation(s)
- Segundo J González
- Comprehensive Breast Program, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida, USA
- Blaise Mooney
- Diagnostic Breast Imaging, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida, USA
- Hui-Yi Lin
- Biostatistics and Bioinformatics, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida, USA
- Xiuhua Zhao
- Biostatistics and Bioinformatics, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida, USA
- John V Kiluk
- Comprehensive Breast Program, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida, USA
- Nazanin Khakpour
- Comprehensive Breast Program, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida, USA
- Christine Laronga
- Comprehensive Breast Program, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida, USA
- M Catherine Lee
- Comprehensive Breast Program, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida, USA.
15
Che C, Mathai TS, Galeotti J. Ultrasound registration: A review. Methods 2017; 115:128-143. [DOI: 10.1016/j.ymeth.2016.12.006]
16
Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation. Med Image Anal 2017; 35:582-598. [DOI: 10.1016/j.media.2016.09.004]
17
Rivaz H. Robust deformable registration of pre- and post-resection ultrasound volumes for visualization of residual tumor in neurosurgery. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2015:141-4. [PMID: 26736220] [DOI: 10.1109/embc.2015.7318320]
Abstract
The brain tissue deforms significantly during neurosurgery, which has led to the use of intra-operative ultrasound in many sites to provide updated ultrasound images of tumor and critical parts of the brain. Several factors degrade the quality of post-resection ultrasound images such as hemorrhage, air bubbles in tumor cavity and the application of blood-clotting agent around the edges of the resection. As a result, registration of post- and pre-resection ultrasound is of significant clinical importance. In this paper, we propose a nonrigid symmetric registration (NSR) framework for accurate alignment of pre- and post-resection volumetric ultrasound images in near real-time. We first formulate registration as the minimization of a regularized cost function, and analytically derive its derivative to efficiently optimize the cost function. We use Efficient Second-order Minimization (ESM) method for fast and robust optimization. Furthermore, we use inverse-consistent deformation method to generate realistic deformation fields. The results show that NSR significantly improves the quality of alignment between pre- and post-resection ultrasound images.
18
Zhou H, Rivaz H. Registration of Pre- and Postresection Ultrasound Volumes With Noncorresponding Regions in Neurosurgery. IEEE J Biomed Health Inform 2016; 20:1240-9. [DOI: 10.1109/jbhi.2016.2554122]
19
Danudibroto A, Bersvendsen J, Gérard O, Mirea O, D'hooge J, Samset E. Spatiotemporal registration of multiple three-dimensional echocardiographic recordings for enhanced field of view imaging. J Med Imaging (Bellingham) 2016; 3:037001. [PMID: 27446972] [DOI: 10.1117/1.jmi.3.3.037001]
Abstract
The use of three-dimensional (3-D) echocardiography is limited by signal dropouts and a narrow field of view. Data compounding is proposed as a solution to overcome these limitations by combining multiple 3-D recordings to form a wide field of view. The first step of the solution requires registration between the recordings in both the spatial and temporal dimensions for dynamic organs such as the heart. Accurate registration between the individual echo recordings is crucial for the quality of compounded volumes. A temporal registration method based on a piecewise one-dimensional cubic B-spline, combined with a multiscale iterative Farnebäck optic flow method for spatial registration, is described. The temporal registration method was validated on in vivo data sets with annotated timing of mitral valve opening. The spatial registration method was validated using in vivo data and compared to registration with Procrustes analysis using manual contouring as a benchmark. The spatial accuracy was assessed in terms of the mean absolute distance and the Hausdorff distance between the left ventricular contours. The results showed that the temporal registration accuracy is in the range of half the time resolution of the echo recordings, and that the spatial accuracy of the proposed method is comparable to manual registration.
Affiliation(s)
- Adriyana Danudibroto
- GE Vingmed Ultrasound, Gaustadalléen 21, Oslo 0349, Norway; KU Leuven, Department of Cardiovascular Sciences, Cardiovascular Imaging and Dynamics Lab, UZ Herestraat 49, Box 7003, Leuven 3000, Belgium
- Jørn Bersvendsen
- GE Vingmed Ultrasound, Gaustadalléen 21, Oslo 0349, Norway; University of Oslo, Department of Informatics, Gaustadalléen 23 B, Oslo 0373, Norway
- Olivier Gérard
- GE Vingmed Ultrasound, Gaustadalléen 21, Oslo 0349, Norway
- Oana Mirea
- KU Leuven, Department of Cardiovascular Sciences, Cardiovascular Imaging and Dynamics Lab, UZ Herestraat 49, Box 7003, Leuven 3000, Belgium
- Jan D'hooge
- KU Leuven, Department of Cardiovascular Sciences, Cardiovascular Imaging and Dynamics Lab, UZ Herestraat 49, Box 7003, Leuven 3000, Belgium
- Eigil Samset
- KU Leuven, Department of Cardiovascular Sciences, Cardiovascular Imaging and Dynamics Lab, UZ Herestraat 49, Box 7003, Leuven 3000, Belgium
20
Bersvendsen J, Toews M, Danudibroto A, Wells WM, Urheim S, Estépar RSJ, Samset E. Robust Spatio-Temporal Registration of 4D Cardiac Ultrasound Sequences. Proc SPIE Int Soc Opt Eng 2016; 9790:97900F. [PMID: 27516706] [PMCID: PMC4976768] [DOI: 10.1117/12.2217005]
Abstract
Registration of multiple 3D ultrasound sectors in order to provide an extended field of view is important for the appreciation of larger anatomical structures at high spatial and temporal resolution. In this paper, we present a method for fully automatic spatio-temporal registration between two partially overlapping 3D ultrasound sequences. The temporal alignment is solved by aligning the normalized cross correlation-over-time curves of the sequences. For the spatial alignment, corresponding 3D Scale Invariant Feature Transform (SIFT) features are extracted from all frames of both sequences independently of the temporal alignment. A rigid transform is then calculated by least squares minimization in combination with random sample consensus. The method is applied to 16 echocardiographic sequences of the left and right ventricles and evaluated against manually annotated temporal events and spatial anatomical landmarks. The mean distances between manually identified landmarks in the left and right ventricles after automatic registration were (mean ± SD) 4.3 ± 1.2 mm compared to a reference error of 2.8 ± 0.6 mm with manual registration. For the temporal alignment, the absolute errors in valvular event times were 14.4 ± 11.6 ms for Aortic Valve (AV) opening, 18.6 ± 16.0 ms for AV closing, and 34.6 ± 26.4 ms for mitral valve opening, compared to a mean inter-frame time of 29 ms.
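The temporal step, aligning two per-frame correlation curves, can be sketched as finding the lag that maximizes their one-dimensional cross-correlation; the curve contents and the toy check below are assumptions for illustration.

```python
import numpy as np

def best_lag(curve_a, curve_b):
    """Signed frame shift s such that curve_b[t - s] best matches curve_a[t].

    curve_a, curve_b: 1-D per-frame feature curves, e.g. the normalized
    cross-correlation of each frame with the first frame of its own sequence.
    """
    a = (curve_a - curve_a.mean()) / (curve_a.std() + 1e-8)
    b = (curve_b - curve_b.mean()) / (curve_b.std() + 1e-8)
    xcorr = np.correlate(a, b, mode="full")
    return int(np.argmax(xcorr)) - (len(b) - 1)

# Toy check: b is a copy of a delayed by 3 frames, so the returned shift is -3.
t = np.linspace(0, 2 * np.pi, 40)
a = np.sin(t)
b = np.roll(a, 3)
print(best_lag(a, b))
```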
Affiliation(s)
- Jørn Bersvendsen
- GE Vingmed Ultrasound, Horten, Norway ; University of Oslo, Oslo, Norway ; Center for Cardiological Innovation, Oslo, Norway
- William M Wells
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Eigil Samset
- GE Vingmed Ultrasound, Horten, Norway ; University of Oslo, Oslo, Norway ; Center for Cardiological Innovation, Oslo, Norway
21
Banerjee J, Klink C, Niessen WJ, Moelker A, van Walsum T. 4D Ultrasound Tracking of Liver and its Verification for TIPS Guidance. IEEE Trans Med Imaging 2016; 35:52-62. [PMID: 26168435] [DOI: 10.1109/tmi.2015.2454056]
Abstract
In this work we describe a 4D registration method for on the fly stabilization of ultrasound volumes for improving image guidance for transjugular intrahepatic portosystemic shunt (TIPS) interventions. The purpose of the method is to enable a continuous visualization of the relevant anatomical planes (determined in a planning stage) in a free breathing patient during the intervention. This requires registration of the planning information to the interventional images, which is achieved in two steps. In the first step tracking is performed across the streaming input. An approximate transformation between the reference image and the incoming image is estimated by composing the intermediate transformations obtained from the tracking. In the second step a subsequent registration is performed between the reference image and the approximately transformed incoming image to account for the accumulation of error. The two step approach helps in reducing the search range and is robust under rotation. We additionally present an approach to initialize and verify the registration. Verification is required when the reference image (containing planning information) is acquired in the past and is not part of the (interventional) 4D ultrasound sequence. The verification score will help in invalidating the registration outcome, for instance, in the case of insufficient overlap or information between the registering images due to probe motion or loss of contact, respectively. We evaluate the method over thirteen 4D US sequences acquired from eight subjects. A graphics processing unit implementation runs the 4D tracking at 9 Hz with a mean registration error of 1.7 mm.
22
Three-dimensional Reconstruction of Peripheral Nerve Internal Fascicular Groups. Sci Rep 2015; 5:17168. [PMID: 26596642] [PMCID: PMC4657002] [DOI: 10.1038/srep17168]
Abstract
Peripheral nerves are important pathways for receiving afferent sensory impulses and sending out efferent motor instructions, as carried out by sensory nerve fibers and motor nerve fibers. It has remained a great challenge to functionally reconnect nerve internal fiber bundles (or fascicles) in nerve repair. One possible solution may be to establish a 3D nerve fascicle visualization system. This study described the key technology of 3D peripheral nerve fascicle reconstruction. Firstly, fixed nerve segments were embedded with position lines, cryostat-sectioned continuously, stained and imaged histologically. Position line cross-sections were identified using a trained support vector machine method, and the coordinates of their central pixels were obtained. Then, nerve section images were registered using the bilinear method, and edges of fascicles were extracted using an improved gradient vector flow snake method. Subsequently, fascicle types were identified automatically using the multi-directional gradient and second-order gradient method. Finally, a 3D virtual model of internal fascicles was obtained after the section images were processed. This technique was successfully applied to 3D reconstruction of the median nerve in the hand-wrist and cubital fossa regions and of the gastrocnemius nerve. This 3D reconstruction technology for internal nerve fascicles could aid peripheral nerve repair and virtual surgery.
23
A Markov random field approach to group-wise registration/mosaicing with application to ultrasound. Med Image Anal 2015; 24:106-124. [PMID: 26142928] [DOI: 10.1016/j.media.2015.05.011]
24
Banerjee J, Klink C, Peters ED, Niessen WJ, Moelker A, van Walsum T. Fast and robust 3D ultrasound registration – Block and game theoretic matching. Med Image Anal 2015; 20:173-83. [DOI: 10.1016/j.media.2014.11.004]
25
Presles B, Fargier-Voiron M, Biston MC, Lynch R, Munoz A, Liebgott H, Pommier P, Rit S, Sarrut D. Semiautomatic registration of 3D transabdominal ultrasound images for patient repositioning during postprostatectomy radiotherapy. Med Phys 2014; 41:122903. [DOI: 10.1118/1.4901642]
26
Hough space parametrization: ensuring global consistency in intensity-based registration. Med Image Comput Comput Assist Interv 2014; 17:275-82. [PMID: 25333128] [DOI: 10.1007/978-3-319-10404-1_35]
Abstract
Intensity-based registration is challenging when the images to be registered contain an insufficient amount of information in their overlapping region. In particular, in the absence of dominant structures such as strong edges in this region, obtaining a solution that satisfies global structural consistency becomes difficult. In this work, we propose to exploit the vast amount of available information beyond the overlapping region to support the registration process. To this end, a novel global regularization term based on the Generalized Hough Transform is designed to ensure global consistency when the local information in the overlap region is insufficient to drive the registration. Using prior data, we learn a parametrization of the target anatomy in Hough space. This parametrization is then used as a regularization for registering the observed partial images without using any prior data. Experiments on synthetic as well as real medical images demonstrate the good performance and potential of the proposed concept.
27
Gu S, Meng X, Sciurba FC, Ma H, Leader J, Kaminski N, Gur D, Pu J. Bidirectional elastic image registration using B-spline affine transformation. Comput Med Imaging Graph 2014; 38:306-14. [PMID: 24530210] [DOI: 10.1016/j.compmedimag.2014.01.002]
Abstract
A registration scheme termed B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of the affine transformation and the traditional B-spline transformation (BST). To improve the performance of the iterative closest point (ICP) method in registering two homologous shapes with large deformation, a bidirectional objective/cost function is proposed instead of the traditional unidirectional one. In implementation, the objective function is formulated as a sparse linear equation problem, and a sub-division strategy is used to achieve a reasonable efficiency in registration. The performance of the developed scheme was assessed using both a two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy.
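The bidirectional idea can be sketched as a symmetric nearest-neighbor cost evaluated in both directions, so that unmatched regions of either shape are penalized; this is a generic formulation under assumed point-cloud inputs, not the paper's exact objective.

```python
import numpy as np
from scipy.spatial import cKDTree

def bidirectional_icp_cost(points_a, points_b):
    """Symmetric (bidirectional) point-matching cost between two shapes.

    A conventional unidirectional ICP cost only sums distances from the
    source points to their nearest target points; with large deformations
    this can leave parts of the target unmatched. Adding the reverse
    direction penalizes unmatched regions on both sides.
    """
    d_ab, _ = cKDTree(points_b).query(points_a)   # a -> nearest neighbor in b
    d_ba, _ = cKDTree(points_a).query(points_b)   # b -> nearest neighbor in a
    return d_ab.mean() + d_ba.mean()
```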
Affiliation(s)
- Suicheng Gu
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, United States
- Xin Meng
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, United States
- Frank C Sciurba
- Department of Medicine, University of Pittsburgh, Pittsburgh, PA 15213, United States
- Hongxia Ma
- Department of Radiology, Xi'an Jiaotong University First Affiliated Hospital, Xi'an, Shaanxi, P.R. China
- Joseph Leader
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, United States
- Naftali Kaminski
- Department of Medicine, University of Pittsburgh, Pittsburgh, PA 15213, United States
- David Gur
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, United States
- Jiantao Pu
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, United States; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, United States.
28
Three-Dimensional Echocardiography in Congenital Heart Disease. Curr Pediatr Rep 2013. [DOI: 10.1007/s40124-013-0014-8]
29
De Luca V, Tschannen M, Székely G, Tanner C. A learning-based approach for fast and robust vessel tracking in long ultrasound sequences. Med Image Comput Comput Assist Interv 2013; 16:518-25. [PMID: 24505706] [DOI: 10.1007/978-3-642-40811-3_65]
Abstract
We propose a learning-based method for robust tracking in long ultrasound sequences for image guidance applications. The framework is based on a scale-adaptive block-matching and temporal realignment driven by the image appearance learned from an initial training phase. The latter is introduced to avoid error accumulation over long sequences. The vessel tracking performance is assessed on long 2D ultrasound sequences of the liver of 9 volunteers under free breathing. We achieve a mean tracking accuracy of 0.96 mm. Without learning, the error increases significantly (2.19 mm, p<0.001).
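A plain NumPy sketch of the block-matching core (exhaustive normalized cross-correlation over a small search window around the previous position); the scale adaptation and learned temporal realignment of the paper are not included, and block and search sizes are assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized image blocks."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def track_block(prev_frame, next_frame, center, block=21, search=10):
    """Exhaustive block matching between consecutive 2D frames.

    center: (row, col) of the tracked block in prev_frame, assumed to lie
    well inside the image. Returns the best-matching position in
    next_frame and its NCC score.
    """
    h = block // 2
    r0, c0 = center
    template = prev_frame[r0 - h:r0 + h + 1, c0 - h:c0 + h + 1]
    best_score, best_pos = -np.inf, center
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if (r - h < 0 or c - h < 0 or
                    r + h + 1 > next_frame.shape[0] or
                    c + h + 1 > next_frame.shape[1]):
                continue  # candidate block would fall off the image
            score = ncc(template, next_frame[r - h:r + h + 1, c - h:c + h + 1])
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```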
Affiliation(s)
- Valeria De Luca
- Computer Vision Laboratory, ETH Zürich, 8092 Zürich, Switzerland
- Gábor Székely
- Computer Vision Laboratory, ETH Zürich, 8092 Zürich, Switzerland
- Christine Tanner
- Computer Vision Laboratory, ETH Zürich, 8092 Zürich, Switzerland
30
Mansi T, Voigt I, Georgescu B, Zheng X, Mengue EA, Hackl M, Ionasec RI, Noack T, Seeburger J, Comaniciu D. An integrated framework for finite-element modeling of mitral valve biomechanics from medical images: application to MitralClip intervention planning. Med Image Anal 2012; 16:1330-46. [PMID: 22766456] [DOI: 10.1016/j.media.2012.05.009]
Abstract
Treatment of mitral valve (MV) diseases requires comprehensive clinical evaluation and therapy personalization to optimize outcomes. Finite-element models (FEMs) of MV physiology have been proposed to study the biomechanical impact of MV repair, but their translation into the clinics remains challenging. As a step towards this goal, we present an integrated framework for finite-element modeling of the MV closure based on patient-specific anatomies and boundary conditions. Starting from temporal medical images, we estimate a comprehensive model of the MV apparatus dynamics, including papillary tips, using a machine-learning approach. A detailed model of the open MV at end-diastole is then computed, which is finally closed according to a FEM of MV biomechanics. The motion of the mitral annulus and papillary tips are constrained from the image data for increased accuracy. A sensitivity analysis of our system shows that chordae rest length and boundary conditions have a significant influence upon the simulation results. We quantitatively test the generalization of our framework on 25 consecutive patients. Comparisons between the simulated closed valve and ground truth show encouraging results (average point-to-mesh distance: 1.49 ± 0.62 mm) but also the need for personalization of tissue properties, as illustrated in three patients. Finally, the predictive power of our model is tested on one patient who underwent MitralClip by comparing the simulated intervention with the real outcome in terms of MV closure, yielding promising prediction. By providing an integrated way to perform MV simulation, our framework may constitute a surrogate tool for model validation and therapy planning.
Affiliation(s)
- Tommaso Mansi
- Siemens Corporation, Corporate Research and Technology, Image Analytics and Informatics, Princeton, NJ, USA.
31
Brattain LJ, Vasilyev NV, Howe RD. Enabling 3D Ultrasound Procedure Guidance through Enhanced Visualization. Information Processing in Computer-Assisted Interventions (IPCAI 2012) 2012; 7330:115-124. [PMID: 29862385] [PMCID: PMC5983382] [DOI: 10.1007/978-3-642-30618-1_12]
Abstract
Real-time 3D ultrasound (3DUS) imaging offers improved spatial orientation information relative to 2D ultrasound. However, for guiding minimally invasive intra-cardiac procedures, where real-time visual feedback of the instrument tip location is crucial, 3DUS volume visualization alone is inadequate. This paper presents a set of enhanced visualization functionalities that track the tip of an instrument in slice views in real time. A user study with an in vitro porcine heart indicates a more than 30% reduction in task completion time.
Affiliation(s)
- Laura J Brattain
- Harvard School of Engineering and Applied Sciences, Cambridge, MA USA 02138
- MIT Lincoln Laboratory, 244 Wood St., Lexington, MA USA 02420
- Nikolay V Vasilyev
- Department of Cardiac Surgery, Children's Hospital Boston, Boston, MA USA 02115
- Robert D Howe
- Harvard School of Engineering and Applied Sciences, Cambridge, MA USA 02138