1. Dou Y, Mu F, Li Y, Varghese T. Sensorless End-to-End Freehand 3-D Ultrasound Reconstruction With Physics-Guided Deep Learning. IEEE Trans Ultrason Ferroelectr Freq Control 2024;71:1514-1525. [PMID: 39302786; PMCID: PMC11875936; DOI: 10.1109/tuffc.2024.3465214]
Abstract
Three-dimensional ultrasound (3-D US) imaging with freehand scanning is utilized in cardiac, obstetric, abdominal, and vascular examinations. While 3-D US using either a "wobbler" or "matrix" transducer suffers from a small field of view and low acquisition rates, freehand scanning offers significant advantages due to its ease of use. However, current 3-D US volumetric reconstruction methods with freehand sweeps are limited by imaging plane shifts along the scanning path, i.e., out-of-plane (OOP) motion. Prior studies have incorporated motion sensors attached to the transducer, an approach that is cumbersome and inconvenient in a clinical setting. Recent work has introduced deep neural networks (DNNs) with 3-D convolutions to estimate the position of imaging planes from a series of input frames. These approaches, however, fall short in estimating OOP motion. The goal of this article is to bridge this gap by designing a novel, physics-inspired DNN for freehand 3-D US reconstruction without motion sensors, aiming to improve reconstruction quality while reducing the computational resources needed for training and inference. To this end, we present our physics-guided learning-based prediction of pose information (PLPPI) model for 3-D freehand US reconstruction without 3-D convolution. PLPPI yields significantly more accurate reconstructions and offers a major reduction in computation time. It attains a double-digit performance increase in terms of mean percentage error, with up to 106% speedup and 131% reduction in graphics processing unit (GPU) memory usage, when compared to the latest deep learning methods.
2. Guo Y, Hu M, Min X, Wang Y, Dai M, Zhai G, Zhang XP, Yang X. Blind Image Quality Assessment for Pathological Microscopic Image Under Screen and Immersion Scenarios. IEEE Trans Med Imaging 2023;42:3295-3306. [PMID: 37267133; DOI: 10.1109/tmi.2023.3282387]
Abstract
High-quality pathological microscopic images are essential for physicians and pathologists to make a correct diagnosis. Image quality assessment (IQA) can quantify the degree of visual distortion in an image and guide the imaging system to improve image quality, thus raising the quality of pathological microscopic images. Current IQA methods are not well suited to pathological microscopic images because of the specificity of this modality. In this paper, we present a deep learning-based blind IQA model for pathological microscopic images with a saliency block and a patch block, which handle local and global distortions, respectively. To better capture the regions pathologists attend to when viewing pathological images, the saliency block is fine-tuned on eye-movement data from pathologists. The patch block captures global information strongly related to image quality via interactions between image patches at different positions. The performance of the developed model is validated on the in-house Pathological Microscopic Image Quality Database under Screen and Immersion Scenarios (PMIQD-SIS) and cross-validated on five public datasets. Ablation experiments demonstrate the contribution of each added block. The dataset and the corresponding code are publicly available at: https://github.com/mikugyf/PMIQD-SIS.
3. Lu W, Chen J, Wang Y, Chang W, Wang Y, Chen C, Dong L, Liang P, Kong D. Coplanarity Constrained Ultrasound Probe Calibration Based on N-Wire Phantom. Ultrasound Med Biol 2023;49:2316-2324. [PMID: 37541788; DOI: 10.1016/j.ultrasmedbio.2023.05.015]
Abstract
OBJECTIVE N-wire phantom-based ultrasound probe calibration is widely used in freehand tracked ultrasound imaging systems. The calibration matrix is obtained by registering a coplanar point cloud in ultrasound space to a non-coplanar point cloud in tracking-sensor space with the least-squares method. This approach is sensitive to outliers and discards the coplanarity information of the fiducial points. In this article, we describe a coplanarity-constrained calibration algorithm that addresses these issues. METHODS We verified that the out-of-plane error along the oblique wire in the N-wire phantom follows a normal distribution and used this to remove experimental outliers and fit the plane with the Levenberg-Marquardt algorithm. We then projected the points onto the plane along the oblique wire and computed the transformation matrix by coplanarity-constrained point cloud registration. RESULTS Compared with two other commonly used methods, our method had the best calibration precision and improved mean calibration accuracy by 25% and 36% over the closed-form solution and the in-plane error method, respectively, at depth 16. Experiments at different depths revealed that our algorithm performed better in our setup. CONCLUSION Our proposed coplanarity-constrained calibration algorithm achieved significant improvement in both precision and accuracy compared with existing algorithms using the same N-wire phantom. Calibration accuracy is expected to improve when the algorithm is applied to other N-wire phantom-based calibration procedures.
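The pipeline described above — fit a plane to the ultrasound-space fiducials, project the points onto it, then solve a rigid registration — can be sketched as follows. This is a simplified illustration under assumed conventions, not the authors' implementation: the plane is parameterized as z = a·x + b·y + c (assuming a non-vertical plane) and fitted with SciPy's Levenberg-Marquardt solver, projection is orthogonal rather than along the oblique wire, and the registration uses the standard SVD (Kabsch) solution.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_plane_lm(pts):
    """Fit z = a*x + b*y + c with Levenberg-Marquardt (assumes non-vertical plane)."""
    res = least_squares(
        lambda p: pts[:, 2] - (p[0] * pts[:, 0] + p[1] * pts[:, 1] + p[2]),
        x0=np.zeros(3), method="lm")
    a, b, c = res.x
    n = np.array([a, b, -1.0])
    n /= np.linalg.norm(n)                      # unit normal
    d = float(n @ np.array([0.0, 0.0, c]))      # plane equation: n . x = d
    return n, d

def project_to_plane(pts, n, d):
    """Orthogonally project points onto the plane n . x = d."""
    return pts - np.outer(pts @ n - d, n)

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs
```

Outlier removal (e.g., a normal-distribution test on the out-of-plane residuals `pts @ n - d`) would sit between the fit and the projection.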
Affiliation(s)
- Wenliang Lu
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
- Jiye Chen
- Fifth Medical Center, Chinese PLA General Hospital, Beijing, China; Chinese PLA Medical School, Beijing, China
- Yuan Wang
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
- Wanru Chang
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
- Yun Wang
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
- Linan Dong
- Department of Interventional Ultrasound, First Medical Center, Chinese PLA General Hospital, Beijing, China
- Ping Liang
- Fifth Medical Center, Chinese PLA General Hospital, Beijing, China; Chinese PLA Medical School, Beijing, China
- Dexing Kong
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
4. Wu C, Fu T, Chen X, Xiao J, Ai D, Fan J, Lin Y, Song H, Yang J. Automatic spatial calibration of freehand ultrasound probe with a multilayer N-wire phantom. Ultrasonics 2023;128:106862. [PMID: 36240539; DOI: 10.1016/j.ultras.2022.106862]
Abstract
The classic N-wire phantom has been widely used for the calibration of freehand ultrasound probes. One of the main challenges with the phantom is accurately identifying N-fiducials in ultrasound images, especially with multiple N-wire structures. In this study, a method using a multilayer N-wire phantom for automatic spatial calibration of ultrasound images is proposed. All dots in the ultrasound image are segmented, scored, and classified according to the unique geometric features of the multilayer N-wire phantom, and a recognition method identifies the N-fiducials among these dots for calibrating the spatial transformation of the ultrasound probe. At depths of 9, 11, 13, and 15 cm, the reconstruction error over 50 points is 1.24 ± 0.16, 1.09 ± 0.06, 0.95 ± 0.08, and 1.02 ± 0.05 mm, respectively. The reconstruction mockup test shows a distance accuracy of 1.11 ± 0.82 mm at a depth of 15 cm.
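For context, N-wire calibration rests on a similar-triangles relation: once the three collinear dots where the imaging plane cuts one N structure have been identified, the ratio of in-image distances locates the fiducial along the oblique wire in the phantom frame. A minimal sketch of that standard relation (illustrative only; the function and its inputs are hypothetical, not taken from the paper):

```python
import numpy as np

def n_fiducial_phantom_coords(p1, pm, p2, A, B):
    """Locate the N-fiducial on the oblique wire of one N structure.

    p1, pm, p2 : 2D image points (pixels) where the plane intersects the
                 first parallel wire, the oblique wire, and the second
                 parallel wire, respectively.
    A, B       : 3D endpoints of the oblique wire in the phantom frame,
                 with A on the side of the wire imaged at p1.
    """
    # Fraction of the way along the oblique wire, by similar triangles.
    alpha = np.linalg.norm(pm - p1) / np.linalg.norm(p2 - p1)
    return A + alpha * (B - A)
```

The resulting phantom-frame points, paired with their image coordinates, are what the subsequent calibration solve consumes.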
Affiliation(s)
- Chan Wu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Tianyu Fu
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Xinyu Chen
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jian Xiao
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yucong Lin
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
5. A novel ultrasound probe calibration method for multimodal image guidance of needle placement in cervical cancer brachytherapy. Phys Med 2022;100:81-89. [PMID: 35759943; DOI: 10.1016/j.ejmp.2022.06.009]
Abstract
PURPOSE Interstitial needle placement is a critical component of combined intracavitary/interstitial (IC/IS) brachytherapy (BT). To ensure precise placement of interstitial needles, we proposed a novel ultrasound (US) probe calibration method that accurately registers the US image to the magnetic resonance imaging (MRI) image and provides multimodal image guidance for needle placement. METHODS A wire-based calibration phantom combined with a stylus was developed for US probe calibration. The calibration phantom helps to quickly align the imaging plane of the US probe with the fiducial points to obtain US images of those points. The coordinates of the fiducial points in US images were located automatically by feature-extraction algorithms and further corrected by the proposed correction method. Dedicated structures on both sides of the calibration phantom allow the coordinates of the fiducial points relative to the tracking device to be obtained accurately. Marker validation and a pelvic phantom study were performed to evaluate the accuracy of the proposed calibration method. RESULTS In the marker validation, the US probe calibration method with the corrected transformation achieved a registration accuracy of 0.694 ± 0.014 mm, versus 0.746 ± 0.018 mm without correction. In the pelvic phantom study, the needle-tip difference was 1.096 ± 0.225 mm and the trajectory difference was 1.416 ± 0.284 degrees. CONCLUSION The proposed US probe calibration method helps achieve more accurate multimodality image guidance for needle placement.
6. Chen X, Liu M, Xie B, Chen L, Wei J. Characterization of top 100 researches on e-waste based on bibliometric analysis. Environ Sci Pollut Res Int 2021;28:61568-61580. [PMID: 34184220; DOI: 10.1007/s11356-021-15147-z]
Abstract
With the rapid development of energy, information, and communication technology, the e-waste problem has become a global issue requiring urgent attention. This study characterizes the 100 most-cited articles on e-waste in terms of publication year, journal, country and institution, author, keyword, and content type, illustrating the direction of the field, demonstrating trends to date, and mapping its terrain and pathways. Bibliometric analysis was applied to the 100 most-cited articles, retrieved from the Web of Science Core Collection (WoSCC) on May 25, 2021, using Microsoft Excel 2016 and VOSviewer 1.6.9. The publication years of the 100 articles ranged from 2003 to 2017, and citation counts ranged from 83 to 925. Environmental Science & Technology published the most articles (n=17); Waste Management, Journal of Hazardous Materials, and Environmental Science & Technology were the core journals on e-waste. In total, 123 institutions and 25 countries were involved in publishing the 100 articles, 370 authors contributed to them, and 267 keywords occurred in them, with "e-waste" and "recycling" having the highest occurrences. The content of the 100 articles could be classified into four types: characteristic-and-property, environment-and-health, management-and-economic, and technique-and-processing. The overall completeness and applicability of the evidence found in this study were assessed, and potential biases in the review process were considered. The innovations of this research relative to past bibliometric work on e-waste are stated, and the implications for practice and research are explained as well.
Research on e-waste peaked in 2007-2009, whereas recent years have seen a decline. China and its institutions were the most influential in the field, India is becoming increasingly influential worldwide, Nigeria is the research center in Africa, and Brazil is the research center in Latin America. Wong Minghung was the most prominent expert on e-waste. The impact of e-waste on the environment and human health was the hottest research topic, while the characteristics and properties of e-waste were understudied and need more attention. Research on techniques and processing is the likely direction of the field, and researchers could develop new pathways based on, and beyond, the four content types evaluated here.
Affiliation(s)
- Xianghong Chen
- The Open University of Guangdong, Guangzhou, China
- College of Mechanical and Electrical Engineering, Guangdong Polytechnic Institute, Guangzhou, China
- Ming Liu
- Evidence-Based Medicine Center, School of Basic Medical Sciences, Lanzhou University, Lanzhou, China
- Bo Xie
- The Open University of Guangdong, Guangzhou, China
- College of Mechanical and Electrical Engineering, Guangdong Polytechnic Institute, Guangzhou, China
- Liwei Chen
- The Open University of Guangdong, Guangzhou, China
- College of Mechanical and Electrical Engineering, Guangdong Polytechnic Institute, Guangzhou, China
- Jingting Wei
- The Open University of Guangdong, Guangzhou, China
- College of Mechanical and Electrical Engineering, Guangdong Polytechnic Institute, Guangzhou, China
7. Shapey J, Dowrick T, Delaunay R, Mackle EC, Thompson S, Janatka M, Guichard R, Georgoulas A, Pérez-Suárez D, Bradford R, Saeed SR, Ourselin S, Clarkson MJ, Vercauteren T. Integrated multi-modality image-guided navigation for neurosurgery: open-source software platform using state-of-the-art clinical hardware. Int J Comput Assist Radiol Surg 2021;16:1347-1356. [PMID: 33937966; PMCID: PMC8295168; DOI: 10.1007/s11548-021-02374-5]
Abstract
PURPOSE Image-guided surgery (IGS) is an integral part of modern neuro-oncology surgery. Navigated ultrasound provides the surgeon with reconstructed views of ultrasound data, but no commercial system presently permits its integration with other essential non-imaging-based intraoperative monitoring modalities such as intraoperative neuromonitoring. Such a system would be particularly useful in skull base neurosurgery. METHODS We established functional and technical requirements of an integrated multi-modality IGS system tailored for skull base surgery with the ability to incorporate: (1) preoperative MRI data and associated 3D volume reconstructions, (2) real-time intraoperative neurophysiological data and (3) live reconstructed 3D ultrasound. We created an open-source software platform to integrate with readily available commercial hardware. We tested the accuracy of the system's ultrasound navigation and reconstruction using a polyvinyl alcohol phantom model and simulated the use of the complete navigation system in a clinical operating room using a patient-specific phantom model. RESULTS Experimental validation of the system's navigated ultrasound component demonstrated accuracy of [Formula: see text] and a frame rate of 25 frames per second. Clinical simulation confirmed that system assembly was straightforward, could be achieved in a clinically acceptable time of [Formula: see text] and performed with a clinically acceptable level of accuracy. CONCLUSION We present an integrated open-source research platform for multi-modality IGS. The present prototype system was tailored for neurosurgery and met all minimum design requirements focused on skull base surgery. Future work aims to optimise the system further by addressing the remaining target requirements.
Affiliation(s)
- Jonathan Shapey
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK; Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Thomas Dowrick
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK; Centre for Medical Image Computing, UCL, London, UK; Department of Medical Physics and Biomedical Engineering, UCL, London, UK
- Rémi Delaunay
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Eleanor C Mackle
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Stephen Thompson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK; Centre for Medical Image Computing, UCL, London, UK; Department of Medical Physics and Biomedical Engineering, UCL, London, UK
- Mirek Janatka
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK; Centre for Medical Image Computing, UCL, London, UK; Department of Medical Physics and Biomedical Engineering, UCL, London, UK
- Roland Guichard
- Research Software Development Group, Research IT Services, UCL, London, UK
- David Pérez-Suárez
- Research Software Development Group, Research IT Services, UCL, London, UK
- Robert Bradford
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Shakeel R Saeed
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; The Ear Institute, UCL, London, UK; The Royal National Throat, Nose and Ear Hospital, London, UK
- Sébastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Matthew J Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK; Centre for Medical Image Computing, UCL, London, UK; Department of Medical Physics and Biomedical Engineering, UCL, London, UK
- Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
8. Zeng J, Liu Z, Jiang S, Pang Q, Wang P. Verification of Guiding Needle Placement by Registered Ultrasound Image During Combined Intracavitary/Interstitial Gynecologic Brachytherapy. Cancer Manag Res 2021;13:1917-1928. [PMID: 33658854; PMCID: PMC7917343; DOI: 10.2147/cmar.s294498]
Abstract
Purpose Our previous research demonstrated that, under ideal conditions, rigid registration between MRI and US images had high accuracy for real-time image guidance. The work presented in this paper applies the previously established procedures to a new context that includes preoperative CT images. Materials and Methods We used a template to calibrate the US probe and performed registration between preoperative CT images and US images. Marker experiments on the accuracy of real-time needle trajectories in CT images were performed using micro electromagnetic sensors. Pelvic phantom experiments were carried out to test the registration accuracy between CT and US images, as well as between US images and real-time needle trajectories (the real-time space model). Results The US probe calibration error in CT images was 0.879 ± 0.149 mm. The registration difference between US and CT images was 0.935 ± 0.166 mm in the axial plane (n = 30) and 0.916 ± 0.143 mm in the sagittal plane (n = 12). The registration difference between US images and the needle's real-time trajectories was 0.951 ± 0.202 mm. Conclusion Under ideal conditions, rigid registration between CT and US images had high accuracy for real-time image guidance.
Affiliation(s)
- Jing Zeng
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, Tianjin's Clinical Research Center for Cancer, Tianjin, People's Republic of China; Department of Gynecologic Oncology, Tianjin Central Hospital of Gynecology and Obstetrics, Affiliated Hospital of Nankai University, Tianjin, People's Republic of China
- Ziqi Liu
- School of Mechanical Engineering, Tianjin University, Tianjin, People's Republic of China
- Shan Jiang
- School of Mechanical Engineering, Tianjin University, Tianjin, People's Republic of China
- Qingsong Pang
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, Tianjin's Clinical Research Center for Cancer, Tianjin, People's Republic of China
- Ping Wang
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, Tianjin's Clinical Research Center for Cancer, Tianjin, People's Republic of China
9. Reinertsen I, Collins DL, Drouin S. The Essential Role of Open Data and Software for the Future of Ultrasound-Based Neuronavigation. Front Oncol 2021;10:619274. [PMID: 33604299; PMCID: PMC7884817; DOI: 10.3389/fonc.2020.619274]
Abstract
With the recent developments in machine learning and modern graphics processing units (GPUs), there is a marked shift in the way intra-operative ultrasound (iUS) images can be processed and presented during surgery. Real-time processing of images to highlight important anatomical structures, combined with in-situ display, has the potential to greatly facilitate the acquisition and interpretation of iUS images when guiding an operation. To take full advantage of the recent advances in machine learning, large amounts of high-quality annotated training data are necessary to develop and validate the algorithms. To ensure efficient collection of a sufficient number of patient images and the external validity of the models, training data should be collected at several centers by different neurosurgeons and stored in a standard format directly compatible with the most commonly used machine learning toolkits and libraries. In this paper, we argue that such an effort to collect and organize large-scale multi-center datasets should be based on common open-source software and databases. We first describe the development of existing open-source ultrasound-based neuronavigation systems and how they have contributed to enhanced neurosurgical guidance over the last 15 years. We review the impact of the large number of projects worldwide that have benefited from the publicly available datasets "Brain Images of Tumors for Evaluation" (BITE) and "Retrospective evaluation of Cerebral Tumors" (RESECT), which include MR and US data from brain tumor cases. We also describe the need for continuous data collection and how this effort can be organized through a well-adapted, user-friendly open-source software platform that integrates both continually improved guidance and automated data-collection functionalities.
Affiliation(s)
- Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, Trondheim, Norway; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- D Louis Collins
- NIST Laboratory, McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, Montréal, QC, Canada
- Simon Drouin
- Laboratoire Multimédia, École de Technologie Supérieure, Montréal, QC, Canada
10. Verification of needle guidance accuracy in pelvic phantom using registered ultrasound and MRI images for intracavitary/interstitial gynecologic brachytherapy. J Contemp Brachytherapy 2020;12:147-159. [PMID: 32395139; PMCID: PMC7207233; DOI: 10.5114/jcb.2020.94583]
Abstract
Purpose In combined intracavitary/interstitial (IC/IS) gynecologic brachytherapy, trackers attached to interstitial needles localize real-time needle trajectories, and intraoperative ultrasound (US) images provide updated anatomical information during needle insertion. To achieve effective visualization and image guidance, real-time needle trajectories and US images can be unified in the preoperative magnetic resonance imaging (MRI) image space. This study evaluates the rigid registration accuracy between US and MRI images, as well as the registration accuracy between US images and real-time needle trajectories, in a pelvic phantom. Material and methods A method for US probe calibration and rigid registration between MRI and US images was proposed, and an IC/IS applicator was designed. A micro electromagnetic sensor was used to track and localize real-time needle trajectories in 3D MRI image space. Marker validation was conducted to test the accuracy of the US probe calibration, and pelvic phantom validation was conducted to test the registration accuracy between US and MRI images and to verify the registration accuracy between real-time needle trajectories and needle trajectories in the registered US images. Results US probe calibration accuracy was 0.80 ± 0.23 mm (n = 60). Registration accuracy between US and MRI images was 1.01 ± 0.22 mm in the axial plane (n = 60) and 1.14 ± 0.20 mm in the sagittal plane (n = 24). Registration accuracy between real-time needle trajectories and needle trajectories in registered US images was 1.25 ± 0.31 mm (n = 40) and 1.61 ± 0.28 degrees (n = 5), respectively. Conclusions We showed that, under ideal conditions, rigid registration between MRI and US images achieves high accuracy for real-time image guidance. Additionally, registered US images provided accurate image guidance during needle insertion in IC/IS gynecologic brachytherapy, combining effective visualization with image guidance.
11. Cenni F, Monari D, Schless SH, Aertbeliën E, Desloovere K, Bruyninckx H. Efficient image based method using water-filled balloons for improving probe spatial calibration in 3D freehand ultrasonography. Ultrasonics 2019;94:124-130. [PMID: 30558809; DOI: 10.1016/j.ultras.2018.11.009]
Abstract
Ultrasound (US) probe spatial calibration is a key prerequisite for the 3D freehand US technique. Several methods have been proposed to achieve an accurate and precise calibration, but they require specialised equipment that is often unavailable in research or clinical facilities. The present investigation therefore aimed to propose an efficient US probe calibration method that is low-cost, easy to apply, and capable of achieving results suitable for clinical applications. Data acquisition was carried out by performing two perpendicular US sweeps over water-filled balloon phantoms. For the analysis, similarity measures were computed between 2D images from the first sweep and the corresponding images of the 3D reconstruction of the second sweep, and maximized with the Nelder-Mead algorithm to find the optimal calibration parameters. The calibration results were evaluated for accuracy and precision by comparing known phantom geometries with those extracted from the US images. Both accuracy and precision improved after applying the calibration method: using the parameters obtained from the plane phantom method to initialize the calibration, the accuracy and precision in the best scenario were 0.4 mm and 1.5 mm, respectively, in line with methods requiring specialised equipment. However, the applied method was unable to consistently produce this level of accuracy and precision. The calibration parameters were also tested in a musculoskeletal application, revealing sufficient matching of the relevant anatomical features when multiple US sweeps are combined in a 3D reconstruction. To improve on the current results and increase the reproducibility of this research, the developed software is made available.
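The optimization loop described here — tune the calibration parameters so that frames from the first sweep best match the slices resliced from the second sweep's reconstruction — can be sketched generically. This is a hedged illustration, not the published software: `reslice` is a hypothetical user-supplied function, and normalized cross-correlation stands in for whichever similarity measure was actually used.

```python
import numpy as np
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation between two equally shaped arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a.ravel() @ b.ravel()) / denom if denom else 0.0

def calibrate(params0, frames, reslice):
    """Find calibration parameters maximizing mean frame-to-reslice similarity.

    reslice(params, i) must return the reconstruction slice predicted for
    frame i under candidate calibration parameters.
    """
    cost = lambda p: -np.mean([ncc(f, reslice(p, i))
                               for i, f in enumerate(frames)])
    return minimize(cost, params0, method="Nelder-Mead").x
```

With a toy `reslice` modeling a single offset parameter, the derivative-free optimizer recovers the offset; initializing `params0` from a plane-phantom calibration mirrors the paper's initialization strategy.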
Affiliation(s)
- Francesco Cenni
- KU Leuven, Department of Movement Sciences, Tervuursevest 101, 3001 Leuven, Belgium; Clinical Motion Analysis Laboratory, University Hospital, Pellenberg, Weligerveld 1, 3212 Pellenberg, Belgium
- Davide Monari
- Clinical Motion Analysis Laboratory, University Hospital, Pellenberg, Weligerveld 1, 3212 Pellenberg, Belgium; KU Leuven, Department of Mechanical Engineering, Celestijnenlaan 300b, 3001 Leuven, Belgium
- Simon-Henri Schless
- Clinical Motion Analysis Laboratory, University Hospital, Pellenberg, Weligerveld 1, 3212 Pellenberg, Belgium; KU Leuven, Department of Rehabilitation Sciences, Tervuursevest 101, 3001 Leuven, Belgium
- Erwin Aertbeliën
- KU Leuven, Department of Mechanical Engineering, Celestijnenlaan 300b, 3001 Leuven, Belgium
- Kaat Desloovere
- Clinical Motion Analysis Laboratory, University Hospital, Pellenberg, Weligerveld 1, 3212 Pellenberg, Belgium; KU Leuven, Department of Rehabilitation Sciences, Tervuursevest 101, 3001 Leuven, Belgium
- Herman Bruyninckx
- KU Leuven, Department of Mechanical Engineering, Celestijnenlaan 300b, 3001 Leuven, Belgium
|
12
|
Peoples JJ, Bisleri G, Ellis RE. Deformable multimodal registration for navigation in beating-heart cardiac surgery. Int J Comput Assist Radiol Surg 2019; 14:955-966. [PMID: 30888597 DOI: 10.1007/s11548-019-01932-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2019] [Accepted: 03/01/2019] [Indexed: 11/29/2022]
Abstract
PURPOSE Minimally invasive beating-heart surgery is currently performed using endoscopes and without navigation. Registration of intraoperative ultrasound to a preoperative cardiac CT scan is a valuable step toward image-guided navigation. METHODS The registration was achieved by first extracting a representative point set from each ultrasound image in the sequence using a deformable registration. A template shape representing the cardiac chambers was deformed through a hierarchy of affine transformations to match each ultrasound image using a generalized expectation maximization algorithm. These extracted point sets were matched to the CT by exhaustively searching over a large number of precomputed slices of 3D geometry. The result is a similarity transformation mapping the intraoperative ultrasound to preoperative CT. RESULTS Complete data sets were acquired for four patients. Transesophageal echocardiography ultrasound sequences were deformably registered to a model of oriented points with a mean error of 2.3 mm. Ultrasound and CT scans were registered to a mean of 3 mm, which is comparable to the error of 2.8 mm expected by merging ultrasound registration with uncertainty of cardiac CT. CONCLUSION The proposed algorithm registered 3D CT with dynamic 2D intraoperative imaging. The algorithm aligned the images in both space and time, needing neither dynamic CT imaging nor intraoperative electrocardiograms. The accuracy was sufficient for navigation in thoracoscopically guided beating-heart surgery.
Collapse
|
13
|
Prevost R, Salehi M, Jagoda S, Kumar N, Sprung J, Ladikos A, Bauer R, Zettinig O, Wein W. 3D freehand ultrasound without external tracking using deep learning. Med Image Anal 2018; 48:187-202. [PMID: 29936399 DOI: 10.1016/j.media.2018.06.003] [Citation(s) in RCA: 50] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2018] [Revised: 06/05/2018] [Accepted: 06/06/2018] [Indexed: 11/18/2022]
Abstract
This work aims at creating 3D freehand ultrasound reconstructions from 2D probes with image-based tracking, therefore not requiring expensive or cumbersome external tracking hardware. Existing model-based approaches such as speckle decorrelation only partially capture the underlying complexity of ultrasound image formation, thus producing reconstruction accuracies incompatible with current clinical requirements. Here, we introduce an alternative approach that relies on a statistical analysis rather than physical models, and use a convolutional neural network (CNN) to directly estimate the motion of successive ultrasound frames in an end-to-end fashion. We demonstrate how this technique is related to prior approaches, and derive how to further improve its predictive capabilities by incorporating additional information such as data from inertial measurement units (IMU). This novel method is thoroughly evaluated and analyzed on a dataset of 800 in vivo ultrasound sweeps, yielding unprecedentedly accurate reconstructions with a median normalized drift of 5.2%. Even on long sweeps exceeding 20 cm with complex trajectories, this allows to obtain length measurements with median errors of 3.4%, hence paving the way toward translation into clinical routine.
Collapse
Affiliation(s)
| | - Mehrdad Salehi
- ImFusion GmbH, Agnes-Pockels-Bogen 1, Munich, Germany; Computer Aided Medical Procedures (CAMP), TU Munich, Munich, Germany
| | - Simon Jagoda
- ImFusion GmbH, Agnes-Pockels-Bogen 1, Munich, Germany
| | - Navneet Kumar
- ImFusion GmbH, Agnes-Pockels-Bogen 1, Munich, Germany
| | | | | | | | | | - Wolfgang Wein
- ImFusion GmbH, Agnes-Pockels-Bogen 1, Munich, Germany
| |
Collapse
|