1. Che H, Qin J, Chen Y, Ji Z, Yan Y, Yang J, Wang Q, Liang C, Wu J. Improving Needle Tip Tracking and Detection in Ultrasound-Based Navigation System Using Deep Learning-Enabled Approach. IEEE J Biomed Health Inform 2024; 28:2930-2942. PMID: 38215329. DOI: 10.1109/jbhi.2024.3353343.
Abstract
Ultrasound-guided percutaneous interventions have numerous advantages over traditional techniques. Accurate needle placement in the target anatomy is crucial for successful intervention, and reliable visual information is essential to achieve this. However, previous studies have revealed several challenges, such as the variability in needle echogenicity and the common misalignment of the ultrasound beam and the needle. Advanced techniques have been developed to optimize needle visualization, including hardware-based and image-processing-based methods. This paper proposes a novel strategy of integrating ultrasound-based deep learning approaches into an optical navigation system to enhance needle visualization and improve tip positioning accuracy. Both the tracking and detection algorithms are optimized utilizing optical tracking information. The information is introduced into the tracking network to define the search patch update strategy and form a trajectory reference to correct tracking results. In the detection network, the original image is processed according to the needle insertion position and current position given by the optical localization system to locate a coarse region, and the depth-score criterion is adopted to optimize detection results. Extensive experiments demonstrate that our approach achieves promising tip tracking and detection performance with tip localization errors of 1.11 ± 0.59 mm and 1.17 ± 0.70 mm, respectively. Moreover, we establish a paired dataset consisting of ultrasound images and their corresponding spatial tip coordinates acquired from the optical tracking system and conduct real puncture experiments to verify the effectiveness of the proposed methods. Our approach significantly improves needle visualization and provides physicians with visual guidance for posture adjustment.
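The coarse-region step this abstract describes, cropping the ultrasound frame around the tip position reported by the optical localization system before running detection, reduces to a clamped window crop. A minimal sketch, where the function name, window size, and coordinate convention are illustrative assumptions rather than details from the paper:

```python
import numpy as np

def crop_search_region(image, tip_xy, half_size=32):
    """Crop a square search window centred on an externally tracked
    tip position, clamping the window to the image borders."""
    h, w = image.shape
    x, y = int(round(tip_xy[0])), int(round(tip_xy[1]))
    x0, x1 = max(0, x - half_size), min(w, x + half_size)
    y0, y1 = max(0, y - half_size), min(h, y + half_size)
    # Return the window offset so detections inside the patch can be
    # mapped back to full-image coordinates.
    return image[y0:y1, x0:x1], (x0, y0)

frame = np.zeros((480, 640))
patch, offset = crop_search_region(frame, (600.0, 20.0))  # tip near the top-right corner
```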
2. Sun M, Huang W, Zhang H, Shi Y, Wang J, Gong Q, Wang X. Temporal contexts for motion tracking in ultrasound sequences with information bottleneck. Med Phys 2023; 50:5553-5567. PMID: 36866782. DOI: 10.1002/mp.16339.
Abstract
BACKGROUND Recently, deep convolutional neural networks (CNNs) have been widely adopted for ultrasound sequence tracking and shown to perform satisfactorily. However, existing trackers ignore the rich temporal contexts that exist between consecutive frames, making it difficult for these trackers to perceive the motion of the target. PURPOSE In this paper, we propose a method to fully utilize temporal contexts for ultrasound sequence tracking with an information bottleneck. The method exploits the temporal contexts between consecutive frames for both feature extraction and similarity graph refinement, and the information bottleneck is integrated into the feature refinement process. METHODS The proposed tracker combines three models. First, an online temporal adaptive convolutional neural network (TAdaCNN) is proposed to focus on feature extraction and enhance spatial features using temporal information. Second, an information bottleneck (IB) is incorporated to achieve more accurate tracking by maximally limiting the amount of information in the network and discarding irrelevant information. Finally, we propose a temporal adaptive transformer (TA-Trans) that efficiently encodes temporal knowledge by decoding it for similarity graph refinement. The tracker was trained on the 2015 MICCAI Challenge on Liver Ultrasound Tracking (CLUST) dataset, and performance was evaluated by calculating the tracking error (TE) between the predicted and ground-truth landmarks for each frame. The experimental results are compared with 13 state-of-the-art methods, and ablation studies are conducted. RESULTS On the CLUST 2015 dataset, our proposed model achieves a mean TE of 0.81 ± 0.74 mm and a maximum TE of 1.93 mm for 85 point-landmarks across 39 2D ultrasound sequences. Tracking speed ranged from 41 to 63 frames per second (fps).
CONCLUSIONS This study demonstrates a new integrated workflow for motion tracking in ultrasound sequences. The results show that the model has excellent accuracy and robustness, providing reliable, accurate motion estimation for applications that require real-time motion estimation in ultrasound-guided radiation therapy.
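The tracking error (TE) metric used in this study and in the other CLUST papers below, the Euclidean distance between predicted and ground-truth landmarks, summarized as mean ± SD and maximum, can be computed directly; the array names are illustrative:

```python
import numpy as np

def tracking_error(pred, gt):
    """Per-frame Euclidean tracking error (TE) between predicted and
    ground-truth landmark positions, given as N x 2 arrays in mm."""
    te = np.linalg.norm(np.asarray(pred) - np.asarray(gt), axis=1)
    return te.mean(), te.std(), te.max()

pred = np.array([[10.2, 5.1], [11.0, 5.5], [12.1, 6.0]])
gt = np.array([[10.0, 5.0], [11.5, 5.0], [12.0, 6.2]])
mean_te, std_te, max_te = tracking_error(pred, gt)
```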
Affiliation(s)
- Mengxue Sun: School of Information Science and Engineering, Shandong Normal University, Jinan, China
- Wenhui Huang: School of Information Science and Engineering, Shandong Normal University, Jinan, China
- Huili Zhang: Shandong Innovation and Development Research Institute, Jinan, China
- Yunfeng Shi: School of Information Science and Engineering, Shandong Normal University, Jinan, China
- Jiale Wang: School of Information Science and Engineering, Shandong Normal University, Jinan, China
- Qingtao Gong: Ulsan Ship and Ocean College, Ludong University, Yantai, China
- Xiaoyan Wang: Department of Urology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
3. Magana-Salgado U, Namburi P, Feigin-Almon M, Pallares-Lopez R, Anthony B. A comparison of point-tracking algorithms in ultrasound videos from the upper limb. Biomed Eng Online 2023; 22:52. PMID: 37226240. DOI: 10.1186/s12938-023-01105-y.
Abstract
Tracking points in ultrasound (US) videos can be especially useful to characterize tissues in motion. Tracking algorithms that analyze successive video frames, such as variations of Optical Flow and Lucas-Kanade (LK), exploit frame-to-frame temporal information to track regions of interest. In contrast, convolutional neural-network (CNN) models process each video frame independently of neighboring frames. In this paper, we show that frame-to-frame trackers accumulate error over time. We propose three interpolation-like methods to combat error accumulation and show that all three methods reduce tracking errors in frame-to-frame trackers. On the neural-network end, we show that a CNN-based tracker, DeepLabCut (DLC), outperforms all four frame-to-frame trackers when tracking tissues in motion. DLC is more accurate than the frame-to-frame trackers and less sensitive to variations in types of tissue movement. The only caveat found with DLC comes from its non-temporal tracking strategy, leading to jitter between consecutive frames. Overall, when tracking points in videos of moving tissue, we recommend using DLC when prioritizing accuracy and robustness across movements in videos, and using LK with the proposed error-correction methods for small movements when tracking jitter is unacceptable.
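The error-accumulation effect this paper demonstrates can be illustrated with a toy simulation: chaining frame-to-frame matches turns independent per-match noise into a random walk, while per-frame (non-temporal, CNN-style) localization keeps the error bounded. The noise level below is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, sigma = 200, 0.5  # per-match localization noise in pixels (illustrative)

# Frame-to-frame tracking: each estimate chains onto the previous one,
# so independent per-match errors accumulate as a random walk.
chained_error = np.cumsum(rng.normal(0.0, sigma, n_frames))

# Per-frame tracking: every frame is localized independently against a
# fixed reference, so the error stays at the single-match noise level.
independent_error = rng.normal(0.0, sigma, n_frames)
```

The interpolation-like corrections the paper proposes amount to periodically re-anchoring the chained estimate so the random walk is reset before it drifts far.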
Affiliation(s)
- Uriel Magana-Salgado: Department of Mechanical Engineering, MIT, Cambridge, MA, 02139, USA; Mechanical Engineering Graduate Program, MIT, Cambridge, MA, 02139, USA
- Praneeth Namburi: Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, 12-3211, Cambridge, MA, 02139, USA; MIT.Nano Immersion Lab, MIT, Cambridge, MA, 02139, USA
- Roger Pallares-Lopez: Department of Mechanical Engineering, MIT, Cambridge, MA, 02139, USA; Mechanical Engineering Graduate Program, MIT, Cambridge, MA, 02139, USA
- Brian Anthony: Department of Mechanical Engineering, MIT, Cambridge, MA, 02139, USA; Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, 12-3211, Cambridge, MA, 02139, USA; MIT.Nano Immersion Lab, MIT, Cambridge, MA, 02139, USA
4. Levin AA, Klimov DD, Nechunaev AA, Prokhorenko LS, Mishchenkov DS, Nosova AG, Astakhov DA, Poduraev YV, Panchenkov DN. Assessment of experimental OpenCV tracking algorithms for ultrasound videos. Sci Rep 2023; 13:6765. PMID: 37185281. PMCID: PMC10130022. DOI: 10.1038/s41598-023-30930-3.
Abstract
This study compares the tracking algorithms provided by the OpenCV library for use on ultrasound video. Despite the widespread application of this computer vision library, few works describe attempts to use it to track the movement of liver tumors in ultrasound video. Movement of the neoplasms caused by the patient's breathing interferes with the positioning of instruments during biopsy and radio-frequency ablation. The main hypothesis of the experiment was that tracking neoplasms and correcting the position of the manipulator in robotic-assisted surgery would allow more precise instrument positioning. Another goal was to check whether real-time tracking, with at least 25 processed frames per second for standard-definition video, is achievable. OpenCV version 4.5.0 was used with 7 tracking algorithms from the extra modules package: Boosting, CSRT, KCF, MedianFlow, MIL, MOSSE, and TLD. More than 5600 standard-definition frames were processed during the experiment. Analysis of the results shows that two algorithms, CSRT and KCF, could solve the problem of tumor tracking: they lead the test with 70% or more Intersection over Union and more than 85% successful searches. Both are also suitable for real-time processing, with CSRT reaching real-time rates and KCF exceeding 100 frames per second. Tracking results reach an average deviation between neoplasm centers of 2 mm and a maximum deviation of less than 5 mm. The experiment also shows that no frame made the CSRT and KCF algorithms fail simultaneously, so a hypothesis for future work is to combine the two, with CSRT supporting the KCF tracker on the rare frames where KCF fails.
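Of the seven trackers compared, MOSSE is simple enough to sketch from first principles: it learns a correlation filter in the Fourier domain whose response to the target patch is a sharp Gaussian peak, and the peak of the response to a new patch gives the target's displacement. A minimal single-frame version, without the preprocessing window and online update of the full OpenCV implementation:

```python
import numpy as np

def train_mosse(template, sigma=2.0, lam=1e-3):
    """Learn a MOSSE-style correlation filter: the filter's response to
    the template is a Gaussian centred on the patch centre."""
    h, w = template.shape
    yy, xx = np.mgrid[0:h, 0:w]
    g = np.exp(-((xx - w // 2) ** 2 + (yy - h // 2) ** 2) / (2 * sigma ** 2))
    F = np.fft.fft2(template)
    G = np.fft.fft2(g)
    # Regularised closed-form solution; lam avoids division by zero.
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def locate(filt, patch):
    """Correlate the filter with a new patch and return the response
    peak; its offset from the centre is the target's displacement."""
    resp = np.real(np.fft.ifft2(np.fft.fft2(patch) * filt))
    return np.unravel_index(np.argmax(resp), resp.shape)

rng = np.random.default_rng(0)
template = rng.normal(size=(64, 64))               # stand-in for a target patch
filt = train_mosse(template)
shifted = np.roll(template, (5, -3), axis=(0, 1))  # target moved 5 down, 3 left
peak = locate(filt, shifted)                       # peak near (32 + 5, 32 - 3)
```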
Affiliation(s)
- A A Levin, D D Klimov, A A Nechunaev, L S Prokhorenko, D S Mishchenkov, A G Nosova, D A Astakhov, D N Panchenkov: Moscow State University of Medicine and Dentistry Named After A.I. Evdokimov, 20/1 Delegatskaya ul., Moscow, Russian Federation, 127473
- Y V Poduraev: Moscow State University of Technology "STANKIN", 1 Vadkovsky per., Moscow, Russian Federation, 127055
5. Wulff D, Hagenah J, Ernst F. Landmark tracking in 4D ultrasound using generalized representation learning. Int J Comput Assist Radiol Surg 2023; 18:493-500. PMID: 36242701. PMCID: PMC9939499. DOI: 10.1007/s11548-022-02768-z.
Abstract
PURPOSE In this study, we present and validate a novel concept for target tracking in 4D ultrasound. The key idea is to replace image-patch similarity metrics with distances in a latent representation. For this, 3D ultrasound patches are mapped into a representation space using sliced-Wasserstein autoencoders. METHODS A novel target tracking method for 4D ultrasound is presented that performs tracking in a representation space instead of in image space. Sliced-Wasserstein autoencoders are trained in an unsupervised manner and used to map 3D ultrasound patches into the representation space. The tracking procedure follows a greedy algorithm, measuring distances between representation vectors to relocate the target. The proposed algorithm is validated on an in vivo data set of liver images. Furthermore, three different concepts for training the autoencoder are presented to provide cross-patient generalizability, aiming at minimal training time on data of the individual patient. RESULTS Eight annotated 4D ultrasound sequences were used to test the tracking method. Tracking could be performed in all sequences with all autoencoder training approaches. A mean tracking error of 3.23 mm was achieved using generalized fine-tuned autoencoders. Using generalized autoencoders and fine-tuning them achieves better tracking results than training subject-specific autoencoders. CONCLUSION Distances between encoded image patches in a representation space can serve as a meaningful measure of image-patch similarity, even under realistic deformations of the anatomical structure. On this basis, we validated the proposed tracking algorithm in an in vivo setting. Furthermore, our results indicate that with generalized autoencoders, fine-tuning on only a small number of patches from the individual patient provides promising results.
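The core idea, relocating the target by comparing representation-space distances of candidate patches around the previous position, can be sketched as follows. The real method encodes 3D patches with a trained sliced-Wasserstein autoencoder; the fixed random projection below is only a stand-in encoder so the greedy search is runnable:

```python
import numpy as np

rng = np.random.default_rng(1)
P = 8                               # patch size (illustrative)
W = rng.normal(size=(16, P * P))    # stand-in "encoder": a fixed random
                                    # projection in place of a trained
                                    # sliced-Wasserstein autoencoder

def encode(patch):
    """Map an image patch to a representation vector."""
    return W @ patch.ravel()

def relocate(image, prev_yx, target_code, search=4):
    """Greedy relocation: among candidate patches around the previous
    position, pick the one whose representation is closest to the
    target's representation vector."""
    best, best_d = prev_yx, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = prev_yx[0] + dy, prev_yx[1] + dx
            d = np.linalg.norm(encode(image[y:y + P, x:x + P]) - target_code)
            if d < best_d:
                best, best_d = (y, x), d
    return best

image = rng.normal(size=(64, 64))
target_code = encode(image[20:28, 30:38])  # representation of the target patch
found = relocate(image, (18, 28), target_code)
```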
Affiliation(s)
- Daniel Wulff: Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Jannis Hagenah: Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Parks Road, Oxford, OX1 3PJ, UK
- Floris Ernst: Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
6. Wang Y, Fu T, Wang Y, Xiao D, Lin Y, Fan J, Song H, Liu F, Yang J. Multi3: multi-templates siamese network with multi-peaks detection and multi-features refinement for target tracking in ultrasound image sequences. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac9032.
Abstract
Objective. Radiation therapy requires a precise target location, but respiratory motion increases the uncertainty of the target location; accurate and robust tracking is therefore significant for improving treatment accuracy. Approach. In this work, we propose a tracking framework, Multi3, comprising a multi-templates Siamese network, multi-peaks detection and multi-features refinement, for target tracking in ultrasound sequences. Specifically, we use two templates to provide the location and deformation of the target for robust tracking. Multi-peaks detection is applied to extend the set of potential target locations, and multi-features refinement is designed to select an appropriate location as the tracking result through quality assessment. Main results. The proposed Multi3 is evaluated on a public dataset, the MICCAI 2015 Challenge on Liver Ultrasound Tracking (CLUST), and on a clinical dataset provided by the Chinese People's Liberation Army General Hospital. Experimental results show that Multi3 achieves accurate and robust tracking in ultrasound sequences (0.75 ± 0.62 mm and 0.51 ± 0.32 mm tracking errors on the two datasets). Significance. The proposed Multi3 is the most robust method on the CLUST 2D benchmark set, exhibiting potential in clinical practice.
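The multi-peaks detection step, keeping several candidate locations from the similarity map rather than only the global maximum, can be sketched as a top-k search with local suppression; k and the suppression radius below are assumptions, and in the full method each peak would then be scored by the multi-features refinement:

```python
import numpy as np

def multi_peaks(resp, k=3, nms_radius=2):
    """Extract up to k peaks from a response/similarity map, suppressing
    a small neighbourhood around each selected peak so that the next
    iteration finds a genuinely different candidate location."""
    r = resp.astype(float).copy()
    peaks = []
    for _ in range(k):
        y, x = np.unravel_index(np.argmax(r), r.shape)
        peaks.append((int(y), int(x)))
        r[max(0, y - nms_radius):y + nms_radius + 1,
          max(0, x - nms_radius):x + nms_radius + 1] = -np.inf
    return peaks

resp = np.zeros((20, 20))
resp[3, 4], resp[10, 10], resp[15, 2] = 5.0, 4.0, 3.0
candidates = multi_peaks(resp)   # strongest three responses, in order
```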
7. Li W, Ye X, Huang Y, Dong Y, Chen X, Yang Y. An integrated ultrasound imaging and abdominal compression device for respiratory motion management in radiation therapy. Med Phys 2022; 49:6334-6345. PMID: 35950934. DOI: 10.1002/mp.15928.
Abstract
BACKGROUND Radiotherapy for tumors in the abdomen is challenging because of the significant organ movement and tissue deformation caused by respiration. PURPOSE A motion management strategy that integrates ultrasound (US) imaging with abdominal compression was developed and evaluated, in which US is used to monitor organ motion in real time after abdominal compression. METHODS A device combining a US imaging system and an abdominal compression plate (ACP) was developed. Twenty-one healthy volunteers were enrolled to evaluate the motion management efficacy. Each volunteer was immobilized on a flat bench by the device. Abdominal US data were collected successively with and without ACP compression, and experiments were repeated three times to verify imaging reproducibility. A template matching algorithm based on normalized cross correlation (NCC) was implemented to track the targets (vessels in the liver, pancreas and stomach) automatically. The matching algorithm was validated against manual references; automatic tracking was judged as failed if the center-of-mass difference from manual tracking exceeded a failure threshold. Based on the locations obtained through template matching, the motion correlation between the liver and the pancreas/stomach was investigated using the Pearson correlation test. A paired Student's t-test was used to analyze the difference between the results without and with ACP compression. RESULTS The liver motion amplitude over all 21 volunteers was significantly (p<0.001) reduced from 14.9 ± 5.5/3.4 ± 1.8 mm in the superior-inferior (SI)/anterior-posterior (AP) direction before ACP compression to 7.3 ± 1.5/1.6 ± 0.7 mm after compression. The mean liver motion standard deviation before compression was on average 2.8/1.4 mm in the SI/AP direction and was significantly (p<0.001) reduced to 0.9/0.4 mm after compression.
The failure rates of automatic tracking for the liver, pancreas and stomach were reduced for failure thresholds of 1-5 mm after applying the ACP. The Pearson correlation coefficients between liver and pancreas/stomach were 0.98/0.97 without ACP and 0.96/0.94 with ACP in the SI direction, and 0.68/0.68 and 0.43/0.42 in the AP direction. The motion prediction errors for pancreas/stomach with ACP were significantly (p<0.001) reduced to 0.45 ± 0.36/0.52 ± 0.43 mm from 0.69 ± 0.56/0.71 ± 0.66 mm without ACP in the SI direction, and to 0.38 ± 0.33/0.39 ± 0.27 mm from 0.44 ± 0.35/0.61 ± 0.59 mm in the AP direction. CONCLUSIONS The proposed strategy combining real-time US imaging and abdominal compression has the potential to reduce abdominal organ motion while improving both target tracking reliability and motion reproducibility. Furthermore, the observed correlation between liver and pancreas/stomach motion indicates the possibility of indirect pancreas/stomach tracking using liver markers as surrogates. The strategy is expected to provide an alternative for respiratory motion management in the radiation treatment of abdominal tumors.
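The NCC template matching at the heart of this tracker can be sketched in a few lines; the exhaustive single-scale search below is a simplification of a production implementation, which would typically restrict the search region around the previous location:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_template(image, template):
    """Slide the template over the image and return the top-left corner
    of the window with the highest NCC score."""
    th, tw = template.shape
    best, best_score = (0, 0), -1.0
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            s = ncc(image[y:y + th, x:x + tw], template)
            if s > best_score:
                best, best_score = (y, x), s
    return best, best_score

rng = np.random.default_rng(2)
image = rng.normal(size=(30, 30))
template = image[10:16, 12:18].copy()   # "vessel" patch to re-find
loc, score = match_template(image, template)
```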
Affiliation(s)
- Wanqing Li: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, 230026, China
- Xianjun Ye: Department of Ultrasound Medicine, the First Affiliated Hospital of USTC, University of Science and Technology of China, Hefei, Anhui, 230001, China
- Yunwen Huang: Department of Radiation Oncology, the First Affiliated Hospital of USTC, University of Science and Technology of China, Hefei, Anhui, 230001, China
- Yuyan Dong: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, 230026, China
- Xuemin Chen: Health Management Center, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 230001, China
- Yidong Yang: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, 230026, China; Department of Radiation Oncology, the First Affiliated Hospital of USTC, University of Science and Technology of China, Hefei, Anhui, 230001, China
8. Arjas A, Alles EJ, Maneas E, Arridge S, Desjardins A, Sillanpaa MJ, Hauptmann A. Neural Network Kalman Filtering for 3-D Object Tracking From Linear Array Ultrasound Data. IEEE Trans Ultrason Ferroelectr Freq Control 2022; 69:1691-1702. PMID: 35324438. DOI: 10.1109/tuffc.2022.3162097.
Abstract
Many interventional surgical procedures rely on medical imaging to visualize and track instruments. Such imaging methods not only need to be capable of real-time operation but must also provide accurate and robust positional information. In ultrasound (US) applications, typically only 2-D data from a linear array are available, so obtaining accurate positional estimation in three dimensions is nontrivial. In this work, we first train a neural network, using realistic synthetic training data, to estimate the out-of-plane offset of an object together with the associated axial aberration in the reconstructed US image. The obtained estimate is then combined with a Kalman filtering approach that utilizes positioning estimates from previous time frames to improve localization robustness and reduce the impact of measurement noise. The accuracy of the proposed method is evaluated using simulations, and its practical applicability is demonstrated on experimental data obtained with a novel optical US imaging setup. Accurate and robust positional information is provided in real time. Axial and lateral coordinates of out-of-plane objects are estimated with a mean error of 0.1 mm for simulated data and 0.2 mm for experimental data. The 3-D localization is most accurate for elevational distances larger than 1 mm, with a maximum distance of 6 mm considered for a 25-mm aperture.
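The filtering half of the method, fusing noisy per-frame position estimates with a motion model, can be illustrated with a textbook linear Kalman filter. The constant-velocity model and noise levels below are generic assumptions, not the paper's exact configuration:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-4, r=0.09):
    """1-D constant-velocity Kalman filter. State is [position, velocity];
    each scalar measurement is a noisy position estimate (e.g. from a
    neural network). q and r are process/measurement noise variances."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    filtered = []
    for z in measurements:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # update with innovation z - Hx
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (np.array([z]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        filtered.append(x[0])
    return np.array(filtered)

rng = np.random.default_rng(3)
true_pos = np.full(300, 5.0)                  # static target
noisy = true_pos + rng.normal(0.0, 0.3, 300)  # per-frame network estimates
smoothed = kalman_track(noisy)                # variance shrinks over time
```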
9. Wu C, Fu T, Wang Y, Lin Y, Wang Y, Ai D, Fan J, Song H, Yang J. Fusion Siamese network with drift correction for target tracking in ultrasound sequences. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac4fa1.
Abstract
Motion tracking techniques can correct the bias arising from respiration-induced motion in radiation therapy, and tracking key structures accurately and in real time is necessary for effective motion compensation. In this work, we propose a fusion Siamese network with drift correction for target tracking in ultrasound sequences. Specifically, the network fuses four response maps generated by cross-correlation between convolution layers at different resolutions to reduce up-sampling error. A correction strategy combining local structural similarity and the target trajectory is proposed to revise the target drift predicted by the network. Moreover, a coarse-to-fine strategy is proposed to train the network with a limited number of annotated images, in which an augmented dataset is generated from corner points to learn network features with high generalizability. The proposed method is evaluated on the public dataset of the MICCAI 2015 Challenge on Liver Ultrasound Tracking (CLUST) and on an ultrasound image dataset provided by the Chinese People's Liberation Army General Hospital (CPLAGH). A tracking error of 0.80 ± 1.16 mm is observed for 85 targets across 39 ultrasound sequences in the CLUST dataset, and a tracking error of 0.61 ± 0.36 mm for 20 targets across 10 ultrasound sequences in the CPLAGH dataset. The effectiveness of the proposed fusion and correction strategies is verified via two ablation experiments. Overall, the experimental results demonstrate the effectiveness of the proposed fusion Siamese network with drift correction and reveal its potential in clinical practice.
10. Bharadwaj S, Prasad S, Almekkawy M. An Upgraded Siamese Neural Network for Motion Tracking in Ultrasound Image Sequences. IEEE Trans Ultrason Ferroelectr Freq Control 2021; 68:3515-3527. PMID: 34232873. DOI: 10.1109/tuffc.2021.3095299.
Abstract
Deep learning is widely applied to problems in medical imaging, and Siamese neural networks are front runners in motion tracking. In this article, we propose to upgrade one such Siamese architecture-based neural network for robust and accurate landmark tracking in ultrasound images, with the aim of improving the quality of image-guided radiation therapy. Although several researchers have improved Siamese architecture-based networks with sophisticated detection modules and by incorporating transfer learning, the inherent assumptions of the constant position model and the missing motion model remain unaddressed limitations. In our proposed model, we overcome these limitations by introducing two modules into the original architecture: a reference template update to resolve the constant position model and a linear Kalman filter (LKF) to address the missing motion model. Moreover, we demonstrate that the proposed architecture provides promising results without transfer learning. The proposed model was submitted to an open challenge organized by MICCAI and evaluated exhaustively on the Liver Ultrasound Tracking (CLUST) 2D dataset. Experimental results show that the proposed model tracks the landmarks with promising accuracy. Furthermore, we induced synthetic occlusions to perform a qualitative analysis of the proposed approach; these evaluations were performed on the training set of the CLUST 2D dataset. The proposed method outperformed the original Siamese architecture by a significant margin.
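The exact form of the reference template update is not specified in this abstract; a common choice in Siamese trackers, shown here as an assumed stand-in, is a running (exponential moving) average of the target's observed appearance:

```python
import numpy as np

def update_template(template, new_patch, alpha=0.1):
    """Blend the currently observed appearance into the reference
    template; a small alpha keeps the original target identity dominant
    while slowly absorbing gradual appearance changes."""
    return (1.0 - alpha) * template + alpha * new_patch

template = np.zeros((4, 4))
for _ in range(50):                 # target appearance drifts toward 1.0
    template = update_template(template, np.ones((4, 4)))
```

With this rule the template converges geometrically toward the new appearance, which is why it relaxes the constant-position/constant-appearance assumption without letting a single bad frame corrupt the reference.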
11. Dai X, Lei Y, Roper J, Chen Y, Bradley JD, Curran WJ, Liu T, Yang X. Deep learning-based motion tracking using ultrasound images. Med Phys 2021; 48:7747-7756. PMID: 34724712. DOI: 10.1002/mp.15321.
Abstract
PURPOSE Ultrasound (US) imaging is an established modality capable of offering video-rate volumetric images without ionizing radiation, and it has potential for intra-fraction motion tracking in radiation therapy. In this study, a deep learning-based method was developed to tackle the challenges of motion tracking using US imaging. METHODS We present a Markov-like network, implemented via generative adversarial networks, to extract features from sequential US frames (one tracked frame followed by untracked frames) and thereby estimate a set of deformation vector fields (DVFs) through registration of the tracked frame with the untracked frames. The positions of the landmarks in the untracked frames are finally determined by shifting the landmarks in the tracked frame according to the estimated DVFs. Performance was evaluated on the testing dataset by calculating the tracking error (TE) between the predicted and ground-truth landmarks on each frame. RESULTS The proposed method was evaluated on the MICCAI CLUST 2015 dataset, collected using seven US scanners with eight types of transducers, and on the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset, acquired using GE Vivid E95 ultrasound scanners. The CLUST dataset contains 63 2D and 22 3D US image sequences from 42 and 18 subjects, respectively, and the CAMUS dataset includes 2D US images from 450 patients. On the CLUST dataset, our method achieved a mean tracking error of 0.70 ± 0.38 mm for the 2D sequences and 1.71 ± 0.84 mm for the 3D sequences on the publicly available annotations. On the CAMUS dataset, a mean tracking error of 0.54 ± 1.24 mm was achieved for landmarks in the left atrium. CONCLUSIONS A novel motion tracking algorithm using US images, based on modern deep learning techniques, has been demonstrated in this study.
The proposed method offers millimeter-level tumor motion prediction in real time and has the potential to be adopted into routine tumor motion management in radiation therapy.
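The final step, shifting landmarks from the tracked frame by the estimated DVF, amounts to sampling the field at each landmark position and adding the displacement. The bilinear sampling below is a standard choice; the abstract does not state the interpolation scheme actually used:

```python
import numpy as np

def shift_landmarks(landmarks, dvf):
    """Move (x, y) landmarks by a dense deformation vector field of
    shape (H, W, 2) holding per-pixel (dx, dy) displacements; the field
    is sampled bilinearly at each (sub-pixel) landmark position."""
    H, W = dvf.shape[:2]
    moved = []
    for x, y in landmarks:
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
        fx, fy = x - x0, y - y0
        d = ((1 - fx) * (1 - fy) * dvf[y0, x0]
             + fx * (1 - fy) * dvf[y0, x1]
             + (1 - fx) * fy * dvf[y1, x0]
             + fx * fy * dvf[y1, x1])
        moved.append((x + d[0], y + d[1]))
    return moved

dvf = np.zeros((10, 10, 2))
dvf[..., 0], dvf[..., 1] = 2.0, -1.0   # uniform shift: +2 px in x, -1 px in y
moved = shift_landmarks([(3.5, 4.5)], dvf)
```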
Affiliation(s)
- Xianjin Dai, Yang Lei, Justin Roper, Jeffrey D Bradley, Walter J Curran, Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yue Chen: The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University School of Medicine, Atlanta, Georgia, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University School of Medicine, Atlanta, Georgia, USA
|
12
|
Mezheritsky T, Romaguera LV, Le W, Kadoury S. Population-based 3D respiratory motion modelling from convolutional autoencoders for 2D ultrasound-guided radiotherapy. Med Image Anal 2021; 75:102260. [PMID: 34670149 DOI: 10.1016/j.media.2021.102260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 09/29/2021] [Accepted: 10/01/2021] [Indexed: 10/20/2022]
Abstract
Radiotherapy is a widely used treatment modality for various types of cancers. A challenge for precise delivery of radiation to the treatment site is the management of internal motion caused by the patient's breathing, especially around abdominal organs such as the liver. Current image-guided radiation therapy (IGRT) solutions rely on ionising imaging modalities such as X-ray or CBCT, which do not allow real-time target tracking. Ultrasound (US) imaging, on the other hand, is relatively inexpensive, portable and non-ionising. Although 2D US can be acquired at a sufficient temporal frequency, it does not allow target tracking in multiple planes, while 3D US acquisitions are not suited to real-time use. In this work, a novel deep learning-based motion modelling framework is presented for ultrasound IGRT. Our solution includes an image similarity-based rigid alignment module combined with a deep deformable motion model. Leveraging the representational capabilities of convolutional autoencoders, our deformable motion model associates complex 3D deformations with 2D surrogate US images through a common learned low-dimensional representation. The model is trained on a variety of deformations and anatomies, which enables it to generate the 3D motion experienced by the liver of a previously unseen subject. During inference, our framework only requires two pre-treatment 3D volumes of the liver at extreme breathing phases and a live 2D surrogate image representing the current state of the organ. In this study, the presented model is evaluated on a 3D+t US dataset of 20 volunteers based on image similarity as well as anatomical target tracking performance. We report results that surpass comparable methodologies in both metric categories, with a mean tracking error of 3.5 ± 2.4 mm, demonstrating the potential of this technique for IGRT.
Affiliation(s)
- Tal Mezheritsky
- MedICAL Laboratory, École Polytechnique de Montréal, Montréal, Canada
- Samuel Kadoury
- MedICAL Laboratory, École Polytechnique de Montréal, Montréal, Canada; CHUM Research Center, Montréal, Canada

13
He J, Shen C, Chen Y, Huang Y, Wu J. FPSN-FNCC: an accurate and fast motion tracking algorithm in 3D ultrasound for image-guided interventions. Phys Med Biol 2021; 66. [PMID: 33975283 DOI: 10.1088/1361-6560/abffef] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Accepted: 05/11/2021] [Indexed: 11/12/2022]
Abstract
The uncertain motion of a target caused by the breathing, heartbeat and body drift of a patient can increase the target localization error during image-guided interventions, which may cause additional surgical trauma. A surgical navigation system with accurate motion tracking is important for improving operative accuracy and reducing trauma. In this work, we propose an accurate and fast tracking algorithm for three-dimensional (3D) ultrasound (US) sequences in US-guided surgery to achieve moving-object tracking. The idea of this algorithm is as follows. Firstly, a feature pyramid architecture is introduced into a Siamese network to extract multiscale convolutional features. Secondly, to improve the network's discriminative power and its robustness to ultrasonic noise and gain variation, we use normalized cross-correlation (NCC) to calculate the similarity between the template block and the search block. Thirdly, a fast NCC (FNCC) is proposed, which enables real-time tracking. Finally, a density-peaks clustering approach is used to compensate for the motion of the target and further improve the tracking accuracy. The proposed algorithm is evaluated on the CLUST dataset, which includes 22 sets of 3D US sequences, and a mean error of 1.60 ± 0.97 mm relative to manual annotations is obtained. Compared with other published works, our algorithm achieves comparable performance. An ablation study shows that the results benefit from the feature pyramid architecture and FNCC. These findings suggest that our algorithm may improve motion tracking accuracy in image-guided interventions.
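The FNCC acceleration itself is not spelled out in the abstract, but the underlying similarity measure, zero-normalized cross-correlation between a template block and a search block, is standard and can be sketched as follows (illustrative only; the mean subtraction is what makes the score invariant to the gain and offset variation mentioned above):

```python
import numpy as np

def ncc(template, block):
    """Zero-normalized cross-correlation between two equally sized patches.
    Returns a value in [-1, 1]; invariant to linear gain/offset changes."""
    t = template - template.mean()
    b = block - block.mean()
    denom = np.sqrt((t ** 2).sum() * (b ** 2).sum())
    return float((t * b).sum() / denom) if denom > 0 else 0.0

patch = np.random.default_rng(0).random((16, 16))
# A gain of 2x and an offset of +5 leave the score unchanged:
print(ncc(patch, 2.0 * patch + 5.0))  # 1.0 (up to float rounding)
```

In practice the template is slid over every candidate position in the search block and the location with the highest score is taken as the match; FNCC speeds up exactly this exhaustive evaluation.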
Affiliation(s)
- Jishuai He
- Institute of Biomedical Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, People's Republic of China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Chunxu Shen
- Institute of Biomedical Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, People's Republic of China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China; Tencent, Shenzhen, 518000, People's Republic of China
- Yao Chen
- Institute of Biomedical Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, People's Republic of China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Yibin Huang
- Department of Ultrasound, Shenzhen Traditional Chinese Medicine Hospital, Shenzhen, 518033, People's Republic of China
- Jian Wu
- Institute of Biomedical Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, People's Republic of China

14
Ipsen S, Wulff D, Kuhlemann I, Schweikard A, Ernst F. Towards automated ultrasound imaging-robotic image acquisition in liver and prostate for long-term motion monitoring. Phys Med Biol 2021; 66. [PMID: 33770768 DOI: 10.1088/1361-6560/abf277] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Accepted: 03/26/2021] [Indexed: 11/12/2022]
Abstract
Real-time volumetric (4D) ultrasound has shown high potential for diagnostic and therapy guidance tasks. One of the main drawbacks of ultrasound imaging to date is the reliance on manual probe positioning and the resulting user dependence. Robotic assistance could help overcome this issue and facilitate the acquisition of long-term image data to observe dynamic processes in vivo over time. The aim of this study is to assess the feasibility of robotic probe manipulation and organ motion quantification during extended imaging sessions. The system consists of a collaborative robot and a 4D ultrasound system providing real-time data access. Five healthy volunteers received liver and prostate scans during free breathing over 30 min. Initial probe placement was performed with real-time remote control with a predefined contact force of 10 N. During scan acquisition, the probe position was continuously adjusted to the body surface motion using impedance control. Ultrasound volumes, the pose of the end-effector and the estimated contact forces were recorded. For motion analysis, one anatomical landmark was manually annotated in a subset of ultrasound frames for each experiment. Probe contact was uninterrupted over the entire scan duration in all ten sessions. Organ drift and imaging artefacts were successfully compensated using remote control. The median contact force along the probe's longitudinal axis was 10.0 N, with maximum values of 13.2 and 21.3 N for liver and prostate, respectively. Forces exceeding 11 N occurred only 0.3% of the time. Probe and landmark motion were more pronounced in the liver, with median interquartile ranges of 1.5 and 9.6 mm, compared to 0.6 and 2.7 mm in the prostate. The results show that robotic ultrasound imaging with dynamic force control can be used for stable, long-term imaging of anatomical regions affected by motion. The system facilitates the acquisition of 4D image data in vivo over extended scanning periods for the first time and holds the potential to be used for motion monitoring for therapy guidance as well as diagnostic tasks.
Affiliation(s)
- Svenja Ipsen
- Institute for Robotics and Cognitive Systems, University of Luebeck, Luebeck, Germany; Fraunhofer Research Institution for Individualized and Cell-Based Medical Engineering IMTE, Luebeck, Germany
- Daniel Wulff
- Institute for Robotics and Cognitive Systems, University of Luebeck, Luebeck, Germany
- Ivo Kuhlemann
- Institute for Robotics and Cognitive Systems, University of Luebeck, Luebeck, Germany
- Achim Schweikard
- Institute for Robotics and Cognitive Systems, University of Luebeck, Luebeck, Germany
- Floris Ernst
- Institute for Robotics and Cognitive Systems, University of Luebeck, Luebeck, Germany

15
Huang Y, He J, Wu X, Zhao X, Wu J. Tracking 3D ultrasound anatomical landmarks via three orthogonal plane-based scale discriminative correlation filter network. Med Phys 2021; 48:2127-2135. [PMID: 33619737 DOI: 10.1002/mp.14798] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2020] [Revised: 02/09/2021] [Accepted: 02/11/2021] [Indexed: 12/12/2022] Open
Abstract
PURPOSE In abdominal interventional therapy, accurate motion tracking of the target is crucial to minimize trauma and optimize dosage delivery. Meanwhile, three-dimensional (3D) ultrasound (US) is an attractive modality for showing the real-time motion pattern of the target. In this work, we developed an accurate and robust landmark tracking algorithm for 3D US sequences. METHODS The proposed algorithm introduces a three-orthogonal-planes (TOP)-based scale discriminative correlation filter network for 3D US landmark tracking. First, we couple a fully convolutional network (FCN) with a scale discriminative correlation filter (SDCF) to generate an effective tracker. The SDCF is reformulated as a differentiable layer, which ensures that the network can perform scale learning and end-to-end training. Next, we train the end-to-end network on millions of natural images. Finally, we convert the 3D US image into a 2D three-channel image by the TOP transformation and feed it to the proposed network for online tracking. RESULTS Online tracking performance was evaluated on the Challenge on Liver Ultrasound Tracking (CLUST) dataset with 22 sets of 3D US sequences, obtaining a mean error of 1.63 ± 1.04 mm and a 95th-percentile (95%ile) error of 3.37 mm compared with manual annotations by surgeons. An ablation study indicates that the promising results benefit from the SDCF and scale learning, which alleviate the influence of deformation. The clinical analysis supports that the proposed algorithm works well with different initial patch sizes, which means that our algorithm has the potential to lighten the burden on surgeons. CONCLUSIONS We propose a flexible, accurate and robust landmark tracking algorithm for image-guided interventions that is comparable with state-of-the-art approaches. The tracking accuracy and robustness show that our algorithm has potential in 3D US-guided abdominal interventional therapies. Further work is needed to improve the computing speed of the algorithm to achieve real-time tracking.
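The TOP conversion in METHODS, turning a 3D volume into a 2D three-channel image by sampling the three orthogonal planes through the current landmark, can be sketched as follows (a toy numpy sketch under an assumed axis ordering and a cubic volume so the three planes share a shape; the paper's exact resampling and channel ordering are not specified in the abstract):

```python
import numpy as np

def three_orthogonal_planes(volume, center):
    """Extract the three orthogonal planes through `center` (z, y, x) of a
    cubic volume (D, D, D) and stack them as a (D, D, 3) three-channel image,
    mimicking the 3D-to-2D TOP conversion described in the abstract."""
    z, y, x = center
    return np.stack([volume[z, :, :],    # plane normal to axis 0
                     volume[:, y, :],    # plane normal to axis 1
                     volume[:, :, x]],   # plane normal to axis 2
                    axis=-1)

vol = np.arange(4 ** 3, dtype=float).reshape(4, 4, 4)
img = three_orthogonal_planes(vol, (1, 2, 3))
print(img.shape)  # (4, 4, 3)
```

The resulting three-channel image can then be fed to a 2D network exactly like an RGB frame, which is what lets a tracker pretrained on natural images be reused for 3D US data.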
Affiliation(s)
- Yibin Huang
- Department of Ultrasound, Shenzhen Traditional Chinese Medicine Hospital, Shenzhen, 518033, P.R. China
- Jishuai He
- Institute of Biomedical Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, P.R. China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, P.R. China
- Xu Wu
- Institute of Biomedical Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, P.R. China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, P.R. China
- Xiaozhi Zhao
- Institute of Biomedical Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, P.R. China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, P.R. China
- Jian Wu
- Institute of Biomedical Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, P.R. China

16
Romaguera LV, Plantefève R, Romero FP, Hébert F, Carrier JF, Kadoury S. Prediction of in-plane organ deformation during free-breathing radiotherapy via discriminative spatial transformer networks. Med Image Anal 2020; 64:101754. [DOI: 10.1016/j.media.2020.101754] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2020] [Revised: 06/05/2020] [Accepted: 06/09/2020] [Indexed: 02/06/2023]
17
Liu F, Liu D, Tian J, Xie X, Yang X, Wang K. Cascaded one-shot deformable convolutional neural networks: Developing a deep learning model for respiratory motion estimation in ultrasound sequences. Med Image Anal 2020; 65:101793. [PMID: 32712521 DOI: 10.1016/j.media.2020.101793] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 03/27/2020] [Accepted: 07/17/2020] [Indexed: 02/09/2023]
Abstract
Improving the quality of image-guided radiation therapy requires the tracking of respiratory motion in ultrasound sequences. However, the low signal-to-noise ratio and the artifacts in ultrasound images make it difficult to track targets accurately and robustly. In this study, we propose a novel deep learning model, called a Cascaded One-shot Deformable Convolutional Neural Network (COSD-CNN), to track landmarks in real time in long ultrasound sequences. Specifically, we design a cascaded Siamese network structure to improve the tracking performance of CNN-based methods. We propose a one-shot deformable convolution module to enhance the robustness of the COSD-CNN to appearance variation in a meta-learning manner. Moreover, we design a simple and efficient unsupervised strategy to facilitate the network's training with a limited number of medical images, in which many corner points are selected from raw ultrasound images to learn network features with high generalizability. The proposed COSD-CNN has been extensively evaluated on the public Challenge on Liver UltraSound Tracking (CLUST) 2D dataset and on our own ultrasound image dataset from the First Affiliated Hospital of Sun Yat-sen University (FSYSU). Experimental results show that the proposed model can track a target through an ultrasound sequence with high accuracy and robustness. Our method achieves new state-of-the-art performance on the CLUST 2D benchmark set, indicating its strong potential for application in clinical practice.
Affiliation(s)
- Fei Liu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Department of the Artificial Intelligence Technology, University of Chinese Academy of Sciences, Beijing 100049, China
- Dan Liu
- Department of Medical Ultrasonics, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, 510080, China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, 100191, China
- Xiaoyan Xie
- Department of Medical Ultrasonics, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, 510080, China
- Xin Yang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Kun Wang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Department of the Artificial Intelligence Technology, University of Chinese Academy of Sciences, Beijing 100049, China

18
Jupitz SA, Shepard AJ, Hill PM, Bednarz BP. Investigation of tumor and vessel motion correlation in the liver. J Appl Clin Med Phys 2020; 21:183-190. [PMID: 32533758 PMCID: PMC7484818 DOI: 10.1002/acm2.12943] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2019] [Revised: 01/29/2020] [Accepted: 05/06/2020] [Indexed: 12/24/2022] Open
Abstract
Intrafraction imaging-based motion management systems for external beam radiotherapy can rely on internal surrogate structures when the target is not easily visualized. This work evaluated the validity of using liver vessels as internal surrogates for the estimation of liver tumor motion. Vessel and tumor motion were assessed using ten two-dimensional sagittal MR cine datasets collected on the ViewRay MRIdian. For each case, a liver tumor and at least one vessel were tracked for 175 s. A tracking approach utilizing block matching and multiple simultaneous templates was applied. Accuracy of the tracked motion was calculated from the error between the tracked centroid position and manually defined ground-truth annotations. The patient's abdomen surface and diaphragm were manually annotated in all frames. The Pearson correlation coefficient (CC) was used to compare the motion of the features and the tumor in the anterior-posterior (AP) and superior-inferior (SI) directions. The distance between the centroids of the features and the tumors was calculated to assess whether feature proximity affects relative correlation, and the tumor range of motion was determined. Intra- and interfraction motion amplitude variabilities were evaluated to further assess the relationship between tumor and feature motion. The mean CCs were 0.85 ± 0.11 (AP) and 0.92 ± 0.04 (SI) for the vessel and the tumor, 0.83 ± 0.11 (AP) and −0.89 ± 0.06 (SI) for the surface and the tumor, and 0.80 ± 0.17 (AP) and 0.94 ± 0.03 (SI) for the diaphragm and the tumor. For the intrafraction analysis, the average amplitude variability was 2.47 ± 0.77 mm (AP) and 3.14 ± 1.49 mm (SI) for the vessels, 2.70 ± 1.08 mm (AP) and 3.43 ± 1.73 mm (SI) for the surface, and 2.76 ± 1.41 mm (AP) and 2.91 ± 1.38 mm (SI) for the diaphragm. No relationship between distance and motion correlation was observed. The motion of liver tumors and liver vessels was well correlated, making vessels a suitable surrogate for tumor motion in the liver.
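The correlation analysis above reduces to Pearson's CC between two displacement traces per direction; a minimal sketch (the breathing-cycle signals below are synthetic stand-ins, not study data, and the 4 s cycle is an assumption):

```python
import numpy as np

def motion_correlation(feature_traj, tumor_traj):
    """Pearson correlation coefficient between two 1D motion traces,
    e.g. vessel vs. tumor displacement in the SI direction."""
    return float(np.corrcoef(feature_traj, tumor_traj)[0, 1])

t = np.linspace(0.0, 175.0, 700)                   # 175 s trace, as in the study
tumor = 3.0 * np.sin(2 * np.pi * t / 4.0)          # assumed ~4 s breathing cycle
vessel = 2.5 * np.sin(2 * np.pi * t / 4.0) + 0.1   # same phase, smaller amplitude
print(motion_correlation(vessel, tumor))           # ~1.0: amplitude/offset do not matter
```

Because CC is invariant to amplitude and offset, a surrogate can correlate perfectly with the tumor even when its motion range differs, which is exactly why amplitude variability was analyzed separately in the study.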
Affiliation(s)
- Sydney A Jupitz
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, USA
- Andrew J Shepard
- Department of Human Oncology, University of Wisconsin-Madison, Madison, WI, USA
- Patrick M Hill
- Department of Human Oncology, University of Wisconsin-Madison, Madison, WI, USA
- Bryan P Bednarz
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, USA

19
Dunnhofer M, Antico M, Sasazawa F, Takeda Y, Camps S, Martinel N, Micheloni C, Carneiro G, Fontanarosa D. Siam-U-Net: encoder-decoder siamese network for knee cartilage tracking in ultrasound images. Med Image Anal 2020; 60:101631. [PMID: 31927473 DOI: 10.1016/j.media.2019.101631] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2019] [Revised: 12/20/2019] [Accepted: 12/20/2019] [Indexed: 01/13/2023]
Abstract
The tracking of the knee femoral condyle cartilage during ultrasound-guided minimally invasive procedures is important to avoid damaging this structure during such interventions. In this study, we propose a new deep learning method to track, accurately and efficiently, the femoral condyle cartilage in ultrasound sequences, which were acquired under several clinical conditions mimicking realistic surgical setups. Our solution, which we name Siam-U-Net, requires minimal user initialization and combines a deep learning segmentation method with a siamese framework for tracking the cartilage in temporal and spatio-temporal sequences of 2D ultrasound images. Through extensive performance validation based on the Dice Similarity Coefficient, we demonstrate that our algorithm is able to track the femoral condyle cartilage with an accuracy comparable to that of experienced surgeons. It is additionally shown that the proposed method outperforms state-of-the-art segmentation models and trackers in the localization of the cartilage. We claim that the proposed solution has the potential for ultrasound guidance in minimally invasive knee procedures.
20
Shen C, He J, Huang Y, Wu J. Discriminative Correlation Filter Network for Robust Landmark Tracking in Ultrasound Guided Intervention. ACTA ACUST UNITED AC 2019. [DOI: 10.1007/978-3-030-32254-0_72] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/17/2023]
21
Huang P, Su L, Chen S, Cao K, Song Q, Kazanzides P, Iordachita I, Lediju Bell MA, Wong JW, Li D, Ding K. 2D ultrasound imaging based intra-fraction respiratory motion tracking for abdominal radiation therapy using machine learning. Phys Med Biol 2019; 64:185006. [PMID: 31323649 DOI: 10.1088/1361-6560/ab33db] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
We have previously developed a robotic ultrasound imaging system for motion monitoring in abdominal radiation therapy. Owing to the slow speed of ultrasound image processing, our previous system could only track abdominal motion under breath-hold. To overcome this limitation, a novel 2D image processing method for tracking intra-fraction respiratory motion is proposed. Fifty-seven anatomical features acquired from 27 sets of 2D ultrasound sequences were used in this study. Three 2D ultrasound sequences were acquired with the robotic ultrasound system from three healthy volunteers; the remaining datasets were provided by the 2015 MICCAI Challenge on Liver Ultrasound Tracking. All datasets were preprocessed to extract the feature point, and a patient-specific motion pattern was extracted by principal component analysis and slow feature analysis (SFA). Tracking finds the most similar frame (the indexed frame) via a k-dimensional-tree-based nearest neighbor search to estimate the tracked object's location. A template image is updated dynamically from the indexed frame to perform fast template matching (TM) within a learned, smaller search region on the incoming frame. The mean tracking error between manually annotated landmarks and the location extracted from the indexed training frame is 1.80 ± 1.42 mm. Adding a fast TM procedure within a small search region reduces the mean tracking error to 1.14 ± 1.16 mm. The tracking time per frame is 15 ms, well below the frame acquisition time. Furthermore, anatomical reproducibility was measured by analyzing the anatomical landmark's location relative to the probe; the position-controlled probe has better reproducibility and yields a smaller mean error across all three volunteer cases than the force-controlled probe (2.69 versus 11.20 mm in the superior-inferior direction and 1.19 versus 8.21 mm in the anterior-posterior direction).
Our method reduces the processing time for tracking respiratory motion significantly, which can reduce the delivery uncertainty.
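The two-stage lookup described here, finding the indexed frame by nearest-neighbor search in a learned low-dimensional feature space and then refining with template matching in a small window, can be sketched as follows (a brute-force search stands in for the paper's k-d tree and SSD for its unspecified matching criterion; all names and sizes are hypothetical):

```python
import numpy as np

def nearest_training_frame(query_feat, train_feats):
    """Index of the most similar training frame in the learned
    low-dimensional feature space (the paper uses a k-d tree; brute
    force gives the same answer and keeps this sketch dependency-free)."""
    return int(np.argmin(np.linalg.norm(train_feats - query_feat, axis=1)))

def match_in_window(frame, template, center, radius):
    """Template matching (SSD) restricted to a small search window
    around the position predicted by the indexed frame."""
    th, tw = template.shape
    best, best_pos = np.inf, center
    r0, c0 = center
    for r in range(max(r0 - radius, 0), r0 + radius + 1):
        for c in range(max(c0 - radius, 0), c0 + radius + 1):
            patch = frame[r:r + th, c:c + tw]
            if patch.shape != template.shape:
                continue  # window clipped at the image border
            ssd = float(((patch - template) ** 2).sum())
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
template = frame[20:28, 30:38].copy()
print(match_in_window(frame, template, (22, 32), 5))  # (20, 30)
```

Restricting the search to a small window is what makes the refinement fast enough to fit in the reported 15 ms per-frame budget.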
Affiliation(s)
- Pu Huang
- Shandong Key Laboratory of Medical Physics and Image Processing, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, People's Republic of China; Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins School of Medicine, Baltimore, MD, United States of America (authors contributed equally to this work)

22
Schlüter M, Gerlach S, Fürweger C, Schlaefer A. Analysis and optimization of the robot setup for robotic-ultrasound-guided radiation therapy. Int J Comput Assist Radiol Surg 2019; 14:1379-87. [DOI: 10.1007/s11548-019-02009-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2019] [Accepted: 05/30/2019] [Indexed: 10/26/2022]
23
He J, Shen C, Huang Y, Wu J. Siamese Spatial Pyramid Matching Network with Location Prior for Anatomical Landmark Tracking in 3-Dimension Ultrasound Sequence. Pattern Recognition and Computer Vision 2019. [DOI: 10.1007/978-3-030-31723-2_29] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/13/2023]
24
Abstract
Over the years, medical image tracking has gained considerable attention from both the medical and research communities due to its widespread utility in a multitude of clinical applications, from functional assessment during diagnosis and therapy planning to structure tracking or image fusion during image-guided interventions. Despite the ever-increasing number of image tracking methods available, most still consist of independent implementations with specific target applications, lacking the versatility to deal with distinct end-goals without methodological tailoring and/or exhaustive tuning of numerous parameters. With this in mind, we have developed the Medical Image Tracking Toolbox (MITT), a software package designed to ease the customization of image tracking solutions in the medical field. While its workflow principles make it suitable for 2D or 3D image sequences, its modules offer the versatility to set up computationally efficient tracking solutions, even for users with limited programming skills. MITT is implemented in both C/C++ and MATLAB, includes several variants of an object-based image tracking algorithm, and allows tracking of multiple types of objects (i.e., contours, multi-contours, surfaces, and multi-surfaces) with several customization features. In this paper, the toolbox is presented, its features are discussed, and illustrative examples of its usage in the cardiology field are provided, demonstrating its versatility, simplicity, and time efficiency.
25
De Luca V, Banerjee J, Hallack A, Kondo S, Makhinya M, Nouri D, Royer L, Cifor A, Dardenne G, Goksel O, Gooding MJ, Klink C, Krupa A, Le Bras A, Marchal M, Moelker A, Niessen WJ, Papiez BW, Rothberg A, Schnabel J, van Walsum T, Harris E, Lediju Bell MA, Tanner C. Evaluation of 2D and 3D ultrasound tracking algorithms and impact on ultrasound-guided liver radiotherapy margins. Med Phys 2018; 45:4986-5003. [PMID: 30168159 DOI: 10.1002/mp.13152] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2018] [Revised: 07/26/2018] [Accepted: 07/27/2018] [Indexed: 12/25/2022] Open
Abstract
PURPOSE Compensation for respiratory motion is important during abdominal cancer treatments. In this work, we report the results of the 2015 MICCAI Challenge on Liver Ultrasound Tracking and extend the 2D results to relate them to clinical relevance, in the form of reduced treatment margins and hence sparing of healthy tissue, while maintaining a full duty cycle. METHODS We describe methodologies for estimating and temporally predicting respiratory liver motion from continuous ultrasound imaging, as used during ultrasound-guided radiation therapy. Furthermore, we investigated the trade-off between tracking accuracy and runtime in combination with temporal prediction strategies and their impact on treatment margins. RESULTS Based on 2D ultrasound sequences from 39 volunteers, a mean tracking accuracy of 0.9 mm was achieved when combining the results from the four challenge submissions (1.2 to 3.3 mm). The two submissions for the 3D sequences from 14 volunteers provided mean accuracies of 1.7 and 1.8 mm. In combination with temporal prediction, using the faster (41 vs 228 ms) but less accurate (1.4 vs 0.9 mm) tracking method resulted in substantially reduced treatment margins (70% vs 39%) relative to mid-ventilation margins, as it avoided non-linear temporal prediction by keeping the treatment system latency low (150 vs 400 ms). Accelerating the best tracking method would improve the margin reduction to 75%. CONCLUSIONS Liver motion estimation and prediction during free breathing from 2D ultrasound images can substantially reduce the in-plane motion uncertainty and hence treatment margins. Employing an accurate tracking method while avoiding non-linear temporal prediction would be favorable. This approach has the potential to shorten treatment time compared to breath-hold and gated approaches, and to increase treatment efficiency and safety.
Collapse
Affiliation(s)
- Valeria De Luca
- Computer Vision Laboratory, ETH Zurich, Zürich, Switzerland
- Novartis Institutes for Biomedical Research, Basel, Switzerland
- Andre Hallack
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Maxim Makhinya
- Computer Vision Laboratory, ETH Zurich, Zürich, Switzerland
- Lucas Royer
- Institut de Recherche Technologique b-com, Rennes, France
- Orcun Goksel
- Computer Vision Laboratory, ETH Zurich, Zürich, Switzerland
- Camiel Klink
- Department of Radiology, Erasmus MC, Rotterdam, The Netherlands
- Maud Marchal
- Institut de Recherche Technologique b-com, Rennes, France
- Adriaan Moelker
- Department of Radiology, Erasmus MC, Rotterdam, The Netherlands
- Wiro J Niessen
- Department of Radiology, Erasmus MC, Rotterdam, The Netherlands
- Julia Schnabel
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Theo van Walsum
- Department of Radiology, Erasmus MC, Rotterdam, The Netherlands
- Muyinatu A Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
26
Williamson T, Cheung W, Roberts SK, Chauhan S. Ultrasound-based liver tracking utilizing a hybrid template/optical flow approach. Int J Comput Assist Radiol Surg 2018; 13:1605-1615. [PMID: 29873025] [DOI: 10.1007/s11548-018-1780-0]
Abstract
PURPOSE With the ongoing shift toward reduced invasiveness in many surgical procedures, methods for tracking moving targets within the body become vital. Non-invasive treatment methods such as stereotactic radiation therapy and high intensity focused ultrasound, in particular, rely on the accurate localization of targets throughout treatment to ensure optimal treatment delivery. This work aims at developing a robust, accurate and fast method for target tracking based on ultrasound images. METHODS A method for tracking targets in real-time ultrasound image data was developed, based on the combination of template matching, dense optical flow and image intensity information. A weighting map is generated from each of these approaches; the maps are then normalized, weighted and combined, and the weighted mean position is calculated to predict the current target position. The approach was evaluated on the Challenge for Liver Ultrasound Tracking 2015 dataset, consisting of 24 training and 39 test datasets with 53 and 85 annotated targets throughout the liver, respectively. RESULTS The proposed method was implemented in MATLAB and achieved an accuracy of [Formula: see text] (95%: 1.91) mm and [Formula: see text] (95%: 1.85) mm on the training and test data, respectively. Tracking frequencies of between 8 and 36 fps (mean of 22 fps) were observed, largely dependent on the size of the region of interest. The achieved results represent an improvement in mean accuracy of approximately 0.3 mm over methods reported in the existing literature. CONCLUSIONS This work describes an accurate and robust method for the tracking of points of interest within 2D ultrasound data, based on a combination of multi-template matching, dense optical flow and relative image intensity information.
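The fusion step described in this abstract (per-method weight maps normalized, weighted, combined, and reduced to a weighted mean position) can be sketched as follows. This is a minimal illustration assuming 2D NumPy arrays; the function name and weight values are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_weight_maps(maps, weights):
    """Combine per-method weight maps (e.g. template matching, optical
    flow, intensity) into one map, then return the weighted mean
    (row, col) position as the tracked target estimate.
    Assumes at least one map has a non-uniform response."""
    fused = np.zeros_like(maps[0], dtype=float)
    for m, w in zip(maps, weights):
        m = m - m.min()
        if m.max() > 0:
            m = m / m.max()          # normalize each map to [0, 1]
        fused += w * m
    total = fused.sum()
    rows, cols = np.indices(fused.shape)
    return (float((rows * fused).sum() / total),
            float((cols * fused).sum() / total))

# A single sharp peak should yield its own coordinates.
peak = np.zeros((5, 5))
peak[2, 3] = 1.0
print(fuse_weight_maps([peak, peak], [0.5, 0.5]))  # -> (2.0, 3.0)
```

The weighted-centroid readout makes the estimate sub-pixel when the fused map has a smooth peak, which is one reason such fusion schemes can beat a single hard argmax.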
Affiliation(s)
- Tom Williamson
- Department of Mechanical and Aerospace Engineering, Monash University, Lab 298, New Horizon Building, Wellington Rd, Clayton, Melbourne, VIC, 3800, Australia
- Wa Cheung
- Department of Radiology, The Alfred, Commercial Road, Melbourne, Australia
- Stuart K Roberts
- Department of Gastroenterology, The Alfred, Commercial Road, Melbourne, Australia
- Sunita Chauhan
- Department of Mechanical and Aerospace Engineering, Monash University, Lab 298, New Horizon Building, Wellington Rd, Clayton, Melbourne, VIC, 3800, Australia
27
Abstract
Short-lag spatial coherence (SLSC) imaging displays the spatial coherence between backscattered ultrasound echoes instead of their signal amplitudes and is more robust to noise and clutter artifacts when compared with traditional delay-and-sum (DAS) B-mode imaging. However, SLSC imaging does not consider the content of images formed with different lags, and thus does not exploit the differences in tissue texture at each short-lag value. Our proposed method improves SLSC imaging by weighting the addition of lag values (i.e., M-weighting) and by applying robust principal component analysis (RPCA) to search for a low-dimensional subspace for projecting coherence images created with different lag values. The RPCA-based projections are considered to be denoised versions of the originals that are then weighted and added across lags to yield a final robust SLSC (R-SLSC) image. Our approach was tested on simulation, phantom, and in vivo liver data. Relative to DAS B-mode images, the mean contrast, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) improvements with R-SLSC images are 21.22 dB, 2.54, and 2.36, respectively, when averaged over simulated, phantom, and in vivo data and over all lags considered, which corresponds to mean improvements of 96.4%, 121.2%, and 120.5%, respectively. When compared with SLSC images, the corresponding mean improvements with R-SLSC images were 7.38 dB, 1.52, and 1.30, respectively (i.e., mean improvements of 14.5%, 50.5%, and 43.2%, respectively). Results show great promise for smoothing out the tissue texture of SLSC images and enhancing anechoic or hypoechoic target visibility at higher lag values, which could be useful in clinical tasks such as breast cyst visualization, liver vessel tracking, and obese patient imaging.
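The lag-weighting ("M-weighting") idea in this abstract amounts to a weighted sum of per-lag coherence images. The sketch below shows only that summation step, with the RPCA denoising stage of R-SLSC deliberately omitted; the function name and data are illustrative, not from the paper.

```python
import numpy as np

def weighted_lag_sum(lag_images, lag_weights):
    """Combine per-lag spatial-coherence images into a single image by
    weighting each lag before summation (the 'M-weighting' step of
    R-SLSC; the RPCA projection is omitted in this sketch)."""
    lag_images = np.asarray(lag_images, dtype=float)  # (M, H, W)
    w = np.asarray(lag_weights, dtype=float)
    w = w / w.sum()                    # normalize weights to sum to 1
    return np.tensordot(w, lag_images, axes=1)        # (H, W)

# Two 2x2 coherence images with equal weights: result is their mean.
imgs = [np.ones((2, 2)), 3 * np.ones((2, 2))]
print(weighted_lag_sum(imgs, [1, 1]))  # each pixel -> 2.0
```

Down-weighting the noisier high-lag images before summation is what lets the final image keep the target visibility of higher lags without inheriting all of their texture noise.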
28
Diodato A, Cafarelli A, Schiappacasse A, Tognarelli S, Ciuti G, Menciassi A. Motion compensation with skin contact control for high intensity focused ultrasound surgery in moving organs. Phys Med Biol 2018; 63:035017. [DOI: 10.1088/1361-6560/aa9c22]
29
Abbass MA, Killin JK, Mahalingam N, Hooi FM, Barthe PG, Mast TD. Real-Time Spatiotemporal Control of High-Intensity Focused Ultrasound Thermal Ablation Using Echo Decorrelation Imaging in ex Vivo Bovine Liver. Ultrasound Med Biol 2018; 44:199-213. [PMID: 29074273] [PMCID: PMC5712268] [DOI: 10.1016/j.ultrasmedbio.2017.09.007]
Abstract
The ability to control high-intensity focused ultrasound (HIFU) thermal ablation using echo decorrelation imaging feedback was evaluated in ex vivo bovine liver. Sonications were automatically ceased when the minimum cumulative echo decorrelation within the region of interest exceeded an ablation control threshold, determined from preliminary experiments as -2.7 (log-scaled decorrelation per millisecond), corresponding to 90% specificity for local ablation prediction. Controlled HIFU thermal ablation experiments were compared with uncontrolled experiments employing two, five or nine sonication cycles. Means and standard errors of the lesion width, area and depth, as well as receiver operating characteristic curves testing ablation prediction performance, were computed for each group. Controlled trials exhibited significantly smaller average lesion area, width and treatment time than five-cycle or nine-cycle uncontrolled trials and also had significantly greater prediction capability than two-cycle uncontrolled trials. These results suggest echo decorrelation imaging is an effective approach to real-time HIFU ablation control.
Affiliation(s)
- Mohamed A Abbass
- Biomedical Engineering, University of Cincinnati, Cincinnati, Ohio, USA
- Jakob K Killin
- Biomedical Engineering, University of Cincinnati, Cincinnati, Ohio, USA
- Fong Ming Hooi
- Ultrasound Division, Siemens Healthcare, Issaquah, Washington, USA
- T Douglas Mast
- Biomedical Engineering, University of Cincinnati, Cincinnati, Ohio, USA
30
Shepard AJ, Wang B, Foo TKF, Bednarz BP. A block matching based approach with multiple simultaneous templates for the real-time 2D ultrasound tracking of liver vessels. Med Phys 2017; 44:5889-5900. [PMID: 28898419] [DOI: 10.1002/mp.12574]
Abstract
PURPOSE The implementation of motion management techniques in radiation therapy can aid in mitigating uncertainties and reducing margins. For motion management to be effective, it is necessary to track key structures both accurately and at a real-time speed. Therefore, the focus of this work was to develop a 2D algorithm for the real-time tracking of ultrasound features to aid in radiation therapy motion management. MATERIALS AND METHODS The developed algorithm utilized a similarity measure-based block matching algorithm incorporating training methods and multiple simultaneous templates. The algorithm is broken down into three primary components, all of which use normalized cross-correlation (NCC) as a similarity metric. First, a global feature shift to account for gross displacements from the previous frame is determined using large block sizes which encompass the entirety of the feature. Second, the most similar reference frame is chosen from a series of training images that are accumulated during the first K frames of tracking to aid in contour consistency and provide a starting point for the localized template initialization. Finally, localized block matching is performed through the simultaneous use of both a training frame and the previous frame. The localized block matching utilizes a series of templates positioned at the boundary points of the training and previous contours. The weighted final boundary points from both the previous and the training frame are ultimately combined and used to determine an affine transformation from the previous frame to the current frame. RESULTS A mean tracking error of 0.72 ± 1.25 mm was observed for 85 point-landmarks across 39 ultrasound sequences relative to manual ground truth annotations. The image processing speed per landmark with the GPU implementation was between 41 and 165 frames per second (fps) during the training set accumulation, and between 73 and 234 fps after training set accumulation. Relative to a comparable multithreaded CPU approach using OpenMP, the GPU implementation resulted in speedups between -30% and 355% during training set accumulation, and between -37% and 639% postaccumulation. CONCLUSIONS Initial implementations indicated an accuracy that was comparable to or exceeding those achieved by alternative 2D tracking methods, with a computational speed that is more than sufficient for real-time applications in a radiation therapy environment. While the overall performance reached levels suitable for implementation in radiation therapy, the observed increase in failures for smaller features, as well as the algorithm's inability to be applied to nonconvex features, warrants additional investigation to address the shortcomings observed.
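The NCC-based block matching at the core of this method can be sketched as an exhaustive template search over a small window. This is a minimal single-template illustration (the paper's multi-template, training-frame, and affine-fitting machinery is omitted); names, window sizes, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def block_match(frame, template, pos, search=5):
    """Slide `template` over a (2*search+1)^2 window of candidate
    top-left positions around `pos` and return the displacement
    (drow, dcol) maximizing NCC, the similarity metric used here."""
    h, w = template.shape
    r0, c0 = pos
    best, best_dp = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            patch = frame[r:r + h, c:c + w]
            if patch.shape != template.shape:
                continue               # candidate falls outside frame
            s = ncc(patch, template)
            if s > best:
                best, best_dp = s, (dr, dc)
    return best_dp

# Synthetic check: template pasted with its top-left corner at (7, 9).
frame = np.zeros((20, 20))
template = np.arange(16, dtype=float).reshape(4, 4)
frame[7:11, 9:13] = template
print(block_match(frame, template, (5, 8)))  # -> (2, 1)
```

Because NCC subtracts patch means and normalizes energy, the match is insensitive to the gain and brightness shifts that are common between ultrasound frames, which is why it is a popular similarity metric for this task.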
Affiliation(s)
- Andrew J Shepard
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin-Madison, 1111 Highland Ave, Rm 1005, Madison, WI, 53705-2275, USA
- Bo Wang
- GE Global Research, 1 Research Cir, Niskayuna, NY, 12309, USA
- Thomas K F Foo
- GE Global Research, 1 Research Cir, Niskayuna, NY, 12309, USA
- Bryan P Bednarz
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin-Madison, 1111 Highland Ave, Rm 1005, Madison, WI, 53705-2275, USA
31
Zachiu C, Ries M, Ramaekers P, Guey JL, Moonen CTW, de Senneville BD. Real-time non-rigid target tracking for ultrasound-guided clinical interventions. Phys Med Biol 2017; 62:8154-8177. [DOI: 10.1088/1361-6560/aa8c66]
32
Ozkan E, Tanner C, Kastelic M, Mattausch O, Makhinya M, Goksel O. Robust motion tracking in liver from 2D ultrasound images using supporters. Int J Comput Assist Radiol Surg 2017; 12:941-950. [DOI: 10.1007/s11548-017-1559-8]
33
Ipsen S, Bruder R, O'Brien R, Keall PJ, Schweikard A, Poulsen PR. Online 4D ultrasound guidance for real-time motion compensation by MLC tracking. Med Phys 2016; 43:5695. [PMID: 27782689] [DOI: 10.1118/1.4962932]
Abstract
PURPOSE With the trend in radiotherapy moving toward dose escalation and hypofractionation, the need for highly accurate targeting increases. While MLC tracking is already being successfully used for motion compensation of moving targets in the prostate, current real-time target localization methods rely on repeated x-ray imaging and implanted fiducial markers or electromagnetic transponders rather than direct target visualization. In contrast, ultrasound imaging can yield volumetric data in real time (3D + time = 4D) without ionizing radiation. The authors report the first results of combining these promising techniques, online 4D ultrasound guidance and MLC tracking, in a phantom. METHODS A software framework for real-time target localization was installed directly on a 4D ultrasound station and used to detect a 2 mm spherical lead marker inside a water tank. The lead marker was rigidly attached to a motion stage programmed to reproduce nine characteristic tumor trajectories chosen from large databases (five prostate, four lung). The 3D marker position detected by ultrasound was transferred to a computer program for MLC tracking at a rate of 21.3 Hz and used for real-time MLC aperture adaption on a conventional linear accelerator. The tracking system latency was measured using sinusoidal trajectories and compensated for by applying a kernel density prediction algorithm for the lung traces. To measure geometric accuracy, static anterior and lateral conformal fields as well as a 358° arc with a 10 cm circular aperture were delivered for each trajectory. The two-dimensional (2D) geometric tracking error was measured as the difference between marker position and MLC aperture center in continuously acquired portal images. For dosimetric evaluation, VMAT treatment plans with high and low modulation were delivered to a biplanar diode array dosimeter using the same trajectories. Dose measurements with and without MLC tracking were compared to a static reference dose using 3%/3 mm and 2%/2 mm γ-tests. RESULTS The overall tracking system latency was 172 ms. The mean 2D root-mean-square tracking error was 1.03 mm (0.80 mm prostate, 1.31 mm lung). MLC tracking improved the dose delivery in all cases with an overall reduction in the γ-failure rate of 91.2% (3%/3 mm) and 89.9% (2%/2 mm) compared to no motion compensation. Low modulation VMAT plans had no (3%/3 mm) or minimal (2%/2 mm) residual γ-failures, while tracking reduced the γ-failure rate from 17.4% to 2.8% (3%/3 mm) and from 33.9% to 6.5% (2%/2 mm) for plans with high modulation. CONCLUSIONS Real-time 4D ultrasound tracking was successfully integrated with online MLC tracking for the first time. The developed framework showed an accuracy and latency comparable with other MLC tracking methods while holding the potential to measure and adapt to target motion, including rotation and deformation, noninvasively.
Affiliation(s)
- Svenja Ipsen
- Institute for Robotics and Cognitive Systems, University of Luebeck, Luebeck 23562, Germany
- Ralf Bruder
- Institute for Robotics and Cognitive Systems, University of Luebeck, Luebeck 23562, Germany
- Rick O'Brien
- Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006, Australia
- Paul J Keall
- Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006, Australia
- Achim Schweikard
- Institute for Robotics and Cognitive Systems, University of Luebeck, Luebeck 23562, Germany
- Per R Poulsen
- Department of Clinical Medicine, Aarhus University and Department of Oncology, Aarhus University Hospital, Aarhus 8000, Denmark
34
Royer L, Krupa A, Dardenne G, Le Bras A, Marchand E, Marchal M. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation. Med Image Anal 2017; 35:582-598. [DOI: 10.1016/j.media.2016.09.004]
35
Seo J, Koizumi N, Mitsuishi M, Sugita N. Ultrasound image based visual servoing for moving target ablation by high intensity focused ultrasound. Int J Med Robot 2016; 13. [PMID: 27995752] [PMCID: PMC5724706] [DOI: 10.1002/rcs.1793]
Abstract
Background Although high intensity focused ultrasound (HIFU) is a promising technology for tumor treatment, a moving abdominal target is still a challenge for current HIFU systems. In particular, respiratory-induced organ motion can reduce the treatment efficiency and negatively influence the treatment result. In this research, we present: (1) a methodology for integration of ultrasound (US) image based visual servoing in a HIFU system; and (2) the experimental results obtained using the developed system. Materials and methods In the visual servoing system, target motion is monitored by biplane US imaging and tracked in real time (40 Hz) by registration with a preoperative 3D model. The distance between the target and the current HIFU focal position is calculated in every US frame and a three-axis robot physically compensates for differences. Because simultaneous HIFU irradiation disturbs US target imaging, a sophisticated interlacing strategy was constructed. Results In the experiments, respiratory-induced organ motion was simulated in a water tank with a linear actuator and a kidney-shaped phantom model. Motion compensation with HIFU irradiation was applied to the moving phantom model. Based on the experimental results, visual servoing exhibited a motion compensation accuracy of 1.7 mm (RMS) on average. Moreover, the integrated system could create a spherical HIFU-ablated lesion in the desired position of the respiratory-moving phantom model. Conclusions We have demonstrated the feasibility of our US image based visual servoing technique in a HIFU system for moving target treatment.
Affiliation(s)
- Joonho Seo
- Korea Institute of Machinery and Materials, Daegu, South Korea
36
Sen HT, Bell MAL, Zhang Y, Ding K, Boctor E, Wong J, Iordachita I, Kazanzides P. System Integration and In Vivo Testing of a Robot for Ultrasound Guidance and Monitoring During Radiotherapy. IEEE Trans Biomed Eng 2016; 64:1608-1618. [PMID: 28113225] [DOI: 10.1109/tbme.2016.2612229]
Abstract
We are developing a cooperatively controlled robot system for image-guided radiation therapy (IGRT) in which a clinician and robot share control of a 3-D ultrasound (US) probe. IGRT involves two main steps: 1) planning/simulation and 2) treatment delivery. The goals of the system are to provide guidance for patient setup and real-time target monitoring during fractionated radiotherapy of soft tissue targets, especially in the upper abdomen. To compensate for soft tissue deformations created by the probe, we present a novel workflow in which the robot holds the US probe on the patient during acquisition of the planning computerized tomography image, thereby ensuring that planning is performed on the deformed tissue. The robot system introduces constraints (virtual fixtures) to help produce consistent soft tissue deformation between simulation and treatment days, based on the robot position, contact force, and reference US image recorded during simulation. This paper presents the system integration and the proposed clinical workflow, validated by an in vivo canine study. The results show that the virtual fixtures enable the clinician to deviate from the recorded position to better reproduce the reference US image, which correlates with more consistent soft tissue deformation and the possibility of more accurate patient setup and radiation delivery.
37
Wilms M, Ha IY, Handels H, Heinrich MP. Model-Based Regularisation for Respiratory Motion Estimation with Sparse Features in Image-Guided Interventions. In: Ourselin S, Joskowicz L, Sabuncu MR, Unal G, Wells W, editors. Medical Image Computing and Computer-Assisted Intervention - MICCAI 2016. Cham: Springer International Publishing; 2016. pp. 89-97. [DOI: 10.1007/978-3-319-46726-9_11]
38
Şen HT, Cheng A, Ding K, Boctor E, Wong J, Iordachita I, Kazanzides P. Cooperative Control with Ultrasound Guidance for Radiation Therapy. Front Robot AI 2016. [DOI: 10.3389/frobt.2016.00049]
39
Knopf A, Stützer K, Richter C, Rucinski A, da Silva J, Phillips J, Engelsman M, Shimizu S, Werner R, Jakobi A, Göksel O, Zhang Y, O'Shea T, Fast M, Perrin R, Bert C, Rinaldi I, Korevaar E, McClelland J. Required transition from research to clinical application: Report on the 4D treatment planning workshops 2014 and 2015. Phys Med 2016; 32:874-882. [DOI: 10.1016/j.ejmp.2016.05.064]
40
O'Shea T, Bamber J, Fontanarosa D, van der Meer S, Verhaegen F, Harris E. Review of ultrasound image guidance in external beam radiotherapy part II: intra-fraction motion management and novel applications. Phys Med Biol 2016; 61:R90-R137. [PMID: 27002558] [DOI: 10.1088/0031-9155/61/8/r90]
Abstract
Imaging has become an essential tool in modern radiotherapy (RT), being used to plan dose delivery prior to treatment and verify target position before and during treatment. Ultrasound (US) imaging is cost-effective in providing excellent contrast at high resolution for depicting soft tissue targets, apart from those shielded by the lungs or cranium. As a result, it is increasingly used in RT setup verification for the measurement of inter-fraction motion, the subject of Part I of this review (Fontanarosa et al 2015 Phys. Med. Biol. 60 R77-114). The combination of rapid imaging and zero ionising radiation dose makes US highly suitable for estimating intra-fraction motion. The current paper (Part II of the review) covers this topic. The basic technology for US motion estimation, and its current clinical application to the prostate, is described here, along with recent developments in robust motion-estimation algorithms and three dimensional (3D) imaging. Together, these are likely to drive an increase in the number of future clinical studies and the range of cancer sites in which US motion management is applied. Also reviewed are selections of existing and proposed novel applications of US imaging to RT. These are driven by exciting developments in structural, functional and molecular US imaging and analytical techniques such as backscatter tissue analysis, elastography, photoacoustography, contrast-specific imaging, dynamic contrast analysis, microvascular and super-resolution imaging, and targeted microbubbles. Such techniques show promise for predicting and measuring the outcome of RT, quantifying normal tissue toxicity, improving tumour definition and defining a biological target volume that describes radiation-sensitive regions of the tumour. US offers easy, low cost and efficient integration of these techniques into the RT workflow. US contrast technology also has potential to be used actively to assist RT by manipulating the tumour cell environment and by improving the delivery of radiosensitising agents. Finally, US imaging offers various ways to measure dose in 3D. If technical problems can be overcome, these hold potential for wide dissemination of cost-effective pre-treatment dose verification and in vivo dose monitoring methods. It is concluded that US imaging could eventually contribute to all aspects of the RT workflow.
Affiliation(s)
- Tuathan O'Shea
- Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton, London SM2 5NG, UK
41
Jenne J, Schwaab J. [Ultrasound motion tracking for radiation therapy]. Radiologe 2015; 55:984-991. [PMID: 26438093] [DOI: 10.1007/s00117-015-0027-0]
Abstract
BACKGROUND In modern radiotherapy the radiation dose can be applied with an accuracy in the range of 1-2 mm, provided that the exact position of the target is known. If, however, the target (the tumor) is located in the lungs or the abdomen, respiration or peristalsis can cause substantial target movement. METHODS Various methods for intrafractional motion detection and compensation are currently under consideration or are already applied in clinical practice. Sonography is one promising option that is now on the brink of clinical implementation. Ultrasound is particularly suited for this purpose due to its high soft tissue contrast, real-time capability, absence of ionizing radiation and low acquisition costs. Ultrasound motion tracking is an image-based approach: the target volume or an adjacent structure is directly monitored and its motion is tracked automatically on the ultrasound image. Diverse algorithms are presently available that provide real-time target coordinates from 2D as well as 3D images. Defining a suitable sonographic window is not trivial, however, and a gold standard for positioning and mounting the transducer has not yet been developed. Furthermore, processing the coordinate information in the therapy unit and dynamically adapting the radiation field are challenging tasks. CONCLUSION Although all technical prerequisites can be considered fulfilled, it is not yet clear whether ultrasound motion tracking will become established in clinical routine; exciting progress in this field of research is nevertheless still to be expected.