1. Wulff D, Ernst F. Real-time deformable structure tracking in 3D ultrasound sequences using deformable convolutional layers. Comput Biol Med 2025;186:109671. PMID: 39842235. DOI: 10.1016/j.compbiomed.2025.109671.
Abstract
Ultrasound imaging provides 3D images of soft-tissue structures in real time without harmful radiation. Because it is widely available and low-cost, it is increasingly attractive for therapy guidance, for example in radiotherapy. Its use in radiotherapy, however, requires a robust, real-time image analysis method that can track the target during the treatment session. Soft-tissue structures follow high-dimensional motion patterns, including deformation caused in particular by breathing, which makes tracking challenging. To handle this deformation complexity, this study proposes a novel real-time tracking approach for 3D ultrasound. Deformable convolution layers are integrated into a 2D convolutional autoencoder to learn deformation-invariant representations from ultrasound patches, using a novel 3D-to-2D ultrasound patch reduction strategy. Tracking is performed in the representation space of the autoencoder: a greedy local-search tracking algorithm was implemented and evaluated in a preliminary study on nine expert-labeled landmarks in 18 in-vivo 3D ultrasound liver sequences. Four autoencoder architectures with different deformable convolution arrangements were compared. The results show that deformable convolution layers are beneficial compared with conventional convolution layers: a mean tracking error of 1.58 ± 0.87 mm was measured with deformable convolution, a 10.7% improvement over conventional convolution. The runtime evaluation shows that the algorithm is real-time capable, with a mean runtime of around 4 ms per 3D ultrasound frame. Deformable convolution layers thus help to learn meaningful representations of deformable structures from ultrasound patches, and a tracking error comparable to state-of-the-art methods was achieved at a runtime up to 100 times faster.
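As a rough illustration of the tracking stage described above, the sketch below runs a greedy local search over candidate patch positions and keeps the one whose learned feature vector is closest to the reference. The `embed` function, patch size, step size, and neighbourhood are placeholders, not the authors' trained encoder or parameters.

```python
# Minimal sketch of greedy local-search tracking in a learned representation
# space. The encoder `embed` and the patch size are stand-ins; the paper uses
# a (deformable-convolution) autoencoder trained on reduced ultrasound patches.
import numpy as np

PATCH = (16, 16, 16)          # assumed patch size (voxels)
OFFSETS = [(dx, dy, dz) for dx in (-1, 0, 1)
           for dy in (-1, 0, 1) for dz in (-1, 0, 1)]

def embed(patch: np.ndarray) -> np.ndarray:
    """Placeholder for the trained encoder; returns a feature vector."""
    return patch.mean(axis=(1, 2)).ravel()  # stand-in, not the real model

def extract(volume, center):
    x, y, z = center
    hx, hy, hz = (s // 2 for s in PATCH)
    return volume[x - hx:x + hx, y - hy:y + hy, z - hz:z + hz]

def greedy_track(volume, center, ref_feat, step=2, max_iter=20):
    """Move `center` to minimise the feature distance to the reference patch."""
    best = np.linalg.norm(embed(extract(volume, center)) - ref_feat)
    for _ in range(max_iter):
        improved = False
        for off in OFFSETS:
            cand = tuple(c + step * o for c, o in zip(center, off))
            d = np.linalg.norm(embed(extract(volume, cand)) - ref_feat)
            if d < best:
                best, center, improved = d, cand, True
        if not improved:
            break
    return center
```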
Affiliation(s)
- Daniel Wulff
- University of Lübeck, Ratzeburger Allee 160, Lübeck, 23562, Schleswig-Holstein, Germany; University of Rostock, Universitätsplatz 1, Rostock, 18055, Mecklenburg-Vorpommern, Germany.
- Floris Ernst
- University of Lübeck, Ratzeburger Allee 160, Lübeck, 23562, Schleswig-Holstein, Germany.
2. Osburg J, Scheibert A, Horn M, Pater R, Ernst F. Automatic robotic Doppler sonography of leg arteries. Int J Comput Assist Radiol Surg 2024;19:1965-1974. PMID: 39052197. PMCID: PMC11442516. DOI: 10.1007/s11548-024-03235-7.
Abstract
PURPOSE: Robot-assisted systems offer an opportunity to support the diagnostic and therapeutic treatment of vascular diseases, reducing radiation exposure and supporting the limited medical staff in vascular medicine. In the diagnosis and follow-up care of vascular pathologies, Doppler ultrasound has become the preferred diagnostic tool. This study presents a robotic system for automatic Doppler ultrasound examinations of patients' leg vessels.
METHODS: The robotic system consists of a redundant 7-DoF serial manipulator to which a 3D ultrasound probe is attached. A compliant controller guides the transducer along the vessel with a defined contact force. Visual servoing corrects the position of the probe during the scan so that the vessel remains properly visualized. To track the vessel's position, methods based on template matching and on Doppler sonography were used.
RESULTS: The system successfully scanned the femoral artery of seven volunteers automatically over a distance of 20 cm. The Doppler-based approach was particularly robust and determined the vessel's position with an accuracy of 10.7 (±3.1) px, outperforming the template matching approach, which achieved 13.9 (±6.4) px.
CONCLUSIONS: The developed system enables automated robotic ultrasound examinations of vessels and thus represents an opportunity to reduce radiation exposure and staff workload. The integration of Doppler ultrasound improves the accuracy and robustness of vessel tracking and could thus contribute to the realization of routine robotic vascular examinations and potential endovascular interventions.
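The Doppler-based tracking idea can be sketched as follows: threshold the colour-Doppler power map, take the centroid of the flow region, and use its offset from the image centre as the visual-servoing error. The threshold, pixel spacing, and gain below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of Doppler-based vessel localisation for visual servoing.
import numpy as np

def vessel_offset_px(doppler_power: np.ndarray, thresh: float = 0.3):
    """Return (row, col) centroid of flow pixels and the lateral error in px."""
    mask = doppler_power > thresh * doppler_power.max()
    if not mask.any():
        return None, 0.0                      # no flow detected in this frame
    rows, cols = np.nonzero(mask)
    centroid = (rows.mean(), cols.mean())
    lateral_error = centroid[1] - doppler_power.shape[1] / 2.0
    return centroid, lateral_error

def lateral_correction_mm(lateral_error_px, px_spacing_mm=0.2, gain=0.5):
    """Proportional correction applied to the probe between frames."""
    return gain * lateral_error_px * px_spacing_mm
```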
Affiliation(s)
- Jonas Osburg
- Institute for Robotics and Cognitive Systems, University of Luebeck, Ratzeburger Allee 160, Luebeck, 23562, Germany.
- Alexandra Scheibert
- Clinic for Surgery, Division for Vascular and Endovascular Surgery, University Clinic Schleswig-Holstein Campus Luebeck, Ratzeburger Allee 160, 23538, Luebeck, Germany
- Marco Horn
- Clinic for Surgery, Division for Vascular and Endovascular Surgery, University Clinic Schleswig-Holstein Campus Luebeck, Ratzeburger Allee 160, 23538, Luebeck, Germany
- Ravn Pater
- Clinic for Surgery, Division for Vascular and Endovascular Surgery, University Clinic Schleswig-Holstein Campus Luebeck, Ratzeburger Allee 160, 23538, Luebeck, Germany
- Floris Ernst
- Institute for Robotics and Cognitive Systems, University of Luebeck, Ratzeburger Allee 160, Luebeck, 23562, Germany
3. Ernst F, Osburg J, Tüshaus L. SonoBox: development of a robotic ultrasound tomograph for the ultrasound diagnosis of paediatric forearm fractures. Front Robot AI 2024;11:1405169. PMID: 39233849. PMCID: PMC11371668. DOI: 10.3389/frobt.2024.1405169.
Abstract
Introduction: Paediatric forearm fractures are a prevalent reason for medical consultation, often requiring diagnostic X-rays that present a risk due to ionising radiation, especially concerning given the sensitivity of children's tissues. This paper explores the efficacy of ultrasound imaging, particularly through the development of the SonoBox system, as a safer, non-ionising alternative. With emerging evidence supporting ultrasound as a viable method for fracture assessment, innovations like SonoBox will become increasingly important.
Materials and methods: In our project, we want to advance ultrasound-based, contact-free, and automated cross-sectional imaging for diagnosing paediatric forearm fractures. To this end, we are building a technical platform that navigates a commercially available ultrasound probe around the extremity within a water-filled tank, utilising intelligent robot control and image processing methods to generate a comprehensive ultrasound tomogram. Safety and hygiene considerations, gender and diversity relevance, and the potential reduction of radiation exposure and examination pain are pivotal aspects of this endeavour.
Results: Preliminary experiments have demonstrated the feasibility of rapidly generating ultrasound tomographies in a water bath, overcoming challenges such as water turbulence during probe movement. The SonoBox prototype has shown promising results in transmitting position data for ultrasound imaging, indicating potential for autonomous, accurate, and potentially painless fracture diagnosis. The project outlines further goals, including the construction of prototypes, validation through patient studies, and development of a hygiene concept for clinical application.
Conclusion: The SonoBox project represents a significant step forward in paediatric fracture diagnostics, offering a safer, more comfortable alternative to traditional X-ray imaging. By automating the imaging process and removing the need for direct contact, SonoBox has the potential to improve clinical efficiency, reduce patient discomfort, and broaden the scope of ultrasound applications. Further research and development will focus on validating its effectiveness in clinical settings and exploring its utility in other medical and veterinary applications.
Affiliation(s)
- Floris Ernst
- Institute of Robotics and Cognitive Systems, University of Lübeck, Lübeck, Germany
- Jonas Osburg
- Institute of Robotics and Cognitive Systems, University of Lübeck, Lübeck, Germany
- Ludger Tüshaus
- Department of Paediatric Surgery, University Hospital Schleswig-Holstein, Lübeck, Germany
4. Liu T, Han S, Xie L, Xing W, Liu C, Li B, Ta D. Super-resolution reconstruction of ultrasound image using a modified diffusion model. Phys Med Biol 2024;69:125026. PMID: 38636526. DOI: 10.1088/1361-6560/ad4086.
Abstract
Objective: This study aims to perform super-resolution (SR) reconstruction of ultrasound images using a modified diffusion model, designated as the diffusion model for ultrasound image super-resolution (DMUISR). SR involves converting low-resolution images to high-resolution ones, and the proposed model is designed to enhance the suitability of diffusion models for this task in the context of ultrasound imaging.
Approach: DMUISR incorporates a multi-layer self-attention (MLSA) mechanism and a wavelet-transform-based low-resolution image (WTLR) encoder to enhance its suitability for ultrasound image SR tasks. The model takes interpolated and magnified images as input and outputs high-quality, detailed SR images. The study utilized 1,334 ultrasound images from the public fetal head-circumference dataset (HC18) for evaluation.
Main results: Experiments were conducted at 2×, 4×, and 8× magnification factors. DMUISR outperformed mainstream ultrasound SR methods (Bicubic, VDSR, DECUSR, DRCN, REDNet, SRGAN) across all scales, providing high-quality images with clear structures and rich detailed textures in both hard- and soft-tissue regions. DMUISR successfully accomplished multiscale SR reconstruction while suppressing over-smoothing and mode-collapse problems. Quantitative results showed that DMUISR achieved the best performance in terms of learned perceptual image patch similarity, with a significant decrease of over 50% at all three magnification factors (2×, 4×, and 8×), as well as improvements in peak signal-to-noise ratio and structural similarity index measure. Ablation experiments validated the effectiveness of the MLSA mechanism and WTLR encoder in improving DMUISR's SR performance. Furthermore, by reducing the number of diffusion steps, the computational time of DMUISR was shortened to nearly one-tenth of its original while maintaining image quality without significant degradation.
Significance: This study demonstrates that the modified diffusion model, DMUISR, provides superior performance for SR reconstruction of ultrasound images and has potential for improving imaging quality in the medical ultrasound field.
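A minimal sketch of conditional diffusion sampling for SR, assuming a DDPM-style reverse loop in PyTorch conditioned on the bicubically upsampled low-resolution image; the tiny stand-in denoiser and noise schedule are placeholders and omit DMUISR's MLSA mechanism and WTLR encoder.

```python
# Sketch of diffusion-based SR sampling: the denoiser sees the noisy image
# concatenated with the upsampled LR image and a standard reverse loop is run.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )
    def forward(self, x_t, lr_up, t):
        # t is ignored in this stand-in; real models embed the timestep
        return self.net(torch.cat([x_t, lr_up], dim=1))

@torch.no_grad()
def sample_sr(model, lr, scale=4, steps=50):
    lr_up = F.interpolate(lr, scale_factor=scale, mode="bicubic",
                          align_corners=False)
    betas = torch.linspace(1e-4, 2e-2, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(lr_up)
    for t in reversed(range(steps)):
        eps = model(x, lr_up, t)
        coef = (1 - alphas[t]) / torch.sqrt(1 - alpha_bar[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x

# usage: sr = sample_sr(TinyDenoiser(), torch.rand(1, 1, 32, 32))
```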
Affiliation(s)
- Tianyu Liu
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Shuai Han
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Linru Xie
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Wenyu Xing
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Chengcheng Liu
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- State Key Laboratory of Integrated Chips and Systems, Fudan University, Shanghai 201203, People's Republic of China
- Boyi Li
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Dean Ta
- State Key Laboratory of Integrated Chips and Systems, Fudan University, Shanghai 201203, People's Republic of China
- Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, People's Republic of China
5. Tang X, Wang H, Luo J, Jiang J, Nian F, Qi L, Sang L, Gan Z. Autonomous ultrasound scanning robotic system based on human posture recognition and image servo control: an application for cardiac imaging. Front Robot AI 2024;11:1383732. PMID: 38774468. PMCID: PMC11106497. DOI: 10.3389/frobt.2024.1383732.
Abstract
In traditional cardiac ultrasound diagnostics, planning the scanning path and adjusting the ultrasound window rely solely on the experience and intuition of the physician, which limits the efficiency and quality of cardiac imaging and increases the physician's workload. To overcome these challenges, this study introduces a robotic system for autonomous cardiac ultrasound scanning, with the goal of advancing both the degree of automation and the imaging quality of cardiac ultrasound examinations. The system operates autonomously in two stages. In the autonomous path-planning stage, it automatically adjusts the camera's positioning angle using a camera posture adjustment method based on the central region of the human body and its planar normal vectors; it precisely segments the human body point cloud with efficient point cloud processing techniques and localizes the region of interest (ROI) from human-body keypoints; and it then independently plans the scanning path and the initial probe position by applying isometric path slicing and B-spline curve fitting. In the autonomous scanning stage, an innovative servo control strategy based on cardiac image edge correction optimizes the quality of the cardiac ultrasound window, and position compensation through admittance control enhances the stability of autonomous cardiac ultrasound imaging, yielding a detailed view of the heart's structure and function. A series of experimental validations on human and cardiac models assessed the system's effectiveness and precision in camera pose correction, scanning path planning, and control of cardiac ultrasound imaging quality, demonstrating its significant potential for clinical ultrasound scanning applications.
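The path-planning step can be illustrated with a B-spline fit through surface waypoints using SciPy; the waypoints, sample count, and equal-parameter resampling below are assumptions and stand in for the paper's isometric path slicing.

```python
# Minimal sketch: fit a smooth B-spline through 3D waypoints derived from body
# keypoints, then resample it to obtain probe target positions along the scan.
import numpy as np
from scipy.interpolate import splprep, splev

def fit_scan_path(waypoints: np.ndarray, n_targets: int = 50, smooth: float = 0.0):
    """waypoints: (N, 3) array of x/y/z positions along the region of interest."""
    tck, _ = splprep(waypoints.T, s=smooth)         # cubic B-spline by default
    u = np.linspace(0.0, 1.0, n_targets)
    x, y, z = splev(u, tck)
    return np.stack([x, y, z], axis=1)              # (n_targets, 3) probe targets

# usage with made-up chest-surface waypoints (metres)
pts = np.array([[0.00, 0.0, 0.10], [0.03, 0.01, 0.11],
                [0.06, 0.02, 0.12], [0.09, 0.02, 0.11], [0.12, 0.01, 0.10]])
path = fit_scan_path(pts)
```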
Affiliation(s)
- Xiuhong Tang
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Hongbo Wang
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Jingjing Luo
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Jinlei Jiang
- Intelligent Robot Engineering Research Center of Ministry of Education, Shanghai, China
- Fan Nian
- Intelligent Robot Engineering Research Center of Ministry of Education, Shanghai, China
- Lizhe Qi
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Lingfeng Sang
- Institute of Intelligent Medical Care Technology, Ningbo, China
- Zhongxue Gan
- Academy for Engineering and Technology, Fudan University, Shanghai, China
6. Zhang P, Gao C, Huang Y, Chen X, Pan Z, Wang L, Dong D, Li S, Qi X. Artificial intelligence in liver imaging: methods and applications. Hepatol Int 2024;18:422-434. PMID: 38376649. DOI: 10.1007/s12072-023-10630-w.
Abstract
Liver disease is regarded as one of the major health threats to humans. Radiographic assessments hold promise for meeting the current demands of precisely diagnosing and treating liver diseases, and artificial intelligence (AI), which excels at automatically making quantitative assessments of complex medical image characteristics, has made great strides in supporting the qualitative interpretation of medical imaging by clinicians. Here, we review the current state of medical-imaging-based AI methodologies and their applications in the management of liver diseases. We summarize representative AI methodologies in liver imaging, with a focus on deep learning, and illustrate their promising clinical applications across the spectrum of precise liver disease detection, diagnosis and treatment. We also address the current challenges and future perspectives of AI in liver imaging, with an emphasis on feature interpretability, multimodal data integration and multicenter studies. Taken together, AI methodologies, combined with the large volume of available medical image data, may shape the future of liver disease care.
Affiliation(s)
- Peng Zhang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Chaofei Gao
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Yifei Huang
- Department of Gastroenterology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xiangyi Chen
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Zhuoshi Pan
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Lan Wang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Di Dong
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Shao Li
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Xiaolong Qi
- Center of Portal Hypertension, Department of Radiology, Zhongda Hospital, Medical School, Nurturing Center of Jiangsu Province for State Laboratory of AI Imaging & Interventional Radiology, Southeast University, Nanjing, China
7. Gao X, Lv Q, Hou S. Progress in the application of portable ultrasound combined with artificial intelligence in pre-hospital emergency and disaster sites. Diagnostics (Basel) 2023;13:3388. PMID: 37958284. PMCID: PMC10649742. DOI: 10.3390/diagnostics13213388.
Abstract
With the miniaturization of ultrasound and the development of artificial intelligence, its application at disaster scenes and in pre-hospital emergency care has become increasingly common. This study summarizes the literature of the past decade on portable ultrasound in pre-hospital emergency and disaster-scene treatment and reviews its development and applications. Portable ultrasound equipment can be used to diagnose abdominal bleeding, limb fractures, hemopneumothorax, pericardial effusion, and other injuries, allowing trauma to be diagnosed pre-hospital and providing guidance for subsequent triage and rescue. In early rescue, portable ultrasound can guide emergency procedures such as tracheal intubation, pericardial cavity puncture, and thoracic and abdominal puncture, improving their accuracy and timeliness. In addition, with the development of artificial intelligence (AI), AI-assisted diagnosis can raise the level of ultrasound diagnosis at disaster sites. A portable ultrasound diagnosis system equipped with an AI robotic arm can support rapid pre-screening classification and concise diagnosis and treatment of mass casualties, thus providing a reliable basis for casualty classification and evacuation at disaster and accident sites.
Affiliation(s)
- Xing Gao
- Tianjin University Tianjin Hospital, Tianjin 300211, China
- Institution of Disaster and Emergency Medicine, Tianjin University, Tianjin 300072, China
- Key Laboratory of Medical Rescue Key Technology and Equipment, Ministry of Emergency Management, Tianjin 300072, China
- Qi Lv
- Institution of Disaster and Emergency Medicine, Tianjin University, Tianjin 300072, China
- Key Laboratory of Medical Rescue Key Technology and Equipment, Ministry of Emergency Management, Tianjin 300072, China
- Shike Hou
- Institution of Disaster and Emergency Medicine, Tianjin University, Tianjin 300072, China
- Key Laboratory of Medical Rescue Key Technology and Equipment, Ministry of Emergency Management, Tianjin 300072, China
8. Jiang Z, Salcudean SE, Navab N. Robotic ultrasound imaging: state-of-the-art and future perspectives. Med Image Anal 2023;89:102878. PMID: 37541100. DOI: 10.1016/j.media.2023.102878.
Abstract
Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis owing to its non-invasive, radiation-free, real-time imaging. However, free-hand US examinations are highly operator-dependent. Robotic US systems (RUSS) aim to overcome this shortcoming by offering reproducibility, while also improving dexterity and enabling intelligent, anatomy- and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also holds the potential to provide medical interventions for populations affected by a shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. For teleoperated RUSS, we summarize the technical developments and clinical evaluations, respectively. The survey then focuses on recent work on autonomous robotic US imaging. We demonstrate that machine learning and artificial intelligence provide the key techniques enabling intelligent, patient- and process-specific, motion- and deformation-aware robotic image acquisition. We also show that research on artificial intelligence for autonomous RUSS has directed the research community toward understanding and modeling expert sonographers' semantic reasoning and action; we call this process the recovery of the "language of sonography". This side result of research on autonomous robotic US acquisition could prove as valuable and essential as the progress made in the robotic US examination itself. This article provides both engineers and clinicians with a comprehensive understanding of RUSS by surveying its underlying techniques. Additionally, we present the challenges that the scientific community must face in the coming years to achieve the ultimate goal of developing intelligent robotic sonographer colleagues, capable of collaborating with human sonographers in dynamic environments to enhance both diagnostic and intraoperative imaging.
Affiliation(s)
- Zhongliang Jiang
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany.
- Septimiu E Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany; Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
9. Bengs M, Sprenger J, Gerlach S, Neidhardt M, Schlaefer A. Real-time motion analysis with 4D deep learning for ultrasound-guided radiotherapy. IEEE Trans Biomed Eng 2023;70:2690-2699. PMID: 37030809. DOI: 10.1109/tbme.2023.3262422.
Abstract
Motion compensation in radiation therapy is a challenging scenario that requires estimating and forecasting the motion of tissue structures to deliver the target dose. Ultrasound offers direct imaging of tissue in real time and is being considered for image guidance in radiation therapy. Recently, fast volumetric ultrasound has gained traction, but motion analysis with such high-dimensional data remains difficult. While deep learning could bring many advantages, such as fast data processing and high performance, it remains unclear how to process sequences of hundreds of image volumes efficiently and effectively. We present a 4D deep learning approach for real-time motion estimation and forecasting using long-term 4D ultrasound data. Using motion traces acquired during radiation therapy combined with various tissue types, our results demonstrate that long-term motion estimation can be performed without markers, with a tracking error of 0.35 ± 0.2 mm and an inference time of less than 5 ms. We also demonstrate forecasting directly from the image data up to 900 ms into the future. Overall, our findings highlight that 4D deep learning is a promising approach for motion analysis during radiotherapy.
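A minimal sketch of a 4D (3D + time) motion forecaster in PyTorch: a small 3D CNN encodes each volume, an LSTM aggregates the sequence, and a linear head regresses a future 3D displacement. Layer sizes and the forecasting head are assumptions, not the architecture evaluated in the paper.

```python
# Sketch of spatio-temporal motion forecasting from a sequence of US volumes.
import torch
import torch.nn as nn

class MotionForecaster(nn.Module):
    def __init__(self, feat=64, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, feat),
        )
        self.temporal = nn.LSTM(feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)       # forecast 3D displacement (mm)

    def forward(self, seq):                    # seq: (B, T, 1, D, H, W)
        b, t = seq.shape[:2]
        feats = self.encoder(seq.flatten(0, 1)).view(b, t, -1)
        out, _ = self.temporal(feats)
        return self.head(out[:, -1])           # displacement at the horizon

# usage: MotionForecaster()(torch.rand(2, 8, 1, 32, 32, 32)) -> shape (2, 3)
```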
10. Seitz PK, Karger CP, Bendl R, Schwahofer A. Strategy for automatic ultrasound (US) probe positioning in robot-assisted ultrasound guided radiation therapy. Phys Med Biol 2023;68. PMID: 36584398. DOI: 10.1088/1361-6560/acaf46.
Abstract
Objective: As part of image-guided radiotherapy, ultrasound-guided radiotherapy (USgRT) is already in clinical use and is under investigation for robot-assisted systems (Ipsen 2021). It promises real-time tumor localization during irradiation (intrafractional) without extra dose. The ultrasound probe is held and guided by a robot. However, basic safety mechanisms and interaction strategies for a safe clinical procedure are still lacking. In this study, we investigate potential positioning strategies with safety mechanisms for safe robot-human interaction.
Approach: A compact setup of an ultrasound device, a lightweight robot, a tracking camera, a force sensor, and a control computer was integrated into a software application to represent a potential USgRT setup. To realize a clinical procedure, positioning strategies for the ultrasound head guided by the robot were developed, implemented, and tested. In addition, basic safety mechanisms for the robot, using the integrated force sensor, were implemented and tested through intentional collisions.
Main results: Various positioning methods, from manual guidance to completely automated procedures, were tested. Robot-guided methods achieved higher positioning accuracy and were faster to execute than conventional hand-guided methods. The developed safety mechanisms worked as intended, and the detected collision forces were below 20 N.
Significance: The study demonstrates the feasibility of a new approach for safe robotic ultrasound imaging, with a focus on abdominal use (liver, prostate, kidney). The safety measures applied here can be extended to other human-robot interactions and provide the basis for further studies in medical applications.
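The force-based safety mechanism can be pictured as a simple monitoring step in the control loop; the 20 N limit mirrors the collision forces reported above, while the sensor and robot interfaces below are placeholders.

```python
# Minimal sketch of a force-threshold safety check in a robot control loop.
import numpy as np

FORCE_LIMIT_N = 20.0

def safety_step(read_force, stop_robot, move_robot, target_pose):
    """One control-loop iteration; returns True if motion may continue."""
    f = np.asarray(read_force())          # (fx, fy, fz) from the force sensor
    if np.linalg.norm(f) >= FORCE_LIMIT_N:
        stop_robot()                      # e.g. switch to a compliant hold
        return False
    move_robot(target_pose)               # proceed towards the target pose
    return True
```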
Affiliation(s)
- Peter Karl Seitz
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- National Center for Radiation Research in Oncology (NCRO), Heidelberg Institute for Radiation Oncology (HIRO), Heidelberg, Germany
- University of Heidelberg, Faculty of Medicine Heidelberg, Heidelberg, Germany
- Medical Informatics, Heilbronn University, Heilbronn, Germany
- Christian P Karger
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- National Center for Radiation Research in Oncology (NCRO), Heidelberg Institute for Radiation Oncology (HIRO), Heidelberg, Germany
- Rolf Bendl
- Medical Informatics, Heilbronn University, Heilbronn, Germany
- Andrea Schwahofer
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- National Center for Radiation Research in Oncology (NCRO), Heidelberg Institute for Radiation Oncology (HIRO), Heidelberg, Germany
- Therapanacea, Paris, France
11. Ma X, Kuo WY, Yang K, Rahaman A, Zhang HK. A-SEE: active-sensing end-effector enabled probe self-normal-positioning for robotic ultrasound imaging applications. IEEE Robot Autom Lett 2022;7:12475-12482. PMID: 37325198. PMCID: PMC10266708. DOI: 10.1109/lra.2022.3218183.
Abstract
Conventional manual ultrasound (US) imaging is a physically demanding procedure for sonographers. A robotic US system (RUSS) has the potential to overcome this limitation by automating and standardizing the imaging procedure. It can also extend ultrasound accessibility to resource-limited environments with a shortage of human operators by enabling remote diagnosis. During imaging, keeping the US probe normal to the skin surface greatly benefits US image quality. However, RUSS has lacked an autonomous, real-time, low-cost method to align the probe orthogonally to the skin surface without pre-operative information. We propose a novel end-effector design that achieves self-normal-positioning of the US probe. The end-effector embeds four laser distance sensors to estimate the rotation required to reach the normal direction. We integrate the proposed end-effector with a RUSS, allowing the probe to be automatically and dynamically kept normal to the surface during US imaging. We evaluated the normal-positioning accuracy and the US image quality using a flat-surface phantom, an upper-torso mannequin, and a lung ultrasound phantom. Results show a normal-positioning accuracy of 4.17 ± 2.24 degrees on the flat surface and 14.67 ± 8.46 degrees on the mannequin. The quality of the US images collected by the RUSS from the lung ultrasound phantom was equivalent to that of manually collected images.
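A minimal sketch of normal estimation from four distance readings, assuming sensors at known lateral offsets and a least-squares plane fit; the geometry and sign conventions are illustrative, not the A-SEE hardware design.

```python
# Sketch: fit a plane to four laser distance samples around the probe and use
# its normal to derive tilt corrections that align the probe with the surface.
import numpy as np

# assumed sensor offsets in the probe frame (metres): front, back, left, right
SENSOR_XY = np.array([[0.02, 0.0], [-0.02, 0.0], [0.0, 0.02], [0.0, -0.02]])

def surface_normal(distances):
    """distances: 4 readings along the probe axis; returns a unit normal."""
    A = np.column_stack([SENSOR_XY, np.ones(len(distances))])
    (a, b, _), *_ = np.linalg.lstsq(A, np.asarray(distances), rcond=None)
    n = np.array([-a, -b, 1.0])           # normal of plane z = a*x + b*y + c
    return n / np.linalg.norm(n)

def tilt_errors(normal):
    """Rotation errors (rad) about the probe x- and y-axes (sign convention assumed)."""
    nx, ny, nz = normal
    return np.arctan2(ny, nz), -np.arctan2(nx, nz)

# usage: tilt_errors(surface_normal([0.051, 0.049, 0.050, 0.052]))
```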
Affiliation(s)
- Xihan Ma
- Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
- Wen-Yi Kuo
- Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
- Kehan Yang
- Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
- Ashiqur Rahaman
- Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
- Haichong K Zhang
- Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
12. AI-based optimization for US-guided radiation therapy of the prostate. Int J Comput Assist Radiol Surg 2022;17:2023-2032. PMID: 35593988. PMCID: PMC9515059. DOI: 10.1007/s11548-022-02664-6.
Abstract
OBJECTIVES: Fast volumetric ultrasound presents an interesting modality for continuous, real-time intra-fractional target tracking in radiation therapy of lesions in the abdomen. However, placing the ultrasound probe close to the target structures blocks some beam directions.
METHODS: To handle the combinatorial complexity of jointly searching for the ultrasound-robot pose and an optimal subset of treatment beams, we combine CNN-based candidate beam selection, simulated annealing for setup optimization of the ultrasound robot, and linear optimization of the treatment plan into an AI-based approach. For 50 prostate cases previously treated with the CyberKnife, we study setup and treatment plan optimization when including robotic ultrasound guidance.
RESULTS: The CNN-based search substantially outperforms previous randomized heuristics, increasing coverage from 93.66 to 97.20% on average. Moreover, in some cases the total MU was also reduced, particularly for smaller target volumes. Results after AI-based optimization are similar for treatment plans with and without beam blocking due to ultrasound guidance.
CONCLUSIONS: AI-based optimization allows for a fast and effective search for configurations for robotic ultrasound-guided radiation therapy. The negative impact of the ultrasound robot on plan quality can be successfully mitigated, resulting in only minor differences.
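A minimal sketch of the setup-optimization idea, assuming simulated annealing over a discrete set of candidate probe poses with a placeholder plan-quality score; the paper's CNN beam selection and linear-programming plan optimization are not reproduced here.

```python
# Sketch: simulated annealing over candidate ultrasound-probe poses, scoring
# each pose by the plan quality achievable with the beams it leaves unblocked.
import math
import random

def simulated_annealing(poses, plan_score, n_iter=500, t0=1.0, cooling=0.995):
    """poses: list of candidate probe poses; plan_score(pose) -> higher is better."""
    current = random.choice(poses)
    current_score = plan_score(current)
    best, best_score = current, current_score
    temp = t0
    for _ in range(n_iter):
        cand = random.choice(poses)                   # random neighbour/restart
        cand_score = plan_score(cand)
        delta = cand_score - current_score
        if delta > 0 or random.random() < math.exp(delta / temp):
            current, current_score = cand, cand_score
            if current_score > best_score:
                best, best_score = current, current_score
        temp *= cooling
    return best, best_score
```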
13. Sutedjo V, Tirindelli M, Eilers C, Simson W, Busam B, Navab N. Acoustic shadowing aware robotic ultrasound: lighting up the dark. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3141451.