1. Du H, Zhang X, Zhang Y, Zhang F, Lin L, Huang T. A review of robot-assisted ultrasound examination: Systems and technology. Int J Med Robot 2024; 20:e2660. [PMID: 38978325] [DOI: 10.1002/rcs.2660]
Abstract
BACKGROUND At present, the number and skill level of ultrasound (US) physicians cannot meet clinical demand, and medical US robots could go a long way towards easing this shortage of medical resources. METHODS Handheld, semi-automatic and automatic US examination robot systems are summarised according to their degree of automation. Because scanning path planning and robot control are key to ensuring that these systems acquire high-quality images, US scanning path planning and control methods are also summarised, and research progress and future trends are discussed. RESULTS A variety of US robot systems have been applied to various medical tasks. As automation continues to improve, these systems provide clinicians with high-quality US images and image guidance. CONCLUSION Although the development of medical US robots still faces challenges, continued progress in robotics and communication technology gives them great development potential and broad application prospects.
Affiliation(s)
- Haiyan Du: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin, China
- Xinran Zhang: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin, China
- Yongde Zhang: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin, China
- Fujun Zhang: Department of Minimally Invasive Interventional Therapy, Sun Yat-sen University Cancer Center, Guangzhou, China
- Letao Lin: Department of Minimally Invasive Interventional Therapy, Sun Yat-sen University Cancer Center, Guangzhou, China
- Tao Huang: Department of Minimally Invasive Interventional Therapy, Sun Yat-sen University Cancer Center, Guangzhou, China
2. Su K, Liu J, Ren X, Huo Y, Du G, Zhao W, Wang X, Liang B, Li D, Liu PX. A fully autonomous robotic ultrasound system for thyroid scanning. Nat Commun 2024; 15:4004. [PMID: 38734697] [PMCID: PMC11519952] [DOI: 10.1038/s41467-024-48421-y]
Abstract
The current thyroid ultrasound workflow relies heavily on the experience and skills of the sonographer and the expertise of the radiologist, and the process is physically and cognitively exhausting. In this paper, we report a fully autonomous robotic ultrasound system that can scan thyroid regions without human assistance and identify malignant nodules. In this system, human skeleton point recognition, reinforcement learning, and force feedback are used to address the difficulties of locating thyroid targets. The orientation of the ultrasound probe is adjusted dynamically via Bayesian optimization. Experimental results on human participants demonstrated that the system can perform high-quality ultrasound scans, close to manual scans obtained by clinicians. It also has the potential to detect thyroid nodules and provide data on nodule characteristics for American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS) calculation.
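To make the Bayesian-optimization step concrete: the probe-orientation search described in this abstract maps naturally onto an off-the-shelf Gaussian-process loop. The sketch below is an illustration only, not the authors' code; the image_quality() callback, its synthetic peak, the angle bounds, and the call budget are all invented assumptions, with scikit-optimize's gp_minimize standing in for whatever optimizer the system actually uses.

```python
# Bayesian optimization of probe orientation -- illustrative sketch only.
# The objective, bounds, and budget are assumptions, not from the paper.
from skopt import gp_minimize
from skopt.space import Real

def image_quality(pitch_deg: float, roll_deg: float) -> float:
    """Stand-in for the real image-quality metric: a synthetic peak at
    (3 deg, -2 deg) so the example runs end-to-end. In a real system this
    would command the robot to the attitude, grab a frame, and score it."""
    return -((pitch_deg - 3.0) ** 2 + (roll_deg + 2.0) ** 2)

def objective(params):
    pitch, roll = params
    return -image_quality(pitch, roll)  # gp_minimize minimizes

result = gp_minimize(
    objective,
    dimensions=[Real(-15.0, 15.0, name="pitch_deg"),
                Real(-15.0, 15.0, name="roll_deg")],
    n_calls=20,          # acquisition budget per pose
    n_initial_points=5,  # random probes before fitting the GP
    random_state=0,
)
best_pitch, best_roll = result.x  # should land near (3, -2)
```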
Affiliation(s)
- Kang Su: School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Jingwei Liu: School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Xiaoqi Ren: School of Future Technology, South China University of Technology, Guangzhou, 511442, China; Peng Cheng Laboratory, Shenzhen, 518000, China
- Yingxiang Huo: School of Future Technology, South China University of Technology, Guangzhou, 511442, China; Peng Cheng Laboratory, Shenzhen, 518000, China
- Guanglong Du: School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Wei Zhao: Division of Vascular and Interventional Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Xueqian Wang: Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Bin Liang: Department of Automation, Tsinghua University, Beijing, 100854, China
- Di Li: School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, 510641, China
- Peter Xiaoping Liu: Department of Systems and Computer Engineering, Carleton University, Ottawa, ON, K1S 5B6, Canada
3. Tang X, Wang H, Luo J, Jiang J, Nian F, Qi L, Sang L, Gan Z. Autonomous ultrasound scanning robotic system based on human posture recognition and image servo control: an application for cardiac imaging. Front Robot AI 2024; 11:1383732. [PMID: 38774468] [PMCID: PMC11106497] [DOI: 10.3389/frobt.2024.1383732]
Abstract
In traditional cardiac ultrasound diagnostics, planning the scanning path and adjusting the ultrasound window rely solely on the experience and intuition of the physician, which not only limits the efficiency and quality of cardiac imaging but also increases the physician's workload. To overcome these challenges, this study introduces a robotic system for autonomous cardiac ultrasound scanning, with the goal of advancing both the degree of automation and the quality of imaging in cardiac ultrasound examinations. The system achieves autonomy in two stages. In the autonomous path-planning stage, it adjusts the camera's positioning angle automatically using a posture-adjustment method based on the central region of the human body and its planar normal vectors; it segments the human body point cloud precisely with efficient point cloud processing techniques and localizes the region of interest (ROI) from human body keypoints; it then plans the scanning path and the initial probe position independently by applying isometric path slicing and B-spline curve fitting. In the autonomous scanning stage, a servo control strategy based on cardiac image edge correction optimizes the quality of the cardiac ultrasound window, and position compensation through admittance control enhances the stability of autonomous cardiac ultrasound imaging, yielding a detailed view of the heart's structure and function. Experimental validations on human and cardiac models assessed the system's effectiveness and precision in camera pose correction, scanning path planning, and cardiac ultrasound image quality control, demonstrating significant potential for clinical ultrasound scanning applications.
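As a rough illustration of the B-spline path-fitting step, the sketch below fits a smooth curve through a handful of body-surface waypoints and resamples it at even parameter spacing; the waypoints are invented for the example, and SciPy's generic spline routines stand in for whatever fitting the authors actually implemented.

```python
# B-spline fitting over scan waypoints -- a minimal sketch, not the
# authors' implementation. Waypoints are made-up 3-D points on a
# chest-like surface; splprep/splev come from SciPy.
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical ROI waypoints (x, y, z) in metres, e.g. from point-cloud slicing.
waypoints = np.array([
    [0.00, 0.00, 0.10],
    [0.05, 0.02, 0.11],
    [0.10, 0.03, 0.12],
    [0.15, 0.02, 0.11],
    [0.20, 0.00, 0.10],
])

# Fit a cubic B-spline through the waypoints (s=0 -> interpolating spline).
tck, _ = splprep(waypoints.T, s=0.0, k=3)

# Resample densely at even parameter steps (approximately even arc length
# when waypoints are roughly equispaced, as assumed here).
u_fine = np.linspace(0.0, 1.0, 50)
path = np.stack(splev(u_fine, tck), axis=1)  # (50, 3) probe positions
```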
Affiliation(s)
- Xiuhong Tang: Academy for Engineering and Technology, Fudan University, Shanghai, China
- Hongbo Wang: Academy for Engineering and Technology, Fudan University, Shanghai, China
- Jingjing Luo: Academy for Engineering and Technology, Fudan University, Shanghai, China
- Jinlei Jiang: Intelligent Robot Engineering Research Center of Ministry of Education, Shanghai, China
- Fan Nian: Intelligent Robot Engineering Research Center of Ministry of Education, Shanghai, China
- Lizhe Qi: Academy for Engineering and Technology, Fudan University, Shanghai, China
- Lingfeng Sang: Institute of Intelligent Medical Care Technology, Ningbo, China
- Zhongxue Gan: Academy for Engineering and Technology, Fudan University, Shanghai, China
4. Okuzaki K, Koizumi N, Yoshinaka K, Nishiyama Y, Zhou J, Tsumura R. Rib region detection for scanning path planning for fully automated robotic abdominal ultrasonography. Int J Comput Assist Radiol Surg 2024; 19:449-457. [PMID: 37787939] [DOI: 10.1007/s11548-023-03019-5]
Abstract
PURPOSE Scanning path planning is an essential technology for fully automated ultrasound (US) robotics. During biliary scanning, the subcostal boundary is a critical body-surface landmark for scanning path planning, but it is often invisible, depending on the individual. This study developed a method of estimating the rib region for scanning path planning in fully automated robotic US systems. METHODS We proposed a method for determining the rib region using RGB-D images and respiratory variation, hypothesizing that the rib region can be detected from changes in body-surface position due to breathing. We generated a depth difference image, which clearly shows the rib region, by subtracting the depth image taken at the resting inspiratory position from the depth image taken at the maximum inspiratory position. The subcostal boundary position was then determined by training the YOLOv5 object detection model on these depth difference images. RESULTS In experiments with healthy subjects, the proposed rib detection method using the depth difference image achieved an intersection over union (IoU) of 0.951 and an average confidence of 0.77. The average error between the ground-truth and predicted positions was 16.5 mm in 3D space. These results were superior to rib detection using the RGB image alone. CONCLUSION The proposed depth difference imaging method, which measures respiratory variation, accurately estimated the rib region without contact or physician intervention. It should be useful for planning the scan path during biliary imaging.
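A minimal sketch of the depth-difference idea follows, assuming two registered depth frames (resting vs. maximum inspiration) are available as NumPy arrays; the placeholder frames, the normalisation, and the weight-file path are invented here, and the torch.hub entry point is the stock ultralytics loader rather than the authors' code.

```python
# Depth-difference image + YOLOv5 detection -- illustrative sketch.
import numpy as np
import torch

def depth_difference_image(depth_rest: np.ndarray,
                           depth_max_insp: np.ndarray) -> np.ndarray:
    """Map breathing-induced surface displacement to an 8-bit 3-channel image."""
    diff = depth_rest - depth_max_insp           # chest rises on inspiration
    diff = np.clip(diff, 0.0, None)
    if diff.max() > 0:
        diff = diff / diff.max()
    img = (diff * 255).astype(np.uint8)
    return np.stack([img] * 3, axis=-1)          # 3 channels for the detector

# Placeholder depth maps standing in for registered RGB-D captures.
rng = np.random.default_rng(0)
depth_rest = 0.50 + 0.01 * rng.random((480, 640)).astype(np.float32)
depth_max_insp = depth_rest - 0.01

# Hypothetical fine-tuned weights; 'custom' loading is standard YOLOv5 usage.
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="rib_region_yolov5.pt")
detections = model(depth_difference_image(depth_rest, depth_max_insp))
boxes = detections.xyxy[0]  # (N, 6): x1, y1, x2, y2, confidence, class
```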
Affiliation(s)
- Koudai Okuzaki: Health and Medical Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba, Ibaraki, Japan; Graduate School of Informatics and Engineering, The University of Electro-Communications, Chofu, Tokyo, Japan
- Norihiro Koizumi: Graduate School of Informatics and Engineering, The University of Electro-Communications, Chofu, Tokyo, Japan
- Kiyoshi Yoshinaka: Health and Medical Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba, Ibaraki, Japan
- Yu Nishiyama: Graduate School of Informatics and Engineering, The University of Electro-Communications, Chofu, Tokyo, Japan
- Jiayi Zhou: Graduate School of Informatics and Engineering, The University of Electro-Communications, Chofu, Tokyo, Japan
- Ryosuke Tsumura: Health and Medical Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba, Ibaraki, Japan
5. Hao M, Guo J, Liu C, Chen C, Wang S. Development and preliminary testing of a prior knowledge-based visual navigation system for cardiac ultrasound scanning. Biomed Eng Lett 2024; 14:307-316. [PMID: 38374906] [PMCID: PMC10874367] [DOI: 10.1007/s13534-023-00338-z]
Abstract
Purpose Ultrasound is widely used to diagnose disease and guide surgery because it is versatile, inexpensive and radiation-free. However, image acquisition depends on the operation of a professional sonographer, a skill that is difficult for non-sonographers to learn. Methods We propose a prior knowledge-based visual navigation method for obtaining three important standard ultrasound views of the heart, based on learning a sonographer's skill and on augmented reality prompts. Key information about probe movement during a professional sonographer's practice was captured on 14 volunteers using vision-based tracking and normalisation methods. An augmented reality-based navigation method was then proposed to guide operators with no ultrasound experience in finding standard views of the heart on a second set of three volunteers. Results Quantitative analysis and qualitative scoring showed that the proposed method can effectively guide non-sonographers to obtain standard views of diagnostic value. Conclusion The method proposed in this paper has clear application value in primary care, and expanding the data will allow the accuracy of the navigation to be further improved.
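One way to picture the tracking-and-normalisation step is to express each tracked probe pose in a body-fixed frame so that recordings from different volunteers become comparable. The sketch below does this with homogeneous transforms; the frame definitions and variable names are assumptions for illustration, not the paper's published formulation.

```python
# Normalising tracked probe poses into a body-fixed frame -- a sketch.
# T_cam_probe: probe pose in the camera frame (4x4 homogeneous matrix);
# T_cam_body: body frame (e.g. built from chest landmarks) in the camera
# frame. Both are assumed to come from a vision-based tracker.
import numpy as np

def normalize_probe_pose(T_cam_probe: np.ndarray,
                         T_cam_body: np.ndarray) -> np.ndarray:
    """Return the probe pose expressed in the body-fixed frame:
    T_body_probe = inv(T_cam_body) @ T_cam_probe."""
    return np.linalg.inv(T_cam_body) @ T_cam_probe

# Example: identity body frame, probe 5 cm above the body origin.
T_cam_body = np.eye(4)
T_cam_probe = np.eye(4)
T_cam_probe[:3, 3] = [0.0, 0.0, 0.05]
print(normalize_probe_pose(T_cam_probe, T_cam_body)[:3, 3])  # [0. 0. 0.05]
```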
Affiliation(s)
- Mingrui Hao: State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100043, China
- Jun Guo: Hangtian Center Hospital, Beijing, 100049, China
- Cuicui Liu: Hangtian Center Hospital, Beijing, 100049, China
- Chen Chen: State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Shuangyi Wang: State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100043, China; Centre for Artificial Intelligence and Robotics, Hong Kong Institute of Science & Innovation, Chinese Academy of Sciences, Hong Kong, China
6. Tan J, Li B, Leng Y, Li Y, Peng J, Wu J, Luo B, Chen X, Rong Y, Fu C. Fully automatic dual-probe lung ultrasound scanning robot for screening triage. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:975-988. [PMID: 36191095] [DOI: 10.1109/tuffc.2022.3211532]
Abstract
Two-dimensional lung ultrasound (LUS) has emerged widely as a rapid, noninvasive imaging tool for the detection and diagnosis of coronavirus disease 2019 (COVID-19). However, differences in ultrasound (US) imaging experience, such as in probe attitude control and force control, magnify differences between images and directly affect diagnostic results. In addition, frequent physical contact increases the risk of virus transmission between sonographer and patients. In this study, a fully automatic dual-probe US scanning robot for the acquisition of LUS images is proposed and developed. The trajectory was optimized with a velocity look-ahead strategy, improving the stability of the system's contact force by 24.13% and the scanning efficiency by 29.46%; the contact-force control of robotic automatic scanning was also 34.14 times better than that of traditional manual scanning, significantly improving the smoothness of scanning. Importantly, there was no significant difference in image quality between robotic automatic scanning and manual scanning. The scanning time for a single person is less than 4 min, which greatly improves the efficiency of screening triage for diagnosed and suspected COVID-19 patients and reduces the risk of virus exposure and spread.
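The velocity look-ahead idea can be sketched as scanning the upcoming waypoints, capping each segment's speed by its local turning angle, and then sweeping backward so the probe decelerates in advance of sharp turns. The waypoint format, speed caps, and acceleration limit below are invented values; the paper's actual trajectory optimization is certainly more involved.

```python
# Velocity look-ahead over a scan path -- simplified sketch.
# Caps speed by local turning angle, then back-propagates deceleration
# limits so the probe slows down *before* sharp turns. All numbers are
# illustrative, not from the paper.
import numpy as np

def lookahead_speeds(pts: np.ndarray, v_max=0.05, a_max=0.1) -> np.ndarray:
    """pts: (N, 3) waypoints [m]; returns a per-waypoint speed [m/s]."""
    n = len(pts)
    v = np.full(n, v_max)
    v[0] = v[-1] = 0.0                     # start and stop at rest
    for i in range(1, n - 1):              # curvature-based speed caps
        a, b = pts[i] - pts[i - 1], pts[i + 1] - pts[i]
        cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        v[i] = v_max * max(0.1, (1.0 + cos_t) / 2.0)  # slower on sharp turns
    for i in range(n - 2, -1, -1):         # backward pass: v^2 <= v_next^2 + 2*a*ds
        ds = np.linalg.norm(pts[i + 1] - pts[i])
        v[i] = min(v[i], np.sqrt(v[i + 1] ** 2 + 2 * a_max * ds))
    for i in range(1, n):                  # forward pass: acceleration limit
        ds = np.linalg.norm(pts[i] - pts[i - 1])
        v[i] = min(v[i], np.sqrt(v[i - 1] ** 2 + 2 * a_max * ds))
    return v
```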
7. Zhang B, Cong H, Shen Y, Sun M. Visual perception and convolutional neural network-based robotic autonomous lung ultrasound scanning localization system. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:961-974. [PMID: 37015119] [DOI: 10.1109/tuffc.2023.3263514]
Abstract
Amid the severe COVID-19 epidemic, lung ultrasound (LUS) has proved to be an effective and convenient method for diagnosing respiratory disease and evaluating its extent. However, traditional clinical ultrasound (US) scanning requires doctors not only to be in close contact with patients but also to have rich experience. To alleviate the shortage of medical resources and reduce doctors' work stress and risk of infection, we propose a visual perception and convolutional neural network (CNN)-based robotic autonomous LUS scanning localization system that realizes scanned-target recognition, probe pose solution and movement, and the acquisition of US images. The LUS scan targets are identified by a target segmentation and localization algorithm based on an improved CNN, using a depth camera to collect image information; the probe attitude is then solved by a method based on multiscale compensated normal vectors; finally, a position control strategy based on force feedback optimizes the position and attitude of the probe, which not only yields high-quality US images but also ensures the safety of patients and the system. Human LUS scanning experiments verify the accuracy and feasibility of the system. The positioning accuracy of the scanned targets is 15.63 ± 0.18 mm, and the distance accuracy and rotation-angle accuracy of the probe position calculation are 6.38 ± 0.25 mm and 8.60° ± 2.29°, respectively. More importantly, the high-quality US images obtained clearly capture the main pathological features of the lung. The system is expected to be applied in clinical practice.
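The core of any normal-vector attitude solver is estimating the local surface normal from the depth-camera point cloud. The sketch below uses plain single-scale PCA (smallest-variance direction of a local neighbourhood) as a stand-in for the paper's multiscale compensated normal-vector method, which is not reproduced here; the neighbourhood radius, orientation convention, and synthetic plane are assumptions.

```python
# Surface-normal estimation from a depth-camera point cloud -- a plain
# single-scale PCA sketch, not the paper's multiscale method.
import numpy as np

def estimate_normal(cloud: np.ndarray, target: np.ndarray,
                    radius: float = 0.02) -> np.ndarray:
    """cloud: (N, 3) points [m]; returns the unit normal near `target`."""
    nbrs = cloud[np.linalg.norm(cloud - target, axis=1) < radius]
    assert len(nbrs) >= 3, "need at least 3 neighbours to fit a plane"
    centered = nbrs - nbrs.mean(axis=0)
    # Smallest-singular-value right vector = direction of least variance
    # = local surface normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:          # orient consistently: +z, towards an overhead camera
        normal = -normal
    return normal / np.linalg.norm(normal)

# Example: synthetic tilted-plane patch (z = 0.2x) around the target point.
rng = np.random.default_rng(1)
xy = rng.uniform(-0.03, 0.03, size=(500, 2))
cloud = np.column_stack([xy, 0.2 * xy[:, 0]])
print(estimate_normal(cloud, np.zeros(3)))  # ~ [-0.196, 0, 0.981]
```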
8. Ma X, Kuo WY, Yang K, Rahaman A, Zhang HK. A-SEE: Active-sensing end-effector enabled probe self-normal-positioning for robotic ultrasound imaging applications. IEEE Robot Autom Lett 2022; 7:12475-12482. [PMID: 37325198] [PMCID: PMC10266708] [DOI: 10.1109/lra.2022.3218183]
Abstract
Conventional manual ultrasound (US) imaging is a physically demanding procedure for sonographers. A robotic US system (RUSS) has the potential to overcome this limitation by automating and standardizing the imaging procedure, and it extends US accessibility in resource-limited environments short of human operators by enabling remote diagnosis. During imaging, keeping the US probe normal to the skin surface largely benefits US image quality. However, RUSS has lacked an autonomous, real-time, low-cost method for aligning the probe orthogonal to the skin surface without pre-operative information. We propose a novel end-effector design that achieves self-normal-positioning of the US probe. The end-effector embeds four laser distance sensors to estimate the rotation required to reach the normal direction. We then integrate the proposed end-effector into a RUSS, which allows the probe to be kept automatically and dynamically normal to the surface during US imaging. We evaluated the normal-positioning accuracy and US image quality using a flat-surface phantom, an upper-torso mannequin, and a lung US phantom. Results show a normal-positioning accuracy of 4.17 ± 2.24 degrees on the flat surface and 14.67 ± 8.46 degrees on the mannequin. The quality of the RUSS-collected US images from the lung US phantom was equivalent to that of manually collected ones.
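The four-sensor geometry admits a very small closed-form sketch: with one laser rangefinder on each side of the probe at a known offset, the left/right and front/back reading differences give the two tilt angles needed to rotate onto the surface normal. The sensor layout, offset value, and readings below are assumptions for illustration, not the published A-SEE dimensions.

```python
# Tilt estimation from four laser distance sensors -- geometric sketch.
# Sensors sit at +/-offset along x (left/right) and along y (front/back)
# around the probe axis; the offset and readings are made-up values.
import math

def tilt_from_lasers(d_left: float, d_right: float,
                     d_front: float, d_back: float,
                     offset: float = 0.03) -> tuple:
    """Return (roll, pitch) in radians that would align the probe axis
    with the local surface normal, from the four range readings [m]."""
    roll = math.atan2(d_left - d_right, 2.0 * offset)    # tilt about y
    pitch = math.atan2(d_back - d_front, 2.0 * offset)   # tilt about x
    return roll, pitch

# Example: surface slopes up to the left by ~9.5 degrees.
roll, pitch = tilt_from_lasers(0.060, 0.050, 0.055, 0.055)
print(math.degrees(roll), math.degrees(pitch))  # ~9.46, 0.0
```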
Affiliation(s)
- Xihan Ma: Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
- Wen-Yi Kuo: Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
- Kehan Yang: Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
- Ashiqur Rahaman: Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA; Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
- Haichong K Zhang: Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA; Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
9. Jiang Z, Gao Y, Xie L, Navab N. Towards autonomous atlas-based ultrasound acquisitions in presence of articulated motion. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3180440]
Affiliation(s)
- Zhongliang Jiang: Chair for Computer Aided Medical Procedures and Augmented Reality (CAMP), Technical University of Munich, Garching, Germany
- Yuan Gao: Chair for Computer Aided Medical Procedures and Augmented Reality (CAMP), Technical University of Munich, Garching, Germany
- Le Xie: Institute of Forming Technology and Equipment and Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Nassir Navab: Chair for Computer Aided Medical Procedures and Augmented Reality (CAMP), Technical University of Munich, Garching, Germany