1
Xiao X, Zhang J, Shao Y, Liu J, Shi K, He C, Kong D. Deep Learning-Based Medical Ultrasound Image and Video Segmentation Methods: Overview, Frontiers, and Challenges. Sensors (Basel, Switzerland) 2025; 25:2361. PMID: 40285051; PMCID: PMC12031589; DOI: 10.3390/s25082361.
Abstract
The intricate imaging structures, artifacts, and noise present in ultrasound images and videos pose significant challenges for accurate segmentation. Deep learning has recently emerged as a prominent field and plays a crucial role in medical image processing. This paper reviews deep learning-based ultrasound image and video segmentation methods, summarizing the latest developments in the field, including diffusion models and segment-anything models as well as classical approaches. The methods are classified into four main categories according to their characteristics, and each category is outlined and evaluated in the corresponding section. We provide a comprehensive overview of deep learning-based ultrasound image segmentation methods, evaluation metrics, and common ultrasound datasets, aiming to explain the advantages and disadvantages of each method, summarize its achievements, and discuss challenges and future trends.
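Reviews of this kind typically evaluate segmentation quality with region-overlap metrics such as the Dice coefficient. As a minimal illustrative sketch (not code from the paper), a NumPy implementation for binary masks might look like this:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: perfect overlap yields 1.0, disjoint masks yield ~0.0
a = np.array([[0, 1], [1, 1]])
b = np.array([[0, 1], [1, 0]])
print(round(dice_coefficient(a, b), 3))  # 0.8
```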
Affiliation(s)
- Xiaolong Xiao
  - College of Mathematical Medicine, Zhejiang Normal University, Jinhua 321004, China
  - School of Computer Science and Technology (School of Artificial Intelligence), Zhejiang Normal University, Jinhua 321004, China
- Jianfeng Zhang
  - College of Mathematical Medicine, Zhejiang Normal University, Jinhua 321004, China
  - Puyang Institute of Big Data and Artificial Intelligence, Puyang 457006, China
- Yuan Shao
  - College of Mathematical Medicine, Zhejiang Normal University, Jinhua 321004, China
  - School of Computer Science and Technology (School of Artificial Intelligence), Zhejiang Normal University, Jinhua 321004, China
- Jialong Liu
  - College of Mathematical Medicine, Zhejiang Normal University, Jinhua 321004, China
  - School of Computer Science and Technology (School of Artificial Intelligence), Zhejiang Normal University, Jinhua 321004, China
- Kaibing Shi
  - College of Mathematical Medicine, Zhejiang Normal University, Jinhua 321004, China
- Chunlei He
  - College of Mathematical Medicine, Zhejiang Normal University, Jinhua 321004, China
- Dexing Kong
  - College of Mathematical Medicine, Zhejiang Normal University, Jinhua 321004, China
  - School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
2
Du H, Zhang X, Zhang Y, Zhang F, Lin L, Huang T. A review of robot-assisted ultrasound examination: Systems and technology. Int J Med Robot 2024; 20:e2660. PMID: 38978325; DOI: 10.1002/rcs.2660.
Abstract
BACKGROUND At present, neither the number nor the overall skill level of ultrasound (US) physicians can meet clinical demand, and medical ultrasound robots could go a long way toward easing this shortage of medical resources. METHODS Handheld, semi-automatic, and automatic ultrasound examination robot systems are summarized according to their degree of automation. Because ultrasound scanning path planning and robot control are key to obtaining high-quality images, path planning and control methods are also summarized, and research progress and future trends are discussed. RESULTS A variety of ultrasound robot systems have been applied to various medical tasks. As automation continues to improve, these systems provide clinicians with high-quality ultrasound images and image guidance. CONCLUSION Although the development of medical ultrasound robots still faces challenges, with continuing progress in robotics and communication technology they hold great development potential and broad application prospects.
Affiliation(s)
- Haiyan Du
  - Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin, China
- Xinran Zhang
  - Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin, China
- Yongde Zhang
  - Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin, China
- Fujun Zhang
  - Department of Minimally Invasive Interventional Therapy, Sun Yat-sen University Cancer Center, Guangzhou, China
- Letao Lin
  - Department of Minimally Invasive Interventional Therapy, Sun Yat-sen University Cancer Center, Guangzhou, China
- Tao Huang
  - Department of Minimally Invasive Interventional Therapy, Sun Yat-sen University Cancer Center, Guangzhou, China
3
Su K, Liu J, Ren X, Huo Y, Du G, Zhao W, Wang X, Liang B, Li D, Liu PX. A fully autonomous robotic ultrasound system for thyroid scanning. Nat Commun 2024; 15:4004. PMID: 38734697; PMCID: PMC11519952; DOI: 10.1038/s41467-024-48421-y.
Abstract
Current thyroid ultrasound relies heavily on the experience and skills of the sonographer and the expertise of the radiologist, and the process is physically and cognitively exhausting. In this paper, we report a fully autonomous robotic ultrasound system that can scan thyroid regions without human assistance and identify malignant nodules. In this system, human skeleton point recognition, reinforcement learning, and force feedback are used to address the difficulty of locating thyroid targets, and the orientation of the ultrasound probe is adjusted dynamically via Bayesian optimization. Experimental results on human participants demonstrate that the system can perform high-quality ultrasound scans, close to manual scans obtained by clinicians. It also has the potential to detect thyroid nodules and provide data on nodule characteristics for American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS) calculation.
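The probe-orientation step described above uses Bayesian optimization. Purely as a hedged sketch of that general idea, with a hypothetical image-quality function `score_fn` standing in for the system's real feedback (an assumption, not the authors' code), a Gaussian-process surrogate with an upper-confidence-bound acquisition could be written as:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def optimize_probe_tilt(score_fn, bounds=(-15.0, 15.0), n_init=3, n_iter=10, kappa=2.0):
    """Bayesian optimization of a 1-D probe tilt angle (degrees) to maximize image quality.

    score_fn(angle) -> float is a hypothetical image-quality measure; it stands in for
    whatever feedback the real imaging pipeline would return.
    """
    rng = np.random.default_rng(0)
    X = rng.uniform(*bounds, size=(n_init, 1))            # initial random tilt angles
    y = np.array([score_fn(x[0]) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

    grid = np.linspace(*bounds, 200).reshape(-1, 1)        # candidate tilt angles
    for _ in range(n_iter):
        gp.fit(X, y)
        mu, sigma = gp.predict(grid, return_std=True)
        ucb = mu + kappa * sigma                           # upper-confidence-bound acquisition
        x_next = grid[np.argmax(ucb)]
        X = np.vstack([X, x_next])
        y = np.append(y, score_fn(x_next[0]))
    return X[np.argmax(y), 0], y.max()

# Toy usage: pretend image quality peaks near a 5-degree tilt
best_angle, best_score = optimize_probe_tilt(lambda a: -(a - 5.0) ** 2)
```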
Affiliation(s)
- Kang Su
  - School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Jingwei Liu
  - School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Xiaoqi Ren
  - School of Future Technology, South China University of Technology, Guangzhou, 511442, China
  - Peng Cheng Laboratory, Shenzhen, 518000, China
- Yingxiang Huo
  - School of Future Technology, South China University of Technology, Guangzhou, 511442, China
  - Peng Cheng Laboratory, Shenzhen, 518000, China
- Guanglong Du
  - School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Wei Zhao
  - Division of Vascular and Interventional Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Xueqian Wang
  - Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Bin Liang
  - Department of Automation, Tsinghua University, Beijing, 100854, China
- Di Li
  - School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, 510641, China
- Peter Xiaoping Liu
  - Department of Systems and Computer Engineering, Carleton University, Ottawa, ON, K1S 5B6, Canada
4
Yao L, Zhao B, Wang Q, Wang Z, Zhang P, Qi X, Wong PK, Hu Y. A Decision-Making Algorithm for Robotic Breast Ultrasound High-Quality Imaging via Broad Reinforcement Learning From Demonstration. IEEE Robot Autom Lett 2024; 9:3886-3893. DOI: 10.1109/lra.2024.3371375.
Affiliation(s)
- Liang Yao
  - Department of Electromechanical Engineering, University of Macau, Macau, China
- Baoliang Zhao
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiong Wang
  - Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ziwen Wang
  - School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, China
- Peng Zhang
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xiaozhi Qi
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Pak Kin Wong
  - Department of Electromechanical Engineering, University of Macau, Macau, China
- Ying Hu
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
5
Monterossi G, Pedone Anchora L, Oliva R, Fagotti A, Fanfani F, Costantini B, Naldini A, Giannarelli D, Scambia G. The new surgical robot Hugo™ RAS for total hysterectomy: a pilot study. Facts Views Vis Obgyn 2023; 15:331-337. PMID: 38128091; PMCID: PMC10832655; DOI: 10.52054/fvvo.15.4.11.
Abstract
Background With the rising popularity of robotic surgery, Hugo™ RAS is one of the newest surgical robotic platforms. Investigating the reliability of this tool is the first step toward validating its use in clinical practice, and at present limited data are available on this question. The literature is constantly being enriched with initial experiences, but no study has yet demonstrated the safety of this platform. Objectives This study aimed to investigate its reliability during total hysterectomy. Materials and Methods A series of 20 consecutive patients scheduled for minimally invasive total hysterectomy, with or without salpingo-oophorectomy, for benign disease or prophylactic surgery were selected to undergo surgery with Hugo™ RAS. Data regarding any malfunction or breakdown of the robotic system, as well as intra- and post-operative complications, were prospectively recorded. Results Fifteen of the twenty patients (75.0%) underwent surgery for benign uterine diseases, and five (25.0%) underwent prophylactic surgery. Across the series, an instrument fault occurred in one case (5.0%); the problem was solved in 4.8 minutes and without complications for the patient. The median total operative time was 127 min (range, 98-255 min), and the median estimated blood loss was 50 mL (range, 30-125 mL). No intraoperative complications were observed. One patient (5.0%) developed a Clavien-Dindo grade 2 post-operative complication. Conclusions In this pilot study, Hugo™ RAS showed high reliability, similar to other robotic devices. What is new? The present findings suggest that Hugo™ RAS is a viable option for major surgical procedures and deserves further investigation in clinical practice.
6
Ning G, Liang H, Zhang X, Liao H. Autonomous Robotic Ultrasound Vascular Imaging System With Decoupled Control Strategy for External-Vision-Free Environments. IEEE Trans Biomed Eng 2023; 70:3166-3177. PMID: 37227912; DOI: 10.1109/tbme.2023.3279114.
Abstract
OBJECTIVE In clinical vascular ultrasound (US) diagnosis, the US probe is scanned over the surface of the human body to acquire images. However, because different body surfaces deform and vary from person to person, the scan trajectory on the skin does not correlate fully with the internal tissues, which poses a challenge for autonomous robotic US imaging in dynamic, external-vision-free environments. Here, we propose a decoupled control strategy for autonomous robotic vascular US imaging in an environment without external vision. METHODS The proposed system is divided into outer-loop posture control and inner-loop orientation control, determined by a reinforcement learning (RL) agent and a deep learning (DL) agent, respectively. First, a weakly supervised US vessel segmentation network is used to estimate the probe orientation. In the outer-loop control, a force-guided reinforcement learning agent maintains a specific angle between the US probe and the skin during dynamic imaging. Finally, the orientation and posture controls are integrated to complete the imaging process. RESULTS Evaluation experiments on several volunteers showed that our robotic ultrasound system (RUS) could autonomously perform vascular imaging on arms of different stiffness, curvature, and size without additional system adjustments. Furthermore, the system achieved reproducible imaging and reconstruction of dynamic targets without relying on vision-based surface information. CONCLUSION AND SIGNIFICANCE Our system and control strategy provide a novel framework for applying US robots in complex, external-vision-free environments.
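The decoupled structure described here, an inner loop that infers probe orientation from vessel segmentation and an outer loop that regulates probe-skin posture from force feedback, can be illustrated with a minimal sketch; the agent interfaces below are assumptions for illustration, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class ProbeCommand:
    orientation: tuple          # desired probe orientation (e.g., roll, pitch, yaw in rad)
    posture_correction: tuple   # small pose adjustment from the force-guided outer loop

class DecoupledUSController:
    """Illustrative composition of the decoupled control idea; the two agents
    (segmentation-based orientation estimator and force-guided RL policy) are
    stand-ins with assumed interfaces."""

    def __init__(self, orientation_agent, posture_agent):
        self.orientation_agent = orientation_agent  # inner loop: image -> orientation
        self.posture_agent = posture_agent          # outer loop: contact force -> posture action

    def step(self, us_image, contact_force) -> ProbeCommand:
        orientation = self.orientation_agent.estimate(us_image)   # from vessel segmentation
        correction = self.posture_agent.act(contact_force)        # keeps the probe-skin angle
        return ProbeCommand(orientation=orientation, posture_correction=correction)
```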
7
Jiang Z, Salcudean SE, Navab N. Robotic ultrasound imaging: State-of-the-art and future perspectives. Med Image Anal 2023; 89:102878. PMID: 37541100; DOI: 10.1016/j.media.2023.102878.
Abstract
Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis owing to its non-invasive, radiation-free, and real-time imaging. However, free-hand US examinations are highly operator-dependent. Robotic US systems (RUSS) aim to overcome this shortcoming by offering reproducibility, improved dexterity, and intelligent anatomy- and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also holds the potential to provide medical interventions for populations suffering from a shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. For teleoperated RUSS, we summarize technical developments and clinical evaluations. The survey then focuses on recent work on autonomous robotic US imaging. We show that machine learning and artificial intelligence are the key techniques enabling intelligent, patient- and process-specific, motion- and deformation-aware robotic image acquisition. We also show that research on artificial intelligence for autonomous RUSS has directed the community toward understanding and modeling expert sonographers' semantic reasoning and action, a process we call recovering the "language of sonography". This side result of research on autonomous robotic US acquisition could prove as valuable and essential as the progress made in robotic US examination itself. This article provides both engineers and clinicians with a comprehensive understanding of RUSS by surveying its underlying techniques, and presents the challenges the scientific community must face in the coming years to achieve the ultimate goal of developing intelligent robotic sonographer colleagues capable of collaborating with human sonographers in dynamic environments to enhance both diagnostic and intraoperative imaging.
Affiliation(s)
- Zhongliang Jiang
  - Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
- Septimiu E Salcudean
  - Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Nassir Navab
  - Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
  - Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
8
Huang T, Ma L, Zhang B, Liao H. Advances in deep learning: From diagnosis to treatment. Biosci Trends 2023:2023.01148. PMID: 37394613; DOI: 10.5582/bst.2023.01148.
Abstract
Deep learning has brought about a revolution in medical diagnosis and treatment. Its use in healthcare has grown exponentially in recent years, achieving physician-level accuracy in various diagnostic tasks and supporting applications such as electronic health records and clinical voice assistants. The emergence of medical foundation models, as a new approach to deep learning, has greatly improved the reasoning ability of machines. Characterized by large training datasets, context awareness, and multi-domain applicability, medical foundation models can integrate various forms of medical data to provide user-friendly outputs based on a patient's information. They have the potential to integrate current diagnostic and treatment systems, offering the ability to understand multi-modal diagnostic information and to reason in real time in complex surgical scenarios. Future research on foundation model-based deep learning will focus more on collaboration between physicians and machines. On the one hand, new deep learning methods will reduce physicians' repetitive labor and compensate for shortcomings in their diagnostic and treatment capabilities. On the other hand, physicians need to embrace new deep learning technologies, comprehend their principles and technical risks, and master the procedures for integrating them into clinical practice. Ultimately, the integration of artificial intelligence analysis with human decision-making will facilitate accurate, personalized medical care and enhance physicians' efficiency.
Affiliation(s)
- Tianqi Huang
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Longfei Ma
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Boyu Zhang
  - Research Center for Industries of the Future, and Key Laboratory of 3D Micro/Nano Fabrication and Characterization of Zhejiang Province, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
- Hongen Liao
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
9
He T, Guo C, Liu H, Jiang L. A venipuncture robot with decoupled position and attitude guided by near-infrared vision and force feedback. Int J Med Robot 2023:e2512. PMID: 36809654; DOI: 10.1002/rcs.2512.
Abstract
BACKGROUND This study aims to develop a venipuncture robot to replace manual venipuncture, easing the heavy workload, lowering the risk of 2019-nCoV infection, and boosting venipuncture success rates. METHODS The robot is designed with decoupled position and attitude. It consists of a 3-degree-of-freedom positioning manipulator to locate the needle and a 3-degree-of-freedom, always-vertical end-effector to adjust the yaw and pitch angles of the needle. Near-infrared vision and laser sensors obtain three-dimensional information on puncture positions, while changes in the contact force provide state feedback during puncture. RESULTS The experimental results demonstrate that the venipuncture robot has a compact design, flexible motion, high positioning accuracy and repeatability (0.11 and 0.04 mm), and a high success rate when puncturing a phantom. CONCLUSION This paper presents a venipuncture robot with decoupled position and attitude, guided by near-infrared vision and force feedback, to replace manual venipuncture. The robot is compact, dexterous, and accurate, which helps improve the success rate of venipuncture, and it is expected to achieve fully automatic venipuncture in the future.
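Decoupling attitude from position means the end-effector only has to realize a yaw and a pitch once the positioning manipulator has placed the needle at the entry point. A purely geometric sketch of that attitude computation (axis and sign conventions are assumptions, not taken from the paper) is:

```python
import numpy as np

def needle_yaw_pitch(entry_point, target_point):
    """Yaw and pitch (rad) that point the needle from an entry point toward a target.

    Geometric illustration of decoupling attitude from position; the conventions
    (z vertical, yaw about z, pitch as downward tilt) are assumed.
    """
    d = np.asarray(target_point, float) - np.asarray(entry_point, float)
    d /= np.linalg.norm(d)
    yaw = np.arctan2(d[1], d[0])     # rotation about the vertical (z) axis
    pitch = np.arcsin(-d[2])         # downward tilt of the needle axis
    return yaw, pitch

# Example: a target 10 mm forward and 5 mm below the entry point
yaw, pitch = needle_yaw_pitch((0, 0, 0), (10, 0, -5))
```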
Affiliation(s)
- Tianbao He
  - State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Chuangqiang Guo
  - State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Hansong Liu
  - State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Li Jiang
  - State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
10
Zhang T, Chen C, Shu M, Wang R, Di C, Li G. Constant Force-Tracking Control Based on Deep Reinforcement Learning in Dynamic Auscultation Environment. Sensors (Basel, Switzerland) 2023; 23:2186. PMID: 36850780; PMCID: PMC9965931; DOI: 10.3390/s23042186.
Abstract
Intelligent medical robots can effectively help doctors carry out a range of medical diagnoses and auxiliary treatments and help alleviate the current shortage of medical personnel. This paper therefore investigates how deep reinforcement learning can be used to solve dynamic medical auscultation tasks. We propose a constant force-tracking control method for dynamic environments, a physically consistent model that simulates the dynamic breathing process, and a reward function designed for efficient learning of the control strategy. In extensive simulation experiments, the error between the tracked normal force and the expected force remained essentially within ±0.5 N. The control strategy was also tested in a real environment; preliminary results show that it performs well in constant force tracking for medical auscultation tasks, with the contact force always within a safe and stable range and an average contact force of about 5.2 N.
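The abstract reports a reward function designed so that the tracked normal force stays within about ±0.5 N of the target and the average contact force is about 5.2 N. A hedged sketch of how such a force-tracking reward might be shaped (the form and coefficients are assumptions for illustration, not the authors' reward) is:

```python
def force_tracking_reward(measured_force: float,
                          target_force: float = 5.0,
                          tolerance: float = 0.5,
                          safety_limit: float = 10.0) -> float:
    """Illustrative shaped reward for constant force tracking.

    Dense negative term proportional to the tracking error, a small bonus when the
    error is inside the tolerance band, and a large penalty past a safety limit.
    """
    error = abs(measured_force - target_force)
    reward = -error                      # dense tracking term
    if error <= tolerance:
        reward += 1.0                    # bonus for staying within the band
    if measured_force > safety_limit or measured_force < 0.0:
        reward -= 10.0                   # safety penalty for unsafe contact
    return reward

# Example: 5.2 N gives a small positive reward, 8.0 N is penalized
print(force_tracking_reward(5.2), force_tracking_reward(8.0))
```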
Affiliation(s)
- Tieyi Zhang
  - School of Mathematics and Statistics, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
  - Shandong Artificial Intelligence Institute, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250014, China
- Chao Chen
  - School of Mathematics and Statistics, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
  - Shandong Artificial Intelligence Institute, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250014, China
- Minglei Shu
  - School of Mathematics and Statistics, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
  - Shandong Artificial Intelligence Institute, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250014, China
- Ruotong Wang
  - Shandong Artificial Intelligence Institute, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250014, China
- Chong Di
  - Shandong Artificial Intelligence Institute, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250014, China
- Gang Li
  - School of Mathematics and Statistics, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
11
Ma L, Wang R, He Q, Huang L, Wei X, Lu X, Du Y, Luo J, Liao H. Artificial intelligence-based ultrasound imaging technologies for hepatic diseases. iLIVER 2022; 1:252-264. DOI: 10.1016/j.iliver.2022.11.001.
12
Bi Y, Jiang Z, Gao Y, Wendler T, Karlas A, Navab N. VesNet-RL: Simulation-Based Reinforcement Learning for Real-World US Probe Navigation. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3176112.
Affiliation(s)
- Yuan Bi
  - Chair for Computer-Aided Medical Procedures and Augmented Reality, Technical University of Munich, Garching bei München, Germany
- Zhongliang Jiang
  - Chair for Computer-Aided Medical Procedures and Augmented Reality, Technical University of Munich, Garching bei München, Germany
- Yuan Gao
  - Chair for Computer-Aided Medical Procedures and Augmented Reality, Technical University of Munich, Garching bei München, Germany
- Thomas Wendler
  - Chair for Computer-Aided Medical Procedures and Augmented Reality, Technical University of Munich, Garching bei München, Germany
- Angelos Karlas
  - Institute of Biological and Medical Imaging, Helmholtz Zentrum München, München, Germany
- Nassir Navab
  - Chair for Computer-Aided Medical Procedures and Augmented Reality, Technical University of Munich, Garching bei München, Germany
13
Duan A, Victorova M, Zhao J, Sun Y, Zheng Y, Navarro-Alarcon D. Ultrasound-Guided Assistive Robots for Scoliosis Assessment With Optimization-Based Control and Variable Impedance. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3186504.
Affiliation(s)
- Anqing Duan
  - Department of Mechanical Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Maria Victorova
  - Department of Biomedical Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Jingyuan Zhao
  - Department of Biomedical Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Yuxiang Sun
  - Department of Mechanical Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Yongping Zheng
  - Department of Biomedical Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- David Navarro-Alarcon
  - Department of Mechanical Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong