1. Plebani M, Scott S, Simundic AM, Cornes M, Padoan A, Cadamuro J, Vermeersch P, Çubukçu HC, González Á, Nybo M, Salvagno GL, Costelloe SJ, Falbo R, von Meyer A, Iaccino E, Botrè F, Banfi G, Lippi G. New insights in preanalytical quality. Clin Chem Lab Med 2025:cclm-2025-0478. [PMID: 40266896] [DOI: 10.1515/cclm-2025-0478]
Abstract
The negative impact of preanalytical errors on the quality of laboratory testing is now universally recognized. Nonetheless, recent technological advancements and organizational transformations in healthcare - catalyzed by the still ongoing coronavirus disease 2019 (COVID-19) pandemic - have introduced new challenges and promising opportunities for improvement. The integration of value-based scoring systems for clinical laboratories and growing evidence linking preanalytical errors to patient outcomes and healthcare costs underscore the critical importance of this phase. Emerging topics in the preanalytical phase include the pursuit of a "greener" and more sustainable environment, innovations in self-sampling and automated blood collection, and strategies to minimize patient blood loss. Additionally, efforts to reduce costs and enhance sustainability through patient blood management have gained momentum. Digitalization and artificial intelligence (AI) offer transformative potential, with applications in sample labeling, recording collection events, and monitoring sample conditions during transportation. AI-driven tools can also streamline the preanalytical workflow and mitigate errors. Specific challenges include managing hemolysis and developing strategies to minimize its impact, addressing issues related to urine collection, and designing robust protocols for sample stability studies. The rise of decentralized laboratory testing presents unique preanalytical hurdles, while emerging areas such as liquid biopsy and anti-doping testing introduce novel complexities. Altogether, these advancements and challenges highlight the dynamic evolution of the preanalytical phase and the critical need for continuous innovation and standardization. This collective opinion paper, which summarizes the abstracts of lectures delivered at the two-day European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Preanalytical Conference entitled "New Insight in Preanalytical Quality" (Padova, Italy; December 12-13, 2025), provides a comprehensive overview of preanalytical errors, offers some important insights into less obvious sources of preanalytical vulnerability and proposes effective opportunities for improvement.
Affiliation(s)
- Mario Plebani
- Department of Medicine (DIMED), University of Padova, Padova, Italy
- Laboratory Medicine Unit, University-Hospital of Padova, Padova, Italy
- Ana-Maria Simundic
- Department of Global Medical & Clinical Affairs, Greiner Bio-One, Kremsmünster, Austria
- Faculty of Pharmacy and Biochemistry, University of Zagreb, Zagreb, Croatia
- Mike Cornes
- Worcestershire Acute Hospitals NHS Trust, Worcester, UK
- Andrea Padoan
- Department of Medicine (DIMED), University of Padova, Padova, Italy
- Laboratory Medicine Unit, University-Hospital of Padova, Padova, Italy
- Janne Cadamuro
- Department of Laboratory Medicine, University Hospital Salzburg, Paracelsus Medical University, Salzburg, Austria
- Pieter Vermeersch
- Clinical Department of Laboratory Medicine, University Hospitals Leuven, Leuven, Belgium
- Hikmet Can Çubukçu
- Department of Medical Biochemistry, Sincan Training and Research Hospital, Ankara, Türkiye
- Álvaro González
- Service of Biochemistry, Clínica Universidad de Navarra, Pamplona, Spain
- Mads Nybo
- Department of Clinical Biochemistry, Odense University Hospital, Odense, Denmark
- Seán J Costelloe
- Department of Clinical Biochemistry, Cork University Hospital, Cork, Republic of Ireland
- Rosanna Falbo
- Ultraspecialized Laboratory of Clinical Pathology and Substance Abuse, ASST Brianza-Hospital PioXI, Desio, Italy
- Alexander von Meyer
- Institute for Laboratory Medicine, Barmherzige Brüder Hospital, Munich, Germany
- Enrico Iaccino
- Department of Experimental and Clinical Medicine, University Magna Graecia of Catanzaro, Catanzaro, Italy
- Francesco Botrè
- Laboratorio Antidoping FMSI, Federazione Medico Sportiva Italiana, Rome, Italy
- REDs - Research and Expertise on Anti-Doping Sciences, Institute of Sport Science, University of Lausanne, Lausanne, Switzerland
- Giuseppe Lippi
- Section of Clinical Biochemistry, University of Verona, Verona, Italy
2. Lonsdale H, Burns ML, Epstein RH, Hofer IS, Tighe PJ, Gálvez Delgado JA, Kor DJ, MacKay EJ, Rashidi P, Wanderer JP, McCormick PJ. Strengthening Discovery and Application of Artificial Intelligence in Anesthesiology: A Report from the Anesthesia Research Council. Anesth Analg 2025; 140:920-930. [PMID: 40305700] [DOI: 10.1213/ane.0000000000007474]
Abstract
Interest in the potential applications of artificial intelligence in medicine, anesthesiology, and the world at large has never been higher. The Anesthesia Research Council steering committee formed an anesthesiologist artificial intelligence expert workgroup charged with evaluating the current state of artificial intelligence in anesthesiology, providing examples of future artificial intelligence applications and identifying barriers to artificial intelligence progress. The workgroup's findings are summarized here, starting with a brief introduction to artificial intelligence for clinicians, followed by overviews of current and anticipated artificial intelligence-focused research and applications in anesthesiology. Anesthesiology's progress in artificial intelligence is compared to that of other medical specialties, and barriers to artificial intelligence development and implementation in our specialty are discussed. The workgroup's recommendations address stakeholders in policymaking, research, development, implementation, training, and use of artificial intelligence-based tools for perioperative care.
Affiliation(s)
- Hannah Lonsdale
- Hannah Lonsdale, M.B.Ch.B.: Department of Anesthesiology, Vanderbilt University Medical Center, Monroe Carell Jr. Children's Hospital at Vanderbilt, Nashville, Tennessee
- Michael L Burns
- Michael L. Burns, Ph.D., M.D.: Department of Anesthesiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan
- Richard H Epstein
- Richard H. Epstein, M.D.: Department of Anesthesiology, Perioperative Medicine, and Pain Management, University of Miami Miller School of Medicine, Miami, Florida
- Ira S Hofer
- Ira S. Hofer, M.D.: Department of Anesthesiology, Perioperative and Pain Medicine, and Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, New York
- Patrick J Tighe
- Patrick J. Tighe, M.D., M.S.: Department of Anesthesiology, University of Florida College of Medicine, Gainesville, Florida
- Julia A Gálvez Delgado
- Julia A. Gálvez Delgado, M.D., M.B.I.: Department of Anesthesiology, Perioperative and Pain Medicine, Boston Children's Hospital, Boston, Massachusetts
- Daryl J Kor
- Daryl J. Kor, M.D., M.Sc.: Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, Minnesota
- Emily J MacKay
- Emily J. MacKay, D.O., M.S.: Department of Anesthesiology and Critical Care, Penn Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Parisa Rashidi
- Parisa Rashidi, Ph.D.: Department of Biomedical Engineering, University of Florida, Gainesville, Florida
- Jonathan P Wanderer
- Jonathan P. Wanderer, M.D., M.Phil.: Departments of Anesthesiology and Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee
- Patrick J McCormick
- Patrick J. McCormick, M.D., M.Eng.: Department of Anesthesiology and Critical Care Medicine, Memorial Sloan Kettering Cancer Center, New York, New York; Department of Anesthesiology, Weill Cornell Medicine, New York, New York
3. Lonsdale H, Burns ML, Epstein RH, Hofer IS, Tighe PJ, Gálvez Delgado JA, Kor DJ, Mackay EJ, Rashidi P, Wanderer JP, McCormick PJ. Strengthening Discovery and Application of Artificial Intelligence in Anesthesiology: A Report from the Anesthesia Research Council. Anesthesiology 2025; 142:599-610. [PMID: 40067037] [PMCID: PMC11906170] [DOI: 10.1097/aln.0000000000005326]
Abstract
Interest in the potential applications of artificial intelligence in medicine, anesthesiology, and the world at large has never been higher. The Anesthesia Research Council steering committee formed an anesthesiologist artificial intelligence expert workgroup charged with evaluating the current state of artificial intelligence in anesthesiology, providing examples of future artificial intelligence applications and identifying barriers to artificial intelligence progress. The workgroup's findings are summarized here, starting with a brief introduction to artificial intelligence for clinicians, followed by overviews of current and anticipated artificial intelligence-focused research and applications in anesthesiology. Anesthesiology's progress in artificial intelligence is compared to that of other medical specialties, and barriers to artificial intelligence development and implementation in our specialty are discussed. The workgroup's recommendations address stakeholders in policymaking, research, development, implementation, training, and use of artificial intelligence-based tools for perioperative care.
Affiliation(s)
- Hannah Lonsdale
- Department of Anesthesiology, Vanderbilt University School of Medicine, Monroe Carell Jr. Children’s Hospital at Vanderbilt, Nashville, TN, USA
- Michael L. Burns
- Department of Anesthesiology, Michigan Medicine, University of Michigan, Ann Arbor, MI, USA
- Richard H. Epstein
- Department of Anesthesiology, Perioperative Medicine, and Pain Management, University of Miami Miller School of Medicine, Miami, FL, USA
- Ira S. Hofer
- Department of Anesthesiology Pain and Perioperative Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Patrick J. Tighe
- Department of Anesthesiology, University of Florida College of Medicine, Gainesville, FL, USA
- Julia A. Gálvez Delgado
- Department of Anesthesiology, Perioperative and Pain Medicine, Boston Children’s Hospital, Boston, MA, USA
- Daryl J. Kor
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN, USA
- Emily J. Mackay
- Department of Anesthesiology and Critical Care, Penn Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Parisa Rashidi
- Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA
- Jonathan P. Wanderer
- Departments of Anesthesiology and Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, USA
- Patrick J. McCormick
- Department of Anesthesiology and Critical Care Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA; and Department of Anesthesiology, Weill Cornell Medicine, New York, NY, USA
4. Li Z, Xu Q. Multi-Section Magnetic Soft Robot with Multirobot Navigation System for Vasculature Intervention. Cyborg and Bionic Systems 2024; 5:0188. [PMID: 39610760] [PMCID: PMC11602701] [DOI: 10.34133/cbsystems.0188]
Abstract
Magnetic soft robots have recently become a promising technology that has been applied to minimally invasive cardiovascular surgery. This paper presents the analytical modeling of a novel multi-section magnetic soft robot (MS-MSR) with multi-curvature bending, which is maneuvered by an associated collaborative multirobot navigation system (CMNS) with magnetic actuation and ultrasound guidance targeted for intravascular intervention. The kinematic and dynamic analysis of the MS-MSR's telescopic motion is performed using the optimized Cosserat rod model by considering the effect of an external heterogeneous magnetic field, which is generated by a mobile magnetic actuation manipulator to adapt to complex steering scenarios. Meanwhile, an extracorporeal mobile ultrasound navigation manipulator is exploited to track the magnetic soft robot's distal tip motion to realize closed-loop control. We also design a quadratic programming-based optimization scheme to synchronize the multi-objective task-space motion of the CMNS with null-space projection. It allows the formulation of a comprehensive controller with motion priority for multirobot collaboration. Experimental results demonstrate that the proposed magnetic soft robot can be successfully navigated within the multi-bifurcation intravascular environment with a shape modeling error of 3.62 ± 1.28° and a tip error of 1.08 ± 0.45 mm under the actuation of the CMNS through in vitro ultrasound-guided vasculature interventional tests.
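As a generic illustration of the task-priority idea the abstract alludes to (null-space projection so a secondary objective cannot disturb the primary one), the following minimal sketch resolves two task-space objectives for a redundant manipulator. All names, dimensions, and numbers are hypothetical and do not come from the paper, whose full quadratic-programming formulation is not given in the abstract.

```python
import numpy as np

def prioritized_joint_velocities(J1, dx1, J2, dx2):
    """Classical two-level task-priority resolution.

    The primary task (J1, dx1) is satisfied as well as possible; the secondary
    task (J2, dx2) is tracked only inside the null space of the primary task,
    so it can never disturb it.
    """
    n = J1.shape[1]
    J1_pinv = np.linalg.pinv(J1)
    dq1 = J1_pinv @ dx1                        # primary-task solution
    N1 = np.eye(n) - J1_pinv @ J1              # null-space projector of task 1
    dq = dq1 + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq1)
    return dq

# Toy example: a 6-DOF manipulator, 3-D tip tracking as the primary task and a
# 1-D posture objective as the secondary task (all numbers are made up).
rng = np.random.default_rng(0)
J_tip = rng.standard_normal((3, 6))
J_posture = rng.standard_normal((1, 6))
dq = prioritized_joint_velocities(J_tip, np.array([0.01, 0.0, -0.02]),
                                  J_posture, np.array([0.005]))
print(np.round(dq, 4))
```

In the paper this priority mechanism is embedded in a quadratic program that coordinates the magnetic-actuation and ultrasound manipulators; the sketch shows only the null-space projection step.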
Affiliation(s)
- Zhengyang Li
- Department of Electromechanical Engineering, Faculty of Science and Technology, University of Macau, Macau, China
- Qingsong Xu
- Department of Electromechanical Engineering, Faculty of Science and Technology, University of Macau, Macau, China
5. Yang Z, Shi M, Gharbi Y, Qi Q, Shen H, Tao G, Xu W, Lyu W, Ji A. A Near-Infrared Imaging System for Robotic Venous Blood Collection. Sensors (Basel, Switzerland) 2024; 24:7413. [PMID: 39599189] [PMCID: PMC11598678] [DOI: 10.3390/s24227413]
Abstract
Venous blood collection is a widely used medical diagnostic technique, and with rapid advancements in robotics, robotic venous blood collection has the potential to replace traditional manual methods. The success of this robotic approach is heavily dependent on the quality of vein imaging. In this paper, we develop a vein imaging device based on the simulation analysis of vein imaging parameters and propose a U-Net+ResNet18 neural network for vein image segmentation. The U-Net+ResNet18 neural network integrates the residual blocks from ResNet18 into the encoder of the U-Net to form a new neural network. ResNet18 is pre-trained using the Bootstrap Your Own Latent (BYOL) framework, and its encoder parameters are transferred to the U-Net+ResNet18 neural network, enhancing the segmentation performance of vein images with limited labelled data. Furthermore, we optimize the AD-Census stereo matching algorithm by developing a variable-weight version, which improves its adaptability to image variations across different regions. Results show that, compared to U-Net, the BYOL+U-Net+ResNet18 method achieves an 8.31% reduction in Binary Cross-Entropy (BCE), a 5.50% reduction in Hausdorff Distance (HD), a 15.95% increase in Intersection over Union (IoU), and a 9.20% increase in the Dice coefficient (Dice), indicating improved image segmentation quality. The average error of the optimized AD-Census stereo matching algorithm is reduced by 25.69%, yielding a clear improvement in stereo matching performance. Future research will explore the application of the vein imaging system in robotic venous blood collection to facilitate real-time puncture guidance.
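As a rough sketch of the general idea of embedding ResNet18 residual stages into a U-Net encoder (using torchvision's resnet18 layer names; the decoder below is a simplified stand-in rather than the authors' exact architecture, and BYOL pre-training is represented only by an optional weight-loading hook), one might write:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class UpBlock(nn.Module):
    """Upsample the deeper feature map to the skip's size, concatenate, convolve."""
    def __init__(self, c_deep, c_skip, c_out):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(c_deep + c_skip, c_out, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, deep, skip):
        deep = F.interpolate(deep, size=skip.shape[2:], mode="bilinear", align_corners=False)
        return self.conv(torch.cat([deep, skip], dim=1))

class ResNet18UNet(nn.Module):
    """U-Net-style vein segmentation network whose encoder is a ResNet18;
    encoder_weights may hold a self-supervised (e.g., BYOL-pretrained) state dict."""
    def __init__(self, n_classes=1, encoder_weights=None):
        super().__init__()
        backbone = resnet18(weights=None)
        if encoder_weights is not None:
            backbone.load_state_dict(encoder_weights, strict=False)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu)  # 64 ch, 1/2 res
        self.pool = backbone.maxpool
        self.enc1, self.enc2 = backbone.layer1, backbone.layer2  # 64, 128 ch
        self.enc3, self.enc4 = backbone.layer3, backbone.layer4  # 256, 512 ch
        self.up3, self.up2 = UpBlock(512, 256, 256), UpBlock(256, 128, 128)
        self.up1, self.up0 = UpBlock(128, 64, 64), UpBlock(64, 64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        s0 = self.stem(x)               # 1/2 resolution
        e1 = self.enc1(self.pool(s0))   # 1/4
        e2 = self.enc2(e1)              # 1/8
        e3 = self.enc3(e2)              # 1/16
        e4 = self.enc4(e3)              # 1/32
        d = self.up3(e4, e3)
        d = self.up2(d, e2)
        d = self.up1(d, e1)
        d = self.up0(d, s0)
        logits = self.head(d)           # still at 1/2 resolution
        return F.interpolate(logits, size=x.shape[2:], mode="bilinear", align_corners=False)

model = ResNet18UNet(n_classes=1)
print(model(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 1, 256, 256])
```

A BYOL-pretrained backbone state dict, when available, would be passed via encoder_weights so the segmentation model starts from self-supervised features, mirroring the transfer step described in the abstract.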
Affiliation(s)
- Zhikang Yang
- Laboratory of Locomotion Bioinspiration and Intelligent Robots, College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China; (Z.Y.); (M.S.); (Y.G.); (Q.Q.); (H.S.)
- Mao Shi
- Laboratory of Locomotion Bioinspiration and Intelligent Robots, College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China; (Z.Y.); (M.S.); (Y.G.); (Q.Q.); (H.S.)
- Yassine Gharbi
- Laboratory of Locomotion Bioinspiration and Intelligent Robots, College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China; (Z.Y.); (M.S.); (Y.G.); (Q.Q.); (H.S.)
- Qian Qi
- Laboratory of Locomotion Bioinspiration and Intelligent Robots, College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China; (Z.Y.); (M.S.); (Y.G.); (Q.Q.); (H.S.)
- Huan Shen
- Laboratory of Locomotion Bioinspiration and Intelligent Robots, College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China; (Z.Y.); (M.S.); (Y.G.); (Q.Q.); (H.S.)
- Gaojian Tao
- Department of Pain Medicine, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, China
- Wu Xu
- Department of Neurosurgery, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, China
- Wenqi Lyu
- Faculty of Sciences, Engineering and Technology (SET), University of Adelaide, Adelaide, SA 5005, Australia
- Aihong Ji
- Jiangsu Key Laboratory of Bionic Materials and Equipment, Nanjing 210016, China
- State Key Laboratory of Mechanics and Control for Aerospace Structures, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
6. Lezcano DA, Zhetpissov Y, Bernardes MC, Moreira P, Tokuda J, Kim JS, Iordachita II. Hybrid Deep Learning and Model-Based Needle Shape Prediction. IEEE Sensors Journal 2024; 24:18359-18371. [PMID: 39301509] [PMCID: PMC11410364] [DOI: 10.1109/jsen.2024.3386120]
Abstract
Needle insertion using flexible bevel-tip needles is a common minimally invasive surgical technique for prostate cancer interventions. Flexible, asymmetric bevel-tip needles enable physicians to perform complex needle steering techniques to avoid sensitive anatomical structures during needle insertion. For accurate placement of the needle, predicting the trajectory of these needles intra-operatively would greatly reduce the need for frequent needle reinsertions, thus improving patient comfort and positive outcomes. However, predicting the trajectory of the needle during insertion is a complex task that has yet to be solved due to random needle-tissue interactions. In this paper, we present and validate for the first time a hybrid deep learning and model-based approach to the intra-operative needle shape prediction problem, leveraging a validated Lie-group theoretic model for needle shape representation. Furthermore, we present a novel self-supervised learning method used in conjunction with the Lie-group shape model for training these networks in the absence of data, enabling further refinement of these networks with transfer learning. Needle shape prediction was performed in single-layer and double-layer homogeneous phantom tissue for C- and S-shape needle insertions. Our method demonstrates an average root-mean-square prediction error of 1.03 mm over a dataset containing approximately 3,000 prediction samples with maximum prediction steps of 110 mm.
Affiliation(s)
- Dimitri A Lezcano
- Mechanical Engineering Department, Johns Hopkins University, MD 21201 USA
- Yernar Zhetpissov
- Mechanical Engineering Department, Johns Hopkins University, MD 21201 USA
- Mariana C Bernardes
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Pedro Moreira
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Junichi Tokuda
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Jin Seob Kim
- Mechanical Engineering Department, Johns Hopkins University, MD 21201 USA
7. Glielmo P, Fusco S, Gitto S, Zantonelli G, Albano D, Messina C, Sconfienza LM, Mauri G. Artificial intelligence in interventional radiology: state of the art. Eur Radiol Exp 2024; 8:62. [PMID: 38693468] [PMCID: PMC11063019] [DOI: 10.1186/s41747-024-00452-2]
Abstract
Artificial intelligence (AI) has demonstrated great potential in a wide variety of applications in interventional radiology (IR). Support for decision-making and outcome prediction, new functions and improvements in fluoroscopy, ultrasound, computed tomography, and magnetic resonance imaging, specifically in the field of IR, have all been investigated. Furthermore, AI represents a significant boost for fusion imaging and simulated reality, robotics, touchless software interactions, and virtual biopsy. The procedural nature, heterogeneity, and lack of standardisation slow down the process of adoption of AI in IR. Research in AI is in its early stages as current literature is based on pilot or proof of concept studies. The full range of possibilities is yet to be explored.
Relevance statement: Exploring AI's transformative potential, this article assesses its current applications and challenges in IR, offering insights into decision support and outcome prediction, imaging enhancements, robotics, and touchless interactions, shaping the future of patient care.
Key points:
- AI adoption in IR is more complex compared to diagnostic radiology.
- Current literature about AI in IR is in its early stages.
- AI has the potential to revolutionise every aspect of IR.
Affiliation(s)
- Pierluigi Glielmo
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy.
- Stefano Fusco
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- Salvatore Gitto
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Giulia Zantonelli
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- Domenico Albano
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Dipartimento di Scienze Biomediche, Chirurgiche ed Odontoiatriche, Università degli Studi di Milano, Via della Commenda, 10, 20122, Milan, Italy
- Carmelo Messina
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Luca Maria Sconfienza
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Giovanni Mauri
- Divisione di Radiologia Interventistica, IEO, IRCCS Istituto Europeo di Oncologia, Milan, Italy
8. Peng Q, Wang S, Han J, Huang C, Yu H, Li D, Qiu M, Cheng S, Wu C, Cai M, Fu S, Chen B, Wu X, Du S, Xu T. Thermal and Magnetic Dual-Responsive Catheter-Assisted Shape Memory Microrobots for Multistage Vascular Embolization. Research (Washington, D.C.) 2024; 7:0339. [PMID: 38550780] [PMCID: PMC10976590] [DOI: 10.34133/research.0339]
Abstract
Navigating catheters through complex vessels, such as those with sharp turns or multiple U-turns, remains challenging for vascular embolization. Here, we propose a novel multistage vascular embolization strategy for hard-to-reach vessels that releases untethered swimming shape-memory magnetic microrobots (SMMs) from the prior catheter to the vessel bifurcation. SMMs, made of organo-gel with magnetic particles, ensure biocompatibility, radiopacity, thrombosis, and fast thermal and magnetic responses. An SMM initially has a linear shape with a 0.5-mm diameter at 20 °C and is inserted in a catheter. It transforms into a predetermined helix within 2 s at 38 °C blood temperature after being pushed out of the catheter into the blood. SMMs enable agile swimming in confined and tortuous vessels and can swim upstream using helical propulsion with rotating magnetic fields. Moreover, we validated this multistage vascular embolization in living rabbits, completing 100-cm travel and renal artery embolization in 2 min. After 4 weeks, the SMMs maintained the embolic position, and the kidney volume decreased by 36%.
Affiliation(s)
- Qianbi Peng
- Guangdong Provincial Key Lab of Robotics and Intelligent Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Shu Wang
- Guangdong Provincial Key Lab of Robotics and Intelligent Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jianguo Han
- Department of Neurosurgery, South China Hospital, Medical School, Shenzhen University, Shenzhen, China
- Chenyang Huang
- Guangdong Provincial Key Lab of Robotics and Intelligent Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Hengyuan Yu
- Guangdong Provincial Key Lab of Robotics and Intelligent Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Li
- Guangdong Provincial Key Lab of Robotics and Intelligent Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ming Qiu
- Department of Neurosurgery, South China Hospital, Medical School, Shenzhen University, Shenzhen, China
- Si Cheng
- Department of Neurosurgery, South China Hospital, Medical School, Shenzhen University, Shenzhen, China
- Chong Wu
- Department of Neurosurgery, South China Hospital, Medical School, Shenzhen University, Shenzhen, China
- Mingxue Cai
- Guangdong Provincial Key Lab of Robotics and Intelligent Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shixiong Fu
- Guangdong Provincial Key Lab of Robotics and Intelligent Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Binghan Chen
- Guangdong Provincial Key Lab of Robotics and Intelligent Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Xinyu Wu
- Guangdong Provincial Key Lab of Robotics and Intelligent Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shiwei Du
- Department of Neurosurgery, South China Hospital, Medical School, Shenzhen University, Shenzhen, China
- Tiantian Xu
- Guangdong Provincial Key Lab of Robotics and Intelligent Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- The Key Laboratory of Biomedical Imaging Science and System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
9. Zhang J, Liu L, Xiang P, Fang Q, Nie X, Ma H, Hu J, Xiong R, Wang Y, Lu H. AI co-pilot bronchoscope robot. Nat Commun 2024; 15:241. [PMID: 38172095] [PMCID: PMC10764930] [DOI: 10.1038/s41467-023-44385-7]
Abstract
The unequal distribution of medical resources and scarcity of experienced practitioners confine access to bronchoscopy primarily to well-equipped hospitals in developed regions, contributing to the unavailability of bronchoscopic services in underdeveloped areas. Here, we present an artificial intelligence (AI) co-pilot bronchoscope robot that empowers novice doctors to conduct lung examinations as safely and adeptly as experienced colleagues. The system features a user-friendly, plug-and-play catheter, devised for robot-assisted steering, facilitating access to bronchi beyond the fifth generation in average adult patients. Drawing upon historical bronchoscopic videos and expert imitation, our AI-human shared control algorithm enables novice doctors to achieve safe steering in the lung, mitigating misoperations. Both in vitro and in vivo results underscore that our system equips novice doctors with the skills to perform lung examinations as expertly as seasoned practitioners. This study offers innovative strategies to address the pressing issue of medical resource disparities through AI assistance.
Affiliation(s)
- Jingyu Zhang
- State Key Laboratory of Industrial Control and Technology, Zhejiang University, 310027, Hangzhou, China
- Institute of Cyber-Systems and Control, Department of Control Science and Engineering, Zhejiang University, 310027, Hangzhou, China
- Lilu Liu
- State Key Laboratory of Industrial Control and Technology, Zhejiang University, 310027, Hangzhou, China
- Institute of Cyber-Systems and Control, Department of Control Science and Engineering, Zhejiang University, 310027, Hangzhou, China
- Pingyu Xiang
- State Key Laboratory of Industrial Control and Technology, Zhejiang University, 310027, Hangzhou, China
- Institute of Cyber-Systems and Control, Department of Control Science and Engineering, Zhejiang University, 310027, Hangzhou, China
- Qin Fang
- State Key Laboratory of Industrial Control and Technology, Zhejiang University, 310027, Hangzhou, China
- Institute of Cyber-Systems and Control, Department of Control Science and Engineering, Zhejiang University, 310027, Hangzhou, China
- Xiuping Nie
- State Key Laboratory of Industrial Control and Technology, Zhejiang University, 310027, Hangzhou, China
- Institute of Cyber-Systems and Control, Department of Control Science and Engineering, Zhejiang University, 310027, Hangzhou, China
- Honghai Ma
- Department of Thoracic Surgery, First Affiliated Hospital, School of Medicine, Zhejiang University, 310009, Hangzhou, China
- Jian Hu
- Department of Thoracic Surgery, First Affiliated Hospital, School of Medicine, Zhejiang University, 310009, Hangzhou, China
- Rong Xiong
- State Key Laboratory of Industrial Control and Technology, Zhejiang University, 310027, Hangzhou, China
- Institute of Cyber-Systems and Control, Department of Control Science and Engineering, Zhejiang University, 310027, Hangzhou, China
- Yue Wang
- State Key Laboratory of Industrial Control and Technology, Zhejiang University, 310027, Hangzhou, China
- Institute of Cyber-Systems and Control, Department of Control Science and Engineering, Zhejiang University, 310027, Hangzhou, China
- Haojian Lu
- State Key Laboratory of Industrial Control and Technology, Zhejiang University, 310027, Hangzhou, China
- Institute of Cyber-Systems and Control, Department of Control Science and Engineering, Zhejiang University, 310027, Hangzhou, China
10. Hamza M, Skidanov R, Podlipnov V. Visualization of Subcutaneous Blood Vessels Based on Hyperspectral Imaging and Three-Wavelength Index Images. Sensors (Basel, Switzerland) 2023; 23:8895. [PMID: 37960594] [PMCID: PMC10650145] [DOI: 10.3390/s23218895]
Abstract
Blood vessel visualization technology allows nursing staff to transition from locating subcutaneous blood vessels by traditional palpation or touch to visualized localization, providing a clear visual aid for performing various vessel-related medical procedures accurately and efficiently; this can further improve the first-attempt puncture success rate for nursing staff and reduce the pain of patients. We propose a novel technique for hyperspectral visualization of blood vessels in human skin. An experiment with six participants with different skin types, race, and nationality backgrounds is described. A mere separation of spectral layers for different skin types is shown to be insufficient. The use of three-wavelength indices in imaging has shown a significant improvement in the quality of results compared to using only two-wavelength indices. This improvement can be attributed to an increase in the contrast ratio, which can be as high as 25%. We propose and implement a technique for finding new index formulae based on an exhaustive search and a binary blood-vessel image obtained through an expert assessment. As a result of the search, a novel index formula was deduced, allowing high-contrast blood vessel images to be generated for any skin type.
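The deduced index formula itself is not reported in the abstract, so the following is only an illustrative sketch of how two- and three-wavelength index images can be formed and ranked by vein/skin contrast against an expert mask, in the spirit of the exhaustive search described above; the index forms, band count, and contrast metric are assumptions.

```python
import numpy as np

def index2(a, b, eps=1e-6):
    """Generic normalized two-wavelength index, (a - b) / (a + b)."""
    return (a - b) / (a + b + eps)

def index3(a, b, c, eps=1e-6):
    """One possible three-wavelength index form, (a - b) / (a + c); illustrative only."""
    return (a - b) / (a + c + eps)

def contrast(index_img, vein_mask, eps=1e-6):
    """Contrast between mean vein and mean skin values of an index image."""
    vein, skin = index_img[vein_mask].mean(), index_img[~vein_mask].mean()
    return abs(vein - skin) / (abs(vein) + abs(skin) + eps)

# Hypothetical hyperspectral cube (H x W x bands) and expert binary vein mask.
cube = np.random.rand(64, 64, 10)
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, :] = True

# Exhaustive search over band triples, ranked by vein/skin contrast.
best = max(
    ((i, j, k, contrast(index3(cube[..., i], cube[..., j], cube[..., k]), mask))
     for i in range(10) for j in range(10) for k in range(10) if len({i, j, k}) == 3),
    key=lambda t: t[-1])
print("best band triple and contrast:", best)
```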
Affiliation(s)
- Mohammed Hamza
- Department of Information Technology, Samara National Research University, Moskovskoye Shosse 34, 443086 Samara, Russia; (M.H.); (V.P.)
- Roman Skidanov
- Department of Information Technology, Samara National Research University, Moskovskoye Shosse 34, 443086 Samara, Russia; (M.H.); (V.P.)
- IPSI RAS—Branch of the FSRC “Crystallography and Photonics” RAS, Molodogvardeiskaya St. 151, 443001 Samara, Russia
- Vladimir Podlipnov
- Department of Information Technology, Samara National Research University, Moskovskoye Shosse 34, 443086 Samara, Russia; (M.H.); (V.P.)
- IPSI RAS—Branch of the FSRC “Crystallography and Photonics” RAS, Molodogvardeiskaya St. 151, 443001 Samara, Russia
11. Shen N, Xu T, Huang S, Mu F, Li J. Expert-Guided Knowledge Distillation for Semi-Supervised Vessel Segmentation. IEEE J Biomed Health Inform 2023; 27:5542-5553. [PMID: 37669209] [DOI: 10.1109/jbhi.2023.3312338]
Abstract
In medical image analysis, blood vessel segmentation is of considerable clinical value for diagnosis and surgery. The complexity of vascular structures, however, obstructs progress in the field. Although many algorithms have emerged to address these difficulties, they rely excessively on careful annotations for tubular vessel extraction. A practical solution is to exploit the feature information distribution of unlabeled data. This work proposes a novel semi-supervised vessel segmentation framework, named EXP-Net, to cope with limited annotations. Based on the training mechanism of the Mean Teacher model, we innovatively engage an expert network in EXP-Net to enhance knowledge distillation. The expert network comprises knowledge and connectivity enhancement modules, which are respectively in charge of modeling feature relationships from global and detailed perspectives. In particular, the knowledge enhancement module leverages the vision transformer to highlight the long-range dependencies among multi-level token components; the connectivity enhancement module maximizes the properties of topology and geometry by skeletonizing the vessel in a non-parametric manner. The key components are dedicated to the conditions of weak vessel connectivity and poor pixel contrast. Extensive evaluations show that our EXP-Net achieves state-of-the-art performance on subcutaneous vessel, retinal vessel, and coronary artery segmentations.
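EXP-Net's expert network is specific to the paper, but its starting point, the Mean Teacher mechanism, is a standard semi-supervised technique; a minimal sketch of that generic mechanism (an exponential-moving-average teacher plus a consistency loss on unlabeled images) is shown below. Function and variable names are illustrative, and the knowledge and connectivity enhancement modules are not represented.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """Teacher weights track an exponential moving average of the student's."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def mean_teacher_loss(student, teacher, x_lab, y_lab, x_unlab, w_cons=1.0):
    """Supervised loss on labeled images + consistency with the teacher on unlabeled ones."""
    sup = F.binary_cross_entropy_with_logits(student(x_lab), y_lab)
    with torch.no_grad():
        soft_targets = torch.sigmoid(teacher(x_unlab))   # teacher's soft predictions
    cons = F.mse_loss(torch.sigmoid(student(x_unlab)), soft_targets)
    return sup + w_cons * cons

# One training step (segmentation models and data loading omitted):
# loss = mean_teacher_loss(student, teacher, xb_l, yb_l, xb_u)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
# ema_update(teacher, student)
```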
12. Ning G, Liang H, Zhang X, Liao H. Autonomous Robotic Ultrasound Vascular Imaging System With Decoupled Control Strategy for External-Vision-Free Environments. IEEE Trans Biomed Eng 2023; 70:3166-3177. [PMID: 37227912] [DOI: 10.1109/tbme.2023.3279114]
Abstract
Objective: Ultrasound (US) probes scan over the surface of the human body to acquire US images in clinical vascular US diagnosis. However, due to the deformation and specificity of different human surfaces, the relationship between the scan trajectory of the skin and the internal tissues is not fully correlated, which poses a challenge for autonomous robotic US imaging in a dynamic and external-vision-free environment. Here, we propose a decoupled control strategy for autonomous robotic vascular US imaging in an environment without external vision. Methods: The proposed system is divided into outer-loop posture control and inner-loop orientation control, which are separately determined by a deep learning (DL) agent and a reinforcement learning (RL) agent. First, we use a weakly supervised US vessel segmentation network to estimate the probe orientation. In the outer-loop control, we use a force-guided reinforcement learning agent to maintain a specific angle between the US probe and the skin during dynamic imaging processes. Finally, the orientation and the posture are integrated to complete the imaging process. Results: Evaluation experiments on several volunteers showed that our robotic US system could autonomously perform vascular imaging in arms with different stiffness, curvature, and size without additional system adjustments. Furthermore, our system achieved reproducible imaging and reconstruction of dynamic targets without relying on vision-based surface information. Conclusion and significance: Our system and control strategy provide a novel framework for the application of US robots in complex and external-vision-free environments.
13. Kuntz A, Emerson M, Ertop TE, Fried I, Fu M, Hoelscher J, Rox M, Akulian J, Gillaspie EA, Lee YZ, Maldonado F, Webster RJ, Alterovitz R. Autonomous medical needle steering in vivo. Sci Robot 2023; 8:eadf7614. [PMID: 37729421] [PMCID: PMC11182607] [DOI: 10.1126/scirobotics.adf7614]
Abstract
The use of needles to access sites within organs is fundamental to many interventional medical procedures both for diagnosis and treatment. Safely and accurately navigating a needle through living tissue to a target is currently often challenging or infeasible because of the presence of anatomical obstacles, high levels of uncertainty, and natural tissue motion. Medical robots capable of automating needle-based procedures have the potential to overcome these challenges and enable enhanced patient care and safety. However, autonomous navigation of a needle around obstacles to a predefined target in vivo has not been shown. Here, we introduce a medical robot that autonomously navigates a needle through living tissue around anatomical obstacles to a target in vivo. Our system leverages a laser-patterned highly flexible steerable needle capable of maneuvering along curvilinear trajectories. The autonomous robot accounts for anatomical obstacles, uncertainty in tissue/needle interaction, and respiratory motion using replanning, control, and safe insertion time windows. We applied the system to lung biopsy, which is critical for diagnosing lung cancer, the leading cause of cancer-related deaths in the United States. We demonstrated successful performance of our system in multiple in vivo porcine studies achieving targeting errors less than the radius of clinically relevant lung nodules. We also demonstrated that our approach offers greater accuracy compared with a standard manual bronchoscopy technique. Our results show the feasibility and advantage of deploying autonomous steerable needle robots in living tissue and how these systems can extend the current capabilities of physicians to further improve patient care.
Affiliation(s)
- Alan Kuntz
- Kahlert School of Computing and Robotics Center, University of Utah; Salt Lake City, UT 84112, USA
- Maxwell Emerson
- Department of Mechanical Engineering, Vanderbilt University; Nashville, TN 37235, USA
- Tayfun Efe Ertop
- Department of Mechanical Engineering, Vanderbilt University; Nashville, TN 37235, USA
- Inbar Fried
- Department of Computer Science, University of North Carolina at Chapel Hill; Chapel Hill, NC 27599, USA
- Mengyu Fu
- Department of Computer Science, University of North Carolina at Chapel Hill; Chapel Hill, NC 27599, USA
- Janine Hoelscher
- Department of Computer Science, University of North Carolina at Chapel Hill; Chapel Hill, NC 27599, USA
- Margaret Rox
- Department of Mechanical Engineering, Vanderbilt University; Nashville, TN 37235, USA
- Jason Akulian
- Department of Medicine, Division of Pulmonary Diseases and Critical Care Medicine, University of North Carolina School of Medicine; Chapel Hill, NC 27599, USA
- Erin A. Gillaspie
- Department of Medicine and Thoracic Surgery, Vanderbilt University Medical Center; Nashville, TN 37232, USA
- Yueh Z. Lee
- Department of Radiology, University of North Carolina School of Medicine; Chapel Hill, NC 27599, USA
- Fabien Maldonado
- Department of Medicine and Thoracic Surgery, Vanderbilt University Medical Center; Nashville, TN 37232, USA
- Robert J. Webster
- Department of Mechanical Engineering, Vanderbilt University; Nashville, TN 37235, USA
- Ron Alterovitz
- Department of Computer Science, University of North Carolina at Chapel Hill; Chapel Hill, NC 27599, USA
14. Shen N, Xu T, Bian Z, Huang S, Mu F, Huang B, Xiao Y, Li J. SCANet: A Unified Semi-Supervised Learning Framework for Vessel Segmentation. IEEE Transactions on Medical Imaging 2023; 42:2476-2489. [PMID: 35862338] [DOI: 10.1109/tmi.2022.3193150]
Abstract
Automatic subcutaneous vessel imaging with near-infrared (NIR) optical apparatus can promote the accuracy of locating blood vessels, thus significantly contributing to clinical venipuncture research. Though deep learning models have achieved remarkable success in medical image segmentation, they still struggle in the subfield of subcutaneous vessel segmentation due to the scarcity and low quality of annotated data. To address this, this work presents a novel semi-supervised learning framework, SCANet, that achieves accurate vessel segmentation through an alternate training strategy. SCANet is composed of a multi-scale recurrent neural network that embeds coarse-to-fine features and two auxiliary branches, a consistency decoder and an adversarial learning branch, responsible for strengthening fine-grained details and eliminating differences between ground truths and predictions, respectively. Equipped with a novel semi-supervised alternate training strategy, the three components work collaboratively, enabling SCANet to accurately segment vessel regions with only a handful of labeled data and abundant unlabeled data. Moreover, to mitigate the shortage of annotated data in this field, we provide a new subcutaneous vessel dataset, VESSEL-NIR. Extensive experiments on a wide variety of tasks, including the segmentation of subcutaneous vessels, retinal vessels, and skin lesions, well demonstrate the superiority and generality of our approach.
15. Alakuş TB. A Novel Repetition Frequency-Based DNA Encoding Scheme to Predict Human and Mouse DNA Enhancers with Deep Learning. Biomimetics (Basel) 2023; 8:218. [PMID: 37366813] [DOI: 10.3390/biomimetics8020218]
Abstract
Recent studies have shown that DNA enhancers have an important role in the regulation of gene expression. They are responsible for different important biological elements and processes such as development, homeostasis, and embryogenesis. However, experimental prediction of these DNA enhancers is time-consuming and costly as it requires laboratory work. Therefore, researchers started to look for alternative ways and started to apply computation-based deep learning algorithms to this field. Yet, the inconsistent and often unsuccessful prediction performance of computation-based approaches across various cell lines has prompted further investigation of these approaches. Therefore, in this study, a novel DNA encoding scheme was proposed, solutions were sought to the problems mentioned, and DNA enhancers were predicted with a BiLSTM. The study consisted of four different stages for two scenarios. In the first stage, DNA enhancer data were obtained. In the second stage, DNA sequences were converted to numerical representations by both the proposed encoding scheme and various DNA encoding schemes including EIIP, integer number, and atomic number. In the third stage, the BiLSTM model was designed, and the data were classified. In the final stage, the performance of DNA encoding schemes was determined by accuracy, precision, recall, F1-score, CSI, MCC, G-mean, Kappa coefficient, and AUC scores. In the first scenario, it was determined whether the DNA enhancers belonged to humans or mice. As a result of the prediction process, the highest performance was achieved with the proposed DNA encoding scheme, and an accuracy of 92.16% and an AUC score of 0.85 were calculated, respectively. The closest accuracy score to the proposed scheme was obtained with the EIIP DNA encoding scheme and the result was observed as 89.14%. The AUC score of this scheme was measured as 0.87. Among the remaining DNA encoding schemes, the atomic number showed an accuracy score of 86.61%, while this rate decreased to 76.96% with the integer scheme. The AUC values of these schemes were 0.84 and 0.82, respectively. In the second scenario, it was determined whether there was a DNA enhancer and, if so, it was decided to which species this enhancer belonged. In this scenario, the highest accuracy score was obtained with the proposed DNA encoding scheme and the result was 84.59%. Moreover, the AUC score of the proposed scheme was determined as 0.92. EIIP and integer DNA encoding schemes showed accuracy scores of 77.80% and 73.68%, respectively, while their AUC scores were close to 0.90. The most ineffective prediction was obtained with the atomic number scheme, and its accuracy score was calculated as 68.27%. Finally, the AUC score of this scheme was 0.81. At the end of the study, it was observed that the proposed DNA encoding scheme was successful and effective in predicting DNA enhancers.
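For reference, the three baseline encodings named in the abstract each map a nucleotide to a fixed number; a minimal sketch using commonly published values is shown below. The proposed repetition-frequency scheme is not defined in the abstract, so it appears only as a placeholder comment, and the integer mapping shown is one common convention.

```python
import numpy as np

# Commonly published per-nucleotide values for three standard DNA encodings.
EIIP    = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}  # electron-ion interaction potential
INTEGER = {"T": 0, "C": 1, "A": 2, "G": 3}                      # one common integer convention
ATOMIC  = {"A": 70, "G": 78, "C": 58, "T": 66}                  # atomic numbers of the bases

def encode(seq, table):
    """Map a DNA string to a numeric vector suitable for a sequence model such as a BiLSTM."""
    return np.array([table[base] for base in seq.upper()], dtype=np.float32)

seq = "ACGTTGCA"
print(encode(seq, EIIP))
print(encode(seq, INTEGER))
print(encode(seq, ATOMIC))

# The paper's repetition-frequency-based scheme would be an analogous function,
# e.g. encode_repetition_frequency(seq), feeding the same downstream classifier;
# its exact definition is not given in the abstract.
```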
Affiliation(s)
- Talha Burak Alakuş
- Department of Software Engineering, Faculty of Engineering, Kırklareli University, 39100 Kırklareli, Turkey
16. Khan MS, Olds JL. When neuro-robots go wrong: A review. Front Neurorobot 2023; 17:1112839. [PMID: 36819005] [PMCID: PMC9935594] [DOI: 10.3389/fnbot.2023.1112839]
Abstract
Neuro-robots are a class of autonomous machines that, in their architecture, mimic aspects of the human brain and cognition. As such, they represent unique artifacts created by humans based on human understanding of healthy human brains. The European Union's Convention on Roboethics 2025 states that the design of all robots (including neuro-robots) must include provisions for the complete traceability of the robots' actions, analogous to an aircraft's flight data recorder. At the same time, one can anticipate rising instances of neuro-robotic failure, as they operate on imperfect data in real environments, and the underlying AI behind such neuro-robots has yet to achieve explainability. This paper reviews the trajectory of the technology used in neuro-robots and accompanying failures. The failures demand an explanation. While drawing on existing explainable AI research, we argue that the limits of explainability in AI likewise limit explainability in neuro-robots. In order to make robots more explainable, we suggest potential pathways for future research.
17. Hidalgo EM, Wright L, Isaksson M, Lambert G, Marwick TH. Current Applications of Robot-Assisted Ultrasound Examination. JACC Cardiovasc Imaging 2023; 16:239-247. [PMID: 36648034] [DOI: 10.1016/j.jcmg.2022.07.018]
Abstract
Despite advances in miniaturization and automation, the need for expert acquisition of a full echocardiogram, including Doppler, has restricted access in remote areas. Recent developments in robotics, teleoperation, and upgraded telecommunications infrastructure may provide a solution to this deficiency. Robot-assisted teleoperated ultrasound examination can aid medical diagnosis in remote locations and may improve health inequalities between rural and urban settings. This review aimed to analyze the status of teleoperated robotic systems for ultrasound examinations, evaluate clinical and preclinical applications, identify limitations, and outline future directions for clinical use. Overall, robot-assisted teleoperated ultrasound is feasible and safe in the reported clinical and preclinical studies, with the robots able to follow the hand movements performed by sonographers and researchers from a distance or in local networks. Moreover, multiple types of ultrasound examinations have been performed in remote areas, with a high success rate nearly comparable to that of conventional sonography. The studies showed that although a low-bandwidth link can be used to control a robot, the bandwidth requirements for real-time transmission of video and ultrasound images are significantly higher. Furthermore, if haptic feedback is implemented, the bandwidth requirements are increased. Haptically enabled systems that improve robotic control are necessary for accelerating the introduction to clinical use. Haptic feedback and enhanced front-end interface control for remote users are vital aspects required for clinical application. The incorporation of artificial intelligence through either aiding in window acquisition (knowledge of anatomical landmarks to adjust scanning planes) or through measurement and disease identification is yet to be researched. However, it has the potential to lead to dramatic advances. A new generation of robots is in development, and several projects in the preclinical stage reveal a promising future to overcome the shortage of health professionals in remote areas.
Affiliation(s)
- Edgar M Hidalgo
- Department of Mechanical Engineering and Product Design Engineering, Swinburne University of Technology, Melbourne, Australia
- Leah Wright
- Baker Heart and Diabetes Institute, Melbourne, Australia
- Mats Isaksson
- Department of Mechanical Engineering and Product Design Engineering, Swinburne University of Technology, Melbourne, Australia
- Gavin Lambert
- Department of Mechanical Engineering and Product Design Engineering, Swinburne University of Technology, Melbourne, Australia; Iverson Health Innovation Research Institute, Swinburne University of Technology, Melbourne, Australia
18. Krittanawong C, Singh NK, Scheuring RA, Urquieta E, Bershad EM, Macaulay TR, Kaplin S, Dunn C, Kry SF, Russomano T, Shepanek M, Stowe RP, Kirkpatrick AW, Broderick TJ, Sibonga JD, Lee AG, Crucian BE. Human Health during Space Travel: State-of-the-Art Review. Cells 2022; 12:cells12010040. [PMID: 36611835] [PMCID: PMC9818606] [DOI: 10.3390/cells12010040]
Abstract
The field of human space travel is in the midst of a dramatic revolution. Upcoming missions are looking to push the boundaries of space travel, with plans to travel for longer distances and durations than ever before. Both the National Aeronautics and Space Administration (NASA) and several commercial space companies (e.g., Blue Origin, SpaceX, Virgin Galactic) have already started the process of preparing for long-distance, long-duration space exploration and currently plan to explore inner solar planets (e.g., Mars) by the 2030s. With the emergence of space tourism, space travel has materialized as a potential new, exciting frontier of business, hospitality, medicine, and technology in the coming years. However, current evidence regarding human health in space is very limited, particularly pertaining to short-term and long-term space travel. This review synthesizes developments across the continuum of space health including prior studies and unpublished data from NASA related to each individual organ system, and medical screening prior to space travel. We categorized the extraterrestrial environment into exogenous (e.g., space radiation and microgravity) and endogenous processes (e.g., alteration of humans' natural circadian rhythm and mental health due to confinement, isolation, immobilization, and lack of social interaction) and their various effects on human health. The aim of this review is to explore the potential health challenges associated with space travel and how they may be overcome in order to enable new paradigms for space health, as well as the use of emerging artificial intelligence (AI)-based technology to propel future space health research.
Affiliation(s)
- Chayakrit Krittanawong
- Department of Medicine and Center for Space Medicine, Section of Cardiology, Baylor College of Medicine, Houston, TX 77030, USA
- Translational Research Institute for Space Health, Houston, TX 77030, USA
- Department of Cardiovascular Diseases, New York University School of Medicine, New York, NY 10016, USA
- Nitin Kumar Singh
- Biotechnology and Planetary Protection Group, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA
- Emmanuel Urquieta
- Translational Research Institute for Space Health, Houston, TX 77030, USA
- Department of Emergency Medicine and Center for Space Medicine, Baylor College of Medicine, Houston, TX 77030, USA
- Eric M. Bershad
- Department of Neurology, Center for Space Medicine, Baylor College of Medicine, Houston, TX 77030, USA
- Scott Kaplin
- Department of Cardiovascular Diseases, New York University School of Medicine, New York, NY 10016, USA
- Carly Dunn
- Department of Dermatology, Baylor College of Medicine, Houston, TX 77030, USA
- Stephen F. Kry
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Marc Shepanek
- Office of the Chief Health and Medical Officer, NASA, Washington, DC 20546, USA
- Andrew W. Kirkpatrick
- Department of Surgery and Critical Care Medicine, University of Calgary, Calgary, AB T2N 1N4, Canada
- Jean D. Sibonga
- Division of Biomedical Research and Environmental Sciences, NASA Lyndon B. Johnson Space Center, Houston, TX 77058, USA
- Andrew G. Lee
- Department of Ophthalmology, University of Texas Medical Branch School of Medicine, Galveston, TX 77555, USA
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX 77030, USA
- Department of Ophthalmology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Department of Ophthalmology, Texas A and M College of Medicine, College Station, TX 77807, USA
- Department of Ophthalmology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY 10021, USA
- Brian E. Crucian
- National Aeronautics and Space Administration (NASA) Johnson Space Center, Human Health and Performance Directorate, Houston, TX 77058, USA
Collapse
|
19
|
He T, Guo C, Jiang L. Puncture site decision method for venipuncture robot based on near-infrared vision and multiobjective optimization. SCIENCE CHINA. TECHNOLOGICAL SCIENCES 2022; 66:13-23. [PMID: 36570559 PMCID: PMC9758675 DOI: 10.1007/s11431-022-2232-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Accepted: 10/12/2022] [Indexed: 06/17/2023]
Abstract
Venipuncture robots have superior perception and stability to humans and are expected to replace manual venipuncture. However, their use is greatly restricted because they cannot make decisions regarding the puncture sites. Thus, this study presents a multi-information fusion method for determining puncture sites for venipuncture robots, improving their autonomy under limited resources. Numerous forearm images were gathered and processed to establish an image dataset for training the U-Net with a soft attention mechanism (SAU-Net) for vein segmentation. The veins are then segmented from the images, feature information is extracted based on near-infrared vision, and a multiobjective optimization model for the puncture site decision is built by considering the depth, diameter, curvature, and length of the vein to determine the optimal puncture site. Experiments demonstrate that the method achieves a segmentation accuracy of 91.2% and a vein extraction rate of 86.7% while producing the Pareto solution set (average time: 1.458 s) and optimal results for each vessel. Finally, a near-infrared camera is applied to the venipuncture robot to segment veins and determine puncture sites in real time, with the results transmitted back to the robot for an attitude adjustment. Consequently, this method, once implemented, can dramatically enhance the autonomy of venipuncture robots.
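To make the multiobjective selection step concrete, below is a minimal sketch (not the authors' code) of how candidate puncture sites could be ranked by Pareto dominance over the four vein features named in the abstract; all candidate values and names are illustrative.

```python
# Minimal sketch (not the authors' code): rank candidate puncture sites by
# Pareto dominance over vein depth, diameter, curvature, and straight length
# extracted from a segmented near-infrared image. Values are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    name: str
    depth_mm: float            # shallower is better
    diameter_mm: float         # wider is better
    curvature_1_per_mm: float  # straighter (lower) is better
    length_mm: float           # longer straight segment is better

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if 'a' is at least as good as 'b' everywhere and strictly better somewhere."""
    at_least = (a.depth_mm <= b.depth_mm and a.diameter_mm >= b.diameter_mm
                and a.curvature_1_per_mm <= b.curvature_1_per_mm and a.length_mm >= b.length_mm)
    strictly = (a.depth_mm < b.depth_mm or a.diameter_mm > b.diameter_mm
                or a.curvature_1_per_mm < b.curvature_1_per_mm or a.length_mm > b.length_mm)
    return at_least and strictly

def pareto_front(cands: List[Candidate]) -> List[Candidate]:
    """Keep every candidate that no other candidate dominates."""
    return [c for c in cands if not any(dominates(o, c) for o in cands if o is not c)]

if __name__ == "__main__":
    sites = [
        Candidate("A", depth_mm=2.1, diameter_mm=2.8, curvature_1_per_mm=0.05, length_mm=22.0),
        Candidate("B", depth_mm=3.5, diameter_mm=2.5, curvature_1_per_mm=0.12, length_mm=15.0),
        Candidate("C", depth_mm=2.0, diameter_mm=2.2, curvature_1_per_mm=0.03, length_mm=30.0),
    ]
    for c in pareto_front(sites):
        print("non-dominated site:", c.name)
```

A weighted-sum scalarization over the same normalized features would be a simpler alternative when a single best site must be returned to the robot.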
Collapse
Affiliation(s)
- TianBao He
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, 150001 China
| | - ChuangQiang Guo
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, 150001 China
| | - Li Jiang
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, 150001 China
| |
Collapse
|
20
|
Ando K, Okumura T, Komachi M, Horiguchi H, Matsumoto Y. Is artificial intelligence capable of generating hospital discharge summaries from inpatient records? PLOS DIGITAL HEALTH 2022; 1:e0000158. [PMID: 36812600 PMCID: PMC9931331 DOI: 10.1371/journal.pdig.0000158] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Accepted: 11/09/2022] [Indexed: 06/18/2023]
Abstract
Medical professionals have been burdened by clerical work, and artificial intelligence may efficiently support physicians by generating clinical summaries. However, whether hospital discharge summaries can be generated automatically from inpatient records stored in electronic health records remains unclear. Therefore, this study investigated the sources of information in discharge summaries. First, the discharge summaries were automatically split into fine-grained segments, such as those representing medical expressions, using a machine learning model from a previous study. Second, segments in the discharge summaries that did not originate from inpatient records were filtered out. This was performed by calculating the n-gram overlap between inpatient records and discharge summaries; the final source-origin decision was made manually. Finally, to reveal the specific sources (e.g., referral documents, prescriptions, and the physician's memory) from which the segments originated, they were manually classified by consulting medical professionals. For further and deeper analysis, this study designed and annotated clinical role labels that represent the subjectivity of the expressions and built a machine learning model to assign them automatically. The analysis revealed the following: first, 39% of the information in the discharge summaries originated from external sources other than inpatient records; second, patients' past clinical records constituted 43%, and patient referral documents constituted 18%, of the expressions derived from external sources; third, 11% of the missing information was not derived from any documents and was possibly drawn from physicians' memories or reasoning. According to these results, end-to-end summarization using machine learning is considered infeasible, and machine summarization with an assisted post-editing process is the best fit for this problem domain.
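The n-gram overlap filter described above can be illustrated with a short sketch; the use of character trigrams and the 0.5 threshold are assumptions for the example, not details from the paper.

```python
# Minimal sketch (assumed details, not the study's code): flag discharge-summary
# segments whose character n-grams barely overlap the inpatient record, i.e.
# segments that likely came from an external source. Threshold is illustrative.
def char_ngrams(text: str, n: int = 3) -> set:
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def overlap_ratio(segment: str, record: str, n: int = 3) -> float:
    seg = char_ngrams(segment, n)
    if not seg:
        return 0.0
    return len(seg & char_ngrams(record, n)) / len(seg)

inpatient_record = "Day 3: afebrile, wound clean, started oral antibiotics."
segments = ["Started oral antibiotics on day 3.",
            "Referred by Dr. Smith for chest pain evaluation."]  # likely external
for s in segments:
    r = overlap_ratio(s, inpatient_record)
    origin = "inpatient record" if r >= 0.5 else "external source (manual check)"
    print(f"{r:.2f}  {origin}  <- {s}")
```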
Collapse
Affiliation(s)
- Kenichiro Ando
- Graduate School of Systems Design, Tokyo Metropolitan University, Tokyo, Japan
- Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan
- National Hospital Organization, Tokyo, Japan
| | - Takashi Okumura
- School of Regional Innovation and Social Design Engineering, Kitami Institute of Technology, Hokkaido, Japan
| | - Mamoru Komachi
- Graduate School of Systems Design, Tokyo Metropolitan University, Tokyo, Japan
| | | | - Yuji Matsumoto
- Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan
| |
Collapse
|
21
|
Alakus TB, Turkoglu I. Prediction of viral-host interactions of COVID-19 by computational methods. CHEMOMETRICS AND INTELLIGENT LABORATORY SYSTEMS : AN INTERNATIONAL JOURNAL SPONSORED BY THE CHEMOMETRICS SOCIETY 2022; 228:104622. [PMID: 35879939 PMCID: PMC9301933 DOI: 10.1016/j.chemolab.2022.104622] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 06/20/2022] [Accepted: 07/10/2022] [Indexed: 06/15/2023]
Abstract
Experimental approaches are currently used to determine viral-host interactions, but they are both time-consuming and costly. For these reasons, computational approaches are recommended. In this study, computational approaches were used to predict viral-host interactions between SARS-CoV-2 and human proteins. The study consists of four stages: in the first stage, viral and host protein sequences were obtained. In the second stage, protein sequences were converted into numerical representations by various protein mapping methods: entropy-based, AVL-tree, FIBHASH, binary encoding, CPNR, PAM250, BLOSUM62, Atchley factors, Meiler parameters, EIIP, AESNN1, Miyazawa energies, Micheletti potentials, Z-scale, and hydrophobicity. In the third stage, a deep learning model based on BiLSTM was designed. In the last stage, the protein sequences were classified and the viral-host interactions were predicted. The performance of the protein mapping methods was assessed by accuracy, F1-score, specificity, sensitivity, and AUC scores. According to the classification results, the best performance was obtained with the entropy-based method, which achieved 94.74% accuracy and a 0.95 AUC score. The next most successful classification was obtained with the Z-scale method, which achieved 91.23% accuracy and a 0.96 AUC score. Although the other protein mapping methods are not as efficient as the Z-scale and entropy-based methods, they still achieved successful classification: the AVL-tree, FIBHASH, binary encoding, CPNR, PAM250, BLOSUM62, Atchley factors, Meiler parameters, and AESNN1 methods showed over 80% accuracy, F1-score, and AUC score, whereas the accuracy of the EIIP, Miyazawa energies, Micheletti potentials, and hydrophobicity methods remained below 80%. Overall, the results indicate that computational approaches can successfully predict viral-host interactions between SARS-CoV-2 and human proteins.
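As a rough illustration of the final two stages, the sketch below shows a PyTorch BiLSTM that classifies numerically mapped protein sequences; the layer sizes, pooling choice, and input shapes are assumptions, not the authors' architecture.

```python
# Minimal sketch (PyTorch, not the authors' code): a BiLSTM that scores a
# protein pair encoded as one numeric value per residue (any of the mapping
# schemes above, e.g. an entropy-based encoding, would produce such an input).
import torch
import torch.nn as nn

class InteractionBiLSTM(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # interaction / no interaction

    def forward(self, x):             # x: (batch, seq_len, 1) numeric-mapped residues
        out, _ = self.lstm(x)         # (batch, seq_len, 2*hidden)
        pooled = out.mean(dim=1)      # average over sequence positions
        return torch.sigmoid(self.head(pooled)).squeeze(-1)

model = InteractionBiLSTM()
dummy = torch.randn(4, 120, 1)        # 4 encoded sequence pairs of length 120 (toy data)
print(model(dummy).shape)             # -> torch.Size([4])
```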
Collapse
Affiliation(s)
- Talha Burak Alakus
- Kirklareli University, Department of Software Engineering, Kirklareli, 39000, Turkey
| | - Ibrahim Turkoglu
- Firat University, Department of Software Engineering, Elazig, 23119, Turkey
| |
Collapse
|
22
|
Wang T, Ugurlu H, Yan Y, Li M, Li M, Wild AM, Yildiz E, Schneider M, Sheehan D, Hu W, Sitti M. Adaptive wireless millirobotic locomotion into distal vasculature. Nat Commun 2022; 13:4465. [PMID: 35915075 PMCID: PMC9343456 DOI: 10.1038/s41467-022-32059-9] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Accepted: 07/14/2022] [Indexed: 11/23/2022] Open
Abstract
Microcatheters have enabled diverse minimally invasive endovascular operations and notable health benefits compared with open surgeries. However, with tortuous routes far from the arterial puncture site, the distal vascular regions remain challenging for safe catheter access. Therefore, we propose a wireless stent-shaped magnetic soft robot to be deployed, actively navigated, used for medical functions, and retrieved in the example M4 segment of the middle cerebral artery. We investigate shape-adaptively controlled locomotion in phantoms emulating the physiological conditions here, where the lumen diameter shrinks from 1.5 mm to 1 mm, the radius of curvature of the tortuous lumen gets as small as 3 mm, the lumen bifurcation angle goes up to 120°, and the pulsatile flow speed reaches up to 26 cm/s. The robot can also withstand the flow when the magnetic actuation is turned off. These locomotion capabilities are confirmed in porcine arteries ex vivo. Furthermore, variants of the robot could release the tissue plasminogen activator on-demand locally for thrombolysis and function as flow diverters, initiating promising therapies towards acute ischemic stroke, aneurysm, arteriovenous malformation, dural arteriovenous fistulas, and brain tumors. These functions should facilitate the robot's usage in new distal endovascular operations.
Collapse
Affiliation(s)
- Tianlu Wang
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569, Stuttgart, Germany
- Department of Information Technology and Electrical Engineering, ETH Zurich, 8092, Zurich, Switzerland
| | - Halim Ugurlu
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569, Stuttgart, Germany
- Clinic for Neuroradiology, Klinikum Stuttgart, 70174, Stuttgart, Germany
- Department of Biophysics, Aydın Adnan Menderes University, Graduate School of Health Sciences, 09010, Aydın, Turkey
| | - Yingbo Yan
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569, Stuttgart, Germany
| | - Mingtong Li
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569, Stuttgart, Germany
| | - Meng Li
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569, Stuttgart, Germany
| | - Anna-Maria Wild
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569, Stuttgart, Germany
| | - Erdost Yildiz
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569, Stuttgart, Germany
| | - Martina Schneider
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569, Stuttgart, Germany
| | - Devin Sheehan
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569, Stuttgart, Germany
| | - Wenqi Hu
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569, Stuttgart, Germany.
| | - Metin Sitti
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569, Stuttgart, Germany.
- Department of Information Technology and Electrical Engineering, ETH Zurich, 8092, Zurich, Switzerland.
- School of Medicine and College of Engineering, Koç University, 34450, Istanbul, Turkey.
| |
Collapse
|
23
|
Macrophage-compatible magnetic achiral nanorobots fabricated by electron beam lithography. Sci Rep 2022; 12:13080. [PMID: 35906371 PMCID: PMC9338296 DOI: 10.1038/s41598-022-17053-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 07/20/2022] [Indexed: 12/05/2022] Open
Abstract
With the development and progress of nanotechnology, the prospect of using nanorobots to achieve targeted drug delivery is becoming realistic. Although nanorobots can potentially improve nano-drug delivery systems, fabricating magnetically controllable nanorobots with a size suitable for drug delivery in complex in vivo environments remains a significant challenge. Because nanoscale robots are comparatively difficult to fabricate, most current research has focused on the preparation and functionalization of microscale and milliscale robots. To address this problem and move towards in vivo applications, this study uses electron beam lithography to fabricate achiral planar L-shaped nanorobots that are biocompatible with immune cells. Their minimal planar geometry enabled nanolithography to produce nanorobots with a minimum feature size down to 400 nm. Using an integrated imaging and control system, the locomotive behavior of the L-shaped nanorobots in a fluidic environment was studied by examining their velocity profiles and trajectories. Furthermore, the nanorobots exhibit excellent compatibility with various types of cells, including macrophages. Finally, a long-term immersion test in cell culture medium demonstrated that the L-shaped nanorobots have robust stability. This work demonstrates the potential of these nanorobots to operate in vivo without triggering immune cell responses.
Collapse
|
24
|
Jiang Z, Gao Y, Xie L, Navab N. Towards Autonomous Atlas-Based Ultrasound Acquisitions in Presence of Articulated Motion. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3180440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
- Zhongliang Jiang
- Chair for Computer Aided Medical Procedures and Augmented Reality (CAMP), Technical University of Munich, Garching, Germany
| | - Yuan Gao
- Chair for Computer Aided Medical Procedures and Augmented Reality (CAMP), Technical University of Munich, Garching, Germany
| | - Le Xie
- Institute of Forming Technology and Equipment and Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
| | - Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality (CAMP), Technical University of Munich, Garching, Germany
| |
Collapse
|
25
|
Tiryaki ME, Demir SO, Sitti M. Deep Learning-based 3D Magnetic Microrobot Tracking using 2D MR Images. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3179509] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
- Mehmet Efe Tiryaki
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
| | - Sinan Ozgun Demir
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
| | - Metin Sitti
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
| |
Collapse
|
26
|
Leipheimer J, Balter M, Chen A, Yarmush M. Design and Evaluation of a Handheld Robotic Device for Peripheral Catheterization. J Med Device 2022; 16:021015. [PMID: 35284032 PMCID: PMC8905093 DOI: 10.1115/1.4053688] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2021] [Revised: 01/14/2022] [Indexed: 08/30/2024] Open
Abstract
Medical robots provide enhanced dexterity, vision, and safety for a broad range of procedures. In this article, we present a handheld, robotic device capable of performing peripheral catheter insertions with high accuracy and repeatability. The device utilizes a combination of ultrasound imaging, miniaturized robotics, and machine learning to safely and efficiently introduce a catheter sheath into a peripheral blood vessel. Here, we present the mechanical design and experimental validation of the device, known as VeniBot. Additionally, we present results on our ultrasound deep learning algorithm for vessel segmentation, and performance on tissue-mimicking phantom models that simulate difficult peripheral catheter placement. Overall, the device achieved first-attempt success rates of 97 ± 4% for vessel punctures and 89 ± 7% for sheath cannulations on the tissue mimicking models (n = 240). The results from these studies demonstrate the viability of a handheld device for performing semi-automated peripheral catheterization. In the future, the use of this device has the potential to improve clinical workflow and reduce patient discomfort by assuring a safe and efficient procedure.
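The paper reports procedure-level success rates; for the segmentation component itself, a common complementary check is the Dice overlap between predicted and annotated vessel masks, sketched here as an assumption rather than the authors' reported metric.

```python
# Minimal sketch (assumption, not the paper's reported metric): the Dice
# coefficient is a standard way to score a predicted vessel mask against a
# manually annotated one when validating an ultrasound segmentation model.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

truth = np.zeros((64, 64), dtype=bool); truth[20:40, 25:35] = True   # annotated vessel
pred = np.zeros_like(truth);            pred[22:40, 24:36] = True    # model output (toy)
print(f"Dice = {dice(pred, truth):.3f}")
```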
Collapse
Affiliation(s)
| | - Max Balter
- Rutgers University, Piscataway, NJ 08854
| | - Alvin Chen
- Rutgers University, Piscataway, NJ 08854
| | | |
Collapse
|
27
|
Valdez F, Melin P. A review on quantum computing and deep learning algorithms and their applications. Soft comput 2022; 27:1-20. [PMID: 35411203 PMCID: PMC8988117 DOI: 10.1007/s00500-022-07037-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/09/2022] [Indexed: 11/01/2022]
Abstract
In this paper, we present a review of the Quantum Computing (QC) and Deep Learning (DL) areas and their applications in Computational Intelligence (CI). Quantum algorithms (QAs) engage the rules of quantum mechanics to solve problems using quantum information, that is, information about the state of a quantum system, which can be manipulated using quantum information algorithms and other processing techniques. Many QAs have now been proposed, and their general conclusion is that exploiting the effects of quantum mechanics yields a significant speedup (exponential, polynomial, or super-polynomial) over traditional algorithms. This implies that some complex problems currently intractable with traditional algorithms could be solved with QAs. DL algorithms, on the other hand, are a family of machine learning techniques concerned with teaching a computer to filter inputs through layers in order to learn how to predict and classify information. Observations can be in the form of plain text, images, or sound, and the inspiration for deep learning is the way the human brain filters information. Therefore, in this review we analyze these two areas to survey the most relevant works and applications developed by researchers worldwide.
Collapse
Affiliation(s)
- Fevrier Valdez
- Tijuana Institute of Technology, Calzada Tecnologico S/N, 22414 Tijuana, BC Mexico
| | - Patricia Melin
- Tijuana Institute of Technology, Calzada Tecnologico S/N, 22414 Tijuana, BC Mexico
| |
Collapse
|
28
|
Yang TH, Horng MH, Li RS, Sun YN. Scaphoid Fracture Detection by Using Convolutional Neural Network. Diagnostics (Basel) 2022; 12:diagnostics12040895. [PMID: 35453943 PMCID: PMC9024757 DOI: 10.3390/diagnostics12040895] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Revised: 03/28/2022] [Accepted: 03/30/2022] [Indexed: 11/21/2022] Open
Abstract
Scaphoid fractures frequently appear in injury radiographs, but approximately 20% are occult. The few existing studies on detecting fractures in scaphoid X-ray images have shown limited effectiveness. Traditional image processing techniques have been applied to segment regions of interest in X-ray images, but they require manual intervention and considerable computation time. Convolutional neural network models are now widely applied to medical image recognition; thus, this study proposes a two-stage convolutional neural network to detect scaphoid fractures. In the first stage, the scaphoid bone is separated from the X-ray image using a Faster R-CNN network. The second stage uses a ResNet backbone for feature extraction and combines a feature pyramid network with a convolutional block attention module to develop the detection and classification models for scaphoid fractures. Metrics such as recall, precision, sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve (AUC) were used to evaluate the proposed method's performance. Scaphoid bone detection achieved an accuracy of 99.70%. Scaphoid fracture detection with the rotational bounding box achieved a recall of 0.789, precision of 0.894, accuracy of 0.853, sensitivity of 0.789, specificity of 0.90, and AUC of 0.920. The resulting scaphoid fracture classification had a recall of 0.735, precision of 0.898, accuracy of 0.829, sensitivity of 0.735, specificity of 0.920, and AUC of 0.917. The experimental results indicate that the proposed method provides an effective reference for detecting scaphoid fractures. Future work should integrate anterior-posterior and lateral views of each participant to develop more powerful convolutional neural networks for radiographic fracture detection.
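The evaluation metrics listed above can be reproduced from labels and predicted probabilities with a few lines of scikit-learn; the numbers below are toy values, not the study's data.

```python
# Minimal sketch (not the authors' evaluation code): computing sensitivity,
# specificity, and AUC for a binary fracture classifier from ground-truth
# labels and predicted probabilities.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                       # 1 = fracture (toy labels)
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.35, 0.8, 0.6])      # model scores (toy values)
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)            # recall on fracture cases
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
      f"AUC={roc_auc_score(y_true, y_prob):.3f}")
```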
Collapse
Affiliation(s)
- Tai-Hua Yang
- Department of Biomedical Engineering, National Cheng Kung University, Tainan 701, Taiwan;
- Department of Orthopedic Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan 704, Taiwan
| | - Ming-Huwi Horng
- Department of Computer Science and Information Engineering, National Pingtung University, Pingtung 912, Taiwan;
| | - Rong-Shiang Li
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan 701, Taiwan;
| | - Yung-Nien Sun
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan 701, Taiwan;
- Correspondence: ; Tel.: +886-6-2757575 (ext. 62526)
| |
Collapse
|
29
|
Development of Stereo NIR-II Fluorescence Imaging System for 3D Tumor Vasculature in Small Animals. BIOSENSORS 2022; 12:bios12020085. [PMID: 35200345 PMCID: PMC8869613 DOI: 10.3390/bios12020085] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 01/28/2022] [Accepted: 01/28/2022] [Indexed: 11/16/2022]
Abstract
Near-infrared-II (NIR-II, 1000–1700 nm) fluorescence imaging offers high spatial resolution and deep tissue penetration owing to low light scattering, reduced photon absorption, and low tissue autofluorescence. NIR-II biological imaging is applied mainly in the noninvasive visualization of blood vessels and tumors in deep tissue. In this study, a stereo NIR-II fluorescence imaging system was developed to acquire three-dimensional (3D) images of tumor vasculature in real time, building on the development of fluorescent semiconducting polymer dots (IR-TPE Pdots) with ultra-bright NIR-II fluorescence (1000–1400 nm) and the high stability needed for long-term fluorescence imaging. The NIR-II imaging system consists of only one InGaAs camera and a moving stage that simulates left-eye and right-eye views for the construction of 3D in-depth blood vessel images. The system was validated with a blood vessel phantom of tumor-bearing mice and was applied successfully to obtain 3D blood vessel images with 0.6 mm and 5 mm depth resolution and 0.15 mm spatial resolution. The NIR-II stereo vision provides precise 3D information on the tumor microenvironment and blood vessel paths.
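Because the stage translation yields two views separated by a known baseline, feature depth follows the standard stereo relation Z = f·B/d. The sketch below uses assumed calibration values, not the paper's.

```python
# Minimal sketch (illustrative numbers, not the paper's calibration): with two
# views taken at a known baseline, the depth of a vessel feature follows the
# pinhole stereo relation  Z = f * B / disparity.
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Depth (mm) of a point seen with the given pixel disparity between the two views."""
    return focal_px * baseline_mm / disparity_px

focal_px = 1800.0      # focal length in pixels (assumed)
baseline_mm = 10.0     # stage translation between the two acquisitions (assumed)
for d in (30.0, 60.0, 120.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(focal_px, baseline_mm, d):6.1f} mm")
```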
Collapse
|
30
|
Yang G, Ye Q, Xia J. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. AN INTERNATIONAL JOURNAL ON INFORMATION FUSION 2022; 77:29-52. [PMID: 34980946 PMCID: PMC8459787 DOI: 10.1016/j.inffus.2021.07.016] [Citation(s) in RCA: 181] [Impact Index Per Article: 60.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 05/25/2021] [Accepted: 07/25/2021] [Indexed: 05/04/2023]
Abstract
Explainable Artificial Intelligence (XAI) is an emerging research topic of machine learning aimed at unboxing how AI systems' black-box choices are made. This research field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly. Many machine learning algorithms cannot manifest how and why a decision has been made; this is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability in these black-box models. XAI is becoming more and more crucial for deep learning-powered applications, especially in medical and healthcare studies, even though such deep neural networks generally deliver impressive performance. The insufficient explainability and transparency of most existing AI systems may be one of the major reasons why successful implementation and integration of AI tools into routine clinical practice remain uncommon. In this study, we first survey the current progress of XAI and, in particular, its advances in healthcare applications. We then introduce our solutions for XAI leveraging multi-modal and multi-centre data fusion and validate them in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses demonstrate the efficacy of the proposed XAI solutions, from which we envisage successful applications to a broader range of clinical questions.
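As a generic illustration of probing a black-box model, and explicitly not the fusion-based method proposed in the paper, the sketch below computes an occlusion-sensitivity map for a toy image classifier.

```python
# Minimal sketch of one generic post-hoc explanation technique (occlusion
# sensitivity), shown only to illustrate the black-box-probing idea; it is not
# the multi-modal fusion XAI method described in the abstract.
import numpy as np

def occlusion_map(predict, image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Grey out each patch in turn and record how much the model's score drops."""
    base = predict(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = image.copy()
            occluded[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = image.mean()
            heat[i, j] = base - predict(occluded)   # large drop => important region
    return heat

# Toy "classifier": score is simply the mean brightness of the centre region.
predict = lambda img: float(img[24:40, 24:40].mean())
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
print(np.round(occlusion_map(predict, img), 2))
```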
Collapse
Affiliation(s)
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London, UK
- Royal Brompton Hospital, London, UK
- Imperial Institute of Advanced Technology, Hangzhou, China
| | - Qinghao Ye
- Hangzhou Ocean’s Smart Boya Co., Ltd, China
- University of California, San Diego, La Jolla, CA, USA
| | - Jun Xia
- Radiology Department, Shenzhen Second People’s Hospital, Shenzhen, China
| |
Collapse
|
31
|
Brattain LJ, Pierce TT, Gjesteby LA, Johnson MR, DeLosa ND, Werblin JS, Gupta JF, Ozturk A, Wang X, Li Q, Telfer BA, Samir AE. AI-Enabled, Ultrasound-Guided Handheld Robotic Device for Femoral Vascular Access. BIOSENSORS 2021; 11:522. [PMID: 34940279 PMCID: PMC8699246 DOI: 10.3390/bios11120522] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/26/2021] [Revised: 11/17/2021] [Accepted: 12/15/2021] [Indexed: 05/27/2023]
Abstract
Hemorrhage is a leading cause of trauma death, particularly in prehospital environments when evacuation is delayed. Obtaining central vascular access to a deep artery or vein is important for administration of emergency drugs and analgesics, and rapid replacement of blood volume, as well as invasive sensing and emerging life-saving interventions. However, central access is normally performed by highly experienced critical care physicians in a hospital setting. We developed a handheld AI-enabled interventional device, AI-GUIDE (Artificial Intelligence Guided Ultrasound Interventional Device), capable of directing users with no ultrasound or interventional expertise to catheterize a deep blood vessel, with an initial focus on the femoral vein. AI-GUIDE integrates with widely available commercial portable ultrasound systems and guides a user in ultrasound probe localization, venous puncture-point localization, and needle insertion. The system performs vascular puncture robotically and incorporates a preloaded guidewire to facilitate the Seldinger technique of catheter insertion. Results from tissue-mimicking phantom and porcine studies under normotensive and hypotensive conditions provide evidence of the technique's robustness, with key performance metrics in a live porcine model including: a mean time to acquire femoral vein insertion point of 53 ± 36 s (5 users with varying experience, in 20 trials), a total time to insert catheter of 80 ± 30 s (1 user, in 6 trials), and a mean number of 1.1 (normotensive, 39 trials) and 1.3 (hypotensive, 55 trials) needle insertion attempts (1 user). These performance metrics in a porcine model are consistent with those for experienced medical providers performing central vascular access on humans in a hospital.
Collapse
Affiliation(s)
- Laura J. Brattain
- Lincoln Laboratory, Massachusetts Institute of Technology, Lexington, MA 02421, USA; (L.J.B.); (L.A.G.); (M.R.J.); (N.D.D.); (J.S.W.); (J.F.G.)
| | - Theodore T. Pierce
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; (T.T.P.); (A.O.); (X.W.); (Q.L.); (A.E.S.)
| | - Lars A. Gjesteby
- Lincoln Laboratory, Massachusetts Institute of Technology, Lexington, MA 02421, USA; (L.J.B.); (L.A.G.); (M.R.J.); (N.D.D.); (J.S.W.); (J.F.G.)
| | - Matthew R. Johnson
- Lincoln Laboratory, Massachusetts Institute of Technology, Lexington, MA 02421, USA; (L.J.B.); (L.A.G.); (M.R.J.); (N.D.D.); (J.S.W.); (J.F.G.)
| | - Nancy D. DeLosa
- Lincoln Laboratory, Massachusetts Institute of Technology, Lexington, MA 02421, USA; (L.J.B.); (L.A.G.); (M.R.J.); (N.D.D.); (J.S.W.); (J.F.G.)
| | - Joshua S. Werblin
- Lincoln Laboratory, Massachusetts Institute of Technology, Lexington, MA 02421, USA; (L.J.B.); (L.A.G.); (M.R.J.); (N.D.D.); (J.S.W.); (J.F.G.)
| | - Jay F. Gupta
- Lincoln Laboratory, Massachusetts Institute of Technology, Lexington, MA 02421, USA; (L.J.B.); (L.A.G.); (M.R.J.); (N.D.D.); (J.S.W.); (J.F.G.)
| | - Arinc Ozturk
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; (T.T.P.); (A.O.); (X.W.); (Q.L.); (A.E.S.)
| | - Xiaohong Wang
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; (T.T.P.); (A.O.); (X.W.); (Q.L.); (A.E.S.)
| | - Qian Li
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; (T.T.P.); (A.O.); (X.W.); (Q.L.); (A.E.S.)
| | - Brian A. Telfer
- Lincoln Laboratory, Massachusetts Institute of Technology, Lexington, MA 02421, USA; (L.J.B.); (L.A.G.); (M.R.J.); (N.D.D.); (J.S.W.); (J.F.G.)
| | - Anthony E. Samir
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; (T.T.P.); (A.O.); (X.W.); (Q.L.); (A.E.S.)
| |
Collapse
|
32
|
Sarker S, Jamal L, Ahmed SF, Irtisam N. Robotics and artificial intelligence in healthcare during COVID-19 pandemic: A systematic review. ROBOTICS AND AUTONOMOUS SYSTEMS 2021; 146:103902. [PMID: 34629751 PMCID: PMC8493645 DOI: 10.1016/j.robot.2021.103902] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Revised: 09/03/2021] [Accepted: 09/13/2021] [Indexed: 05/05/2023]
Abstract
The outbreak of the COVID-19 pandemic is unarguably the biggest catastrophe of the 21st century and probably the most significant global crisis since the Second World War. The rapid spreading capability of the virus has compelled the world population to maintain strict preventive measures, and the outbreak has taken a tremendous toll on the healthcare sector. The pandemic created a huge demand for essential healthcare equipment and medicines, along with a requirement for advanced robotics and artificial intelligence-based applications. Intelligent robot systems have great potential to render service in diagnosis, risk assessment, monitoring, telehealthcare, disinfection, and several other operations during this pandemic, which has helped reduce the workload of frontline workers remarkably. The long-awaited discovery of vaccines against this deadly virus has also been greatly accelerated by AI-empowered tools, and many robotics and Robotic Process Automation platforms have substantially facilitated vaccine distribution. These frontier technologies have also helped comfort people dealing with often-overlooked mental health complications. This paper investigates the use of robotics and artificial intelligence-based technologies and their applications in healthcare to fight the COVID-19 pandemic. A systematic search following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method was conducted to identify such literature, and an extensive review of 147 selected records was performed.
Collapse
Affiliation(s)
- Sujan Sarker
- Department of Robotics and Mechatronics Engineering, University of Dhaka, Dhaka, Bangladesh
| | - Lafifa Jamal
- Department of Robotics and Mechatronics Engineering, University of Dhaka, Dhaka, Bangladesh
| | - Syeda Faiza Ahmed
- Department of Robotics and Mechatronics Engineering, University of Dhaka, Dhaka, Bangladesh
| | - Niloy Irtisam
- Department of Robotics and Mechatronics Engineering, University of Dhaka, Dhaka, Bangladesh
| |
Collapse
|
33
|
Automatic and accurate needle detection in 2D ultrasound during robot-assisted needle insertion process. Int J Comput Assist Radiol Surg 2021; 17:295-303. [PMID: 34677747 DOI: 10.1007/s11548-021-02519-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Accepted: 10/05/2021] [Indexed: 10/20/2022]
Abstract
PURPOSE: Robot-assisted needle insertion guided by 2D ultrasound (US) can effectively improve the accuracy and success rate of clinical puncture. To this end, automatic and accurate needle-tracking methods are important for monitoring the puncture process, preventing the needle from deviating from the intended path, and reducing the risk of injury to surrounding tissues. This work aims to develop a framework for automatic and accurate detection of an inserted needle in 2D US images during the insertion process.
METHODS: We propose a novel convolutional neural network architecture comprising a two-channel encoder and a single-channel decoder for needle segmentation, using needle motion information extracted from two adjacent US image frames. Based on this network, we further propose an automatic needle detection framework: according to the prediction for the previous frame, a region of interest around the needle is extracted from the US image and fed into the network, enabling finer and faster continuous needle localization.
RESULTS: The performance of our method was evaluated on 1000 pairs of US images extracted from robot-assisted needle insertions into freshly excised bovine and porcine tissues. The needle segmentation network achieved 99.7% accuracy, 86.2% precision, 89.1% recall, and an F1-score of 0.87. The needle detection framework successfully localized the needle with a mean tip error of 0.45 ± 0.33 mm and a mean orientation error of 0.42° ± 0.34°, with a total processing time of 50 ms per image.
CONCLUSION: The proposed framework demonstrated the capability to realize robust, accurate, and real-time needle localization during robot-assisted needle insertion. It has promising applications in tracking the needle and ensuring the safety of robot-assisted automatic puncture during challenging US-guided minimally invasive procedures.
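A minimal sketch of the ROI-plus-two-frame input described in the methods, with assumed crop size and frame shapes (not the authors' code):

```python
# Minimal sketch (assumed details, not the authors' code): crop a region of
# interest around the needle tip predicted in the previous frame, then stack
# two adjacent ultrasound frames as a 2-channel input for the segmenter.
# Clamping at the far image border is omitted for brevity.
import numpy as np

def crop_roi(frame: np.ndarray, centre: tuple, half: int = 32) -> np.ndarray:
    r, c = centre
    r0, c0 = max(r - half, 0), max(c - half, 0)
    return frame[r0:r0 + 2 * half, c0:c0 + 2 * half]

prev_frame = np.random.rand(256, 256).astype(np.float32)    # US frame t-1 (toy data)
curr_frame = np.random.rand(256, 256).astype(np.float32)    # US frame t   (toy data)
prev_tip = (130, 140)                                        # tip predicted on frame t-1

roi_prev = crop_roi(prev_frame, prev_tip)
roi_curr = crop_roi(curr_frame, prev_tip)
two_channel = np.stack([roi_prev, roi_curr], axis=0)         # (2, 64, 64) network input
print(two_channel.shape)
```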
Collapse
|
34
|
Salman H, Akkar HA. An intelligent controller for ultrasound-based venipuncture through precise vein localization and stable needle insertion. APPLIED NANOSCIENCE 2021. [DOI: 10.1007/s13204-021-02058-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
35
|
Malpani R, Petty CW, Bhatt N, Staib LH, Chapiro J. Use of Artificial Intelligence in Non-Oncologic Interventional Radiology: Current State and Future Directions. DIGESTIVE DISEASE INTERVENTIONS 2021; 5:331-337. [PMID: 35005333 PMCID: PMC8740955 DOI: 10.1055/s-0041-1726300] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The future of radiology is disproportionately linked to the applications of artificial intelligence (AI). Recent exponential advancements in AI are already beginning to augment the clinical practice of radiology. Motivated by the paucity of review articles in this area, this article discusses applications of AI in non-oncologic interventional radiology (IR) across procedural planning, execution, and follow-up, along with the future directions of the field. Applications in vascular imaging, radiomics, touchless software interactions, robotics, natural language processing, post-procedural outcome prediction, device navigation, and image acquisition are included. Familiarity with AI study analysis will help open the current 'black box' of AI research and bridge the gap between the research laboratory and clinical practice.
Collapse
Affiliation(s)
- Rohil Malpani
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
| | - Christopher W. Petty
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
| | - Neha Bhatt
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
| | - Lawrence H. Staib
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
| | - Julius Chapiro
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
| |
Collapse
|
36
|
Vagvolgyi BP, Khrenov M, Cope J, Deguet A, Kazanzides P, Manzoor S, Taylor RH, Krieger A. Telerobotic Operation of Intensive Care Unit Ventilators. Front Robot AI 2021; 8:612964. [PMID: 34250025 PMCID: PMC8264200 DOI: 10.3389/frobt.2021.612964] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Accepted: 06/07/2021] [Indexed: 01/18/2023] Open
Abstract
Since the first reports of a novel coronavirus (SARS-CoV-2) in December 2019, over 33 million people have been infected worldwide and approximately 1 million have died from the disease caused by this virus, COVID-19. In the United States alone, there have been approximately 7 million cases and over 200,000 deaths. This outbreak has placed an enormous strain on healthcare systems and workers. Severe cases require hospital care, and 8.5% of patients require mechanical ventilation in an intensive care unit (ICU). One major challenge is the necessity for clinical care personnel to don and doff cumbersome personal protective equipment (PPE) in order to enter an ICU and make simple adjustments to ventilator settings. Although future ventilators and other ICU equipment may be controllable remotely through computer networks, the enormous installed base of existing ventilators does not have this capability. This paper reports the development of a simple, low-cost telerobotic system that permits adjustment of ventilator settings from outside the ICU. The system consists of a small Cartesian robot capable of operating a ventilator touch screen with camera vision control via a wirelessly connected tablet master device located outside the room. Engineering system tests demonstrated that the open-loop mechanical repeatability of the device was 7.5 mm, and that the average positioning error of the robotic finger under visual servoing control was 5.94 mm. Successful usability tests in a simulated ICU environment were carried out and are reported. In addition to enabling a significant reduction in PPE consumption, the prototype system has been shown in a preliminary evaluation to significantly reduce the total time required for a respiratory therapist to perform typical setting adjustments on a commercial ventilator, including donning and doffing PPE, from 271 to 109 s.
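The reported visual-servoing behaviour can be pictured with a simple proportional control step that turns the pixel error between the detected fingertip and the target button into a small Cartesian correction; the gain, pixel scale, and deadband below are assumptions, not the authors' controller.

```python
# Minimal sketch (assumed gains and scale, not the authors' controller): one
# proportional visual-servoing step for a Cartesian robot finger over a
# ventilator touch screen, driven by camera-detected pixel positions.
def servo_step(target_px, finger_px, mm_per_px=0.25, gain=0.5, deadband_mm=1.0):
    """Return the (dx, dy) move in mm, or (0, 0) once within the deadband."""
    ex_mm = (target_px[0] - finger_px[0]) * mm_per_px
    ey_mm = (target_px[1] - finger_px[1]) * mm_per_px
    if (ex_mm ** 2 + ey_mm ** 2) ** 0.5 < deadband_mm:
        return 0.0, 0.0
    return gain * ex_mm, gain * ey_mm

finger = [100.0, 80.0]                 # fingertip position in the camera image (px)
target = (160.0, 120.0)                # button centre detected by the camera (px)
for _ in range(10):
    dx, dy = servo_step(target, finger)
    if dx == dy == 0.0:
        break
    finger[0] += dx / 0.25             # pretend the robot executed the move exactly
    finger[1] += dy / 0.25
print("final pixel position:", finger)
```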
Collapse
Affiliation(s)
- Balazs P Vagvolgyi
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Mikhail Khrenov
- Department of Mechanical Engineering, A. James Clark School of Engineering, University of Maryland, College Park, MD, United States
| | - Jonathan Cope
- Anaesthesia and Critical Care Medicine, Johns Hopkins Hospital, Baltimore, MD, United States
| | - Anton Deguet
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Peter Kazanzides
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Sajid Manzoor
- Anaesthesia and Critical Care Medicine, Johns Hopkins Hospital, Baltimore, MD, United States
| | - Russell H Taylor
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Axel Krieger
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States.,Department of Mechanical Engineering, A. James Clark School of Engineering, University of Maryland, College Park, MD, United States
| |
Collapse
|
37
|
Rasmussen ET, Shiao EC, Zourelias L, Halbreiner MS, Passineau MJ, Murali S, Riviere CN. Coronary vessel detection methods for organ-mounted robots. Int J Med Robot 2021; 17:e2297. [PMID: 34081821 DOI: 10.1002/rcs.2297] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 05/31/2021] [Accepted: 06/01/2021] [Indexed: 12/30/2022]
Abstract
BACKGROUND: HeartLander is a tethered robotic walker that uses suction to adhere to the beating heart. It can be used for minimally invasive administration of cardiac medications or for tissue ablation. In order to administer injections safely, HeartLander must avoid the coronary vasculature.
METHODS: Doppler ultrasound signals were recorded using a custom-made cardiac phantom and used to classify different coronary vessel properties. Classification was performed by two machine learning algorithms, a support vector machine and a deep convolutional neural network. These algorithms were then validated in animal trials.
RESULTS: Accuracy in identifying vessels above turbulent flow reached greater than 92% in phantom trials and greater than 98% in animal trials.
CONCLUSIONS: Through the use of two machine learning algorithms, HeartLander has shown the ability to identify vasculature of different sizes proximally above turbulent flow. These results indicate that it is feasible to use Doppler ultrasound to identify and avoid coronary vasculature during cardiac interventions with HeartLander.
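One of the two classifiers is a support vector machine; a toy sketch of that idea, using invented summary features rather than the study's Doppler pipeline, looks like this:

```python
# Minimal sketch (toy features, not the study's pipeline): train a support
# vector machine to separate Doppler recordings taken above a vessel
# (turbulent flow) from recordings over avascular tissue.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# feature 1: spectral power in a "flow" band, feature 2: signal variance (both illustrative)
vessel = rng.normal([3.0, 2.0], 0.5, size=(40, 2))
tissue = rng.normal([1.0, 0.5], 0.5, size=(40, 2))
X = np.vstack([vessel, tissue])
y = np.array([1] * 40 + [0] * 40)          # 1 = vessel underneath, avoid injecting here

clf = SVC(kernel="rbf").fit(X, y)
print("predicted class for a new reading:", clf.predict([[2.8, 1.7]])[0])
```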
Collapse
Affiliation(s)
- Eric T Rasmussen
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
| | - Eric C Shiao
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
| | - Lee Zourelias
- Cardiovascular Institute, Allegheny Health Network, Pittsburgh, Pennsylvania, USA
| | - Michael S Halbreiner
- Cardiovascular Institute, Allegheny Health Network, Pittsburgh, Pennsylvania, USA
| | - Michael J Passineau
- Cardiovascular Institute, Allegheny Health Network, Pittsburgh, Pennsylvania, USA
| | - Srinivas Murali
- Cardiovascular Institute, Allegheny Health Network, Pittsburgh, Pennsylvania, USA
| | - Cameron N Riviere
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
| |
Collapse
|
38
|
Gao A, Murphy RR, Chen W, Dagnino G, Fischer P, Gutierrez MG, Kundrat D, Nelson BJ, Shamsudhin N, Su H, Xia J, Zemmar A, Zhang D, Wang C, Yang GZ. Progress in robotics for combating infectious diseases. Sci Robot 2021; 6:6/52/eabf1462. [PMID: 34043552 DOI: 10.1126/scirobotics.abf1462] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Accepted: 03/09/2021] [Indexed: 12/24/2022]
Abstract
The world was unprepared for the COVID-19 pandemic, and recovery is likely to be a long process. Robots have long been heralded to take on dangerous, dull, and dirty jobs, often in environments that are unsuitable for humans. Could robots be used to fight future pandemics? We review the fundamental requirements for robotics for infectious disease management and outline how robotic technologies can be used in different scenarios, including disease prevention and monitoring, clinical care, laboratory automation, logistics, and maintenance of socioeconomic activities. We also address some of the open challenges for developing advanced robots that are application oriented, reliable, safe, and rapidly deployable when needed. Last, we look at the ethical use of robots and call for globally sustained efforts in order for robots to be ready for future outbreaks.
Collapse
Affiliation(s)
- Anzhu Gao
- Institute of Medical Robotics, Shanghai Jiao Tong University, 200240 Shanghai, China.,Department of Automation, Shanghai Jiao Tong University, 200240 Shanghai, China
| | - Robin R Murphy
- Humanitarian Robotics and AI Laboratory, Texas A&M University, College Station, TX, USA
| | - Weidong Chen
- Institute of Medical Robotics, Shanghai Jiao Tong University, 200240 Shanghai, China.,Department of Automation, Shanghai Jiao Tong University, 200240 Shanghai, China
| | - Giulio Dagnino
- Hamlyn Centre for Robotic Surgery, Imperial College London, London SW7 2AZ, UK.,University of Twente, Enschede, Netherlands
| | - Peer Fischer
- Institute of Physical Chemistry, University of Stuttgart, Stuttgart, Germany.,Micro, Nano, and Molecular Systems Laboratory, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
| | | | - Dennis Kundrat
- Hamlyn Centre for Robotic Surgery, Imperial College London, London SW7 2AZ, UK
| | | | | | - Hao Su
- Biomechatronics and Intelligent Robotics Lab, Department of Mechanical Engineering, City University of New York, City College, New York, NY 10031, USA
| | - Jingen Xia
- Department of Pulmonary and Critical Care Medicine, Center of Respiratory Medicine, China-Japan Friendship Hospital, 100029 Beijing, China.,National Center for Respiratory Medicine, 100029 Beijing, China.,Institute of Respiratory Medicine, Chinese Academy of Medical Sciences, 100029 Beijing, China.,National Clinical Research Center for Respiratory Diseases, 100029 Beijing, China
| | - Ajmal Zemmar
- Department of Neurosurgery, Henan Provincial People's Hospital, Henan University People's Hospital, Henan University School of Medicine, 7 Weiwu Road, 450000 Zhengzhou, China.,Department of Neurosurgery, University of Louisville, School of Medicine, 200 Abraham Flexner Way, Louisville, KY 40202, USA
| | - Dandan Zhang
- Hamlyn Centre for Robotic Surgery, Imperial College London, London SW7 2AZ, UK
| | - Chen Wang
- Department of Pulmonary and Critical Care Medicine, Center of Respiratory Medicine, China-Japan Friendship Hospital, 100029 Beijing, China.,National Center for Respiratory Medicine, 100029 Beijing, China.,Institute of Respiratory Medicine, Chinese Academy of Medical Sciences, 100029 Beijing, China.,National Clinical Research Center for Respiratory Diseases, 100029 Beijing, China.,Chinese Academy of Medical Sciences, Peking Union Medical College, 100730 Beijing, China
| | - Guang-Zhong Yang
- Institute of Medical Robotics, Shanghai Jiao Tong University, 200240 Shanghai, China.
| |
Collapse
|
39
|
Kasperbauer TJ. Conflicting roles for humans in learning health systems and AI-enabled healthcare. J Eval Clin Pract 2021; 27:537-542. [PMID: 33164284 DOI: 10.1111/jep.13510] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 10/23/2020] [Accepted: 10/26/2020] [Indexed: 12/31/2022]
Abstract
The goals of learning health systems (LHS) and of AI in medicine overlap in many respects. Both require significant improvements in data sharing and IT infrastructure, aim to provide more personalized care for patients, and strive to break down traditional barriers between research and care. However, the defining features of LHS and AI diverge when it comes to the people involved in medicine, both patients and providers. LHS aim to enhance physician-patient relationships while developments in AI emphasize a physicianless experience. LHS also encourage better coordination of specialists across the health system, but AI aims to replace many specialists with technology and algorithms. This paper argues that these points of conflict may require a reconsideration of the role of humans in medical decision making. Although it is currently unclear to what extent machines will replace humans in healthcare, the parallel development of LHS and AI raises important questions about the exact role for humans within AI-enabled healthcare.
Collapse
Affiliation(s)
- T J Kasperbauer
- Indiana University Center for Bioethics, Indiana University School of Medicine, Indianapolis, Indiana, USA
| |
Collapse
|
40
|
Su B, Yu S, Li X, Gong Y, Li H, Ren Z, Xia Y, Wang H, Zhang Y, Yao W, Wang J, Tang J. Autonomous Robot for Removing Superficial Traumatic Blood. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE-JTEHM 2021; 9:2600109. [PMID: 33598368 PMCID: PMC7880304 DOI: 10.1109/jtehm.2021.3056618] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Revised: 01/16/2021] [Accepted: 01/29/2021] [Indexed: 11/09/2022]
Abstract
Objective: Removing blood from an incision to reveal the incision site is a key task during surgery; otherwise, excessive blood loss will endanger the patient's life. However, repetitive manual blood removal imposes a heavy workload that contributes to surgeon fatigue. Thus, it is valuable to design a robotic system that can automatically remove blood from the incision surface. Methods: In this paper, we design a robotic system to fulfill the surgical task of blood removal. The system consists of a pair of cameras, a 6-DoF robotic arm, an aspirator whose handle is fixed to the arm, and a pump connected to the aspirator. Further, a path-planning algorithm is designed to generate the path that the aspirator tip should follow to remove blood. Results: In a group of simulated bleeding experiments on ex vivo porcine tissue, the contour of the blood region is detected and the spatial coordinates of the detected contour are reconstructed. The blood removal robot (BRR) thoroughly cleans the blood running out of the incision. Conclusions: This study contributes the first result on the design of an autonomous blood removal medical robot. The surgical blood-removal skill, currently performed manually by surgeons, can alternatively be carried out by the proposed BRR.
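A minimal sketch of the perception and path-planning idea, using simple colour thresholding plus a back-and-forth sweep over the detected region (not the authors' planner; the thresholds and step size are assumptions):

```python
# Minimal sketch (not the authors' planner): find the blood region in a colour
# image by a red-dominance threshold, then sweep the aspirator tip over the
# region's bounding box in a back-and-forth (boustrophedon) path.
import numpy as np

def blood_mask(rgb: np.ndarray) -> np.ndarray:
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    return (r > 120) & (r - g > 40) & (r - b > 40)      # thresholds are illustrative

def sweep_path(mask: np.ndarray, row_step: int = 5):
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return []
    top, bottom, left, right = ys.min(), ys.max(), xs.min(), xs.max()
    path, flip = [], False
    for row in range(top, bottom + 1, row_step):
        ends = (right, left) if flip else (left, right)
        path += [(row, ends[0]), (row, ends[1])]
        flip = not flip
    return path

img = np.zeros((100, 100, 3), dtype=np.uint8)
img[30:60, 40:70] = (180, 20, 20)                        # synthetic blood patch
waypoints = sweep_path(blood_mask(img))
print(f"{len(waypoints)} waypoints, first three: {waypoints[:3]}")
```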
Collapse
Affiliation(s)
- Baiquan Su
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
| | - Shi Yu
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
| | - Xintong Li
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
| | - Yi Gong
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
| | - Han Li
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
| | - Zifeng Ren
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
| | - Yijing Xia
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
| | - He Wang
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
| | - Yucheng Zhang
- Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
| | - Wei Yao
- Department of Gastroenterology, Peking University Third Hospital, Beijing 100191, China
| | - Junchen Wang
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China.,Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100086, China
| | - Jie Tang
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing 100053, China
| |
Collapse
|
41
|
|
42
|
Zou Z, Zhao R, Wu Y, Yang Z, Tian L, Wu S, Wang G, Yu Y, Zhao Q, Chen M, Pei J, Chen F, Zhang Y, Song S, Zhao M, Shi L. A hybrid and scalable brain-inspired robotic platform. Sci Rep 2020; 10:18160. [PMID: 33097742 PMCID: PMC7584638 DOI: 10.1038/s41598-020-73366-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Accepted: 08/31/2020] [Indexed: 01/04/2023] Open
Abstract
Recent years have witnessed tremendous progress in intelligent robots brought about by mimicking human intelligence. However, current robots are still far from being able to handle multiple tasks in a dynamic environment as efficiently as humans. To cope with complexity and variability, further progress toward scalability and adaptability is essential for intelligent robots. Here, we report a brain-inspired robotic platform, implemented on an unmanned bicycle, that exhibits scalability in network scale, quantity, and diversity to handle the changing needs of different scenarios. The platform adopts rich coding schemes and a trainable and scalable neural state machine, enabling flexible cooperation of hybrid networks. In addition, an embedded system is developed using a cross-paradigm neuromorphic chip to facilitate the implementation of diverse neural networks in spike or non-spike form. The platform carried out various real-time tasks concurrently in different real-world scenarios, providing a new pathway to enhance robots' intelligence.
Collapse
Affiliation(s)
- Zhe Zou
- Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing, 100084, China
| | - Rong Zhao
- Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing, 100084, China
| | - Yujie Wu
- Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing, 100084, China
| | - Zheyu Yang
- Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing, 100084, China
| | - Lei Tian
- Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing, 100084, China
| | - Shuang Wu
- Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing, 100084, China
| | - Guanrui Wang
- Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing, 100084, China
| | - Yongchao Yu
- Department of Automation, Tsinghua University, Beijing, 100084, China
| | - Qi Zhao
- Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing, 100084, China
| | - Mingwang Chen
- Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing, 100084, China
| | - Jing Pei
- Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing, 100084, China
| | - Feng Chen
- Department of Automation, Tsinghua University, Beijing, 100084, China
| | - Youhui Zhang
- Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, China
| | - Sen Song
- Department of Biomedical Engineering, Tsinghua University, Beijing, 100084, China
| | - Mingguo Zhao
- Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing, 100084, China.
- Department of Automation, Tsinghua University, Beijing, 100084, China.
| | - Luping Shi
- Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing, 100084, China.
| |
Collapse
|
43
|
|
44
|
Abstract
In this chapter we discuss the past, present, and future of clinical biomarker development. We explore the advent of new technologies that are shaping the way health, medicine, and disease are understood. This review covers the identification of physicochemical assays, current regulations, the development and reproducibility of clinical trials, as well as the revolution of omics technologies and state-of-the-art integration and analysis approaches.
Collapse
|