1. Szeremeta M, Janica J, Niemcunowicz-Janica A. Artificial intelligence in forensic medicine and related sciences - selected issues. Archives of Forensic Medicine and Criminology 2024;74:64-76. [PMID: 39450596] [DOI: 10.4467/16891716amsik.24.005.19650] [Received: 02/23/2024] [Accepted: 04/16/2024]
Abstract
Aim: To provide an overview of the potential applications of artificial intelligence in forensic medicine and related sciences, and to identify concerns related to providing medico-legal opinions and legal liability in cases in which harm in diagnosis and/or treatment may occur when an advanced computer-based system of information processing and analysis is used.
Material and methods: The material for the study comprised scientific literature on artificial intelligence in forensic medicine and related sciences. The Google Scholar, PubMed and ScienceDirect databases were searched using the terms "artificial intelligence," "deep learning," "machine learning," "forensic medicine," "legal medicine," "forensic pathology" and "medicine." In some cases, articles were identified based on the semantic proximity of the search terms.
Conclusions: The dynamic growth of computing power and the ability of artificial intelligence to analyze vast volumes of data have made it possible to transfer artificial intelligence methods to forensic medicine and related sciences. Artificial intelligence has numerous applications in these fields and can be helpful in thanatology, forensic traumatology, post-mortem identification examinations, and post-mortem microscopic and toxicological diagnostics. From the legal and medico-legal perspective, artificial intelligence in medicine should be treated as an auxiliary tool; the final diagnostic and therapeutic decisions, and the extent to which they are implemented, should remain the responsibility of humans.
Affiliation(s)
- Michał Szeremeta
- Department of Forensic Medicine, Medical University of Białystok, Poland
- Julia Janica
- Student's Scientific Group at the Department of Forensic Medicine, Poland
2. Fozilov K, Colan J, Davila A, Misawa K, Qiu J, Hayashi Y, Mori K, Hasegawa Y. Endoscope Automation Framework with Hierarchical Control and Interactive Perception for Multi-Tool Tracking in Minimally Invasive Surgery. Sensors (Basel, Switzerland) 2023;23:9865. [PMID: 38139711] [PMCID: PMC10748016] [DOI: 10.3390/s23249865] [Received: 11/10/2023] [Revised: 12/12/2023] [Accepted: 12/13/2023]
Abstract
In the context of Minimally Invasive Surgery, surgeons mainly rely on visual feedback during medical operations. In common procedures such as tissue resection, the automation of endoscopic control is crucial yet challenging, particularly due to the interactive dynamics of multi-agent operations and the necessity for real-time adaptation. This paper introduces a novel framework that unites a Hierarchical Quadratic Programming controller with an advanced interactive perception module. This integration addresses the need for adaptive visual field control and robust tool tracking in the operating scene, ensuring that surgeons and assistants have an optimal viewpoint throughout the surgical task. The proposed framework handles multiple objectives within predefined thresholds, ensuring efficient tracking even amidst changes in operating backgrounds, varying lighting conditions, and partial occlusions. Empirical validations in scenarios involving single, double, and quadruple tool tracking during tissue resection tasks have underscored the system's robustness and adaptability. The positive feedback from user studies, coupled with the low cognitive and physical strain reported by surgeons and assistants, highlights the system's potential for real-world application.
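The hierarchical control idea in this abstract — lower-priority objectives acting only where they do not disturb higher-priority ones — can be sketched as prioritized least-squares with nullspace projection. This is a minimal illustrative simplification (equality tasks only, none of the inequality thresholds a full hierarchical QP handles), and all names below are hypothetical, not from the paper.

```python
import numpy as np

def hierarchical_solve(tasks, n):
    """Solve prioritized linear tasks (A_i x = b_i) in strict order:
    each lower-priority task only acts in the nullspace of the tasks
    above it, so it can never perturb a higher-priority objective."""
    x = np.zeros(n)
    N = np.eye(n)  # projector onto the nullspace of all tasks solved so far
    for A, b in tasks:
        AN = A @ N
        # minimum-norm correction restricted to the remaining nullspace
        x = x + N @ np.linalg.pinv(AN) @ (b - A @ x)
        # shrink the nullspace by the directions this task constrained
        N = N @ (np.eye(n) - np.linalg.pinv(AN) @ AN)
    return x

# The secondary task wants x[0] = 5, but the primary task fixes x[0] = 1;
# the hierarchy satisfies the primary exactly and the secondary where possible.
tasks = [
    (np.array([[1.0, 0.0, 0.0]]), np.array([1.0])),                     # primary
    (np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]), np.array([2.0, 5.0])),  # secondary
]
x = hierarchical_solve(tasks, 3)  # -> [1.0, 2.0, 0.0]
```

The same priority structure extends to endoscope control by stacking, e.g., a field-of-view constraint above a tool-centering objective.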
Affiliation(s)
- Khusniddin Fozilov
- Department of Micro-Nano Mechanical Science and Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Aichi, Japan
- Jacinto Colan
- Department of Micro-Nano Mechanical Science and Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Aichi, Japan
- Ana Davila
- Institutes of Innovation for Future Society, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Aichi, Japan
- Kazunari Misawa
- Aichi Cancer Center Hospital, Chikusa Ward, Nagoya 464-8681, Aichi, Japan
- Jie Qiu
- Graduate School of Informatics, Nagoya University, Chikusa Ward, Nagoya 464-8601, Aichi, Japan
- Yuichiro Hayashi
- Graduate School of Informatics, Nagoya University, Chikusa Ward, Nagoya 464-8601, Aichi, Japan
- Kensaku Mori
- Graduate School of Informatics, Nagoya University, Chikusa Ward, Nagoya 464-8601, Aichi, Japan
- Yasuhisa Hasegawa
- Institutes of Innovation for Future Society, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Aichi, Japan
3. Seetohul J, Shafiee M, Sirlantzis K. Augmented Reality (AR) for Surgical Robotic and Autonomous Systems: State of the Art, Challenges, and Solutions. Sensors (Basel, Switzerland) 2023;23:6202. [PMID: 37448050] [DOI: 10.3390/s23136202] [Received: 04/22/2023] [Revised: 06/09/2023] [Accepted: 07/03/2023]
Abstract
Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision, as well as on improving access to minimally invasive surgery. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results on robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the future surgical clearance of ever-expanding AR technology.
Affiliation(s)
- Jenna Seetohul
- Mechanical Engineering Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
- Mahmood Shafiee
- Mechanical Engineering Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
- School of Mechanical Engineering Sciences, University of Surrey, Guildford GU2 7XH, UK
- Konstantinos Sirlantzis
- School of Engineering, Technology and Design, Canterbury Christ Church University, Canterbury CT1 1QU, UK
- Intelligent Interactions Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
4. Coupling Effect Suppressed Compact Surgical Robot with 7-Axis Multi-Joint Using Wire-Driven Method. Mathematics 2022. [DOI: 10.3390/math10101698]
Abstract
Laparoscopic surgery is currently the most prevalent surgical treatment method, but robotic surgery offers many advantages over it: it can minimize incisions, bleeding, and sequelae; reduce hospitalization, recovery time, and side effects; and it requires fewer surgical personnel. Robotic surgery technology is therefore constantly evolving. This paper proposes an ultra-compact 7-axis vertical multi-joint robot that employs the wire-driven method for minimally invasive surgery. The degrees of freedom and the motion coupling of the proposed robot are analyzed for control. The robot comprises seven joints in total, of which the seventh axis operates the forceps. The forceps joint (axis #7) performs only open and close functions, while the link can bend and rotate regardless of position change; this behavior can be analyzed by forward kinematics. When a degree of freedom rotates, the wires passing through it become twisted, producing a length change and a coupling phenomenon. The maximum rotation angle of a degree of freedom is 90°, and the passing wire is wound by the rotation of the wire pulley. If the first degree of freedom is rotated through the full range of 120°, the second degree of freedom is rotated to 90°, at which point the coupling phenomenon caused by the first rotation can be eliminated. Based on wire-driven control of the surgical robot, the wire length change and the joint angle change associated with the motor drive are correlated, and the position and orientation of the robot's end effector can be obtained through forward kinematic analysis. The coupling problem occurring in the wires connecting the robot's driving part can likewise be solved through kinematic analysis. It was therefore possible to test the position of the slave robot and the performance of the surgical forceps movement using the master system.
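The forward-kinematics and wire-coupling relations described in this abstract can be sketched with standard Denavit-Hartenberg transforms plus a lower-triangular coupling matrix mapping joint angles to wire length changes. The DH rows, pulley radius, and coupling coefficients below are illustrative placeholders, not the paper's actual parameters.

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform for one Denavit-Hartenberg row."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows, q):
    """Chain the per-joint transforms; returns the 4x4 end-effector pose."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_rows, q):
        T = T @ dh_transform(a, alpha, d, theta)
    return T

def wire_displacements(q, pulley_radius, coupling):
    """Wire length change = pulley radius * (own joint angle plus coupling
    contributions from upstream joints).  Driving the motors with the
    inverse of `coupling` cancels the coupling effect."""
    return pulley_radius * (coupling @ np.asarray(q))

# Two-joint planar chain with unit link lengths, q = (90 deg, -90 deg):
dh = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
T = forward_kinematics(dh, [np.pi / 2, -np.pi / 2])   # tip at (1, 1, 0)
C = np.array([[1.0, 0.0],
              [0.5, 1.0]])  # joint 1 couples into wire 2 (illustrative)
dl = wire_displacements([np.pi / 2, -np.pi / 2], 0.01, C)
```

With the coupling matrix known, commanding motor angles through its inverse removes the length change induced in downstream wires, which is the compensation idea the abstract describes.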
5. Gruijthuijsen C, Garcia-Peraza-Herrera LC, Borghesan G, Reynaerts D, Deprest J, Ourselin S, Vercauteren T, Vander Poorten E. Robotic Endoscope Control Via Autonomous Instrument Tracking. Front Robot AI 2022;9:832208. [PMID: 35480090] [PMCID: PMC9035496] [DOI: 10.3389/frobt.2022.832208] [Received: 12/09/2021] [Accepted: 02/17/2022]
Abstract
Many keyhole interventions rely on bi-manual handling of surgical instruments, forcing the main surgeon to rely on a second surgeon to act as a camera assistant. In addition to the burden of excessively involving surgical staff, this may lead to reduced image stability, increased task completion time and sometimes errors due to the monotony of the task. Robotic endoscope holders, controlled by a set of basic instructions, have been proposed as an alternative, but their unnatural handling may increase the cognitive load of the (solo) surgeon, which hinders their clinical acceptance. More seamless integration in the surgical workflow would be achieved if robotic endoscope holders collaborated with the operating surgeon via semantically rich instructions that closely resemble instructions that would otherwise be issued to a human camera assistant, such as "focus on my right-hand instrument." As a proof of concept, this paper presents a novel system that paves the way towards a synergistic interaction between surgeons and robotic endoscope holders. The proposed platform allows the surgeon to perform a bimanual coordination and navigation task, while a robotic arm autonomously performs the endoscope positioning tasks. Within our system, we propose a novel tooltip localization method based on surgical tool segmentation and a novel visual servoing approach that ensures smooth and appropriate motion of the endoscope camera. We validate our vision pipeline and run a user study of this system. The clinical relevance of the study is ensured through the use of a laparoscopic exercise validated by the European Academy of Gynaecological Surgery which involves bi-manual coordination and navigation. Successful application of our proposed system provides a promising starting point towards broader clinical adoption of robotic endoscope holders.
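The visual servoing step this abstract relies on — moving the endoscope so the tracked tooltip stays where the surgeon wants it in the image — can be sketched with the classic image-based visual servoing law v = -lambda * L+ * e. The interaction matrix below is the textbook form for a normalized image point at depth Z; the gain and depth are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ibvs_velocity(p, p_star, Z, lam=0.5):
    """Image-based visual servoing: camera twist [vx, vy, vz, wx, wy, wz]
    that drives the normalized image point p toward the target p_star.
    Uses the standard point-feature interaction matrix at depth Z."""
    x, y = p
    L = np.array([
        [-1.0 / Z,      0.0, x / Z,        x * y, -(1.0 + x**2),  y],
        [     0.0, -1.0 / Z, y / Z, 1.0 + y**2,        -x * y,   -x],
    ])
    e = np.asarray(p, dtype=float) - np.asarray(p_star, dtype=float)
    # least-squares camera twist that shrinks the image error
    return -lam * np.linalg.pinv(L) @ e

# Tooltip already at the desired image location -> no camera motion:
v = ibvs_velocity((0.1, 0.2), (0.1, 0.2), Z=1.0)  # -> zeros
```

In a full pipeline the point p would come from the segmentation-based tooltip localizer, and the resulting twist would be sent to the robotic endoscope holder; the paper's actual servoing law additionally shapes the motion for smoothness.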
Affiliation(s)
- Luis C. Garcia-Peraza-Herrera
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Department of Surgical and Interventional Engineering, King’s College London, London, United Kingdom
- Gianni Borghesan
- Department of Mechanical Engineering, KU Leuven, Leuven, Belgium
- Core Lab ROB, Flanders Make, Lommel, Belgium
- Jan Deprest
- Department of Development and Regeneration, Division Woman and Child, KU Leuven, Leuven, Belgium
- Sebastien Ourselin
- Department of Surgical and Interventional Engineering, King’s College London, London, United Kingdom
- Tom Vercauteren
- Department of Surgical and Interventional Engineering, King’s College London, London, United Kingdom