1. Heller MT, Maderbacher G, Schuster MF, Forchhammer L, Scharf M, Renkawitz T, Pagano S. Comparison of an AI-driven planning tool and manual radiographic measurements in total knee arthroplasty. Comput Struct Biotechnol J 2025;28:148-155. PMID: 40276217; PMCID: PMC12019206; DOI: 10.1016/j.csbj.2025.04.009.
Abstract
Background: Accurate preoperative planning in total knee arthroplasty (TKA) is essential. Traditional manual radiographic planning can be time-consuming and potentially prone to inaccuracies. This study investigates the performance of an AI-based radiographic planning tool in comparison with manual measurements in patients undergoing TKA, using a retrospective observational design to assess reliability and efficiency.

Methods: We retrospectively compared the Autoplan tool integrated within the mediCAD software (mediCAD Hectec GmbH, Altdorf, Germany), routinely implemented in our institutional workflow, with manual measurements performed by two orthopedic specialists on pre- and postoperative radiographs of 100 patients who underwent elective TKA. The following parameters were measured: leg length, mechanical axis deviation (MAD), mechanical lateral proximal femoral angle (mLPFA), anatomical mechanical angle (AMA), mechanical lateral distal femoral angle (mLDFA), joint line convergence angle (JLCA), mechanical medial proximal tibial angle (mMPTA), and mechanical tibiofemoral angle (mTFA). Intraclass correlation coefficients (ICCs) were calculated to assess measurement reliability, and the time required for each method was recorded.

Results: The Autoplan tool demonstrated high reliability (ICC > 0.90) compared with manual measurements for linear parameters (e.g., leg length and MAD). However, the angular measurements of mLPFA, JLCA, and AMA exhibited poor reliability (ICC < 0.50) among all raters. The Autoplan tool significantly reduced measurement time compared with manual measurements, with a mean time saving of 44.3 seconds per case (95% CI: 43.5-45.1 seconds, p < 0.001).

Conclusion: AI-assisted tools like the Autoplan tool in mediCAD offer substantial time savings and demonstrate reliable measurements for certain linear parameters in preoperative TKA planning. However, the low reliability observed for some measurements, even among experienced human raters, suggests inherent challenges in the radiographic assessment of angular parameters. Further development is needed to improve the accuracy of automated angular measurements and to address the inherent variability in their assessment.
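Since the reliability claims above hinge on the ICC, a minimal sketch of the Shrout-Fleiss two-way random-effects, single-rater form, ICC(2,1), is given below; the input values are hypothetical, and which ICC variant the authors actually used is an assumption.

```python
# Minimal sketch (not the authors' code): Shrout-Fleiss ICC(2,1) for a
# subjects-by-raters measurement matrix.
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """x: (n_subjects, k_raters) matrix of measurements."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)  # per-subject means
    col_means = x.mean(axis=0)  # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # between raters
    sse = np.sum((x - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                       # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical leg-length readings (mm): Autoplan plus two human raters.
ratings = np.array([[812.0, 814.5, 813.0],
                    [790.5, 791.0, 792.5],
                    [805.0, 804.0, 806.5]])
print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")
```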
Affiliation(s)
- Marie Theres Heller: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
- Guenther Maderbacher: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
- Marie Farina Schuster: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
- Lina Forchhammer: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
- Markus Scharf: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
- Tobias Renkawitz: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
- Stefano Pagano: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
2. Doornbos MCJ, Peek JJ, Maat APWM, Ruurda JP, De Backer P, Cornelissen BMW, Mahtab EAF, Sadeghi AH, Kluin J. Augmented Reality Implementation in Minimally Invasive Surgery for Future Application in Pulmonary Surgery: A Systematic Review. Surg Innov 2024;31:646-658. PMID: 39370802; PMCID: PMC11475712; DOI: 10.1177/15533506241290412.
Abstract
OBJECTIVE: This systematic review investigates Augmented Reality (AR) systems used in minimally invasive surgery of deformable organs, focusing on initial registration, dynamic tracking, and visualization. The objective is to acquire a comprehensive understanding of the current knowledge, applications, and challenges associated with these AR techniques, aiming to leverage these insights for developing a dedicated AR pulmonary Video- or Robot-Assisted Thoracic Surgery (VATS/RATS) workflow.

METHODS: A systematic search was conducted within Embase, Medline (Ovid) and Web of Science on April 16, 2024, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The search focused on intraoperative AR applications and intraoperative navigational purposes for deformable organs. Quality assessment was performed and studies were categorized according to initial registration and dynamic tracking methods.

RESULTS: 33 articles were included, of which one involved pulmonary surgery. Studies used both manual and (semi-)automatic registration methods, established through anatomical landmark-based, fiducial-based, or surface-based techniques. Diverse outcome measures were considered, including surgical outcomes and registration accuracy. The majority of studies that reached a registration accuracy below 5 mm applied surface-based registration.

CONCLUSIONS: AR can potentially aid surgeons with real-time navigation and decision making during anatomically complex minimally invasive procedures. Future research for pulmonary applications should focus on exploring surface-based registration methods, considering their non-invasive, marker-less nature and promising accuracy. Additionally, vascular-labeling-based methods are worth exploring, given the importance and relative stability of broncho-vascular anatomy in pulmonary VATS/RATS. Assessing the clinical feasibility of these approaches is crucial, particularly concerning registration accuracy and potential impact on surgical outcomes.
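The surface-based registration favored in these results reduces, at its core, to estimating a rigid transform between a preoperative surface model and intraoperative points. A minimal sketch of the closed-form Kabsch step used inside ICP-style pipelines follows; it assumes point correspondences are already paired (which real systems must estimate iteratively), and all data here are synthetic.

```python
# Minimal sketch, not from any reviewed system: closed-form rigid alignment
# (Kabsch) of paired 3D surface points, the core step of ICP-style
# surface-based registration.
import numpy as np

def kabsch(src: np.ndarray, dst: np.ndarray):
    """Return rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst_c - r @ src_c
    return r, t

# Synthetic demo: a known rotation + translation, then recover it.
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(q) < 0:
    q = -q                                   # force a proper rotation (det = +1)
model = rng.normal(size=(500, 3))            # stand-in preoperative surface points
scene = model @ q.T + np.array([5.0, -2.0, 1.0])
r, t = kabsch(model, scene)
rmse = np.sqrt(np.mean(np.sum((model @ r.T + t - scene) ** 2, axis=1)))
print(f"registration RMSE: {rmse:.2e}")      # ~0 with exact correspondences
```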
Affiliation(s)
- Marie-Claire J. Doornbos: Department of Cardiothoracic Surgery, Thoraxcenter, Erasmus MC, Rotterdam, The Netherlands; Educational Program Technical Medicine, Leiden University Medical Center, Delft University of Technology & Erasmus University Medical Center Rotterdam, Leiden, The Netherlands
- Jette J. Peek: Department of Cardiothoracic Surgery, Thoraxcenter, Erasmus MC, Rotterdam, The Netherlands
- Jelle P. Ruurda: Department of Surgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Edris A. F. Mahtab: Department of Cardiothoracic Surgery, Thoraxcenter, Erasmus MC, Rotterdam, The Netherlands; Department of Cardiothoracic Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Amir H. Sadeghi: Department of Cardiothoracic Surgery, Thoraxcenter, Erasmus MC, Rotterdam, The Netherlands; Department of Cardiothoracic Surgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Jolanda Kluin: Department of Cardiothoracic Surgery, Thoraxcenter, Erasmus MC, Rotterdam, The Netherlands
3. Kenig N, Monton Echeverria J, Muntaner Vives A. Artificial Intelligence in Surgery: A Systematic Review of Use and Validation. J Clin Med 2024;13:7108. PMID: 39685566; DOI: 10.3390/jcm13237108.
Abstract
Background: Artificial Intelligence (AI) holds promise for transforming healthcare, with AI models gaining increasing clinical use in surgery. However, new AI models are developed without established standards for their validation and use. Before AI can be widely adopted, it is crucial to ensure these models are both accurate and safe for patients. Without proper validation, there is a risk of integrating AI models into practice without sufficient evidence of their safety and accuracy, potentially leading to suboptimal patient outcomes. In this work, we review the current use and validation methods of AI models in clinical surgical settings and propose a novel classification system.

Methods: A systematic review was conducted in PubMed and Cochrane using the keywords "validation", "artificial intelligence", and "surgery", following PRISMA guidelines.

Results: The search yielded a total of 7627 articles, of which 102 were included for data extraction, encompassing 2,837,211 patients. A validation classification system named Surgical Validation Score (SURVAS) was developed. The primary applications of models were risk assessment and decision-making in the preoperative setting. Validation methods were ranked as high evidence in only 45% of studies, and only 14% of the studies provided publicly available datasets.

Conclusions: AI has significant applications in surgery, but validation quality remains suboptimal, and public data availability is limited. Current AI applications are mainly focused on preoperative risk assessment and are suggested to improve decision-making. Classification systems such as SURVAS can help clinicians confirm the degree of validity of AI models before their application in practice.
Affiliation(s)
- Nitzan Kenig: Department of Plastic Surgery, Quironsalud Palmaplanas Hospital, 07010 Palma, Spain
- Aina Muntaner Vives: Department of Otolaryngology, Son Llatzer University Hospital, 07198 Palma, Spain
4. Khan DZ, Valetopoulou A, Das A, Hanrahan JG, Williams SC, Bano S, Borg A, Dorward NL, Barbarisi S, Culshaw L, Kerr K, Luengo I, Stoyanov D, Marcus HJ. Artificial intelligence assisted operative anatomy recognition in endoscopic pituitary surgery. NPJ Digit Med 2024;7:314. PMID: 39521895; PMCID: PMC11550325; DOI: 10.1038/s41746-024-01273-8.
Abstract
Pituitary tumours are surrounded by critical neurovascular structures and identification of these intra-operatively can be challenging. We have previously developed an AI model capable of sellar anatomy segmentation. This study aims to apply this model and explore the impact of AI assistance on clinician anatomy recognition. Participants were tasked with labelling the sella on six images, initially without assistance, then augmented by AI. Mean DICE scores and the proportion of annotations encompassing the centroid of the sella were calculated. Six medical students, six junior trainees, six intermediate trainees, and six experts were recruited. There was an overall improvement in sella recognition from a DICE score of 70.7% without AI assistance to 77.5% with AI assistance (+6.7; p < 0.001). Medical students used and benefitted from AI assistance the most, improving from a DICE score of 66.2% to 78.9% (+12.8; p = 0.02). This technology has the potential to augment surgical education and eventually be used as an intra-operative decision support tool.
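A minimal sketch (assumed, not the study's code) of the two evaluation measures described above: the DICE overlap between a participant's annotation and a reference mask, and whether the annotation covers the centroid of the reference sella. The toy masks are hypothetical.

```python
# Sketch of the two metrics: DICE overlap and centroid containment.
import numpy as np
from scipy import ndimage

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """DICE coefficient between two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def covers_centroid(pred: np.ndarray, ref: np.ndarray) -> bool:
    """Does the annotation contain the centroid of the reference mask?"""
    cy, cx = ndimage.center_of_mass(ref)
    return bool(pred[int(round(cy)), int(round(cx))])

# Toy masks: reference sella vs a slightly shifted annotation.
ref = np.zeros((100, 100), bool);  ref[40:60, 40:60] = True
pred = np.zeros((100, 100), bool); pred[45:65, 42:62] = True
print(f"DICE = {dice(pred, ref):.3f}, centroid covered: {covers_centroid(pred, ref)}")
```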
Affiliation(s)
- Danyal Z Khan: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Hawkes Centre, Department of Computer Science, University College London, London, UK
- Alexandra Valetopoulou: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Hawkes Centre, Department of Computer Science, University College London, London, UK
- Adrito Das: Hawkes Centre, Department of Computer Science, University College London, London, UK
- John G Hanrahan: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Hawkes Centre, Department of Computer Science, University College London, London, UK
- Simon C Williams: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Hawkes Centre, Department of Computer Science, University College London, London, UK
- Sophia Bano: Hawkes Centre, Department of Computer Science, University College London, London, UK
- Anouk Borg: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Neil L Dorward: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Karen Kerr: Digital Surgery Ltd, Medtronic, London, UK
- Danail Stoyanov: Hawkes Centre, Department of Computer Science, University College London, London, UK; Digital Surgery Ltd, Medtronic, London, UK
- Hani J Marcus: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Hawkes Centre, Department of Computer Science, University College London, London, UK
5. Oh N, Kim B, Kim T, Rhu J, Kim J, Choi GS. Real-time segmentation of biliary structure in pure laparoscopic donor hepatectomy. Sci Rep 2024;14:22508. PMID: 39341910; PMCID: PMC11439027; DOI: 10.1038/s41598-024-73434-4.
Abstract
Pure laparoscopic donor hepatectomy (PLDH) has become a standard practice for living donor liver transplantation in expert centers. Accurate understanding of biliary structures is crucial during PLDH to minimize the risk of complications. This study aims to develop a deep learning-based segmentation model for real-time identification of biliary structures, assisting surgeons in determining the optimal transection site during PLDH. A single-institution retrospective feasibility analysis was conducted on 30 intraoperative videos of PLDH. All videos were selected for their use of the indocyanine green near-infrared fluorescence technique to identify biliary structures. From this analysis, 10 representative frames were extracted from each video during the bile duct division phase, resulting in 300 frames. These frames underwent pixel-wise annotation to identify biliary structures and the transection site. A segmentation task was then performed using a DeepLabV3+ algorithm equipped with a ResNet50 encoder, focusing on the bile duct (BD) and anterior wall (AW) for transection. The model's performance was evaluated using the Dice similarity coefficient (DSC). The model predicted biliary structures with a mean DSC of 0.728 ± 0.01 for BD and 0.429 ± 0.06 for AW. Inference was performed at a speed of 15.3 frames per second, demonstrating the feasibility of real-time recognition of anatomical structures during surgery. The deep learning-based semantic segmentation model exhibited promising performance in identifying biliary structures during PLDH. Future studies should focus on validating the clinical utility and generalizability of the model and comparing its efficacy with current gold-standard practices to better evaluate its potential clinical applications.
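The architecture named above (DeepLabV3+ with a ResNet50 encoder) can be instantiated in a few lines; the sketch below uses the segmentation_models_pytorch package as one plausible route. The authors' actual implementation, label layout, and training setup are not public, so every choice here is an assumption.

```python
# Illustrative instantiation of DeepLabV3+ with a ResNet50 encoder.
import torch
import segmentation_models_pytorch as smp  # pip install segmentation-models-pytorch

# Assumed three-class layout: background, bile duct (BD), anterior wall (AW).
model = smp.DeepLabV3Plus(
    encoder_name="resnet50",
    encoder_weights="imagenet",  # assumption: ImageNet-pretrained encoder
    in_channels=3,
    classes=3,
)

frames = torch.randn(4, 3, 512, 512)   # stand-in batch of video frames
with torch.no_grad():
    logits = model(frames)             # (4, 3, 512, 512)
masks = logits.argmax(dim=1)           # per-pixel class prediction
print(masks.shape)                     # torch.Size([4, 512, 512])
```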
Affiliation(s)
- Namkee Oh: Department of Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Bogeun Kim: Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Taeyoung Kim: Medical AI Research Center, Samsung Medical Center, Seoul, Republic of Korea
- Jinsoo Rhu: Department of Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Jongman Kim: Department of Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Gyu-Seong Choi: Department of Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
6. Protserov S, Hunter J, Zhang H, Mashouri P, Masino C, Brudno M, Madani A. Development, deployment and scaling of operating room-ready artificial intelligence for real-time surgical decision support. NPJ Digit Med 2024;7:231. PMID: 39227660; PMCID: PMC11372100; DOI: 10.1038/s41746-024-01225-2.
Abstract
Deep learning for computer vision can be leveraged for interpreting surgical scenes and providing surgeons with real-time guidance to avoid complications. However, neither the generalizability nor the scalability of computer-vision-based surgical guidance systems has been demonstrated, especially in geographic locations that lack the hardware and infrastructure necessary for real-time inference. We propose a new equipment-agnostic framework for real-time use in operating suites. Using laparoscopic cholecystectomy and semantic segmentation models for predicting safe/dangerous ("Go"/"No-Go") zones of dissection as an example use case, this study aimed to develop and test the performance of a novel data pipeline linked to a web platform that enables real-time deployment from any edge device. To test this infrastructure and demonstrate its scalability and generalizability, lightweight U-Net and SegFormer models were trained on annotated frames from a large and diverse multicenter dataset from 136 institutions, and then tested on a separate prospectively collected dataset. A web platform was created to enable real-time inference on any surgical video stream, and performance was tested on and optimized for a range of network speeds. The U-Net and SegFormer models respectively achieved mean Dice scores of 57% and 60%, precision of 45% and 53%, and recall of 82% and 75% for predicting the Go zone, and mean Dice scores of 76% and 76%, precision of 68% and 68%, and recall of 92% and 92% for predicting the No-Go zone. After optimization of the client-server interaction over the network, we deliver a prediction stream of at least 60 fps with a maximum round-trip delay of 70 ms for connection speeds above 8 Mbps. Clinical deployment of machine learning models for surgical guidance is feasible and cost-effective using a generalizable, scalable and equipment-agnostic framework that does not depend on high-performance computing hardware or ultra-fast internet connections.
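Round-trip figures like those above come from timing the client-server loop; a hypothetical probe in that spirit is sketched below. The WebSocket endpoint, payload format, and server behaviour are all made up, and a sequential send/receive loop like this measures latency rather than the pipelined throughput a production stream would achieve.

```python
# Hypothetical round-trip latency probe for a remote segmentation server.
import asyncio
import time
import websockets  # pip install websockets

async def probe(uri: str, frame: bytes, n_frames: int = 120) -> None:
    delays = []
    async with websockets.connect(uri, max_size=None) as ws:
        for _ in range(n_frames):
            t0 = time.perf_counter()
            await ws.send(frame)   # upload one encoded frame
            await ws.recv()        # block until the predicted mask returns
            delays.append(time.perf_counter() - t0)
    print(f"mean fps (sequential): {n_frames / sum(delays):.1f}, "
          f"max round trip: {1000 * max(delays):.0f} ms")

# Assumed endpoint and payload, e.g.:
# asyncio.run(probe("ws://localhost:8765/segment", jpeg_bytes))
```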
Affiliation(s)
- Sergey Protserov: DATA Team, University Health Network, Toronto, ON, Canada; Department of Computer Science, University of Toronto, Toronto, ON, Canada; Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Jaryd Hunter: DATA Team, University Health Network, Toronto, ON, Canada
- Haochi Zhang: DATA Team, University Health Network, Toronto, ON, Canada
- Caterina Masino: Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Michael Brudno: DATA Team, University Health Network, Toronto, ON, Canada; Department of Computer Science, University of Toronto, Toronto, ON, Canada; Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Amin Madani: Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada; Department of Surgery, University of Toronto, Toronto, ON, Canada
7. Madani A, Liu Y, Pryor A, Altieri M, Hashimoto DA, Feldman L. SAGES surgical data science task force: enhancing surgical innovation, education and quality improvement through data science. Surg Endosc 2024;38:3489-3493. PMID: 38831213; DOI: 10.1007/s00464-024-10921-9.
Affiliation(s)
- Amin Madani: Department of Surgery, University of Toronto, Toronto, ON, Canada
- Yao Liu: Department of Surgery, Brown University, Providence, RI, USA
- Aurora Pryor: Department of Surgery, Northwell Health, New York, NY, USA
- Maria Altieri: Department of Surgery, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
- Daniel A Hashimoto: Department of Surgery, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
- Liane Feldman: Department of Surgery, McGill University, Montreal, QC, Canada
8. Bakker AFHA, de Nijs JV, Jaspers TJM, de With PHN, Beulens AJW, van der Poel HG, van der Sommen F, Brinkman WM. Estimating Surgical Urethral Length on Intraoperative Robot-Assisted Prostatectomy Images Using Artificial Intelligence Anatomy Recognition. J Endourol 2024;38:690-696. PMID: 38613819; DOI: 10.1089/end.2023.0697.
Abstract
Objective: To construct a convolutional neural network (CNN) model that can recognize and delineate anatomic structures on intraoperative video frames of robot-assisted radical prostatectomy (RARP) and to use these annotations to predict the surgical urethral length (SUL).

Background: Urethral dissection during RARP impacts patient urinary incontinence (UI) outcomes and requires extensive training. Large differences exist between the incontinence outcomes of different urologists and hospitals. Surgeon experience and education are also critical for optimal outcomes. Therefore, new approaches are warranted. SUL is associated with UI. Artificial intelligence (AI) surgical image segmentation using a CNN could automate SUL estimation and contribute toward future AI-assisted RARP and surgeon guidance.

Methods: Eighty-eight intraoperative RARP videos between June 2009 and September 2014 were collected from a single center. Two hundred sixty-four frames were annotated for the prostate, urethra, ligated plexus, and catheter. Thirty annotated images from different RARP videos were used as a test data set. The Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (Hd95) were used to determine model performance. SUL was calculated using the catheter as a reference.

Results: The DSCs of the best-performing model were 0.735 and 0.755 for the catheter and urethra classes, respectively, with Hd95 values of 29.27 and 72.62, respectively. The model performed moderately on the ligated plexus and prostate. The predicted SUL showed a mean difference of 0.64 to 1.86 mm versus human annotators, but with substantial variability (standard deviation = 3.28-3.56 mm).

Conclusion: This study shows that an AI image segmentation model can predict vital structures during RARP urethral dissection with moderate to fair accuracy. SUL estimation derived from it showed large deviations and outliers compared with human annotators, but with a small mean difference (<2 mm). This is a promising development for further research on AI-assisted RARP.
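A minimal sketch of the Hd95 metric reported above, under one common convention (pool the symmetric nearest-neighbour surface distances, then take the 95th percentile); the contours are synthetic, and whether the authors used exactly this convention is an assumption.

```python
# Sketch of the 95th-percentile Hausdorff distance (Hd95) between contours.
import numpy as np
from scipy.spatial import cKDTree

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: (N, 2) arrays of boundary pixel coordinates."""
    d_ab = cKDTree(b).query(a)[0]   # each point in a -> nearest point in b
    d_ba = cKDTree(a).query(b)[0]   # each point in b -> nearest point in a
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# Toy contours: two circles with slightly different centers and radii.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
a = np.c_[50 + 30 * np.cos(theta), 50 + 30 * np.sin(theta)]
b = np.c_[52 + 28 * np.cos(theta), 50 + 28 * np.sin(theta)]
print(f"Hd95 = {hd95(a, b):.2f} px")
```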
Affiliation(s)
- Aron F H A Bakker: Department of Urology, Catharina Hospital, Eindhoven, The Netherlands; Department of Oncological Urology, University Medical Center Utrecht, Utrecht, The Netherlands
- Joris V de Nijs: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Tim J M Jaspers: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Peter H N de With: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Alexander J W Beulens: Department of Oncological Urology, University Medical Center Utrecht, Utrecht, The Netherlands
- Henk G van der Poel: Department of Oncological Urology, Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Fons van der Sommen: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Willem M Brinkman: Department of Oncological Urology, University Medical Center Utrecht, Utrecht, The Netherlands
9. Pagano S, Müller K, Götz J, Reinhard J, Schindler M, Grifka J, Maderbacher G. The Role and Efficiency of an AI-Powered Software in the Evaluation of Lower Limb Radiographs before and after Total Knee Arthroplasty. J Clin Med 2023;12:5498. PMID: 37685563; PMCID: PMC10487842; DOI: 10.3390/jcm12175498.
Abstract
The rapid evolution of artificial intelligence (AI) in medical imaging analysis has significantly impacted musculoskeletal radiology, offering enhanced accuracy and speed in radiograph evaluations. The potential of AI in clinical settings, however, remains underexplored. This research investigates the efficiency of a commercial AI tool in analyzing radiographs of patients who have undergone total knee arthroplasty. The study retrospectively analyzed 200 radiographs from 100 patients, comparing AI software measurements to expert assessments. Assessed parameters included axial alignments (MAD, AMA), femoral and tibial angles (mLPFA, mLDFA, mMPTA, mLDTA), and other key measurements including the JLCA, HKA, and Mikulicz line. The tool demonstrated good to excellent agreement with expert metrics (ICC = 0.78-1.00) and analyzed radiographs twice as fast (p < 0.001), yet was less accurate for the JLCA (ICC = 0.79, 95% CI = 0.72-0.84) and the Mikulicz line (ICC = 0.78, 95% CI = 0.32-0.90), and in patients with a body mass index above 30 kg/m² (p < 0.001). It also failed to analyze 45 (22.5%) radiographs, potentially due to image overlay or unique patient characteristics. These findings underscore the AI software's potential in musculoskeletal radiology but also highlight the need for further development for effective utilization in diverse clinical scenarios. Subsequent studies should explore the integration of AI tools in routine clinical practice and their impact on patient care.
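The qualitative wording above ("good to excellent") maps ICC values onto conventional reliability bands; a small helper encoding the widely cited Koo & Li (2016) cut-offs is sketched below. Whether this paper used exactly these bands is an assumption, and the third ICC value is hypothetical.

```python
# Sketch: map ICC values to the Koo & Li (2016) reliability bands.
def grade_icc(icc: float) -> str:
    if icc < 0.50:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc < 0.90:
        return "good"
    return "excellent"

# JLCA and Mikulicz values are quoted in the abstract; 0.95 is hypothetical.
for name, icc in [("JLCA", 0.79), ("Mikulicz line", 0.78), ("example angle", 0.95)]:
    print(f"{name}: ICC = {icc:.2f} -> {grade_icc(icc)} reliability")
```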
Affiliation(s)
- Stefano Pagano: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum Bad Abbach, 93077 Bad Abbach, Germany
- Karolina Müller: Center for Clinical Studies, University of Regensburg, 93053 Regensburg, Germany
- Julia Götz: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum Bad Abbach, 93077 Bad Abbach, Germany
- Jan Reinhard: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum Bad Abbach, 93077 Bad Abbach, Germany
- Melanie Schindler: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum Bad Abbach, 93077 Bad Abbach, Germany
- Joachim Grifka: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum Bad Abbach, 93077 Bad Abbach, Germany
- Günther Maderbacher: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum Bad Abbach, 93077 Bad Abbach, Germany
10. den Boer RB, Jaspers TJM, de Jongh C, Pluim JPW, van der Sommen F, Boers T, van Hillegersberg R, Van Eijnatten MAJM, Ruurda JP. Deep learning-based recognition of key anatomical structures during robot-assisted minimally invasive esophagectomy. Surg Endosc 2023;37:5164-5175. PMID: 36947221; PMCID: PMC10322962; DOI: 10.1007/s00464-023-09990-z.
Abstract
OBJECTIVE: To develop a deep learning algorithm for anatomy recognition in thoracoscopic video frames from robot-assisted minimally invasive esophagectomy (RAMIE) procedures.

BACKGROUND: RAMIE is a complex operation with substantial perioperative morbidity and a considerable learning curve. Automatic anatomy recognition may improve surgical orientation and recognition of anatomical structures and might contribute to reducing morbidity or learning curves. Studies regarding anatomy recognition in complex surgical procedures are currently lacking.

METHODS: Eighty-three videos of consecutive RAMIE procedures between 2018 and 2022 were retrospectively collected at University Medical Center Utrecht. A surgical PhD candidate and an expert surgeon annotated the azygos vein and vena cava, aorta, and right lung on 1050 thoracoscopic frames. Of these, 850 frames were used to train a convolutional neural network (CNN) to segment the anatomical structures; the remaining 200 frames were used for testing. The Dice coefficient and 95th-percentile Hausdorff distance (95HD) were calculated to assess algorithm accuracy.

RESULTS: The median Dice of the algorithm was 0.79 (IQR = 0.20) for segmentation of the azygos vein and/or vena cava. Median Dice coefficients of 0.74 (IQR = 0.86) and 0.89 (IQR = 0.30) were obtained for segmentation of the aorta and lung, respectively. Inference time was 0.026 s per frame (39 Hz). The predictions of the deep learning algorithm were also compared with the expert surgeon's annotations, showing a median Dice of 0.70 (IQR = 0.19), 0.88 (IQR = 0.07), and 0.90 (IQR = 0.10) for the vena cava and/or azygos vein, aorta, and lung, respectively.

CONCLUSION: This study shows that deep learning-based semantic segmentation has potential for anatomy recognition in RAMIE video frames. The inference time of the algorithm facilitates real-time anatomy recognition. Clinical applicability should be assessed in prospective clinical studies.
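The 39 Hz figure above implies roughly 26 ms per frame; a generic timing harness for checking whether a segmentation network meets such a rate is sketched below (not the authors' code; model, input size, and iteration counts are placeholders).

```python
# Illustrative inference-rate check for any PyTorch segmentation network.
import time
import torch

def frames_per_second(model: torch.nn.Module, size=(1, 3, 256, 256),
                      n_iters: int = 100) -> float:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(size, device=device)
    with torch.no_grad():
        for _ in range(10):              # warm-up iterations
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()     # flush queued GPU kernels
        t0 = time.perf_counter()
        for _ in range(n_iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return n_iters / (time.perf_counter() - t0)

# e.g. with any segmentation net: print(f"{frames_per_second(my_model):.1f} Hz")
```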
Affiliation(s)
- R B den Boer: Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- T J M Jaspers: Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE, Eindhoven, The Netherlands
- C de Jongh: Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- J P W Pluim: Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE, Eindhoven, The Netherlands
- F van der Sommen: Department of Electrical Engineering, Eindhoven University of Technology, Groene Loper 19, 5612 AP, Eindhoven, The Netherlands
- T Boers: Department of Electrical Engineering, Eindhoven University of Technology, Groene Loper 19, 5612 AP, Eindhoven, The Netherlands
- R van Hillegersberg: Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- M A J M Van Eijnatten: Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE, Eindhoven, The Netherlands
- J P Ruurda: Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands