1
Hla DA, Hindin DI. Generative AI & machine learning in surgical education. Curr Probl Surg 2025; 63:101701. [PMID: 39922636] [DOI: 10.1016/j.cpsurg.2024.101701]
Affiliation(s)
- Diana A Hla
- Mayo Clinic Alix School of Medicine, Rochester, MN
- David I Hindin
- Division of General Surgery, Department of Surgery, Stanford University, Stanford, CA.
2
Protserov S, Hunter J, Zhang H, Mashouri P, Masino C, Brudno M, Madani A. Development, deployment and scaling of operating room-ready artificial intelligence for real-time surgical decision support. NPJ Digit Med 2024; 7:231. [PMID: 39227660] [PMCID: PMC11372100] [DOI: 10.1038/s41746-024-01225-2]
Abstract
Deep learning for computer vision can be leveraged to interpret surgical scenes and provide surgeons with real-time guidance to avoid complications. However, neither the generalizability nor the scalability of computer-vision-based surgical guidance systems has been demonstrated, especially in geographic locations that lack the hardware and infrastructure necessary for real-time inference. We propose a new equipment-agnostic framework for real-time use in operating suites. Using laparoscopic cholecystectomy and semantic segmentation models for predicting safe/dangerous ("Go"/"No-Go") zones of dissection as an example use case, this study aimed to develop and test the performance of a novel data pipeline linked to a web platform that enables real-time deployment from any edge device. To test this infrastructure and demonstrate its scalability and generalizability, lightweight U-Net and SegFormer models were trained on annotated frames from a large and diverse multicenter dataset from 136 institutions, and then tested on a separate prospectively collected dataset. A web platform was created to enable real-time inference on any surgical video stream, and performance was tested on and optimized for a range of network speeds. The U-Net and SegFormer models respectively achieved mean Dice scores of 57% and 60%, precision of 45% and 53%, and recall of 82% and 75% for predicting the Go zone, and mean Dice scores of 76% and 76%, precision of 68% and 68%, and recall of 92% and 92% for predicting the No-Go zone. After optimizing the client-server interaction over the network, the system delivers a prediction stream of at least 60 fps with a maximum round-trip delay of 70 ms for network speeds above 8 Mbps. Clinical deployment of machine learning models for surgical guidance is feasible and cost-effective using a generalizable, scalable and equipment-agnostic framework that does not depend on high-performance computing hardware or ultra-fast internet connections.
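The throughput and latency figures above imply a concrete per-frame budget. The sketch below works that budget out; it is a back-of-envelope illustration, not the authors' pipeline, and every number other than the 8 Mbps, 60 fps, and 70 ms quoted in the abstract is an assumption.

```python
# Back-of-envelope budget for a streamed segmentation overlay.
# Only 8 Mbps, 60 fps, and 70 ms come from the abstract; the mask
# resolution and encode cost below are illustrative assumptions.

link_mbps = 8.0        # minimum network speed quoted in the abstract
fps = 60               # target prediction stream rate
rtt_budget_ms = 70     # maximum tolerated round-trip delay

bytes_per_frame = (link_mbps * 1e6 / 8) / fps
print(f"Per-frame budget: {bytes_per_frame / 1024:.1f} KiB")  # ~16.3 KiB

# A 640x360 binary Go/No-Go mask holds 230,400 pixels, so the mask (or
# the uplinked frame) must compress below that budget to sustain 60 fps.
assumed_encode_ms = 5  # hypothetical encode/decode cost per frame
print(f"Left for transit + inference: {rtt_budget_ms - assumed_encode_ms} ms")
```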
Affiliation(s)
- Sergey Protserov
- DATA Team, University Health Network, Toronto, ON, Canada
- Department of Computer Science, University of Toronto, Toronto, ON, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Jaryd Hunter
- DATA Team, University Health Network, Toronto, ON, Canada
- Haochi Zhang
- DATA Team, University Health Network, Toronto, ON, Canada
- Caterina Masino
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Michael Brudno
- DATA Team, University Health Network, Toronto, ON, Canada
- Department of Computer Science, University of Toronto, Toronto, ON, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Amin Madani
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada
3
Nardone V, Marmorino F, Germani MM, Cichowska-Cwalińska N, Menditti VS, Gallo P, Studiale V, Taravella A, Landi M, Reginelli A, Cappabianca S, Girnyi S, Cwalinski T, Boccardi V, Goyal A, Skokowski J, Oviedo RJ, Abou-Mrad A, Marano L. The Role of Artificial Intelligence on Tumor Boards: Perspectives from Surgeons, Medical Oncologists and Radiation Oncologists. Curr Oncol 2024; 31:4984-5007. [PMID: 39329997] [PMCID: PMC11431448] [DOI: 10.3390/curroncol31090369]
Abstract
The integration of multidisciplinary tumor boards (MTBs) is fundamental to delivering state-of-the-art cancer treatment, facilitating collaborative diagnosis and management by a diverse team of specialists. Despite the clear benefits for personalized patient care and improved outcomes, the increasing burden on MTBs due to rising cancer incidence and financial constraints necessitates innovative solutions. The advent of artificial intelligence (AI) in the medical field offers a promising avenue to support clinical decision-making. This review explores the perspectives of clinicians dedicated to the care of cancer patients (surgeons, medical oncologists, and radiation oncologists) on the application of AI within MTBs. Additionally, it examines the role of AI across the various clinical specialties involved in cancer diagnosis and treatment. By analyzing both the potential and the challenges, this study underscores how AI can enhance multidisciplinary discussions and optimize treatment plans. The findings highlight the transformative role that AI may play in refining oncology care and sustaining the efficacy of MTBs amidst growing clinical demands.
Affiliation(s)
- Valerio Nardone
- Department of Precision Medicine, University of Campania “L. Vanvitelli”, 80131 Naples, Italy
- Federica Marmorino
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Marco Maria Germani
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Vittorio Salvatore Menditti
- Department of Precision Medicine, University of Campania “L. Vanvitelli”, 80131 Naples, Italy
- Paolo Gallo
- Department of Precision Medicine, University of Campania “L. Vanvitelli”, 80131 Naples, Italy
- Vittorio Studiale
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Ada Taravella
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Matteo Landi
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Alfonso Reginelli
- Department of Precision Medicine, University of Campania “L. Vanvitelli”, 80131 Naples, Italy
- Salvatore Cappabianca
- Department of Precision Medicine, University of Campania “L. Vanvitelli”, 80131 Naples, Italy
- Sergii Girnyi
- Department of General Surgery and Surgical Oncology, “Saint Wojciech” Hospital, “Nicolaus Copernicus” Health Center, 80-462 Gdańsk, Poland
- Tomasz Cwalinski
- Department of General Surgery and Surgical Oncology, “Saint Wojciech” Hospital, “Nicolaus Copernicus” Health Center, 80-462 Gdańsk, Poland
- Virginia Boccardi
- Division of Gerontology and Geriatrics, Department of Medicine and Surgery, University of Perugia, 06123 Perugia, Italy
- Aman Goyal
- Adesh Institute of Medical Sciences and Research, Bathinda 151109, Punjab, India
- Jaroslaw Skokowski
- Department of General Surgery and Surgical Oncology, “Saint Wojciech” Hospital, “Nicolaus Copernicus” Health Center, 80-462 Gdańsk, Poland
- Department of Medicine, Academy of Applied Medical and Social Sciences-AMiSNS: Akademia Medycznych I Spolecznych Nauk Stosowanych, 82-300 Elbląg, Poland
- Rodolfo J. Oviedo
- Nacogdoches Medical Center, Nacogdoches, TX 75965, USA
- Tilman J. Fertitta Family College of Medicine, University of Houston, Houston, TX 77021, USA
- College of Osteopathic Medicine, Sam Houston State University, Conroe, TX 77304, USA
- Adel Abou-Mrad
- Centre Hospitalier Universitaire d’Orléans, 45100 Orléans, France
- Luigi Marano
- Department of General Surgery and Surgical Oncology, “Saint Wojciech” Hospital, “Nicolaus Copernicus” Health Center, 80-462 Gdańsk, Poland
- Department of Medicine, Academy of Applied Medical and Social Sciences-AMiSNS: Akademia Medycznych I Spolecznych Nauk Stosowanych, 82-300 Elbląg, Poland
4
Li A, Javidan AP, Namazi B, Madani A, Forbes TL. Development of an Artificial Intelligence Tool for Intraoperative Guidance During Endovascular Abdominal Aortic Aneurysm Repair. Ann Vasc Surg 2024; 99:96-104. [PMID: 37914075] [DOI: 10.1016/j.avsg.2023.08.027]
Abstract
BACKGROUND Adverse events during surgery can occur in part due to errors in visual perception and judgment. Deep learning is a branch of artificial intelligence (AI) that has shown promise in providing real-time intraoperative guidance. This study aims to train and test the performance of a deep learning model that can identify inappropriate landing zones during endovascular aneurysm repair (EVAR). METHODS A deep learning model was trained to identify a "No-Go" landing zone during EVAR, defined by coverage of the lowest renal artery by the stent graft. Fluoroscopic images from elective EVAR procedures performed at a single institution and from open-access sources were selected. Annotations of the "No-Go" zone were performed by trained annotators. A 10-fold cross-validation technique was used to evaluate the performance of the model against human annotations. Primary outcomes were intersection-over-union (IoU) and F1 score, and secondary outcomes were pixel-wise accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). RESULTS The AI model was trained using 369 images procured from 110 different patients/videos, including 18 patients/videos (44 images) from open-access sources. For the primary outcomes, IoU and F1 were 0.43 (standard deviation ±0.29) and 0.53 (±0.32), respectively. For the secondary outcomes, accuracy, sensitivity, specificity, NPV, and PPV were 0.97 (±0.002), 0.51 (±0.34), 0.99 (±0.001), 0.99 (±0.002), and 0.62 (±0.34), respectively. CONCLUSIONS AI can effectively identify suboptimal areas of stent deployment during EVAR. Further directions include validating the model on datasets from other institutions and assessing its ability to predict optimal stent graft placement and clinical outcomes.
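For readers unfamiliar with the primary outcomes, IoU and F1 (Dice) are pixel-overlap measures between a predicted mask and a human annotation. The sketch below shows one standard way to compute them with NumPy; it illustrates the metrics only, and is not the authors' evaluation code.

```python
import numpy as np

def iou_and_f1(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Pixel-wise IoU and F1 (Dice) for two binary masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = inter / union if union else 1.0   # both masks empty: perfect match
    f1 = 2 * inter / total if total else 1.0
    return float(iou), float(f1)

# Toy 4x4 example: predicted vs. annotated "No-Go" landing zone.
pred = np.zeros((4, 4), bool); pred[1:3, 1:4] = True
truth = np.zeros((4, 4), bool); truth[1:3, 0:3] = True
print(iou_and_f1(pred, truth))  # (0.5, 0.666...)
```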
Affiliation(s)
- Allen Li
- Faculty of Medicine & The Ottawa Hospital Research Institute, University of Ottawa, Ottawa, Ontario, Canada
- Arshia P Javidan
- Division of Vascular Surgery, University of Toronto, Toronto, Ontario, Canada
- Babak Namazi
- Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX
- Amin Madani
- Department of Surgery, University Health Network & University of Toronto, Toronto, Ontario, Canada
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, Ontario, Canada
- Thomas L Forbes
- Department of Surgery, University Health Network & University of Toronto, Toronto, Ontario, Canada
5
Shafa G, Kiani P, Masino C, Okrainec A, Pasternak JD, Alseidi A, Madani A. Training for excellence: using a multimodal videoconferencing platform to coach surgeons and improve intraoperative performance. Surg Endosc 2023; 37:9406-9413. [PMID: 37670189] [DOI: 10.1007/s00464-023-10374-6]
Abstract
INTRODUCTION Continuing professional development opportunities for lifelong learning are fundamental to the acquisition of surgical expertise. However, few opportunities exist for longitudinal and structured learning to support the educational needs of surgeons in practice. While peer-to-peer coaching has been proposed as a potential solution, significant logistical constraints remain and evidence for its effectiveness is lacking. The purpose of this study is to determine whether the use of remote videoconferencing for video-based coaching improves operative performance. METHODS Early-career surgeon mentees participated in a remote coaching intervention with a surgeon coach of their choice, using a virtual telestration platform (Zoom Video Communications, San Jose, CA). Feedback was articulated by annotating videos. The coach evaluated mentee performance using a modified Intraoperative Performance Assessment Tool (IPAT). Participants completed a 5-point Likert scale on the educational value of the coaching program. RESULTS Eight surgeons were enrolled in the study, six of whom completed a total of two coaching sessions (baseline and 6-month). Subspecialties included endocrine, hepatopancreatobiliary, and surgical oncology. Mean age of participants was 39 (SD 3.3), with a mean of 5 (SD 4.1) years in independent practice. Total IPAT scores increased significantly from the first session (mean 47.0, SD 1.9) to the second session (mean 51.8, SD 2.1), p = 0.03. Sub-category analysis showed a significant improvement in the Advanced Cognitive Skills domain, with a mean of 33.2 (SD 2.5) versus a mean of 37.0 (SD 2.4), p < 0.01. There was no improvement in the psychomotor skills category. Participants agreed or strongly agreed that the coaching program can improve surgical performance and decision-making (coaches 85%; mentees 100%). CONCLUSION Remote surgical coaching is feasible and has educational value using ubiquitous, commercially available virtual platforms. Logistical issues with scheduling and finding cases aligned with learning objectives continue to challenge program adoption and widespread dissemination.
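The baseline vs. 6-month comparison above is a small paired design: each surgeon serves as their own control. A paired t-test is the conventional analysis for such data; the sketch below illustrates it with invented scores, since the raw data and the study's exact statistical procedure are not given in the abstract.

```python
# Paired comparison of hypothetical total IPAT scores. The six score
# pairs are invented for illustration; only the design (the same six
# surgeons measured at baseline and at 6 months) mirrors the abstract.
from scipy import stats

baseline = [45, 48, 46, 49, 47, 47]   # hypothetical session-1 totals
followup = [50, 53, 49, 54, 52, 53]   # hypothetical session-2 totals

t, p = stats.ttest_rel(followup, baseline)
print(f"t = {t:.2f}, p = {p:.4f}")    # paired: each surgeon is their own control
```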
Affiliation(s)
- Golsa Shafa
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Parmiss Kiani
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Caterina Masino
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Allan Okrainec
- Department of Surgery, University of Toronto, Toronto, ON, Canada
- Adnan Alseidi
- Department of Surgery, University of California, San Francisco, CA, USA
- Amin Madani
- Department of Surgery, University of Toronto, Toronto, ON, Canada
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- University Health Network - Toronto Western Hospital, Main Pavilion, 13MP-312B, 399 Bathurst St, Toronto, ON, M5T 2S8, Canada
6
Grover K, Mowoh DP, Chatha HN, Mallidi A, Sarvepalli S, Peery C, Galvani C, Havaleshko D, Taggar A, Khaitan L, Abbas M. A cognitive task analysis of expert surgeons performing the robotic Roux-en-Y gastric bypass. Surg Endosc 2023; 37:9523-9532. [PMID: 37702879] [DOI: 10.1007/s00464-023-10354-w]
Abstract
BACKGROUND The safe and effective performance of a robotic Roux-en-Y gastric bypass (RRNY) requires the application of a complex body of knowledge and skills. This qualitative study aims to: (1) define the tasks, subtasks, decision points, and pitfalls in a RRNY; (2) create a framework upon which training and objective evaluation of a RRNY can be based. METHODS Hierarchical and cognitive task analyses for a RRNY were performed using semi-structured interviews of expert bariatric surgeons to describe the thoughts and behaviors that exemplify optimal performance. Verbal data were recorded, transcribed verbatim, supplemented with literary and video resources, coded, and thematically analyzed. RESULTS A conceptual framework was synthesized based on three book chapters, three articles, eight online videos, nine field observations, and interviews of four subject matter experts (SMEs). At the time of the interview, SMEs had practiced for a median of 12.5 years and had completed a median of 424 RRNY cases. They estimated that the numbers of RRNY cases needed to achieve competence and expertise were 25 and 237.5, respectively. After four rounds of inductive analysis, 83 subtasks, 75 potential errors, 60 technical tips, and 15 decision points were identified and categorized into eight major procedural steps (pre-procedure preparation, abdominal entry & port placement, gastric pouch creation, omega loop creation, gastrojejunal anastomosis, jejunojejunal anastomosis, closure of mesenteric defects, leak test & port closure). Nine cognitive behaviors were elucidated (respect for patient-specific factors, tactical modification, adherence to core surgical principles, task completion, judicious technique & instrument selection, visuospatial awareness, team-based communication, anticipation & forward planning, finessed tissue handling). CONCLUSION This study defines the key elements that form the basis of a conceptual framework used by expert bariatric surgeons to perform the RRNY safely and effectively. This framework has the potential to serve as a foundational tool for training novices.
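A hierarchical task analysis like the one above is naturally tree-shaped: procedural steps own subtasks, decision points, and pitfalls. The sketch below shows one minimal way such a framework could be encoded; the field names and sample content are assumptions for illustration, not the authors' published framework.

```python
# Minimal encoding of a hierarchical/cognitive task analysis. The
# structure and example content are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ProceduralStep:
    name: str
    subtasks: list[str] = field(default_factory=list)
    decision_points: list[str] = field(default_factory=list)
    pitfalls: list[str] = field(default_factory=list)

pouch = ProceduralStep(
    name="Gastric pouch creation",
    subtasks=["Expose the angle of His", "Create a lesser-curve window"],
    decision_points=["Pouch size given patient-specific anatomy"],
    pitfalls=["Incorporating the fundus into the pouch"],
)
print(f"{pouch.name}: {len(pouch.subtasks)} subtasks, "
      f"{len(pouch.pitfalls)} pitfalls")
```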
Affiliation(s)
- Karan Grover
- Department of Surgery, University Hospitals Cleveland Medical Center, 11100 Euclid Ave, Lakeside 7, Cleveland, OH, 44106-5047, USA
- Daniel Praise Mowoh
- Department of Surgery, University Hospitals Cleveland Medical Center, 11100 Euclid Ave, Lakeside 7, Cleveland, OH, 44106-5047, USA
- Ajitha Mallidi
- Department of Surgery, University Hospitals Cleveland Medical Center, 11100 Euclid Ave, Lakeside 7, Cleveland, OH, 44106-5047, USA
- Shravan Sarvepalli
- Department of Surgery, University Hospitals Cleveland Medical Center, 11100 Euclid Ave, Lakeside 7, Cleveland, OH, 44106-5047, USA
- Carlos Galvani
- Department of Surgery, Tulane University School of Medicine, New Orleans, LA, USA
- Amit Taggar
- Florida Surgical Weight Loss Centers, Tampa, FL, USA
- Leena Khaitan
- Department of Surgery, University Hospitals Cleveland Medical Center, 11100 Euclid Ave, Lakeside 7, Cleveland, OH, 44106-5047, USA
- Mujjahid Abbas
- Department of Surgery, University Hospitals Cleveland Medical Center, 11100 Euclid Ave, Lakeside 7, Cleveland, OH, 44106-5047, USA
7
Laplante S, Namazi B, Kiani P, Hashimoto DA, Alseidi A, Pasten M, Brunt LM, Gill S, Davis B, Bloom M, Pernar L, Okrainec A, Madani A. Validation of an artificial intelligence platform for the guidance of safe laparoscopic cholecystectomy. Surg Endosc 2023; 37:2260-2268. [PMID: 35918549] [DOI: 10.1007/s00464-022-09439-9]
Abstract
BACKGROUND Many surgical adverse events, such as bile duct injuries during laparoscopic cholecystectomy (LC), occur due to errors in visual perception and judgment. Artificial intelligence (AI) can potentially improve the quality and safety of surgery, such as through real-time intraoperative decision support. GoNoGoNet is a novel AI model capable of identifying safe ("Go") and dangerous ("No-Go") zones of dissection on surgical videos of LC. Yet, it is unknown how GoNoGoNet performs in comparison to expert surgeons. This study aims to evaluate GoNoGoNet's ability to identify Go and No-Go zones compared to an external panel of expert surgeons. METHODS A panel of high-volume surgeons from the SAGES Safe Cholecystectomy Task Force was recruited to draw free-hand annotations on frames of prospectively collected videos of LC to identify the Go and No-Go zones. Expert consensus on the location of Go and No-Go zones was established using Visual Concordance Test pixel agreement. Identification of Go and No-Go zones by GoNoGoNet was compared to the expert-derived consensus using mean F1 Dice score, pixel accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). RESULTS A total of 47 frames from 25 LC videos, procured from 3 countries and 9 surgeons, were annotated simultaneously by an expert panel of 6 surgeons and GoNoGoNet. Mean (± standard deviation) F1 Dice scores were 0.58 (0.22) and 0.80 (0.12) for Go and No-Go zones, respectively. Mean (± standard deviation) accuracy, sensitivity, specificity, PPV and NPV for the Go zones were 0.92 (0.05), 0.52 (0.24), 0.97 (0.03), 0.70 (0.21), and 0.94 (0.04), respectively. For No-Go zones, these metrics were 0.92 (0.05), 0.80 (0.17), 0.95 (0.04), 0.84 (0.13) and 0.95 (0.05), respectively. CONCLUSIONS AI can be used to identify safe and dangerous zones of dissection within the surgical field, with high specificity/PPV for Go zones and high sensitivity/NPV for No-Go zones. Overall, model prediction was better for No-Go zones than for Go zones. This technology may eventually be used to provide real-time guidance and minimize the risk of adverse events.
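The expert-consensus step is the methodological crux here: several free-hand masks must be pooled into a single reference before the model can be scored against it. One simple way to pool them is a pixel-wise majority vote, sketched below; the actual Visual Concordance Test procedure may differ, so treat this as an illustration only.

```python
import numpy as np

def consensus_mask(expert_masks: list[np.ndarray], quorum: float = 0.5) -> np.ndarray:
    """Pixels marked by more than `quorum` of experts form the consensus."""
    stack = np.stack([m.astype(bool) for m in expert_masks])
    return stack.mean(axis=0) > quorum

# Three hypothetical 3x3 expert annotations of a No-Go zone.
e1 = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]], bool)
e2 = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]], bool)
e3 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]], bool)
print(consensus_mask([e1, e2, e3]).astype(int))
# Keeps exactly the pixels that at least 2 of the 3 experts marked.
```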
Affiliation(s)
- Simon Laplante
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada
- MIS Fellow, Division of General Surgery, Toronto Western Hospital, 8MP-325, 399 Bathurst St, Toronto, ON, M5T 2S8, Canada
- Babak Namazi
- Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Parmiss Kiani
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Adnan Alseidi
- Department of Surgery, University of California, San Francisco, CA, USA
- Mauricio Pasten
- Instituto de Gastroenterologia Boliviano Japones, Cochabamba, Bolivia
- L Michael Brunt
- Department of Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Sujata Gill
- Department of Surgery, Northeast Georgia Medical Center, Georgia, USA
- Brian Davis
- Department of Surgery, Texas Tech Paul L Foster School of Medicine, El Paso, TX, USA
- Matthew Bloom
- Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Luise Pernar
- Department of Surgery, Boston Medical Center, Boston, MA, USA
- Allan Okrainec
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada
- Amin Madani
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada
8
Madani A, Namazi B, Altieri MS, Hashimoto DA, Rivera AM, Pucher PH, Navarrete-Welton A, Sankaranarayanan G, Brunt LM, Okrainec A, Alseidi A. Artificial Intelligence for Intraoperative Guidance: Using Semantic Segmentation to Identify Surgical Anatomy During Laparoscopic Cholecystectomy. Ann Surg 2022; 276:363-369. [PMID: 33196488] [PMCID: PMC8186165] [DOI: 10.1097/sla.0000000000004594]
Abstract
OBJECTIVE The aim of this study was to develop and evaluate the performance of artificial intelligence (AI) models that can identify safe and dangerous zones of dissection, and anatomical landmarks, during laparoscopic cholecystectomy (LC). SUMMARY BACKGROUND DATA Many adverse events during surgery occur due to errors in visual perception and judgment leading to misinterpretation of anatomy. Deep learning, a subfield of AI, can potentially be used to provide real-time guidance intraoperatively. METHODS Deep learning models were developed and trained to identify safe (Go) and dangerous (No-Go) zones of dissection, liver, gallbladder, and hepatocystic triangle during LC. Annotations were performed by 4 high-volume surgeons. AI predictions were evaluated using 10-fold cross-validation against annotations by expert surgeons. Primary outcomes were intersection-over-union (IoU) and F1 score (validated spatial correlation indices), and secondary outcomes were pixel-wise accuracy, sensitivity, and specificity (± standard deviation). RESULTS AI models were trained on 2627 random frames from 290 LC videos, procured from 37 countries, 136 institutions, and 153 surgeons. Mean IoU, F1 score, accuracy, sensitivity, and specificity for the AI to identify Go zones were 0.53 (±0.24), 0.70 (±0.28), 0.94 (±0.05), 0.69 (±0.20), and 0.94 (±0.03), respectively. For No-Go zones, these metrics were 0.71 (±0.29), 0.83 (±0.31), 0.95 (±0.06), 0.80 (±0.21), and 0.98 (±0.05), respectively. Mean IoU for identification of the liver, gallbladder, and hepatocystic triangle were 0.86 (±0.12), 0.72 (±0.19), and 0.65 (±0.22), respectively. CONCLUSIONS AI can be used to identify anatomy within the surgical field. This technology may eventually be used to provide real-time guidance and minimize the risk of adverse events.
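The evaluation protocol (10-fold cross-validation of segmentation predictions against expert annotations) follows a standard pattern, sketched below with scikit-learn. The model and metric here are stand-in stubs, not the study's pipeline. In practice the folds would also be grouped by video or patient (e.g., GroupKFold) so frames from one operation never span train and test.

```python
# Sketch of a 10-fold cross-validation loop over annotated frames.
# `train_model` and `fold_f1` are placeholder stubs for illustration.
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
frames = np.arange(2627)            # one index per annotated frame

def train_model(train_idx):         # placeholder: fit a segmentation model
    return None

def fold_f1(model, test_idx):       # placeholder: score held-out frames
    return rng.uniform(0.6, 0.8)

scores = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True,
                                 random_state=0).split(frames):
    scores.append(fold_f1(train_model(train_idx), test_idx))

print(f"mean F1 across folds: {np.mean(scores):.2f} (SD {np.std(scores):.2f})")
```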
Affiliation(s)
- Amin Madani
- Department of Surgery, University Health Network, Toronto, ON, Canada
- Babak Namazi
- Center for Evidence-Based Simulation, Baylor University Medical Center, Dallas, TX, USA
- Maria S. Altieri
- Department of Surgery, East Carolina University Brody School of Medicine, Greenville, NC, USA
- Daniel A. Hashimoto
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Allison Navarrete-Welton
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- L. Michael Brunt
- Department of Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Allan Okrainec
- Department of Surgery, University Health Network, Toronto, ON, Canada
- Adnan Alseidi
- Department of Surgery, University of California – San Francisco, San Francisco, CA, USA
9
Naghawi H, Chau J, Madani A, Kaneva P, Monson J, Mueller C, Lee L. Development and evaluation of a virtual knowledge assessment tool for transanal total mesorectal excision. Tech Coloproctol 2022; 26:551-560. [PMID: 35503143] [DOI: 10.1007/s10151-022-02621-0]
Abstract
BACKGROUND Transanal total mesorectal excision (TATME) is difficult to learn and can result in serious complications. Current paradigms for assessing performance and competency may be insufficient. This study aims to develop and provide preliminary validity evidence for a TATME virtual assessment tool (TATME-VAT) to assess the cognitive skills necessary to safely complete TATME dissection. METHODS Participants from North America, Europe, Japan and China completed the test via an interactive online platform between 11/2019 and 05/2020. They were grouped into expert, experienced and novice surgeons depending on the number of independently performed TATMEs. TATME-VAT is a 24-item web-based assessment evaluating advanced cognitive skills, designed according to a blueprint from consensus guidelines. Eight items were multiple-choice questions. Sixteen items required making annotations on still frames of TATME videos and were scored as a visual concordance test (VCT) using a validated algorithm derived from experts' responses. Annotation (range 0-100), multiple-choice (range 0-100), and overall scores (sum of annotation and multiple-choice scores, normalized to μ = 50 and σ = 10) were reported. RESULTS There were significant differences between the expert, experienced, and novice groups for the annotation (p < 0.001), multiple-choice (p < 0.001), and overall scores (p < 0.001). The annotation (p = 0.439) and overall (p = 0.152) scores were similar between the experienced and novice groups. Annotation scores were higher in participants with 51 or more vs. 30-50 vs. fewer than 30 cases. Scores were also lower in users with a self-reported recent complication vs. those without. CONCLUSIONS This study describes the development of an interactive video-based virtual assessment tool for TATME dissection and provides initial validity evidence for its use.
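The overall score's normalization to μ = 50 and σ = 10 is a standard linear z-score rescaling, shown below with invented raw scores; the transformation is generic, not code from the study.

```python
# Rescale raw overall scores to mean 50 and SD 10 (z-score transform).
# The raw scores are invented for illustration.
import numpy as np

raw = np.array([112.0, 95.0, 130.0, 88.0, 104.0])  # hypothetical raw sums
scaled = 50 + 10 * (raw - raw.mean()) / raw.std()
print(scaled.round(1))   # mean 50, SD 10 by construction
```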
Affiliation(s)
- Hamzeh Naghawi
- The Steinberg-Bernstein Centre for Minimally Invasive Surgery, McGill University Health Center, Montreal, QC, Canada
- Johnny Chau
- The Steinberg-Bernstein Centre for Minimally Invasive Surgery, McGill University Health Center, Montreal, QC, Canada
- Amin Madani
- The University Health Network - Toronto General Hospital, Toronto, ON, Canada
- Pepa Kaneva
- The Steinberg-Bernstein Centre for Minimally Invasive Surgery, McGill University Health Center, Montreal, QC, Canada
- John Monson
- AdventHealth Medical Group, Orlando, FL, USA
- Carmen Mueller
- The Steinberg-Bernstein Centre for Minimally Invasive Surgery, McGill University Health Center, Montreal, QC, Canada
- Lawrence Lee
- The Steinberg-Bernstein Centre for Minimally Invasive Surgery, McGill University Health Center, Montreal, QC, Canada
- Colon and Rectal Surgery, Department of Surgery, McGill University Health Centre, 1001 Decarie Boulevard, DS1-3310, Montreal, QC, H4A 3J1, Canada
10
Kitaguchi D, Takeshita N, Hasegawa H, Ito M. Artificial intelligence-based computer vision in surgery: Recent advances and future perspectives. Ann Gastroenterol Surg 2022; 6:29-36. [PMID: 35106412] [PMCID: PMC8786689] [DOI: 10.1002/ags3.12513]
Abstract
Technology has advanced surgery, especially minimally invasive surgery (MIS), including laparoscopic and robotic surgery, and has led to an increase in the number of technologies in the operating room. These technologies can provide further information about a surgical procedure, e.g., instrument usage and trajectories. Among them, the amount of information that can be extracted from a surgical video captured by an endoscope is especially great. Therefore, the automation of data analysis is essential in surgery to reduce the complexity of the data while maximizing its utility, enabling new opportunities for research and development. Computer vision (CV) is the field of study that deals with how computers can understand digital images or videos and seeks to automate tasks that can be performed by the human visual system. Because this field deals with all the processes by which computers acquire real-world information, the term "CV" is broad, ranging from hardware for image sensing to AI-based image recognition. In recent years, AI-based image recognition for simple tasks, such as recognizing snapshots, has advanced to a level comparable to humans. Although surgical video recognition is a more complex and challenging task, applying it effectively to MIS could lead to future surgical advancements, such as intraoperative decision-making support and image-navigation surgery. Ultimately, automated surgery might be realized. In this article, we summarize the recent advances and future perspectives of AI-related research and development in the field of surgery.
Affiliation(s)
- Daichi Kitaguchi
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwa, Japan
- Nobuyoshi Takeshita
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwa, Japan
- Hiro Hasegawa
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwa, Japan
- Masaaki Ito
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwa, Japan
11
Jopling JK, Visser BC. Mastering the thousand tiny details: Routine use of video to optimize performance in sport and in surgery. ANZ J Surg 2021; 91:1981-1986. [PMID: 34309995] [DOI: 10.1111/ans.17076]
Affiliation(s)
- Jeffrey K Jopling
- Department of Surgery, Stanford University, Stanford, California, USA
- Brendan C Visser
- Department of Surgery, Stanford University, Stanford, California, USA
12
Ward TM, Mascagni P, Madani A, Padoy N, Perretta S, Hashimoto DA. Surgical data science and artificial intelligence for surgical education. J Surg Oncol 2021; 124:221-230. [PMID: 34245578] [DOI: 10.1002/jso.26496]
Abstract
Surgical data science (SDS) aims to improve the quality of interventional healthcare and its value through the capture, organization, analysis, and modeling of procedural data. As data capture has increased and artificial intelligence (AI) has advanced, SDS can help to unlock augmented and automated coaching, feedback, assessment, and decision support in surgery. We review major concepts in SDS and AI as applied to surgical education and surgical oncology.
Affiliation(s)
- Thomas M Ward
- Department of Surgery, Surgical AI & Innovation Laboratory, Massachusetts General Hospital, Boston, Massachusetts
- Pietro Mascagni
- ICube, University of Strasbourg, CNRS, France
- Fondazione Policlinico A. Gemelli IRCCS, Rome, Italy
- IHU Strasbourg, Strasbourg, France
- Amin Madani
- Department of Surgery, University Health Network, Toronto, Canada
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, France
- IHU Strasbourg, Strasbourg, France
- Daniel A Hashimoto
- Department of Surgery, Surgical AI & Innovation Laboratory, Massachusetts General Hospital, Boston, Massachusetts
13
Ward TM, Fer DM, Ban Y, Rosman G, Meireles OR, Hashimoto DA. Challenges in surgical video annotation. Comput Assist Surg (Abingdon) 2021; 26:58-68. [PMID: 34126014] [DOI: 10.1080/24699322.2021.1937320]
Abstract
Annotation of surgical video is important for establishing ground truth in surgical data science endeavors that involve computer vision. With the growth of the field over the last decade, several challenges have been identified in annotating spatial, temporal, and clinical elements of surgical video as well as challenges in selecting annotators. In reviewing current challenges, we provide suggestions on opportunities for improvement and possible next steps to enable translation of surgical data science efforts in surgical video analysis to clinical research and practice.
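One concrete facet of the annotator-selection challenge is measuring how much two annotators actually agree once chance agreement is accounted for. Cohen's kappa is a common choice for categorical labels such as surgical phases; the sketch below is a generic illustration with invented labels, not a metric this review itself prescribes.

```python
# Chance-corrected agreement between two annotators labeling the same
# frames with surgical phases. Labels are invented for illustration.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["dissection", "dissection", "retraction", "idle", "dissection"]
annotator_b = ["dissection", "retraction", "retraction", "idle", "dissection"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect, 0 = chance-level
```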
Affiliation(s)
- Thomas M Ward
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Danyal M Fer
- Department of Surgery, University of California San Francisco East Bay, Hayward, CA, USA
- Yutong Ban
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Distributed Robotics Laboratory, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Guy Rosman
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Distributed Robotics Laboratory, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Ozanan R Meireles
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Daniel A Hashimoto
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
14
Leveraging Videoconferencing Technology to Augment Surgical Training During a Pandemic. Ann Surg Open 2021; 2:e035. [PMID: 36590033] [PMCID: PMC9793996] [DOI: 10.1097/as9.0000000000000035]
Abstract
OBJECTIVE Our objective was to review the use of videoconferencing as a practical tool for remote surgical education and to propose a model to overcome the impact of a pandemic on resident training. SUMMARY BACKGROUND DATA In response to the coronavirus disease 2019 pandemic, most institutions and residency programs have been restructured to minimize the number of residents in the hospital as well as their interactions with patients, and to promote physical distancing measures. This has resulted in decreased resident operative exposure, responsibility, and autonomy, hindering their educational goals and their ability to achieve the surgical expertise necessary for independent practice. METHODS We conducted a narrative review to explore the use of videoconferencing for remote broadcasting of surgical procedures, telecoaching using surgical videos, telesimulation for surgical skills training, and establishing a didactic lecture series. RESULTS AND CONCLUSIONS We present a multimodal approach for using practical videoconferencing tools that provide the means for audiovisual communication to help augment residents' operative experience and limit the impact of self-isolation, redeployment, and limited operative exposure on surgical training.