1. Sengun B, Iscan Y, Yazici ZA, Sormaz IC, Aksakal N, Tunca F, Ekenel HK, Giles Senyurek Y. Utilization of Artificial Intelligence in Minimally Invasive Right Adrenalectomy: Recognition of Anatomical Landmarks with Deep Learning. Acta Chir Belg 2024:1-8. PMID: 38841838. DOI: 10.1080/00015458.2024.2363599.
Abstract
Background: The primary surgical approach for removing adrenal masses is minimally invasive adrenalectomy. Recognition of anatomical landmarks during surgery is critical for minimizing complications. Artificial intelligence-based tools can be utilized to create real-time navigation systems during laparoscopic and robotic right adrenalectomy. In this study, we aimed to develop deep learning models that can identify critical anatomical structures during minimally invasive right adrenalectomy. Methods: In this experimental feasibility study, intraoperative videos of 20 patients who underwent minimally invasive right adrenalectomy in a tertiary care center between 2011 and 2023 were analyzed and used to develop an artificial intelligence-based anatomical landmark recognition system. Semantic segmentation of the liver, the inferior vena cava (IVC), and the right adrenal gland was performed. Fifty random images per patient were extracted from the dissection phase of each video. Experiments on the annotated images were performed with two state-of-the-art segmentation models, SwinUNETR and MedNeXt, which are transformer-based and convolutional neural network (CNN)-based segmentation architectures, respectively. Two loss function combinations, Dice-Cross Entropy and Dice-Focal Loss, were tested for both models. The dataset was split into training and validation subsets with an 80:20 distribution on a patient basis in a 5-fold cross-validation approach. To introduce sample variability into the dataset, strong augmentation with intensity modifications and perspective transformations was applied to represent different surgical environments. The models were evaluated with the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU), two widely used segmentation metrics. For pixel-wise classification performance, Accuracy, Sensitivity, and Specificity were calculated on the validation subset. Results: From the 20 videos, 1000 images were extracted and the anatomical landmarks (liver, IVC, and right adrenal gland) were annotated. Randomly distributed subsets of 800 and 200 images were used for training and validation, respectively. Our benchmark results show that the transformer-based SwinUNETR model with Dice-Cross Entropy Loss achieved a 78.37% mDSC score, whereas the CNN-based MedNeXt model reached 77.09%. Conversely, MedNeXt achieved a higher mIoU score (63.71%) than SwinUNETR (62.10%) on the three-region prediction task. Conclusion: Artificial intelligence-based systems can predict anatomical landmarks with high performance in minimally invasive right adrenalectomy. Such tools could be used to build real-time surgical navigation systems in the near future.
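For readers who want to connect the reported numbers to their definitions, here is a minimal NumPy sketch of the two segmentation metrics used above, DSC and IoU, applied to binary masks; the masks and values are illustrative, not from the study.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient (DSC) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union (IoU, Jaccard index) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Per-class masks (liver, IVC, right adrenal gland) would each be scored this
# way, then averaged across classes to give the mDSC / mIoU values quoted above.
pred = np.zeros((256, 256), dtype=bool); pred[50:150, 50:150] = True
gt = np.zeros((256, 256), dtype=bool); gt[60:160, 60:160] = True
print(f"DSC={dice_coefficient(pred, gt):.3f}, IoU={iou(pred, gt):.3f}")
```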
Affiliation(s)
- Berke Sengun: Istanbul University - Istanbul Faculty of Medicine, Department of General Surgery
- Yalin Iscan: Istanbul University - Istanbul Faculty of Medicine, Department of General Surgery
- Ziya Ata Yazici: Istanbul Technical University, Faculty of Computer and Informatics Engineering
- Ismail Cem Sormaz: Istanbul University - Istanbul Faculty of Medicine, Department of General Surgery
- Nihat Aksakal: Istanbul University - Istanbul Faculty of Medicine, Department of General Surgery
- Fatih Tunca: Istanbul University - Istanbul Faculty of Medicine, Department of General Surgery
- Hazim Kemal Ekenel: Istanbul Technical University, Faculty of Computer and Informatics Engineering
2. Skinner GC, Liu YZ, Harzman AE, Husain SG, Gasior AC, Cunningham LA, Traugott AL, McCulloh CJ, Kalady MF, Kim PC, Huang ES. Clinical Utility of Laser Speckle Contrast Imaging and Real-Time Quantification of Bowel Perfusion in Minimally Invasive Left-Sided Colorectal Resections. Dis Colon Rectum 2024;67:850-859. PMID: 38408871. DOI: 10.1097/dcr.0000000000003098.
Abstract
BACKGROUND Left-sided colorectal surgery demonstrates high anastomotic leak rates, with tissue ischemia thought to influence outcomes. Indocyanine green is commonly used for perfusion assessment, but evidence remains mixed for whether it reduces colorectal anastomotic leaks. Laser speckle contrast imaging provides dye-free perfusion assessment in real-time through perfusion heat maps and quantification. OBJECTIVE This study investigates the efficacy of advanced visualization (indocyanine green versus laser speckle contrast imaging), perfusion assessment, and utility of laser speckle perfusion quantification in determining ischemic margins. DESIGN Prospective intervention group using advanced visualization with case-matched, retrospective control group. SETTINGS Single academic medical center. PATIENTS Forty adult patients undergoing elective, minimally invasive, left-sided colorectal surgery. INTERVENTIONS Intraoperative perfusion assessment using white light imaging and advanced visualization at 3 time points: T1, proximal colon after devascularization, before transection; T2, proximal/distal colon before anastomosis; and T3, completed anastomosis. MAIN OUTCOME MEASURES Intraoperative indication of ischemic line of demarcation before resection under each visualization method, surgical decision change using advanced visualization, post hoc laser speckle perfusion quantification of colorectal tissue, and 30-day postoperative outcomes. RESULTS Advanced visualization changed surgical decision-making in 17.5% of cases. For cases in which surgeons changed a decision, the average discordance between the line of demarcation in white light imaging and advanced visualization was 3.7 cm, compared to 0.41 cm (p = 0.01) for cases without decision changes. There was no statistical difference between the line of ischemic demarcation using laser speckle versus indocyanine green (p = 0.16). Laser speckle quantified lower perfusion values for tissues beyond the line of ischemic demarcation while suggesting an additional 1 cm of perfused tissue beyond this line. One (2.5%) anastomotic leak occurred in the intervention group. LIMITATIONS This study was not powered to detect differences in anastomotic leak rates. CONCLUSIONS Advanced visualization using laser speckle and indocyanine green provides valuable perfusion information that impacts surgical decision-making in minimally invasive left-sided colorectal surgeries. See Video Abstract.
Affiliation(s)
- Garrett C Skinner: Department of Surgery, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, New York; Activ Surgical, Boston, Massachusetts
- Yao Z Liu: Activ Surgical, Boston, Massachusetts; Department of Surgery, The Warren Alpert Medical School, Brown University, Providence, Rhode Island
- Alan E Harzman: Division of Colorectal Surgery, Department of Surgery, The Ohio State University Wexner Medical Center and James Comprehensive Cancer Center, Columbus, Ohio
- Syed G Husain: Division of Colorectal Surgery, Department of Surgery, The Ohio State University Wexner Medical Center and James Comprehensive Cancer Center, Columbus, Ohio
- Alessandra C Gasior: Division of Colorectal Surgery, Department of Surgery, The Ohio State University Wexner Medical Center and James Comprehensive Cancer Center, Columbus, Ohio
- Lisa A Cunningham: Division of Colorectal Surgery, Department of Surgery, The Ohio State University Wexner Medical Center and James Comprehensive Cancer Center, Columbus, Ohio
- Amber L Traugott: Division of Colorectal Surgery, Department of Surgery, The Ohio State University Wexner Medical Center and James Comprehensive Cancer Center, Columbus, Ohio
- Matthew F Kalady: Division of Colorectal Surgery, Department of Surgery, The Ohio State University Wexner Medical Center and James Comprehensive Cancer Center, Columbus, Ohio
- Peter C Kim: Activ Surgical, Boston, Massachusetts; Department of Surgery, The Warren Alpert Medical School, Brown University, Providence, Rhode Island
- Emily S Huang: Division of Colorectal Surgery, Department of Surgery, The Ohio State University Wexner Medical Center and James Comprehensive Cancer Center, Columbus, Ohio
3. De Simone B, Abu-Zidan FM, Saeidi S, Deeken G, Biffl WL, Moore EE, Sartelli M, Coccolini F, Ansaloni L, Di Saverio S, Catena F. Knowledge, attitudes and practices of using Indocyanine Green (ICG) fluorescence in emergency surgery: an international web-based survey in the ARtificial Intelligence in Emergency and trauma Surgery (ARIES)-WSES project. Updates Surg 2024. PMID: 38801604. DOI: 10.1007/s13304-024-01853-z.
Abstract
Fluorescence imaging is a real-time intraoperative navigation modality that enhances surgical vision and can guide emergency surgeons performing difficult, high-risk surgical procedures. The aim of this study was to assess the current knowledge, attitudes, and practices of emergency surgeons in the use of indocyanine green (ICG) in emergency settings. Between March 08, 2023 and April 10, 2023, a questionnaire composed of 27 multiple choice and open-ended questions was sent to 200 emergency surgeons who had previously joined the ARtificial Intelligence in Emergency and trauma Surgery (ARIES) project promoted by the WSES. The questionnaire was developed by an emergency surgeon with an interest in advanced technologies and artificial intelligence. The response rate was 96% (192/200). Respondents affirmed that ICG fluorescence can support the performance of difficult surgical procedures in the emergency setting, particularly in the presence of severe inflammation and in evaluating bowel viability; nevertheless, there were concerns regarding the accessibility and availability of fluorescence imaging in emergency settings. Eighty-seven of 192 (45.3%) respondents have a fluorescence imaging system available for both elective and emergency surgical procedures; 32.3% have such a system solely for elective procedures; 21.4% do not have one; 15% have no experience with it; and 38% do not use this imaging in emergency surgery. Only 2 of 192 respondents (1%) affirmed that ICG fluorescence always changed their intraoperative decision-making. Precision surgery effectively tailors surgical interventions to individual patient characteristics using advanced technology, data analysis, and artificial intelligence. ICG fluorescence can serve as a valid and safe tool to guide emergency surgery in different scenarios, such as intestinal ischemia and severe acute cholecystitis. Given the lack of high-level evidence in this field, a consensus of expert emergency surgeons is needed to encourage stakeholders to increase the availability of fluorescence imaging systems and to support emergency surgeons in implementing ICG fluorescence in their daily practice.
Affiliation(s)
- Belinda De Simone: Department of Emergency and Digestive Minimally Invasive Surgery, Academic Hospital of Villeneuve St Georges, Villeneuve St Georges, France; Department of Emergency and General Minimally Invasive Surgery, Infermi Hospital, AUSL Romagna, Rimini, Italy; eCampus University, Novedrate, CO, Italy
- Fikri M Abu-Zidan: The Research Office, College of Medicine and Health Sciences, United Arab Emirates University, Al-Ain, UAE
- Sara Saeidi: Minimally Invasive Research Center, Division of Minimally Invasive and Bariatric Surgery, Mashhad University of Medical Sciences, Mashhad, Iran
- Genevieve Deeken: Center for Research in Epidemiology and Statistics (CRESS), Université Paris Cité, 75004, Paris, France; Department of Global Public Health and Global Studies, University of Virginia, Charlottesville, VA, 22904-4132, USA
- Walter L Biffl: Department of Trauma and Emergency Surgery, Scripps Clinic, La Jolla, San Diego, USA
- Massimo Sartelli: Department of General Surgery, Macerata Hospital, Macerata, Italy
- Federico Coccolini: Department of General and Trauma Surgery, University Hospital of Pisa, Pisa, Italy
- Luca Ansaloni: Department of General Surgery, Pavia University Hospital, Pavia, Italy
- Salomone Di Saverio: Department of Surgery, Santa Maria del Soccorso Hospital, San Benedetto del Tronto, Italy
- Fausto Catena: Department of Emergency and General Surgery, Level I Trauma Center, Bufalini Hospital, AUSL Romagna, Cesena, Italy
4. Mascagni P, Alapatt D, Sestini L, Yu T, Alfieri S, Morales-Conde S, Padoy N, Perretta S. Applications of artificial intelligence in surgery: clinical, technical, and governance considerations. Cir Esp 2024. PMID: 38704146. DOI: 10.1016/j.cireng.2024.04.009.
Abstract
Artificial intelligence (AI) will power many of the tools in the armamentarium of digital surgeons. AI methods and surgical proof-of-concept studies flourish, but we have yet to witness clinical translation and value. Here we exemplify the potential of AI in the care pathway of colorectal cancer patients and discuss clinical, technical, and governance considerations of major importance for the safe translation of surgical AI for the benefit of our patients and practices.
Affiliation(s)
- Pietro Mascagni: IHU Strasbourg, Strasbourg, France; Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Università Cattolica del Sacro Cuore, Rome, Italy
- Deepak Alapatt: University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Luca Sestini: University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Tong Yu: University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Sergio Alfieri: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Università Cattolica del Sacro Cuore, Rome, Italy
- Nicolas Padoy: IHU Strasbourg, Strasbourg, France; University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Silvana Perretta: IHU Strasbourg, Strasbourg, France; IRCAD, Research Institute Against Digestive Cancer, Strasbourg, France; Nouvel Hôpital Civil, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
5. Lotter W, Hassett MJ, Schultz N, Kehl KL, Van Allen EM, Cerami E. Artificial Intelligence in Oncology: Current Landscape, Challenges, and Future Directions. Cancer Discov 2024;14:711-726. PMID: 38597966. PMCID: PMC11131133. DOI: 10.1158/2159-8290.cd-23-1199.
Abstract
Artificial intelligence (AI) in oncology is advancing beyond algorithm development to integration into clinical practice. This review describes the current state of the field, with a specific focus on clinical integration. AI applications are structured according to cancer type and clinical domain, focusing on the four most common cancers and tasks of detection, diagnosis, and treatment. These applications encompass various data modalities, including imaging, genomics, and medical records. We conclude with a summary of existing challenges, evolving solutions, and potential future directions for the field. SIGNIFICANCE AI is increasingly being applied to all aspects of oncology, where several applications are maturing beyond research and development to direct clinical integration. This review summarizes the current state of the field through the lens of clinical translation along the clinical care continuum. Emerging areas are also highlighted, along with common challenges, evolving solutions, and potential future directions for the field.
Affiliation(s)
- William Lotter: Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Pathology, Brigham and Women’s Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Michael J. Hassett: Harvard Medical School, Boston, MA, USA; Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Nikolaus Schultz: Marie-Josée and Henry R. Kravis Center for Molecular Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Kenneth L. Kehl: Harvard Medical School, Boston, MA, USA; Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Eliezer M. Van Allen: Harvard Medical School, Boston, MA, USA; Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA; Cancer Program, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Ethan Cerami: Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
6. Varghese C, Harrison EM, O'Grady G, Topol EJ. Artificial intelligence in surgery. Nat Med 2024;30:1257-1268. PMID: 38740998. DOI: 10.1038/s41591-024-02970-3.
Abstract
Artificial intelligence (AI) is rapidly emerging in healthcare, yet applications in surgery remain relatively nascent. Here we review the integration of AI in the field of surgery, centering our discussion on multifaceted improvements in surgical care in the preoperative, intraoperative and postoperative space. The emergence of foundation model architectures, wearable technologies and improving surgical data infrastructures is enabling rapid advances in AI interventions and utility. We discuss how maturing AI methods hold the potential to improve patient outcomes, facilitate surgical education and optimize surgical care. We review the current applications of deep learning approaches and outline a vision for future advances through multimodal foundation models.
Affiliation(s)
- Chris Varghese: Department of Surgery, University of Auckland, Auckland, New Zealand
- Ewen M Harrison: Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, UK
- Greg O'Grady: Department of Surgery, University of Auckland, Auckland, New Zealand; Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Eric J Topol: Scripps Research Translational Institute, La Jolla, CA, USA
7. Li Y, Bai B, Jia F. Parameter-efficient framework for surgical action triplet recognition. Int J Comput Assist Radiol Surg 2024. PMID: 38689146. DOI: 10.1007/s11548-024-03147-6.
Abstract
PURPOSE Surgical action triplet recognition is a clinically significant yet challenging task. It provides surgeons with detailed information about surgical scenarios, thereby facilitating clinical decision-making. However, the high similarity among action triplets presents a formidable obstacle to recognition. To enhance accuracy, prior methods required larger models, incurring a considerable computational burden. METHODS We propose a novel framework known as the Lite and Mega Models (LAM). It comprises a CNN-based, fully fine-tuned model (LAM-Lite) and a parameter-efficient fine-tuned model built on a Transformer-based foundation model (LAM-Mega). Temporal multi-label data augmentation is introduced for extracting robust class-level features. RESULTS Our study demonstrates that LAM outperforms prior methods across various parameter scales on the CholecT50 dataset. Using fewer tunable parameters, LAM achieves a mean average precision (mAP) of 42.1%, a 3.6% improvement over the previous state of the art. CONCLUSION Leveraging an effective structural design and the robust capabilities of the foundation model, our proposed approach strikes a balance between accuracy and computational efficiency. The source code is accessible at https://github.com/Lycus99/LAM.
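The parameter-efficient idea described above can be illustrated with a generic PyTorch sketch; this is not the authors' LAM code (which is linked in the abstract), and the ResNet-50 backbone, learning rate, and toy batch are stand-in assumptions. The principle: freeze a pretrained backbone and train only a small task head, keeping the tunable parameter count low.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Freeze a pretrained backbone and train only a small task head.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False  # backbone weights stay fixed

num_triplets = 100  # CholecT50 defines 100 instrument-verb-target triplet classes
backbone.fc = nn.Linear(backbone.fc.in_features, num_triplets)  # new, trainable head

optimizer = torch.optim.AdamW(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.BCEWithLogitsLoss()  # multi-label: several triplets can co-occur

frames = torch.randn(8, 3, 224, 224)                      # toy batch of video frames
labels = torch.randint(0, 2, (8, num_triplets)).float()   # toy multi-label targets
loss = criterion(backbone(frames), labels)
loss.backward()
optimizer.step()
```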
Affiliation(s)
- Yuchong Li: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Bizhe Bai: University of Toronto, Toronto, ON, Canada
- Fucang Jia: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; The Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
8. Tappero S, Fallara G, Chierigo F, Micalef A, Ambrosini F, Diaz R, Dorotei A, Pompeo E, Limena A, Bravi CA, Longoni M, Piccinelli ML, Barletta F, Albano L, Mazzone E, Dell'Oglio P. Intraoperative image-guidance during robotic surgery: is there clinical evidence of enhanced patient outcomes? Eur J Nucl Med Mol Imaging 2024. PMID: 38607386. DOI: 10.1007/s00259-024-06706-w.
Abstract
BACKGROUND To date, the benefit of image guidance during robot-assisted surgery (IGS) is an object of debate. The current study addresses the quality of the contemporary body of literature concerning IGS in robotic surgery across different surgical specialties. METHODS A systematic review of all English-language articles on IGS, from January 2013 to March 2023, was conducted using PubMed, Cochrane Library's Central, EMBASE, MEDLINE, and Scopus databases. Comparative studies that tested performance of IGS vs control were included in the quantitative synthesis, which addressed outcomes analyzed in at least three studies: operative time, length of stay, blood loss, surgical margins, complications, number of nodal retrievals, metastatic nodes, ischemia time, and renal function loss. Bias-corrected ratio of means (ROM) and bias-corrected odds ratio (OR) compared continuous and dichotomous variables, respectively. Subgroup analyses according to guidance type (i.e., 3D virtual reality vs ultrasound vs near-infrared fluorescence) were performed. RESULTS Twenty-nine studies, based on 11 surgical procedures across three specialties (general surgery, gynecology, urology), were included in the quantitative synthesis. IGS was associated with a 12% reduction in length of stay (ROM 0.88; p = 0.03) and a 13% reduction in blood loss (ROM 0.87; p = 0.03) but did not affect operative time (ROM 1.00; p = 0.9) or complications (OR 0.93; p = 0.4). IGS was associated with an estimated 44% increase in the mean number of removed nodes (ROM 1.44; p < 0.001), a significantly higher rate of metastatic nodal disease (OR 1.82; p < 0.001), and a significantly lower rate of positive surgical margins (OR 0.62; p < 0.001). In nephron-sparing surgery, IGS significantly decreased renal function loss (ROM 0.37; p = 0.002). CONCLUSIONS Robot-assisted surgery benefits from image guidance, especially in terms of pathologic outcomes, namely higher detection of metastatic nodes and lower positive surgical margin rates. Moreover, IGS enhances renal function preservation and lowers surgical blood loss.
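The two pooled effect measures used above can be made concrete with a short, illustrative computation for a single hypothetical study. All numbers below are invented (only the length-of-stay ROM is chosen to match the reported 0.88), and the plain ratios shown here omit the bias correction the meta-analysis applied; the sketch just shows the shape of the computation.

```python
import numpy as np

# Continuous outcome, e.g. length of stay (days): image guidance vs control.
mean_igs, mean_ctrl = 4.4, 5.0
rom = mean_igs / mean_ctrl           # 0.88 -> a 12% reduction, as reported
print(f"ROM = {rom:.2f}")

# Dichotomous outcome, e.g. positive surgical margins (events / totals).
events_igs, n_igs = 12, 200
events_ctrl, n_ctrl = 19, 200
odds_igs = events_igs / (n_igs - events_igs)
odds_ctrl = events_ctrl / (n_ctrl - events_ctrl)
odds_ratio = odds_igs / odds_ctrl    # < 1 favors image guidance here

# Meta-analyses pool log-transformed effects; Woolf's formula gives the SE.
log_or_se = np.sqrt(1/events_igs + 1/(n_igs - events_igs)
                    + 1/events_ctrl + 1/(n_ctrl - events_ctrl))
print(f"OR = {odds_ratio:.2f} (SE of log OR = {log_or_se:.2f})")
```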
Affiliation(s)
- Stefano Tappero: Department of Urology, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Giuseppe Fallara: Department of Urology, European Institute of Oncology (IEO), University of Milan, Milan, Italy
- Francesco Chierigo: Department of Urology, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy; Department of Urology, Azienda Ospedaliera Nazionale SS. Antonio e Biagio e Cesare Arrigo, Alessandria, Italy; Department of Urology, IRCCS Ospedale Policlinico San Martino, University of Genova, Genoa, Italy; Department of Surgical and Diagnostic Integrated Sciences (DISC), University of Genova, Genoa, Italy
- Andrea Micalef: Department of General Surgery, Luigi Sacco University Hospital, Milan, Italy; Università Degli Studi Di Milano, Milan, Italy
- Francesca Ambrosini: Department of Urology, IRCCS Ospedale Policlinico San Martino, University of Genova, Genoa, Italy; Department of Surgical and Diagnostic Integrated Sciences (DISC), University of Genova, Genoa, Italy
- Raquel Diaz: Department of Surgical and Diagnostic Integrated Sciences (DISC), University of Genova, Genoa, Italy
- Andrea Dorotei: Department of Orthopaedics, Humanitas Clinical and Research Center, IRCCS, Rozzano, Milan, Italy; Department of Biomedical Sciences, Humanitas University, Milan, Italy
- Edoardo Pompeo: Neurosurgery and Gamma Knife Radiosurgery Unit, IRCCS Ospedale San Raffaele, Milan, Italy
- Alessia Limena: Infertility Unit, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy; Department of Clinical Sciences and Community Health, Università degli Studi di Milano, Milan, Italy
- Carlo Andrea Bravi: Department of Urology, Northampton General Hospital, Northampton, UK; Department of Urology, Royal Marsden Foundation Trust, London, UK
- Mattia Longoni: Unit of Urology/Division of Oncology, Gianfranco Soldera Prostate Cancer Lab, IRCCS San Raffaele Scientific Institute, Milan, Italy; Vita-Salute San Raffaele University, Milan, Italy
- Mattia Luca Piccinelli: Department of Urology, European Institute of Oncology (IEO), University of Milan, Milan, Italy
- Francesco Barletta: Unit of Urology/Division of Oncology, Gianfranco Soldera Prostate Cancer Lab, IRCCS San Raffaele Scientific Institute, Milan, Italy; Vita-Salute San Raffaele University, Milan, Italy
- Luigi Albano: Neurosurgery and Gamma Knife Radiosurgery Unit, IRCCS Ospedale San Raffaele, Milan, Italy; Neuroimaging Research Unit, Division of Neuroscience, IRCCS Ospedale San Raffaele, Milan, Italy
- Elio Mazzone: Unit of Urology/Division of Oncology, Gianfranco Soldera Prostate Cancer Lab, IRCCS San Raffaele Scientific Institute, Milan, Italy; Vita-Salute San Raffaele University, Milan, Italy
- Paolo Dell'Oglio: Department of Urology, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy; Department of Urology, Netherlands Cancer Institute-Antoni Van Leeuwenhoek Hospital, Amsterdam, The Netherlands; Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
9. Meyer A, Mazellier JP, Dana J, Padoy N. On-the-fly point annotation for fast medical video labeling. Int J Comput Assist Radiol Surg 2024. PMID: 38573565. DOI: 10.1007/s11548-024-03098-y.
Abstract
PURPOSE In medical research, deep learning models rely on high-quality annotated data, a process that is often laborious and time-consuming. This is particularly true for detection tasks, where bounding box annotations are required. The need to adjust two corners makes the process inherently frame-by-frame. Given the scarcity of experts' time, efficient annotation methods suitable for clinicians are needed. METHODS We propose an on-the-fly method for live video annotation to enhance annotation efficiency. In this approach, a continuous single-point annotation is maintained by keeping the cursor on the object in a live video, mitigating the need for the tedious pausing and repetitive navigation inherent in traditional annotation methods. This novel annotation paradigm inherits the point annotation's ability to generate pseudo-labels using a point-to-box teacher model. We empirically evaluate this approach by developing a dataset and comparing on-the-fly annotation time against the traditional annotation method. RESULTS Using our method, annotation was 3.2× faster than the traditional technique. We achieved a mean improvement of 6.51 ± 0.98 AP@50 over the conventional method at equivalent annotation budgets on the developed dataset. CONCLUSION Without bells and whistles, our approach offers a significant speed-up in annotation tasks. It can be easily implemented on any annotation platform to accelerate the integration of deep learning in video-based medical research.
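The AP@50 metric quoted above scores a detection as correct when it overlaps a ground-truth box with an IoU of at least 0.5; AP is then the area under the resulting precision-recall curve. A self-contained sketch of the underlying IoU test (the boxes are illustrative, not from the paper):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A predicted box counts as a true positive at the AP@50 threshold when
# its IoU with a ground-truth box is >= 0.5.
pred, gt = (10, 10, 60, 60), (20, 15, 70, 65)
iou = box_iou(pred, gt)
print(f"IoU = {iou:.2f} -> TP at the 0.5 threshold: {iou >= 0.5}")
```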
Affiliation(s)
- Adrien Meyer: ICube, CNRS, University of Strasbourg, Strasbourg, France
- Jean-Paul Mazellier: ICube, CNRS, University of Strasbourg, Strasbourg, France; IHU Strasbourg, Strasbourg, France
- Jérémy Dana: IHU Strasbourg, Strasbourg, France; Department of Diagnostic Radiology, McGill University, Montréal, Canada
- Nicolas Padoy: ICube, CNRS, University of Strasbourg, Strasbourg, France; IHU Strasbourg, Strasbourg, France
10. Arensmeyer J, Bedetti B, Schnorr P, Buermann J, Zalepugas D, Schmidt J, Feodorovici P. A System for Mixed-Reality Holographic Overlays of Real-Time Rendered 3D-Reconstructed Imaging Using a Video Pass-through Head-Mounted Display - A Pathway to Future Navigation in Chest Wall Surgery. J Clin Med 2024;13:2080. PMID: 38610849. PMCID: PMC11012529. DOI: 10.3390/jcm13072080.
Abstract
Background: Three-dimensional reconstructions of state-of-the-art high-resolution imaging are progressively being used for preprocedural assessment in thoracic surgery. This is a promising tool that aims to improve patient-specific treatment planning, for example, for minimally invasive or robotic-assisted lung resections. Increasingly available mixed-reality hardware based on video pass-through technology enables the projection of image data as a hologram onto the patient. We describe the novel method of real-time 3D surgical planning in a mixed-reality setting by presenting three representative cases utilizing volume rendering. Materials: A mixed-reality system was set up using a high-performance workstation running a video pass-through-based head-mounted display. Image data from computed tomography were imported and volume-rendered in real time to be customized through live editing. The image-based hologram was projected onto the patient, highlighting the regions of interest. Results: Three oncological cases were selected to explore the potential of the mixed-reality system. Two presented large tumor masses in the thoracic cavity, while the third presented an unclear lesion of the chest wall. We aligned real-time rendered 3D holographic image data onto the patient, allowing us to investigate the relationship between anatomical structures and their respective body position. Conclusions: The exploration of holographic overlays has proven promising for preprocedural surgical planning, particularly for complex oncological tasks in the thoracic surgical field. Further studies on outcome-related surgical planning and navigation should therefore be conducted. Ongoing technological progress in extended-reality hardware and intelligent software features will most likely enhance applicability and the range of use in surgical fields in the near future.
Affiliation(s)
- Jan Arensmeyer: Division of Thoracic Surgery, Department of General, Thoracic and Vascular Surgery, University Hospital Bonn, 53127 Bonn, Germany; Bonn Surgical Technology Center (BOSTER), University Hospital Bonn, 53227 Bonn, Germany
- Benedetta Bedetti: Division of Thoracic Surgery, Department of General, Thoracic and Vascular Surgery, University Hospital Bonn, 53127 Bonn, Germany; Department of Thoracic Surgery, Helios Hospital Bonn/Rhein-Sieg, 53123 Bonn, Germany
- Philipp Schnorr: Division of Thoracic Surgery, Department of General, Thoracic and Vascular Surgery, University Hospital Bonn, 53127 Bonn, Germany; Department of Thoracic Surgery, Helios Hospital Bonn/Rhein-Sieg, 53123 Bonn, Germany
- Jens Buermann: Division of Thoracic Surgery, Department of General, Thoracic and Vascular Surgery, University Hospital Bonn, 53127 Bonn, Germany; Department of Thoracic Surgery, Helios Hospital Bonn/Rhein-Sieg, 53123 Bonn, Germany
- Donatas Zalepugas: Division of Thoracic Surgery, Department of General, Thoracic and Vascular Surgery, University Hospital Bonn, 53127 Bonn, Germany; Department of Thoracic Surgery, Helios Hospital Bonn/Rhein-Sieg, 53123 Bonn, Germany
- Joachim Schmidt: Division of Thoracic Surgery, Department of General, Thoracic and Vascular Surgery, University Hospital Bonn, 53127 Bonn, Germany; Bonn Surgical Technology Center (BOSTER), University Hospital Bonn, 53227 Bonn, Germany; Department of Thoracic Surgery, Helios Hospital Bonn/Rhein-Sieg, 53123 Bonn, Germany
- Philipp Feodorovici: Division of Thoracic Surgery, Department of General, Thoracic and Vascular Surgery, University Hospital Bonn, 53127 Bonn, Germany; Bonn Surgical Technology Center (BOSTER), University Hospital Bonn, 53227 Bonn, Germany
11. Hashimoto DA, Varas J, Schwartz TA. Practical Guide to Machine Learning and Artificial Intelligence in Surgical Education Research. JAMA Surg 2024;159:455-456. PMID: 38170510. DOI: 10.1001/jamasurg.2023.6687.
Abstract
This Guide to Statistics and Methods gives an overview of artificial intelligence techniques and tools in surgical education research.
Affiliation(s)
- Daniel A Hashimoto: Department of Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia; Department of Computer and Information Science, University of Pennsylvania School of Engineering and Applied Science, Philadelphia
- Julian Varas: Centro de Cirugía Experimental y Simulación, Pontificia Universidad Católica, Santiago, Chile
- Todd A Schwartz: Department of Biostatistics, Gillings School of Global Public Health, University of North Carolina at Chapel Hill; Statistical Editor, JAMA Surgery
12. Deol ES, Tollefson MK, Antolin A, Zohar M, Bar O, Ben-Ayoun D, Mynderse LA, Lomas DJ, Avant RA, Miller AR, Elliott DS, Boorjian SA, Wolf T, Asselmann D, Khanna A. Automated surgical step recognition in transurethral bladder tumor resection using artificial intelligence: transfer learning across surgical modalities. Front Artif Intell 2024;7:1375482. PMID: 38525302. PMCID: PMC10958784. DOI: 10.3389/frai.2024.1375482.
Abstract
Objective Automated surgical step recognition (SSR) using AI has been a catalyst in the "digitization" of surgery. However, progress has been limited to laparoscopy, with relatively few SSR tools in endoscopic surgery. This study aimed to create an SSR model for transurethral resection of bladder tumors (TURBT), leveraging a novel application of transfer learning to reduce video dataset requirements. Materials and methods Retrospective surgical videos of TURBT were manually annotated with the following steps of surgery: primary endoscopic evaluation, resection of bladder tumor, and surface coagulation. The manually annotated videos were then utilized to train a novel AI computer vision algorithm to perform automated video annotation of TURBT surgical video, using a transfer-learning technique to pre-train on laparoscopic procedures. Accuracy of AI SSR was determined by comparison to human annotations as the reference standard. Results A total of 300 full-length TURBT videos (median 23.96 min; IQR 14.13-41.31 min) were manually annotated with sequential steps of surgery. One hundred and seventy-nine videos served as the training dataset for algorithm development, 44 for internal validation, and 77 as a separate test cohort for evaluating algorithm accuracy. The overall accuracy of AI video analysis was 89.6%. Model accuracy was highest for the primary endoscopic evaluation step (98.2%) and lowest for the surface coagulation step (82.7%). Conclusion We developed a fully automated computer vision algorithm for high-accuracy annotation of TURBT surgical videos. This represents the first application of transfer learning from laparoscopy-based computer vision models to surgical endoscopy, demonstrating the promise of this approach in adapting to new procedure types.
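A generic sketch of the transfer-learning recipe described above, assuming a PyTorch video backbone. The authors' actual model and laparoscopy-pretrained weights are not public, so Kinetics-400 pretraining stands in for the source domain here; the idea is the same: start from a network pretrained elsewhere, then retrain its head on the three TURBT steps.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

TURBT_STEPS = ["primary endoscopic evaluation", "resection of bladder tumor",
               "surface coagulation"]

# Pretrained source-domain backbone (stand-in for laparoscopy pretraining).
model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
model.fc = nn.Linear(model.fc.in_features, len(TURBT_STEPS))  # new step head

# Optionally freeze early layers so only higher-level features adapt.
for name, p in model.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        p.requires_grad = False

clip = torch.randn(2, 3, 16, 112, 112)  # (batch, channels, frames, H, W)
logits = model(clip)                    # per-clip step scores
print(logits.shape)                     # torch.Size([2, 3])
```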
Affiliation(s)
- Ekamjit S. Deol: Department of Urology, Mayo Clinic, Rochester, MN, United States
- Maya Zohar: theator.io, Palo Alto, CA, United States
- Omri Bar: theator.io, Palo Alto, CA, United States
- Derek J. Lomas: Department of Urology, Mayo Clinic, Rochester, MN, United States
- Ross A. Avant: Department of Urology, Mayo Clinic, Rochester, MN, United States
- Adam R. Miller: Department of Urology, Mayo Clinic, Rochester, MN, United States
- Tamir Wolf: theator.io, Palo Alto, CA, United States
- Abhinav Khanna: Department of Urology, Mayo Clinic, Rochester, MN, United States
13. Zuluaga L, Rich JM, Gupta R, Pedraza A, Ucpinar B, Okhawere KE, Saini I, Dwivedi P, Patel D, Zaytoun O, Menon M, Tewari A, Badani KK. AI-powered real-time annotations during urologic surgery: The future of training and quality metrics. Urol Oncol 2024;42:57-66. PMID: 38142209. DOI: 10.1016/j.urolonc.2023.11.002.
Abstract
INTRODUCTION AND OBJECTIVE Real-time artificial intelligence (AI) annotation of the surgical field has the potential to automatically extract information from surgical videos, helping to create a robust surgical atlas. This content can be used for surgical education and quality initiatives. We demonstrate the first use of AI in urologic robotic surgery to capture live surgical video and annotate key surgical steps and safety milestones in real time. SUMMARY BACKGROUND DATA While AI models possess the capability to generate automated annotations from collections of video images, the real-time implementation of such technology in urological robotic surgery to aid surgeons and training staff has yet to be studied. METHODS We conducted an educational symposium, which broadcast 2 live procedures: a robotic-assisted radical prostatectomy (RARP) and a robotic-assisted partial nephrectomy (RAPN). A surgical AI platform (Theator, Palo Alto, CA) generated real-time annotations and identified operative safety milestones. This was achieved through trained algorithms, conventional video recognition, and novel Video Transfer Network technology, which captures clips in full context, enabling automatic recognition and surgical mapping in real time. RESULTS Real-time AI annotations for procedure #1, RARP, are found in Table 1. The safety milestone annotations included the apical safety maneuver and deliberate views of structures such as the external iliac vessels and the obturator nerve. Real-time AI annotations for procedure #2, RAPN, are also found in Table 1. Safety milestones included deliberate views of structures such as the gonadal vessels and the ureter. AI-annotated surgical events included intraoperative ultrasound, temporary clip application and removal, hemostatic powder application, and notable hemorrhage. CONCLUSIONS For the first time, surgical intelligence successfully showcased real-time AI annotations of 2 separate urologic robotic procedures during a live telecast. These annotations may provide the technological framework for sending automatic notifications to clinical or operational stakeholders. This technology is a first step toward real-time intraoperative decision support, leveraging big data to improve the quality of surgical care, potentially improve surgical outcomes, and support training and education.
Affiliation(s)
- Laura Zuluaga: Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Jordan Miller Rich: Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Raghav Gupta: Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Adriana Pedraza: Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Burak Ucpinar: Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Kennedy E Okhawere: Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Indu Saini: Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Priyanka Dwivedi: Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Dhruti Patel: Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Osama Zaytoun: Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Mani Menon: Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Ashutosh Tewari: Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Ketan K Badani: Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
14. Ali JT, Yang G, Green CA, Reed BL, Madani A, Ponsky TA, Hazey J, Rothenberg SS, Schlachta CM, Oleynikov D, Szoka N. Defining digital surgery: a SAGES white paper. Surg Endosc 2024;38:475-487. PMID: 38180541. DOI: 10.1007/s00464-023-10551-7.
Abstract
BACKGROUND Digital surgery is a new paradigm within the surgical innovation space that is rapidly advancing and encompasses multiple areas. METHODS This white paper from the SAGES Digital Surgery Working Group outlines the scope of digital surgery, defines key terms, and analyzes the challenges and opportunities surrounding this disruptive technology. RESULTS In its simplest form, digital surgery inserts a computer interface between surgeon and patient. We divide the digital surgery space into the following elements: advanced visualization, enhanced instrumentation, data capture, data analytics with artificial intelligence/machine learning, connectivity via telepresence, and robotic surgical platforms. We will define each area, describe specific terminology, review current advances as well as discuss limitations and opportunities for future growth. CONCLUSION Digital Surgery will continue to evolve and has great potential to bring value to all levels of the healthcare system. The surgical community has an essential role in understanding, developing, and guiding this emerging field.
Affiliation(s)
- Jawad T Ali: University of Texas at Austin, Austin, TX, USA
- Gene Yang: University at Buffalo, Buffalo, NY, USA
- Amin Madani: University of Toronto, Toronto, ON, Canada; Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Todd A Ponsky: Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Dmitry Oleynikov: Monmouth Medical Center, Robert Wood Johnson Barnabas Health, Rutgers School of Medicine, Long Branch, NJ, USA
- Nova Szoka: Department of Surgery, West Virginia University, Suite 7500 HSS, PO Box 9238, Morgantown, WV, 26506-9238, USA
15. Schonfeld E, Pant A, Shah A, Sadeghzadeh S, Pangal D, Rodrigues A, Yoo K, Marianayagam N, Haider G, Veeravagu A. Evaluating Computer Vision, Large Language, and Genome-Wide Association Models in a Limited Sized Patient Cohort for Pre-Operative Risk Stratification in Adult Spinal Deformity Surgery. J Clin Med 2024;13:656. PMID: 38337352. PMCID: PMC10856542. DOI: 10.3390/jcm13030656.
Abstract
Background: Adult spinal deformities (ASD) are varied spinal abnormalities, often necessitating surgical intervention when associated with pain, worsening deformity, or worsening function. Predicting post-operative complications and revision surgery is critical for surgical planning and patient counseling. Due to the relatively small number of cases of ASD surgery, machine learning applications have been limited to traditional models (e.g., logistic regression or standard neural networks) and coarse clinical variables. We present the novel application of advanced models (CNN, LLM, GWAS) using complex data types (radiographs, clinical notes, genomics) for ASD outcome prediction. Methods: We developed a CNN trained on 209 ASD patients (1549 radiographs) from the Stanford Research Repository, a CNN pre-trained on VinDr-SpineXR (10,468 spine radiographs), and an LLM using free-text clinical notes from the same 209 patients, trained via GatorTron. Additionally, we conducted a GWAS using the UK Biobank, contrasting 540 surgical ASD patients with 7355 non-surgical ASD patients. Results: The LLM notably outperformed the CNN in predicting pulmonary complications (F1: 0.545 vs. 0.2881), neurological complications (F1: 0.250 vs. 0.224), and sepsis (F1: 0.382 vs. 0.132). The pre-trained CNN showed improved sepsis prediction (AUC: 0.638 vs. 0.534) but reduced performance for neurological complication prediction (AUC: 0.545 vs. 0.619). The LLM demonstrated high specificity (0.946) and positive predictive value (0.467) for neurological complications. The GWAS identified 21 significant (p < 10⁻⁵) SNPs associated with ASD surgery risk (OR: mean 3.17, SD 1.92, median 2.78), with the highest odds ratio (8.06) for the LDB2 gene, which is implicated in ectoderm differentiation. Conclusions: This study exemplifies the innovative application of cutting-edge models to forecast outcomes in ASD, underscoring the utility of complex data in outcome prediction for neurosurgical conditions. It demonstrates the promise of genetic models in identifying surgical risks and supports the integration of complex machine learning tools for informed surgical decision-making in ASD.
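For reference, F1 is the harmonic mean of precision and recall, and AUC scores the ranking induced by predicted probabilities. A toy scikit-learn sketch with invented labels (not the study's data) showing how the two figures above are computed:

```python
from sklearn.metrics import f1_score, roc_auc_score

# Toy binary outcomes (e.g. complication yes/no) and model probabilities.
y_true = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_prob = [0.1, 0.4, 0.8, 0.3, 0.2, 0.9, 0.6, 0.1, 0.7, 0.2]
y_pred = [int(p >= 0.5) for p in y_prob]  # F1 needs a hard threshold

print(f"F1  = {f1_score(y_true, y_pred):.3f}")   # thresholded predictions
print(f"AUC = {roc_auc_score(y_true, y_prob):.3f}")  # threshold-free ranking
```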
Affiliation(s)
- Ethan Schonfeld: Stanford University School of Medicine, Stanford University, Stanford, CA 94304, USA
- Aaradhya Pant: Stanford University School of Medicine, Stanford University, Stanford, CA 94304, USA
- Aaryan Shah: Department of Computer Science, Stanford University, Stanford, CA 94304, USA
- Sina Sadeghzadeh: Stanford University School of Medicine, Stanford University, Stanford, CA 94304, USA
- Dhiraj Pangal: Department of Neurosurgery, Stanford University School of Medicine, Stanford University, Stanford, CA 94304, USA
- Adrian Rodrigues: Department of Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Kelly Yoo: Department of Neurosurgery, Stanford University School of Medicine, Stanford University, Stanford, CA 94304, USA
- Neelan Marianayagam: Department of Neurosurgery, Stanford University School of Medicine, Stanford University, Stanford, CA 94304, USA
- Ghani Haider: Department of Neurosurgery, Stanford University School of Medicine, Stanford University, Stanford, CA 94304, USA
- Anand Veeravagu: Department of Neurosurgery, Stanford University School of Medicine, Stanford University, Stanford, CA 94304, USA
16. Balu A, Kugener G, Pangal DJ, Lee H, Lasky S, Han J, Buchanan I, Liu J, Zada G, Donoho DA. Simulated outcomes for durotomy repair in minimally invasive spine surgery. Sci Data 2024;11:62. PMID: 38200013. PMCID: PMC10781746. DOI: 10.1038/s41597-023-02744-5.
Abstract
Minimally invasive spine surgery (MISS) is increasingly performed using endoscopic and microscopic visualization, and the captured video can be used for surgical education and development of predictive artificial intelligence (AI) models. Video datasets depicting adverse event management are also valuable, as predictive models not exposed to adverse events may exhibit poor performance when these occur. Given that no dedicated spine surgery video datasets for AI model development are publicly available, we introduce Simulated Outcomes for Durotomy Repair in Minimally Invasive Spine Surgery (SOSpine). A validated MISS cadaveric dural repair simulator was used to educate neurosurgery residents, and surgical microscope video recordings were paired with outcome data. Objects including durotomy, needle, grasper, needle driver, and nerve hook were then annotated. Altogether, SOSpine contains 15,698 frames with 53,238 annotations and associated durotomy repair outcomes. For validation, an AI model was fine-tuned on SOSpine video and detected surgical instruments with a mean average precision of 0.77. In summary, SOSpine depicts spine surgeons managing a common complication, providing opportunities to develop surgical AI models.
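A dataset like SOSpine is typically consumed by fine-tuning an off-the-shelf detector on the annotated frames. Below is a hedged sketch using torchvision's Faster R-CNN; the paper does not state which architecture was used, so this is an assumption for illustration. The class list follows the annotated objects above, plus the background class such detectors require.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

CLASSES = ["background", "durotomy", "needle", "grasper",
           "needle driver", "nerve hook"]

# Start from a COCO-pretrained detector and swap in a new prediction head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(CLASSES))

model.train()
images = [torch.rand(3, 480, 640)]  # one toy frame
targets = [{
    "boxes": torch.tensor([[100.0, 120.0, 220.0, 240.0]]),  # (x1, y1, x2, y2)
    "labels": torch.tensor([1]),                             # "durotomy"
}]
loss_dict = model(images, targets)  # classification + box-regression losses
total_loss = sum(loss_dict.values())
total_loss.backward()
# Evaluation against held-out annotations would then report mAP, as above.
```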
Affiliation(s)
- Alan Balu: Department of Neurosurgery, Georgetown University School of Medicine, 3900 Reservoir Rd NW, Washington, D.C., 20007, USA
- Guillaume Kugener: Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- Dhiraj J Pangal: Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- Heewon Lee: University of Southern California, 3709 Trousdale Pkwy., Los Angeles, CA, 90089, USA
- Sasha Lasky: University of Southern California, 3709 Trousdale Pkwy., Los Angeles, CA, 90089, USA
- Jane Han: University of Southern California, 3709 Trousdale Pkwy., Los Angeles, CA, 90089, USA
- Ian Buchanan: Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- John Liu: Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- Gabriel Zada: Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- Daniel A Donoho: Department of Neurosurgery, Children's National Hospital, 111 Michigan Avenue NW, Washington, DC, 20010, USA
17
|
Mascagni P, Alapatt D, Lapergola A, Vardazaryan A, Mazellier JP, Dallemagne B, Mutter D, Padoy N. Early-stage clinical evaluation of real-time artificial intelligence assistance for laparoscopic cholecystectomy. Br J Surg 2024; 111:znad353. [PMID: 37935636 DOI: 10.1093/bjs/znad353] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2023] [Revised: 07/24/2023] [Accepted: 08/26/2023] [Indexed: 11/09/2023]
Abstract
Lay Summary
The growing availability of surgical digital data and developments in analytics such as artificial intelligence (AI) are being harnessed to improve surgical care. However, technical and cultural barriers to real-time intraoperative AI assistance exist. This early-stage clinical evaluation shows the technical feasibility of concurrently deploying several AI models in operating rooms for real-time assistance during procedures. In addition, potentially relevant clinical applications of these AI models are explored with a multidisciplinary cohort of key stakeholders.
Collapse
Affiliation(s)
- Pietro Mascagni
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, Strasbourg, France
- Department of Medical and Abdominal Surgery and Endocrine-Metabolic Science, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
| | - Deepak Alapatt
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, Strasbourg, France
| | - Alfonso Lapergola
- Department of Digestive and Endocrine Surgery, Nouvel Hôpital Civil, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
| | | | | | - Bernard Dallemagne
- Institute for Research against Digestive Cancer (IRCAD), Strasbourg, France
| | - Didier Mutter
- Department of Digestive and Endocrine Surgery, Nouvel Hôpital Civil, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
- Institute of Image-Guided Surgery, IHU-Strasbourg, Strasbourg, France
| | - Nicolas Padoy
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, Strasbourg, France
- Institute of Image-Guided Surgery, IHU-Strasbourg, Strasbourg, France
| |
Collapse
|
18
|
Kourounis G, Elmahmudi AA, Thomson B, Hunter J, Ugail H, Wilson C. Computer image analysis with artificial intelligence: a practical introduction to convolutional neural networks for medical professionals. Postgrad Med J 2023; 99:1287-1294. [PMID: 37794609 PMCID: PMC10658730 DOI: 10.1093/postmj/qgad095] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Revised: 09/06/2023] [Accepted: 09/13/2023] [Indexed: 10/06/2023]
Abstract
Artificial intelligence tools, particularly convolutional neural networks (CNNs), are transforming healthcare by enhancing predictive, diagnostic, and decision-making capabilities. This review provides an accessible and practical explanation of CNNs for clinicians and highlights their relevance in medical image analysis. CNNs have proven exceptionally useful in computer vision, a field that enables machines to 'see' and interpret visual data. Understanding how these models work can help clinicians leverage their full potential, especially as artificial intelligence continues to evolve and integrate into healthcare. CNNs have already demonstrated their efficacy in diverse medical fields, including radiology, histopathology, and medical photography. In radiology, CNNs have been used to automate the assessment of conditions such as pneumonia, pulmonary embolism, and rectal cancer. In histopathology, CNNs have been used to assess and classify colorectal polyps and gastric epithelial tumours, and to assist in the assessment of multiple malignancies. In medical photography, CNNs have been used to assess retinal diseases and skin conditions, and to detect gastric and colorectal polyps during endoscopic procedures. In surgical laparoscopy, they may provide intraoperative assistance to surgeons, helping interpret surgical anatomy and demonstrate safe dissection zones. The integration of CNNs into medical image analysis promises to enhance diagnostic accuracy, streamline workflow efficiency, and expand access to expert-level image analysis, contributing to the ultimate goal of delivering further improvements in patient and healthcare outcomes.
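For readers wanting a concrete picture of the architecture the review explains, the following is a generic, minimal CNN classifier in PyTorch. It is an illustrative sketch of the conv-pool-classifier pattern, not a model from the review; all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class TinyMedCNN(nn.Module):
    """Minimal CNN: stacked convolution, non-linearity, and pooling layers
    followed by a small classifier head."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

logits = TinyMedCNN()(torch.rand(4, 3, 224, 224))  # 4 images -> 4 x 2 logits
print(logits.shape)
```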
Collapse
Affiliation(s)
- Georgios Kourounis
- NIHR Blood and Transplant Research Unit, Newcastle University and Cambridge University, Newcastle upon Tyne, NE1 7RU, United Kingdom
- Institute of Transplantation, The Freeman Hospital, Newcastle upon Tyne, NE7 7DN, United Kingdom
| | - Ali Ahmed Elmahmudi
- Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
| | - Brian Thomson
- Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
| | - James Hunter
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, OX3 9DU, United Kingdom
| | - Hassan Ugail
- Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
| | - Colin Wilson
- NIHR Blood and Transplant Research Unit, Newcastle University and Cambridge University, Newcastle upon Tyne, NE1 7RU, United Kingdom
- Institute of Transplantation, The Freeman Hospital, Newcastle upon Tyne, NE7 7DN, United Kingdom
| |
Collapse
|
19
|
De Backer P, Peraire Lores M, Demuynck M, Piramide F, Simoens J, Oosterlinck T, Bogaert W, Shan CV, Van Regemorter K, Wastyn A, Checcucci E, Debbaut C, Van Praet C, Farinha R, De Groote R, Gallagher A, Decaestecker K, Mottrie A. Surgical Phase Duration in Robot-Assisted Partial Nephrectomy: A Surgical Data Science Exploration for Clinical Relevance. Diagnostics (Basel) 2023; 13:3386. [PMID: 37958283 PMCID: PMC10650909 DOI: 10.3390/diagnostics13213386] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2023] [Revised: 10/29/2023] [Accepted: 11/03/2023] [Indexed: 11/15/2023] Open
Abstract
(1) Background: Surgical phases form the basic building blocks for surgical skill assessment, feedback, and teaching. The phase duration itself and its correlation with clinical parameters at diagnosis have not yet been investigated. Novel commercial platforms provide phase indications but have not been assessed for accuracy yet. (2) Methods: We assessed 100 robot-assisted partial nephrectomy videos for phase durations based on previously defined proficiency metrics. We developed an annotation framework and subsequently compared our annotations to an existing commercial solution (Touch Surgery, Medtronic™). We subsequently explored clinical correlations between phase durations and parameters derived from diagnosis and treatment. (3) Results: An objective and uniform phase assessment requires precise definitions derived from an iterative revision process. A comparison to a commercial solution shows large differences in definitions across phases. BMI and the duration of renal tumor identification are positively correlated, as are tumor complexity and both tumor excision and renorrhaphy duration. (4) Conclusions: The surgical phase duration can be correlated with certain clinical outcomes. Further research should investigate whether the retrieved correlations are also clinically meaningful. This requires an increase in dataset sizes and facilitation through intelligent computer vision algorithms. Commercial platforms can facilitate this dataset expansion and help unlock the full potential, provided that the phase annotation details are disclosed.
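The clinical-correlation step in (3) amounts to testing the association between a phase duration and a parameter such as BMI. A hedged sketch with made-up numbers follows; the exact statistical test used by the authors is not reproduced here, so the choice of Spearman's rank correlation is an assumption.

```python
# Illustrative only: correlating a surgical phase duration with BMI, as the
# study does for the renal-tumour-identification phase. All values are made up.
from scipy.stats import spearmanr

bmi = [22.1, 27.4, 31.0, 24.8, 29.3, 35.2]
phase_seconds = [140, 210, 260, 170, 250, 330]  # tumour identification time

rho, p = spearmanr(bmi, phase_seconds)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```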
Collapse
Affiliation(s)
- Pieter De Backer
- ORSI Academy, 9090 Melle, Belgium
- IbiTech-Biommeda, Department of Electronics and Information Systems, Faculty of Engineering and Architecture, Ghent University, 9000 Ghent, Belgium
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, 9000 Ghent, Belgium (C.V.P.)
- Young Academic Urologist—Urotechnology Working Group, NL-6803 Arnhem, The Netherlands
- Department of Urology, ERN eUROGEN Accredited Centre, Ghent University Hospital, 9000 Ghent, Belgium
| | | | - Meret Demuynck
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, 9000 Ghent, Belgium (C.V.P.)
| | - Federico Piramide
- ORSI Academy, 9090 Melle, Belgium
- Department of Surgery, Candiolo Cancer Institute, FPO-IRCCS, 10060 Turin, Italy
| | | | | | - Wouter Bogaert
- IbiTech-Biommeda, Department of Electronics and Information Systems, Faculty of Engineering and Architecture, Ghent University, 9000 Ghent, Belgium
| | - Chi Victor Shan
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, 9000 Ghent, Belgium (C.V.P.)
| | - Karel Van Regemorter
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, 9000 Ghent, Belgium (C.V.P.)
| | - Aube Wastyn
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, 9000 Ghent, Belgium (C.V.P.)
| | - Enrico Checcucci
- Young Academic Urologist—Urotechnology Working Group, NL-6803 Arnhem, The Netherlands
- Department of Surgery, Candiolo Cancer Institute, FPO-IRCCS, 10060 Turin, Italy
| | - Charlotte Debbaut
- IbiTech-Biommeda, Department of Electronics and Information Systems, Faculty of Engineering and Architecture, Ghent University, 9000 Ghent, Belgium
| | - Charles Van Praet
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, 9000 Ghent, Belgium (C.V.P.)
- Department of Urology, ERN eUROGEN Accredited Centre, Ghent University Hospital, 9000 Ghent, Belgium
| | | | - Ruben De Groote
- Department of Urology, Onze-Lieve Vrouwziekenhuis Hospital, 9300 Aalst, Belgium
| | | | - Karel Decaestecker
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, 9000 Ghent, Belgium (C.V.P.)
- Department of Urology, ERN eUROGEN Accredited Centre, Ghent University Hospital, 9000 Ghent, Belgium
- Department of Urology, AZ Maria Middelares Hospital, 9000 Ghent, Belgium
| | - Alexandre Mottrie
- ORSI Academy, 9090 Melle, Belgium
- Department of Urology, Onze-Lieve Vrouwziekenhuis Hospital, 9300 Aalst, Belgium
| |
Collapse
|
20
|
Ortenzi M, Rapoport Ferman J, Antolin A, Bar O, Zohar M, Perry O, Asselmann D, Wolf T. A novel high accuracy model for automatic surgical workflow recognition using artificial intelligence in laparoscopic totally extraperitoneal inguinal hernia repair (TEP). Surg Endosc 2023; 37:8818-8828. [PMID: 37626236 PMCID: PMC10615930 DOI: 10.1007/s00464-023-10375-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2023] [Accepted: 07/30/2023] [Indexed: 08/27/2023]
Abstract
INTRODUCTION Artificial intelligence and computer vision are revolutionizing the way we perceive video analysis in minimally invasive surgery. This emerging technology has increasingly been leveraged successfully for video segmentation, documentation, education, and formative assessment. New, sophisticated platforms allow pre-determined segments chosen by surgeons to be automatically presented without the need to review entire videos. This study aimed to validate and demonstrate the accuracy of the first reported AI-based computer vision algorithm that automatically recognizes surgical steps in videos of totally extraperitoneal (TEP) inguinal hernia repair. METHODS Videos of TEP procedures were manually labeled by a team of annotators trained to identify and label surgical workflow according to six major steps. For bilateral hernias, an additional change of focus step was also included. The videos were then used to train a computer vision AI algorithm. Performance accuracy was assessed in comparison to the manual annotations. RESULTS A total of 619 full-length TEP videos were analyzed: 371 were used to train the model, 93 for internal validation, and the remaining 155 as a test set to evaluate algorithm accuracy. The overall accuracy for the complete procedure was 88.8%. Per-step accuracy reached the highest value for the hernia sac reduction step (94.3%) and the lowest for the preperitoneal dissection step (72.2%). CONCLUSIONS These results indicate that the novel AI model was able to provide fully automated video analysis with a high accuracy level. High-accuracy models leveraging AI to enable automation of surgical video analysis allow us to identify and monitor surgical performance, providing mathematical metrics that can be stored, evaluated, and compared. As such, the proposed model is capable of enabling data-driven insights to improve surgical quality and demonstrate best practices in TEP procedures.
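Per-step accuracy of the kind reported above can be computed by grouping per-frame comparisons by the annotated step. A small illustrative sketch with hypothetical step names (the study's actual six TEP steps differ):

```python
from collections import defaultdict

# Hypothetical per-frame labels; the real study annotates six TEP steps.
annotated = ["access", "access", "dissection", "dissection", "mesh", "mesh"]
predicted = ["access", "dissection", "dissection", "dissection", "mesh", "access"]

correct, total = defaultdict(int), defaultdict(int)
for truth, pred in zip(annotated, predicted):
    total[truth] += 1
    correct[truth] += int(truth == pred)

for step in total:  # per-step accuracy
    print(f"{step}: {100 * correct[step] / total[step]:.1f}%")
```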
Collapse
Affiliation(s)
- Monica Ortenzi
- Theator Inc., Palo Alto, CA, USA.
- Department of General and Emergency Surgery, Polytechnic University of Marche, Ancona, Italy.
| | | | | | - Omri Bar
- Theator Inc., Palo Alto, CA, USA
| | | | | | | | | |
Collapse
|
21
|
Zhang J, Zhou S, Wang Y, Shi S, Wan C, Zhao H, Cai X, Ding H. Laparoscopic Image-Based Critical Action Recognition and Anticipation With Explainable Features. IEEE J Biomed Health Inform 2023; 27:5393-5404. [PMID: 37603480 DOI: 10.1109/jbhi.2023.3306818] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/23/2023]
Abstract
Surgical workflow analysis integrates perception, comprehension, and prediction of the surgical workflow, which helps real-time surgical support systems provide proper guidance and assistance for surgeons. This article promotes the idea of critical actions, which refer to the essential surgical actions that progress towards the fulfillment of the operation. Fine-grained workflow analysis involves recognizing current critical actions and previewing the moving tendency of instruments in the early stage of critical actions. To this end, we propose a framework that incorporates operational experience to improve the robustness and interpretability of action recognition in in-vivo situations. High-dimensional images are mapped into an experience-based explainable feature space with low dimensions to achieve critical action recognition through a hierarchical classification structure. To forecast the instrument's motion tendency, we model motion primitives in the polar coordinate system (PCS) to represent patterns of complex trajectories. Given the variance across laparoscopies, an adaptive pattern recognition (APR) method, which adapts to uncertain trajectories by modifying model parameters, is designed to improve prediction accuracy. Validations on an in-vivo dataset show that our framework fulfills the surgical awareness tasks with exceptional accuracy and real-time performance.
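The polar-coordinate motion representation can be illustrated as a simple change of coordinates for instrument-tip positions. In this sketch the reference point and pixel values are hypothetical; the paper's actual motion-primitive model is more elaborate.

```python
import numpy as np

tip_xy = np.array([[120.0, 80.0], [118.0, 92.0], [110.0, 105.0], [98.0, 112.0]])
origin = np.array([100.0, 100.0])   # hypothetical reference point in the image

rel = tip_xy - origin
radius = np.hypot(rel[:, 0], rel[:, 1])
angle = np.arctan2(rel[:, 1], rel[:, 0])   # radians, in [-pi, pi]
print(np.column_stack([radius, angle]))    # one (r, theta) per frame
```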
Collapse
|
22
|
Choksi S, Szot S, Zang C, Yarali K, Cao Y, Ahmad F, Xiang Z, Bitner DP, Kostic Z, Filicori F. Bringing Artificial Intelligence to the operating room: edge computing for real-time surgical phase recognition. Surg Endosc 2023; 37:8778-8784. [PMID: 37580578 DOI: 10.1007/s00464-023-10322-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Accepted: 07/19/2023] [Indexed: 08/16/2023]
Abstract
BACKGROUND Automation of surgical phase recognition is a key effort toward the development of Computer Vision (CV) algorithms for workflow optimization and video-based assessment. CV is a form of Artificial Intelligence (AI) that allows interpretation of images through a deep learning (DL)-based algorithm. Improvements in Graphics Processing Unit (GPU) computing devices allow researchers to apply these algorithms for recognition of content in videos in real time. Edge computing, where data are collected, analyzed, and acted upon in close proximity to the collection source, is essential to meet the demands of workflow optimization by providing real-time algorithm application. We implemented a real-time phase recognition workflow and demonstrated its performance on 10 Robotic Inguinal Hernia Repairs (RIHR) to obtain phase predictions during the procedure. METHODS Our phase recognition algorithm was developed with 211 videos of RIHR originally annotated into 14 surgical phases. Using these videos, a DL model with a ResNet-50 backbone was trained and validated to automatically recognize surgical phases. The model was deployed to a GPU, the Nvidia® Jetson Xavier™ NX edge computing device. RESULTS This model was tested on 10 inguinal hernia repairs from four surgeons in real time. The model was improved using post-recording processing methods such as phase merging into seven final phases (peritoneal scoring, mesh placement, preperitoneal dissection, reduction of hernia, out of body, peritoneal closure, and transitionary idle) and averaging of frames. Predictions were made once per second with a processing latency of approximately 250 ms. The accuracy of the real-time predictions ranged from 59.8 to 78.2% with an average accuracy of 68.7%. CONCLUSION A real-time phase prediction of RIHR using a CV deep learning model was successfully implemented. This real-time CV phase segmentation system can be useful for monitoring surgical progress and can be integrated into software to provide hospital workflow optimization.
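The frame-averaging post-processing mentioned in the results can be sketched as a sliding-window average over per-frame class scores before taking the argmax. The window length, shapes, and random inputs below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def smoothed_phase(frame_scores: np.ndarray, window: int = 5) -> np.ndarray:
    """Average (n_frames, n_phases) scores over a trailing window, then argmax."""
    kernel = np.ones(window) / window
    padded = np.vstack([frame_scores[:1]] * (window - 1) + [frame_scores])
    avg = np.stack([np.convolve(padded[:, c], kernel, mode="valid")
                    for c in range(frame_scores.shape[1])], axis=1)
    return avg.argmax(axis=1)

scores = np.random.rand(20, 7)           # 20 frames, 7 phases (stand-in data)
print(smoothed_phase(scores, window=5))  # one phase index per frame
```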
Collapse
Affiliation(s)
- Sarah Choksi
- Intraoperative Performance Analytics Laboratory (IPAL), Department of Surgery, Lenox Hill Hospital, Northwell Health, 186 E 76th Street, 1st Fl, New York, NY, 10021, USA.
| | - Skyler Szot
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
| | - Chengbo Zang
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
| | - Kaan Yarali
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
| | - Yuqing Cao
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
| | - Feroz Ahmad
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
| | - Zixuan Xiang
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
| | - Daniel P Bitner
- Intraoperative Performance Analytics Laboratory (IPAL), Department of Surgery, Lenox Hill Hospital, Northwell Health, 186 E 76th Street, 1st Fl, New York, NY, 10021, USA
| | - Zoran Kostic
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
| | - Filippo Filicori
- Intraoperative Performance Analytics Laboratory (IPAL), Department of Surgery, Lenox Hill Hospital, Northwell Health, 186 E 76th Street, 1st Fl, New York, NY, 10021, USA
- Zucker School of Medicine at Hofstra/Northwell Health, 5000 Hofstra Blvd, Hempstead, NY, 11549, USA
| |
Collapse
|
23
|
Knoedler L, Knoedler S, Allam O, Remy K, Miragall M, Safi AF, Alfertshofer M, Pomahac B, Kauke-Navarro M. Application possibilities of artificial intelligence in facial vascularized composite allotransplantation-a narrative review. Front Surg 2023; 10:1266399. [PMID: 38026484 PMCID: PMC10646214 DOI: 10.3389/fsurg.2023.1266399] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Accepted: 09/26/2023] [Indexed: 12/01/2023] Open
Abstract
Facial vascularized composite allotransplantation (FVCA) is an emerging field of reconstructive surgery that represents a paradigm shift in the surgical treatment of patients with severe facial disfigurements. While conventional reconstructive strategies were previously considered the gold standard for patients with devastating facial trauma, FVCA has demonstrated promising short- and long-term outcomes. Yet, there remain several obstacles that complicate the integration of FVCA procedures into the standard workflow for facial trauma patients. Artificial intelligence (AI) has been shown to provide targeted and resource-effective solutions for persisting clinical challenges in various specialties. However, there is a paucity of studies elucidating the combination of FVCA and AI to overcome such hurdles. Here, we delineate the application possibilities of AI in the field of FVCA and discuss the use of AI technology for FVCA outcome simulation, diagnosis and prediction of rejection episodes, and malignancy screening. This line of research may serve as a foundation for future studies linking these two revolutionary biotechnologies.
Collapse
Affiliation(s)
- Leonard Knoedler
- Department of Plastic, Hand- and Reconstructive Surgery, University Hospital Regensburg, Regensburg, Germany
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| | - Samuel Knoedler
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| | - Omar Allam
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| | - Katya Remy
- Department of Oral and Maxillofacial Surgery, University Hospital Regensburg, Regensburg, Germany
| | - Maximilian Miragall
- Department of Oral and Maxillofacial Surgery, University Hospital Regensburg, Regensburg, Germany
| | - Ali-Farid Safi
- Craniologicum, Center for Cranio-Maxillo-Facial Surgery, Bern, Switzerland
- Faculty of Medicine, University of Bern, Bern, Switzerland
| | - Michael Alfertshofer
- Division of Hand, Plastic and Aesthetic Surgery, Ludwig-Maximilians University Munich, Munich, Germany
| | - Bohdan Pomahac
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| | - Martin Kauke-Navarro
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| |
Collapse
|
24
|
Cao J, Yip HC, Chen Y, Scheppach M, Luo X, Yang H, Cheng MK, Long Y, Jin Y, Chiu PWY, Yam Y, Meng HML, Dou Q. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun 2023; 14:6676. [PMID: 37865629 PMCID: PMC10590425 DOI: 10.1038/s41467-023-42451-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Accepted: 10/11/2023] [Indexed: 10/23/2023] Open
Abstract
Recent advancements in artificial intelligence have witnessed human-level performance; however, AI-enabled cognitive assistance for therapeutic procedures has been neither fully explored nor preclinically validated. Here we propose AI-Endo, an intelligent surgical workflow recognition suite for endoscopic submucosal dissection (ESD). Our AI-Endo is trained on high-quality ESD cases from an expert endoscopist, spanning a decade and consisting of 201,026 labeled frames. The learned model demonstrates outstanding performance on validation data, including cases from relatively junior endoscopists with various skill levels, procedures conducted with different endoscopy systems and therapeutic skills, and international multi-center cohorts. Furthermore, we integrate our AI-Endo with the Olympus endoscopic system and validate the AI-enabled cognitive assistance system with animal studies in live ESD training sessions. Dedicated data analysis from surgical phase recognition results is summarized in an automatically generated report for skill assessment.
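The automatically generated report described at the end reduces, at its core, to aggregating per-frame phase predictions into per-phase durations. A toy sketch with hypothetical phase names and frame rate (the real report includes richer skill-assessment statistics):

```python
from itertools import groupby

FPS = 25  # assumed video frame rate
frames = ["marking"] * 50 + ["injection"] * 100 + ["dissection"] * 300

for phase, run in groupby(frames):          # consecutive runs of one phase
    print(f"{phase}: {sum(1 for _ in run) / FPS:.1f} s")
```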
Collapse
Affiliation(s)
- Jianfeng Cao
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Hon-Chi Yip
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong, China.
| | - Yueyao Chen
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Markus Scheppach
- Internal Medicine III-Gastroenterology, University Hospital of Augsburg, Augsburg, Germany
| | - Xiaobei Luo
- Guangdong Provincial Key Laboratory of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Hongzheng Yang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Ming Kit Cheng
- Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Yonghao Long
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Yueming Jin
- Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore
| | - Philip Wai-Yan Chiu
- Multi-scale Medical Robotics Center and The Chinese University of Hong Kong, Hong Kong, China.
| | - Yeung Yam
- Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong, China.
- Multi-scale Medical Robotics Center and The Chinese University of Hong Kong, Hong Kong, China.
- Centre for Perceptual and Interactive Intelligence and The Chinese University of Hong Kong, Hong Kong, China.
| | - Helen Mei-Ling Meng
- Centre for Perceptual and Interactive Intelligence and The Chinese University of Hong Kong, Hong Kong, China.
| | - Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
| |
Collapse
|
25
|
Nwoye CI, Yu T, Sharma S, Murali A, Alapatt D, Vardazaryan A, Yuan K, Hajek J, Reiter W, Yamlahi A, Smidt FH, Zou X, Zheng G, Oliveira B, Torres HR, Kondo S, Kasai S, Holm F, Özsoy E, Gui S, Li H, Raviteja S, Sathish R, Poudel P, Bhattarai B, Wang Z, Rui G, Schellenberg M, Vilaça JL, Czempiel T, Wang Z, Sheet D, Thapa SK, Berniker M, Godau P, Morais P, Regmi S, Tran TN, Fonseca J, Nölke JH, Lima E, Vazquez E, Maier-Hein L, Navab N, Mascagni P, Seeliger B, Gonzalez C, Mutter D, Padoy N. CholecTriplet2022: Show me a tool and tell me the triplet - An endoscopic vision challenge for surgical action triplet detection. Med Image Anal 2023; 89:102888. [PMID: 37451133 DOI: 10.1016/j.media.2023.102888] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Revised: 06/23/2023] [Accepted: 06/28/2023] [Indexed: 07/18/2023]
Abstract
Formalizing surgical activities as triplets of the instruments used, actions performed, and target anatomies is becoming a gold standard approach for surgical activity modeling. The benefit is that this formalization helps to obtain a more detailed understanding of tool-tissue interaction, which can be used to develop better Artificial Intelligence assistance for image-guided surgery. Earlier efforts and the CholecTriplet challenge introduced in 2021 have put together techniques aimed at recognizing these triplets from surgical footage. Estimating the spatial locations of the triplets as well would offer more precise intraoperative context-aware decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly supervised bounding box localization of every visible surgical instrument (or tool) as the key actors, and the modeling of each tool-activity in the form of an ‹instrument, verb, target› triplet. The paper describes a baseline method and 10 new deep learning algorithms presented at the challenge to solve the task. It also provides thorough methodological comparisons of the methods and an in-depth analysis of the results across multiple metrics and across visual and procedural challenges, together with their significance and useful insights for future research directions and applications in surgery.
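The challenge's detection task couples a weakly supervised instrument box with an ‹instrument, verb, target› triplet. One plausible in-memory representation is sketched below; the field names and values are assumptions for illustration, not the challenge's official output schema.

```python
from dataclasses import dataclass

@dataclass
class TripletDetection:
    box: tuple        # (x, y, w, h) in normalized image coordinates (assumed)
    instrument: str
    verb: str
    target: str
    score: float      # detection confidence

pred = TripletDetection((0.41, 0.35, 0.18, 0.12),
                        "grasper", "retract", "gallbladder", score=0.87)
print(pred)
```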
Collapse
Affiliation(s)
| | - Tong Yu
- ICube, University of Strasbourg, CNRS, France
| | | | | | | | | | - Kun Yuan
- ICube, University of Strasbourg, CNRS, France; Technical University Munich, Germany
| | | | | | - Amine Yamlahi
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Finn-Henri Smidt
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Xiaoyang Zou
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, China
| | - Guoyan Zheng
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, China
| | - Bruno Oliveira
- 2Ai School of Technology, IPCA, Barcelos, Portugal; Life and Health Science Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
| | - Helena R Torres
- 2Ai School of Technology, IPCA, Barcelos, Portugal; Life and Health Science Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
| | | | | | | | - Ege Özsoy
- Technical University Munich, Germany
| | | | - Han Li
- Southern University of Science and Technology, China
| | | | | | | | | | | | | | - Melanie Schellenberg
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | | | | | - Zhenkun Wang
- Southern University of Science and Technology, China
| | | | - Shrawan Kumar Thapa
- Nepal Applied Mathematics and Informatics Institute for research (NAAMII), Nepal
| | | | - Patrick Godau
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Pedro Morais
- 2Ai School of Technology, IPCA, Barcelos, Portugal
| | - Sudarshan Regmi
- Nepal Applied Mathematics and Informatics Institute for research (NAAMII), Nepal
| | - Thuy Nuong Tran
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Jaime Fonseca
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
| | - Jan-Hinrich Nölke
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Estevão Lima
- Life and Health Science Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal
| | | | - Lena Maier-Hein
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | | | - Pietro Mascagni
- Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
| | - Barbara Seeliger
- ICube, University of Strasbourg, CNRS, France; University Hospital of Strasbourg, France; IHU Strasbourg, France
| | | | - Didier Mutter
- University Hospital of Strasbourg, France; IHU Strasbourg, France
| | - Nicolas Padoy
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, France
| |
Collapse
|
26
|
Kitaguchi D, Harai Y, Kosugi N, Hayashi K, Kojima S, Ishikawa Y, Yamada A, Hasegawa H, Takeshita N, Ito M. Artificial intelligence for the recognition of key anatomical structures in laparoscopic colorectal surgery. Br J Surg 2023; 110:1355-1358. [PMID: 37552629 DOI: 10.1093/bjs/znad249] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Revised: 07/10/2023] [Accepted: 07/10/2023] [Indexed: 08/10/2023]
Abstract
Lay Summary
To prevent intraoperative organ injury, surgeons strive to identify anatomical structures as early and accurately as possible during surgery. The objective of this prospective observational study was to develop artificial intelligence (AI)-based real-time automatic organ recognition models for laparoscopic surgery and to compare their performance with that of surgeons. The time taken to recognize target anatomy was compared between the AI models and both expert and novice surgeons. The AI models demonstrated faster recognition of target anatomy than surgeons, especially novice surgeons. These findings suggest that AI has the potential to compensate for the skill and experience gap between surgeons.
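The recognition-time comparison can be sketched as finding the first frame at which a model's confidence for the target organ crosses a threshold. The threshold, frame rate, and confidence values below are illustrative assumptions.

```python
import numpy as np

FPS = 30                                                  # assumed frame rate
organ_confidence = np.array([0.10, 0.22, 0.41, 0.62, 0.71, 0.90])  # per frame

hits = np.flatnonzero(organ_confidence >= 0.6)            # frames above threshold
if hits.size:
    print(f"recognized after {hits[0] / FPS:.2f} s")
else:
    print("target not recognized")
```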
Collapse
Affiliation(s)
- Daichi Kitaguchi
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Centre Hospital East, Chiba, Japan
| | - Yuriko Harai
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Norihito Kosugi
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Kazuyuki Hayashi
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Shigehiro Kojima
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Yuto Ishikawa
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Atsushi Yamada
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Hiro Hasegawa
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Centre Hospital East, Chiba, Japan
| | - Nobuyoshi Takeshita
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Masaaki Ito
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Centre Hospital East, Chiba, Japan
| |
Collapse
|
27
|
Hashimoto DA, Johnson KB. The Use of Artificial Intelligence Tools to Prepare Medical School Applications. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2023; 98:978-982. [PMID: 37369073 DOI: 10.1097/acm.0000000000005309] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/29/2023]
Abstract
Advances in artificial intelligence (AI) have been changing the landscape in daily life and the practice of medicine. As these tools have evolved to become consumer-friendly, AI has become more accessible to many individuals, including applicants to medical school. With the rise of AI models capable of generating complex passages of text, questions have arisen regarding the appropriateness of using such tools to assist in the preparation of medical school applications. In this commentary, the authors offer a brief history of AI tools in medicine and describe large language models, a form of AI capable of generating natural language text passages. They question whether AI assistance should be considered inappropriate in preparing applications and compare it with the assistance some applicants receive from family, physician friends, or consultants. They call for clearer guidelines on what forms of assistance (human and technological) are permitted in the preparation of medical school applications. They recommend that medical schools steer away from blanket bans on AI tools in medical education and instead consider mechanisms for knowledge sharing about AI between students and faculty members, incorporation of AI tools into assignments, and the development of curricula to teach the use of AI tools as a competency.
Collapse
Affiliation(s)
- Daniel A Hashimoto
- D.A. Hashimoto is assistant professor of surgery and computer and information science, and affiliated faculty, General Robotics, Automation, Sensing, and Perception Laboratory, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0003-4725-3104
| | - Kevin B Johnson
- K.B. Johnson is the David L. Cohen University Professor of Pediatrics, Biomedical Informatics, and Science Communication, University of Pennsylvania, Philadelphia, Pennsylvania
| |
Collapse
|
28
|
Ramamurthi A, Are C, Kothari AN. From ChatGPT to Treatment: the Future of AI and Large Language Models in Surgical Oncology. Indian J Surg Oncol 2023; 14:537-539. [PMID: 37900654 PMCID: PMC10611626 DOI: 10.1007/s13193-023-01836-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2023] [Accepted: 10/04/2023] [Indexed: 10/31/2023] Open
Abstract
This paper explores the transformative potential of Large Language Models (LLMs) within the context of surgical oncology and outlines the foundational mechanisms behind these models. LLMs, such as GPT-4, have rapidly evolved in terms of scale and capabilities, with profound implications for their applications in healthcare. These models, rooted in the Generative Pretrained Transformer architecture, exhibit advanced natural language understanding and generation skills. Within surgical oncology, LLMs, when integrated into a Generalist Medical AI (GMAI) framework, hold great promise in offering real-time support throughout the cancer journey. However, alongside these opportunities, this paper underscores the importance of ethical, privacy, and efficacy considerations, especially in light of issues like data drift and potential biases. Collaborative efforts among healthcare providers, AI developers, and regulatory bodies are pivotal in ensuring responsible and effective use of LLMs in surgical oncology, thereby contributing to enhanced patient care and safety. As LLMs continue to advance, they are poised to become indispensable tools in the delivery of high-quality, efficient care in this specialized medical field.
Collapse
Affiliation(s)
- Adhitya Ramamurthi
- Department of Surgical Oncology, Medical College of Wisconsin, Milwaukee, WI USA
| | - Chandrakanth Are
- Department of Surgery, University of Nebraska Medical Center, Omaha, NE USA
| | - Anai N. Kothari
- Department of Surgical Oncology, Medical College of Wisconsin, Milwaukee, WI USA
| |
Collapse
|
29
|
Whittaker R, Dobson R, Jin CK, Style R, Jayathissa P, Hiini K, Ross K, Kawamura K, Muir P. An example of governance for AI in health services from Aotearoa New Zealand. NPJ Digit Med 2023; 6:164. [PMID: 37658119 PMCID: PMC10474148 DOI: 10.1038/s41746-023-00882-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Accepted: 07/21/2023] [Indexed: 09/03/2023] Open
Abstract
Artificial Intelligence (AI) is undergoing rapid development, meaning that potential risks in application are not able to be fully understood. Multiple international principles and guidance documents have been published to guide the implementation of AI tools in various industries, including healthcare practice. In Aotearoa New Zealand (NZ) we recognised that the challenge went beyond simply adapting existing risk frameworks and governance guidance to our specific health service context and population. We also deemed prioritising the voice of Māori (the indigenous people of Aotearoa NZ) a necessary aspect of honouring Te Tiriti (the Treaty of Waitangi), as well as prioritising the needs of healthcare service users and their families. Here we report on the development and establishment of comprehensive and effective governance over the development and implementation of AI tools within a health service in Aotearoa NZ. The implementation of the framework in practice includes testing with real-world proposals and ongoing iteration and refinement of our processes.
Collapse
Affiliation(s)
- R Whittaker
- Te Whatu Ora Waitematā, Auckland, New Zealand.
- National Institute for Health Innovation, University of Auckland, Auckland, New Zealand.
| | - R Dobson
- Te Whatu Ora Waitematā, Auckland, New Zealand
- National Institute for Health Innovation, University of Auckland, Auckland, New Zealand
| | - C K Jin
- Te Whatu Ora Waitematā, Auckland, New Zealand
| | - R Style
- Te Whatu Ora Waitematā, Auckland, New Zealand
| | | | - K Hiini
- Te Whatu Ora Waitematā, Auckland, New Zealand
| | - K Ross
- Precision Driven Health, Auckland, New Zealand
| | - K Kawamura
- Te Whatu Ora Waitematā, Auckland, New Zealand
| | - P Muir
- Te Whatu Ora Waitematā, Auckland, New Zealand
| |
Collapse
|
30
|
Hajek M, Yao CM. Updates in Robotic Head and Neck Reconstructive Surgery. Semin Plast Surg 2023; 37:184-187. [PMID: 37842542 PMCID: PMC10569870 DOI: 10.1055/s-0043-1771303] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2023]
Abstract
The use of robotics in head and neck surgery has drastically increased over the past two decades. Transoral robotic surgery has revolutionized the surgical approach to the upper aerodigestive tract, including the oropharynx and supraglottic larynx. The expanded use and improving technology of robotics have allowed for new approaches in both the ablative and reconstructive aspects of head and neck surgery. Here, we discuss recent updates in robotics in head and neck surgery and the future directions in which the field may turn.
Collapse
Affiliation(s)
- Michael Hajek
- Department of Otolaryngology Head and Neck Surgery, University Health Network – Princess Margaret Cancer Center, University of Toronto, Toronto, Canada
| | - Christopher M.K.L. Yao
- Department of Otolaryngology Head and Neck Surgery, University Health Network – Princess Margaret Cancer Center, University of Toronto, Toronto, Canada
| |
Collapse
|
31
|
Kolbinger FR, Bodenstedt S, Carstens M, Leger S, Krell S, Rinner FM, Nielen TP, Kirchberg J, Fritzmann J, Weitz J, Distler M, Speidel S. Artificial Intelligence for context-aware surgical guidance in complex robot-assisted oncological procedures: An exploratory feasibility study. EUROPEAN JOURNAL OF SURGICAL ONCOLOGY 2023:106996. [PMID: 37591704 DOI: 10.1016/j.ejso.2023.106996] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Revised: 06/20/2023] [Accepted: 07/26/2023] [Indexed: 08/19/2023]
Abstract
INTRODUCTION Complex oncological procedures pose various surgical challenges, including dissection in distinct tissue planes and preservation of vulnerable anatomical structures throughout different surgical phases. In rectal surgery, violation of dissection planes increases the risk of local recurrence and of autonomic nerve damage resulting in incontinence and sexual dysfunction. This work explores the feasibility of phase recognition and target structure segmentation in robot-assisted rectal resection (RARR) using machine learning. MATERIALS AND METHODS A total of 57 RARR were recorded, and subsets of these were annotated with respect to surgical phases and the exact locations of target structures (anatomical structures, tissue types, static structures, and dissection areas). For surgical phase recognition, three machine learning models were trained: LSTM, MSTCN, and Trans-SVNet. Based on pixel-wise annotations of target structures in 9037 images, individual segmentation models based on DeepLabv3 were trained. Model performance was evaluated using the F1 score, Intersection-over-Union (IoU), accuracy, precision, recall, and specificity. RESULTS The best results for phase recognition were achieved with the MSTCN model (F1 score: 0.82 ± 0.01, accuracy: 0.84 ± 0.03). Mean IoUs for target structure segmentation ranged from 0.14 ± 0.22 to 0.80 ± 0.14 for organs and tissue types and from 0.11 ± 0.11 to 0.44 ± 0.30 for dissection areas. Image quality, distorting factors (e.g. blood, smoke), and technical challenges (e.g. lack of depth perception) considerably impacted segmentation performance. CONCLUSION Machine learning-based phase recognition and segmentation of selected target structures are feasible in RARR. In the future, such functionalities could be integrated into a context-aware surgical guidance system for rectal surgery.
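The segmentation results above are reported as Intersection-over-Union, which for a single target structure is a few lines of NumPy. The convention chosen for the empty-union case below is an assumption.

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU between two binary masks of the same shape."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0  # both empty: count as perfect

pred = np.zeros((4, 4), bool); pred[:2, :2] = True
truth = np.zeros((4, 4), bool); truth[:3, :2] = True
print(f"IoU = {iou(pred, truth):.2f}")  # 4 / 6 = 0.67
```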
Collapse
Affiliation(s)
- Fiona R Kolbinger
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany; Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany.
| | - Sebastian Bodenstedt
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Partner Site Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany
| | - Matthias Carstens
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany
| | - Stefan Leger
- Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Partner Site Dresden, Fetscherstraße 74, 01307, Dresden, Germany
| | - Stefanie Krell
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Partner Site Dresden, Fetscherstraße 74, 01307, Dresden, Germany
| | - Franziska M Rinner
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany
| | - Thomas P Nielen
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany
| | - Johanna Kirchberg
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
| | - Johannes Fritzmann
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
| | - Jürgen Weitz
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany; Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany
| | - Marius Distler
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
| | - Stefanie Speidel
- Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Partner Site Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany.
| |
Collapse
|
32
|
Bedetti B, Zalepugas D, Arensmeyer JC, Feodorovici P, Schmidt J. [Robotics in thoracic surgery]. Pneumologie 2023; 77:374-385. [PMID: 37311471 DOI: 10.1055/a-1854-2770] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The adoption of the robotic-assisted technique in thoracic surgery (RATS) in Germany initially lagged behind other countries, so there is considerable potential to increase the volume of procedures performed with RATS. The RATS technique has many advantages. The angulated instruments offer fully wristed dexterity comparable to the human hand but with a greater range of motion, the surgical robot filters tremor and faithfully replicates the surgeon's movements, and the 3D scope provides image magnification of up to 10 times that of conventional thoracoscopes. RATS also has some disadvantages. The operating surgeon sits away from the patient and is not sterile during surgery, which matters in emergency situations such as major bleeding that often require conversion to thoracotomy. All robotic systems are built on the same master-slave principle, which gives the operating surgeon full control of the master system. The slave system consists of mechanical actuators that respond to the master system's inputs, so the surgical robot translates every movement the surgeon makes at the console. The main surgical indications for RATS are mediastinal tumors, diaphragm plication, and anatomical lung resections such as segmentectomies, lobectomies, and sleeve resections. In the future, virtual and augmented reality are expected to be implemented both in the training and in the planning of RATS operations.
Collapse
|
33
|
Marwaha JS, Raza MM, Kvedar JC. The digital transformation of surgery. NPJ Digit Med 2023; 6:103. [PMID: 37258642 PMCID: PMC10232406 DOI: 10.1038/s41746-023-00846-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2023] [Accepted: 05/15/2023] [Indexed: 06/02/2023] Open
Abstract
Rapid advances in digital technology and artificial intelligence in recent years have already begun to transform many industries, and are beginning to make headway into healthcare. There is tremendous potential for new digital technologies to improve the care of surgical patients. In this piece, we highlight work being done to advance surgical care using machine learning, computer vision, wearable devices, remote patient monitoring, and virtual and augmented reality. We describe ways these technologies can be used to improve the practice of surgery, and discuss opportunities and challenges to their widespread adoption and use in operating rooms and at the bedside.
Collapse
Affiliation(s)
- Jayson S Marwaha
- Beth Israel Deaconess Medical Center, Boston, MA, USA.
- Harvard Medical School, Boston, MA, USA.
| | | | - Joseph C Kvedar
- Harvard Medical School, Boston, MA, USA
- Mass General Brigham, Boston, MA, USA
| |
Collapse
|
34
|
Ramesh S, Srivastav V, Alapatt D, Yu T, Murali A, Sestini L, Nwoye CI, Hamoud I, Sharma S, Fleurentin A, Exarchakis G, Karargyris A, Padoy N. Dissecting self-supervised learning methods for surgical computer vision. Med Image Anal 2023; 88:102844. [PMID: 37270898 DOI: 10.1016/j.media.2023.102844] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 05/08/2023] [Accepted: 05/15/2023] [Indexed: 06/06/2023]
Abstract
The field of surgical computer vision has undergone considerable breakthroughs in recent years with the rising popularity of deep neural network-based methods. However, standard fully supervised approaches for training such models require vast amounts of annotated data, imposing a prohibitively high cost, especially in the clinical domain. Self-Supervised Learning (SSL) methods, which have begun to gain traction in the general computer vision community, represent a potential solution to these annotation costs, allowing useful representations to be learned from unlabeled data alone. Still, the effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and largely unexplored. In this work, we address this critical need by investigating four state-of-the-art SSL methods (MoCo v2, SimCLR, DINO, SwAV) in the context of surgical computer vision. We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding: phase recognition and tool presence detection. We examine their parameterization, then their behavior with respect to training data quantities in semi-supervised settings. Correct transfer of these methods to surgery, as described and conducted in this work, leads to substantial performance gains over generic uses of SSL (up to 7.4% on phase recognition and 20% on tool presence detection), as well as gains over state-of-the-art semi-supervised phase recognition approaches by up to 14%. Further results obtained on a highly diverse selection of surgical datasets exhibit strong generalization properties. The code is available at https://github.com/CAMMA-public/SelfSupSurg.
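The semi-supervised evaluations described above typically freeze the SSL-pretrained backbone and train a small supervised head on the labeled fraction. A generic PyTorch sketch of that protocol follows (not the authors' released code, which is linked above); the class count and stand-in data are assumptions.

```python
import torch
import torch.nn as nn
import torchvision

backbone = torchvision.models.resnet50(weights=None)  # load SSL weights here
backbone.fc = nn.Identity()                           # expose 2048-d features
for p in backbone.parameters():
    p.requires_grad = False                           # freeze the encoder

head = nn.Linear(2048, 7)  # e.g. the 7 Cholec80 surgical phases (assumed)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.rand(8, 3, 224, 224)        # stand-in for labeled frames
y = torch.randint(0, 7, (8,))
loss = nn.functional.cross_entropy(head(backbone(x)), y)
loss.backward()
opt.step()
```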
Collapse
Affiliation(s)
- Sanat Ramesh
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France; Altair Robotics Lab, Department of Computer Science, University of Verona, Verona 37134, Italy
| | - Vinkle Srivastav
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France.
| | - Deepak Alapatt
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France
| | - Tong Yu
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France
| | - Aditya Murali
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France
| | - Luca Sestini
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano 20133, Italy
| | | | - Idris Hamoud
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France
| | - Saurav Sharma
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France
| | | | - Georgios Exarchakis
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France; IHU Strasbourg, Strasbourg 67000, France
| | - Alexandros Karargyris
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France; IHU Strasbourg, Strasbourg 67000, France
| | - Nicolas Padoy
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France; IHU Strasbourg, Strasbourg 67000, France
| |
Collapse
|