1. Cizmic A, Mitra AT, Preukschas AA, Kemper M, Melling NT, Mann O, Markar S, Hackert T, Nickel F. Artificial intelligence for intraoperative video analysis in robotic-assisted esophagectomy. Surg Endosc 2025;39:2774-2783. PMID: 40164839; PMCID: PMC12041040; DOI: 10.1007/s00464-025-11685-6. Received 12/09/2024; accepted 03/17/2025.
Abstract
BACKGROUND Robotic-assisted minimally invasive esophagectomy (RAMIE) is a complex surgical procedure for treating esophageal cancer. Artificial intelligence (AI) is an emerging technology with increasing applications in the surgical field. This scoping review aimed to assess current AI applications in RAMIE, with a focus on intraoperative video analysis. METHODS To identify all articles utilizing AI in RAMIE, a comprehensive literature search of the Medline and Embase databases and the Cochrane Library was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). Two independent reviewers assessed articles for quality and inclusion. RESULTS One hundred seventeen articles were identified, of which four were included in the final analysis. The main AI applications in RAMIE were intraoperative video assessment and the evaluation of technical skills to gauge surgical performance. AI was also used for surgical phase recognition, to support clinical decision-making through intraoperative guidance, and to identify key anatomical landmarks. Various deep-learning networks were used to generate the AI models, with a strong emphasis on high-quality, standardized video frames. CONCLUSIONS The use of AI in RAMIE, especially for intraoperative video analysis and surgical phase recognition, is still a relatively new field that warrants further exploration. The advantages of using AI algorithms to evaluate intraoperative videos in an automated manner may be harnessed to improve technical performance and intraoperative decision-making, achieve a higher quality of surgery, and improve postoperative outcomes.
Affiliation(s)
- Amila Cizmic
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Anuja T Mitra
- Department of Surgery & Cancer, Imperial College, London, UK
- Anas A Preukschas
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Marius Kemper
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Nathaniel T Melling
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Oliver Mann
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Sheraz Markar
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
- Thilo Hackert
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Felix Nickel
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
2. Shin Y, Lee M, Lee Y, Kim K, Kim T. Artificial Intelligence-Powered Quality Assurance: Transforming Diagnostics, Surgery, and Patient Care-Innovations, Limitations, and Future Directions. Life (Basel) 2025;15:654. PMID: 40283208; PMCID: PMC12028931; DOI: 10.3390/life15040654. Received 03/11/2025; revised 04/09/2025; accepted 04/14/2025.
Abstract
Artificial intelligence is rapidly transforming quality assurance in healthcare, driving advancements in diagnostics, surgery, and patient care. This review presents a comprehensive analysis of artificial intelligence integration, particularly convolutional and recurrent neural networks, across key clinical domains, where it has significantly enhanced diagnostic accuracy, surgical performance, and pathology evaluation. Artificial intelligence-based approaches have demonstrated clear superiority over conventional methods: convolutional neural networks achieved 91.56% accuracy in scanner fault detection, surpassing manual inspections; endoscopic lesion detection sensitivity rose from 2.3% to 6.1% with artificial intelligence assistance; and gastric cancer invasion depth classification reached 89.16% accuracy, outperforming human endoscopists by 17.25%. In pathology, artificial intelligence achieved 93.2% accuracy in identifying out-of-focus regions and an F1 score of 0.94 in lymphocyte quantification, promoting faster and more reliable diagnostics. Similarly, artificial intelligence improved surgical workflow recognition with over 81% accuracy and exceeded 95% accuracy in skill assessment classification. Beyond traditional diagnostics and surgical support, AI-powered wearable sensors, drug delivery systems, and biointegrated devices are advancing personalized treatment by optimizing physiological monitoring, automating care protocols, and enhancing therapeutic precision. Despite these achievements, challenges remain in areas such as data standardization, ethical governance, and model generalizability. Overall, the findings underscore artificial intelligence's potential to outperform traditional techniques across multiple parameters, emphasizing the need for continued development, rigorous clinical validation, and interdisciplinary collaboration to fully realize its role in precision medicine and patient safety.
Affiliation(s)
- Yoojin Shin
- College of Medicine, The Catholic University of Korea, 222 Banpo-Daero, Seocho-gu, Seoul 06591, Republic of Korea
- Mingyu Lee
- College of Medicine, The Catholic University of Korea, 222 Banpo-Daero, Seocho-gu, Seoul 06591, Republic of Korea
- Yoonji Lee
- College of Medicine, The Catholic University of Korea, 222 Banpo-Daero, Seocho-gu, Seoul 06591, Republic of Korea
- Kyuri Kim
- College of Medicine, Ewha Womans University, 25 Magokdong-ro 2-gil, Gangseo-gu, Seoul 07804, Republic of Korea
- Taejung Kim
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Republic of Korea
3. Zeng Z, Luo S, Zhang H, Wu M, Ma D, Wang Q, Xie M, Xu Q, Ouyang J, Xiao Y, Song Y, Feng B, Xu Q, Wang Y, Zhang Y, Shi L, Ling L, Zhang X, Huang L, Yang Z, Peng J, Wu X, Ren D, Huang M, Lan P, Wang J, Tong W, Ren M, Liu H, Kang L. Transanal vs Laparoscopic Total Mesorectal Excision and 3-Year Disease-Free Survival in Rectal Cancer: The TaLaR Randomized Clinical Trial. JAMA 2025;333:774-783. PMID: 39847361; PMCID: PMC11880948; DOI: 10.1001/jama.2024.24276. Received 07/02/2024; accepted 10/16/2024.
Abstract
Importance Previous studies have demonstrated advantages of transanal total mesorectal excision (TME) over laparoscopic TME in short-term histopathological outcomes and complications. However, the long-term oncological outcomes of transanal TME remain ambiguous. Objective To evaluate 3-year disease-free survival between transanal TME and laparoscopic TME in patients with rectal cancer. Design, Setting, and Participants This randomized, open-label, noninferiority, phase 3 clinical trial was performed in 16 centers in China. Between April 2016 and June 2021, a total of 1115 patients with clinical stage I to III mid-low rectal cancer were enrolled. The last date of participant follow-up was in June 2024. Interventions Participants were randomly assigned in a 1:1 ratio before their surgical procedure to undergo either transanal TME (n = 558) or laparoscopic TME (n = 557). Main Outcomes and Measures The primary end point was 3-year disease-free survival, with a noninferiority margin of -10% for the comparison between transanal TME and laparoscopic TME. Secondary outcomes included 3-year overall survival and 3-year local recurrence. Results In the primary analysis set, the median patient age was 60 years, and 692 male and 397 female patients were included in the analysis. Three-year disease-free survival was 82.1% (97.5% CI, 78.4%-85.8%) for the transanal TME group and 79.4% (97.5% CI, 75.6%-83.4%) for the laparoscopic TME group, a difference of 2.7% (97.5% CI, -3.0% to 8.1%). The lower bound of the 2-tailed 97.5% CI for the group difference in 3-year disease-free survival was above the noninferiority margin of -10 percentage points. Furthermore, 3-year local recurrence was 3.6% (95% CI, 2.0%-5.1%) for transanal TME and 4.4% (95% CI, 2.6%-6.1%) for laparoscopic TME. Three-year overall survival was 92.6% (95% CI, 90.4%-94.8%) for transanal TME and 90.7% (95% CI, 88.3%-93.2%) for laparoscopic TME. Conclusions and Relevance In patients with mid-low rectal cancer, 3-year disease-free survival for transanal TME was noninferior to that of laparoscopic TME. Trial Registration ClinicalTrials.gov Identifier: NCT02966483.
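The noninferiority comparison reported above can be illustrated with a short sketch: compute a two-sided confidence interval for the difference in 3-year disease-free survival and check its lower bound against the -10 percentage-point margin. This is a simplified, illustrative calculation only (a Wald interval on raw proportions, with z ≈ 2.24 approximating a 97.5% CI); the trial itself used Kaplan-Meier estimates, so the numbers will not reproduce the published CI exactly.

```python
import math

def noninferiority_check(p_exp, n_exp, p_ctl, n_ctl, margin=-0.10, z=2.24):
    """Wald-style CI for a difference in proportions.

    z = 2.24 roughly corresponds to a two-sided 97.5% CI. Simplified sketch:
    the trial used Kaplan-Meier survival estimates, not raw proportions.
    """
    diff = p_exp - p_ctl
    se = math.sqrt(p_exp * (1 - p_exp) / n_exp + p_ctl * (1 - p_ctl) / n_ctl)
    lower, upper = diff - z * se, diff + z * se
    # Noninferior if the entire CI lies above the prespecified margin.
    return diff, (lower, upper), lower > margin

# Survival rates and group sizes taken from the abstract above.
diff, ci, noninferior = noninferiority_check(0.821, 558, 0.794, 557)
print(f"difference {diff:.3f}, CI ({ci[0]:.3f}, {ci[1]:.3f}), noninferior: {noninferior}")
```

With these inputs the interval comes out close to the published (-3.0%, 8.1%), and the lower bound clears the -10% margin, matching the trial's noninferiority conclusion.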
Affiliation(s)
- Ziwei Zeng
- Department of General Surgery (Colorectal Surgery), Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Shuangling Luo
- Department of General Surgery (Colorectal Surgery), Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Hong Zhang
- Department of Colorectal Surgery, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- Miao Wu
- Department of Gastrointestinal Surgery, The Second People’s Hospital of Yibin, Yibin, Sichuan, China
- Dan Ma
- Department of General Surgery, Xinqiao Hospital, Army Medical University, Chongqing, China
- Quan Wang
- Department of Gastrointestinal Surgery, The First Hospital of Jilin University, Changchun, Jilin, China
- Ming Xie
- Department of Gastrointestinal Surgery, Affiliated Hospital of Zunyi Medical University, Zunyi, Guizhou, China
- Qing Xu
- Department of Gastrointestinal Surgery, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jun Ouyang
- Department of Gastrointestinal Surgery, The First Affiliated Hospital of University of South China, Hengyang, Hunan, China
- Yi Xiao
- Department of General Surgery, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Yongchun Song
- Department of Gastrointestinal Surgery, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, Shaanxi, China
- Bo Feng
- Department of Gastrointestinal Surgery, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Qingwen Xu
- Department of Gastrointestinal Surgery, The Affiliated Hospital of Guangdong Medical University, Zhanjiang, Guangdong, China
- Yanan Wang
- Department of Gastrointestinal Surgery, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Yi Zhang
- Department of Gastrointestinal Surgery, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Lishuo Shi
- Clinical Research Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Li Ling
- Department of Medical Statistics, School of Public Health, Sun Yat-sen University, Guangzhou, China
- Xingwei Zhang
- Department of General Surgery (Colorectal Surgery), Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Liang Huang
- Department of General Surgery (Colorectal Surgery), Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Zuli Yang
- Department of General Surgery (Colorectal Surgery), Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Junsheng Peng
- Department of General Surgery (Colorectal Surgery), Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Xiaojian Wu
- Department of General Surgery (Colorectal Surgery), Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Donglin Ren
- Department of General Surgery (Colorectal Surgery), Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Meijin Huang
- Department of General Surgery (Colorectal Surgery), Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Ping Lan
- Department of General Surgery (Colorectal Surgery), Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Jianping Wang
- Department of General Surgery (Colorectal Surgery), Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Weidong Tong
- Department of General Surgery, Daping Hospital, Army Medical University, Chongqing, China
- Mingyang Ren
- Department of Gastrointestinal Surgery, The Affiliated Nanchong Central Hospital of North Sichuan Medical College, Nanchong, Sichuan, China
- Huashan Liu
- Department of General Surgery (Colorectal Surgery), Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Liang Kang
- Department of General Surgery (Colorectal Surgery), Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
4. Lavanchy JL, Ramesh S, Dall'Alba D, Gonzalez C, Fiorini P, Müller-Stich BP, Nett PC, Marescaux J, Mutter D, Padoy N. Challenges in multi-centric generalization: phase and step recognition in Roux-en-Y gastric bypass surgery. Int J Comput Assist Radiol Surg 2024;19:2249-2257. PMID: 38761319; PMCID: PMC11541311; DOI: 10.1007/s11548-024-03166-3. Received 12/16/2023; accepted 04/02/2024.
Abstract
PURPOSE Most studies on surgical activity recognition utilizing artificial intelligence (AI) have focused on recognizing one type of activity from small, mono-centric surgical video datasets. It remains speculative whether those models would generalize to other centers. METHODS In this work, we introduce a large multi-centric multi-activity dataset of 140 surgical videos (MultiBypass140) of laparoscopic Roux-en-Y gastric bypass (LRYGB) surgeries performed at two medical centers: the University Hospital of Strasbourg, France (StrasBypass70) and Inselspital, Bern University Hospital, Switzerland (BernBypass70). The dataset has been fully annotated with phases and steps by two board-certified surgeons. Furthermore, we assess generalizability and benchmark different deep learning models for phase and step recognition in 7 experimental studies: (1) training and evaluation on BernBypass70; (2) training and evaluation on StrasBypass70; (3) training and evaluation on the joint MultiBypass140 dataset; (4) training on BernBypass70, evaluation on StrasBypass70; (5) training on StrasBypass70, evaluation on BernBypass70; (6) training on MultiBypass140, evaluation on BernBypass70; and (7) training on MultiBypass140, evaluation on StrasBypass70. RESULTS Model performance was markedly influenced by the training data. The worst results were obtained in experiments (4) and (5), confirming the limited generalization capabilities of models trained on mono-centric data. The use of multi-centric training data, experiments (6) and (7), improved the generalization capabilities of the models, bringing them beyond the level of independent mono-centric training and validation (experiments (1) and (2)). CONCLUSION MultiBypass140 shows considerable variation in surgical technique and workflow of LRYGB procedures between centers, and the generalization experiments accordingly demonstrate marked differences in model performance. These results highlight the importance of multi-centric datasets for AI model generalization to account for variance in surgical technique and workflows. The dataset and code are publicly available at https://github.com/CAMMA-public/MultiBypass140.
Affiliation(s)
- Joël L Lavanchy
- University Digestive Health Care Center - Clarunis, 4002, Basel, Switzerland
- Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000, Strasbourg, France
- Sanat Ramesh
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000, Strasbourg, France
- ICube, University of Strasbourg, CNRS, 67000, Strasbourg, France
- Altair Robotics Lab, University of Verona, 37134, Verona, Italy
- Diego Dall'Alba
- Altair Robotics Lab, University of Verona, 37134, Verona, Italy
- Cristians Gonzalez
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000, Strasbourg, France
- University Hospital of Strasbourg, 67000, Strasbourg, France
- Paolo Fiorini
- Altair Robotics Lab, University of Verona, 37134, Verona, Italy
- Beat P Müller-Stich
- University Digestive Health Care Center - Clarunis, 4002, Basel, Switzerland
- Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland
- Philipp C Nett
- Department of Visceral Surgery and Medicine, Inselspital Bern University Hospital, 3010, Bern, Switzerland
- Didier Mutter
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000, Strasbourg, France
- University Hospital of Strasbourg, 67000, Strasbourg, France
- Nicolas Padoy
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000, Strasbourg, France
- ICube, University of Strasbourg, CNRS, 67000, Strasbourg, France
5. Hossain I, Madani A, Laplante S. Machine learning perioperative applications in visceral surgery: a narrative review. Front Surg 2024;11:1493779. PMID: 39539511; PMCID: PMC11557547; DOI: 10.3389/fsurg.2024.1493779. Received 09/09/2024; accepted 10/18/2024.
Abstract
Artificial intelligence in surgery has seen an expansive rise in research and clinical implementation in recent years, with many of the models driven by machine learning. In the preoperative setting, machine learning models have been used to guide indications for surgery, determine appropriate timing of operations, calculate risk and prognosis, and improve estimates of the time and resources required for surgeries. Demonstrated intraoperative applications include visual annotation of the surgical field, automated classification of surgical phases, and prediction of intraoperative patient decompensation. Postoperative applications have been studied the most, with most effort directed toward prediction of postoperative complications, recurrence patterns of malignancy, enhanced surgical education, and assessment of surgical skill. Challenges to implementing these models in clinical practice include the need for larger quantities of high-quality, standardized data to improve model performance, sufficient resources and infrastructure to train and use machine learning, and ethical and patient-acceptance considerations.
Affiliation(s)
- Intekhab Hossain
- Department of Surgery, University of Toronto, Toronto, ON, Canada
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Amin Madani
- Department of Surgery, University of Toronto, Toronto, ON, Canada
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Simon Laplante
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Department of Surgery, Mayo Clinic, Rochester, MN, United States
6. Honda R, Kitaguchi D, Ishikawa Y, Kosugi N, Hayashi K, Hasegawa H, Takeshita N, Ito M. Deep learning-based surgical step recognition for laparoscopic right-sided colectomy. Langenbecks Arch Surg 2024;409:309. PMID: 39419830; DOI: 10.1007/s00423-024-03502-w. Received 05/07/2024; accepted 10/10/2024.
Abstract
PURPOSE Understanding the complex anatomy and surgical steps involved in laparoscopic right-sided colectomy (LAP-RC) is essential for standardizing the surgical procedure. Deep-learning (DL)-based computer vision can achieve this. This study aimed to develop a step recognition model for LAP-RC using a dataset of surgical videos with annotated step information and evaluate its recognition performance. METHODS This single-center retrospective study utilized a video dataset of laparoscopic ileocecal resection (LAP-ICR) and laparoscopic right-sided hemicolectomy (LAP-RHC) for right-sided colon cancer performed between January 2018 and March 2022. The videos were split into still images, which were divided into training, validation, and test sets using 66%, 17%, and 17% of the data, respectively. Videos were manually classified into eight main steps: 1) medial mobilization, 2) central vascular ligation, 3) dissection of the superior mesenteric vein, 4) retroperitoneal mobilization, 5) lateral mobilization, 6) cranial mobilization, 7) mesocolon resection, and 8) intracorporeal anastomosis. In a simpler version, consecutive surgical steps were combined, resulting in five steps. Precision, recall, F1 scores, and overall accuracy were assessed to evaluate the model's performance in the surgical step classification task. RESULTS Seventy-eight patients were included; LAP-ICR and LAP-RHC were performed in 35 (44%) and 44 (56%) patients, respectively. The overall accuracy was 72.1% and 82.9% for the eight-step and combined five-step classification tasks, respectively. CONCLUSIONS The automatic surgical step-recognition model for LAP-RCs, developed using a DL algorithm, exhibited a fairly high classification performance. A model that understands the complex steps of LAP-RC will aid the standardization of the surgical procedure.
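The evaluation described above (per-step precision, recall, and F1 scores plus overall accuracy over frame-level step labels) can be sketched in a few lines. This is a generic illustration with invented frame labels for two of the eight steps, not the study's own evaluation code.

```python
def step_metrics(y_true, y_pred, steps):
    """Frame-level precision/recall/F1 per surgical step, plus overall accuracy."""
    metrics = {}
    for s in steps:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == s and p == s)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != s and p == s)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == s and p != s)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        metrics[s] = (prec, rec, f1)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return metrics, accuracy

# Invented frame labels for two of the eight steps named in the abstract.
truth = ["medial_mobilization"] * 4 + ["central_vascular_ligation"] * 4
preds = ["medial_mobilization"] * 3 + ["central_vascular_ligation"] * 5
m, acc = step_metrics(truth, preds, ["medial_mobilization", "central_vascular_ligation"])
print(acc)  # 0.875: 7 of 8 frames labeled correctly
```

In practice, libraries such as scikit-learn provide these metrics directly; the sketch only makes explicit what the reported numbers measure.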
Affiliation(s)
- Ryoya Honda
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, Chiba, Japan
- Course of Advanced Clinical Research of Cancer, Juntendo University Graduate School of Medicine, Tokyo, Japan
- Daichi Kitaguchi
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba, Japan
- Yuto Ishikawa
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, Chiba, Japan
- Norihito Kosugi
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, Chiba, Japan
- Kazuyuki Hayashi
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, Chiba, Japan
- Hiro Hasegawa
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba, Japan
- Nobuyoshi Takeshita
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, Chiba, Japan
- Masaaki Ito
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba, Japan
7. Oh N, Kim B, Kim T, Rhu J, Kim J, Choi GS. Real-time segmentation of biliary structure in pure laparoscopic donor hepatectomy. Sci Rep 2024;14:22508. PMID: 39341910; PMCID: PMC11439027; DOI: 10.1038/s41598-024-73434-4. Received 06/20/2024; accepted 09/17/2024.
Abstract
Pure laparoscopic donor hepatectomy (PLDH) has become a standard practice for living donor liver transplantation in expert centers. Accurate understanding of biliary structures is crucial during PLDH to minimize the risk of complications. This study aims to develop a deep learning-based segmentation model for real-time identification of biliary structures, assisting surgeons in determining the optimal transection site during PLDH. A single-institution retrospective feasibility analysis was conducted on 30 intraoperative videos of PLDH. All videos were selected for their use of the indocyanine green near-infrared fluorescence technique to identify biliary structure. From the analysis, 10 representative frames were extracted from each video specifically during the bile duct division phase, resulting in 300 frames. These frames underwent pixel-wise annotation to identify biliary structures and the transection site. A segmentation task was then performed using a DeepLabV3+ algorithm, equipped with a ResNet50 encoder, focusing on the bile duct (BD) and anterior wall (AW) for transection. The model's performance was evaluated using the dice similarity coefficient (DSC). The model predicted biliary structures with a mean DSC of 0.728 ± 0.01 for BD and 0.429 ± 0.06 for AW. Inference was performed at a speed of 15.3 frames per second, demonstrating the feasibility of real-time recognition of anatomical structures during surgery. The deep learning-based semantic segmentation model exhibited promising performance in identifying biliary structures during PLDH. Future studies should focus on validating the clinical utility and generalizability of the model and comparing its efficacy with current gold standard practices to better evaluate its potential clinical applications.
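The dice similarity coefficient (DSC) used to evaluate the segmentation model above has a simple closed form, 2|A ∩ B| / (|A| + |B|), for a predicted mask A and a ground-truth mask B. The sketch below computes it on toy flattened binary masks; it is a generic illustration, not the study's evaluation code.

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two flat binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|); eps guards the empty-mask case."""
    assert len(pred) == len(target)
    intersection = sum(1 for p, t in zip(pred, target) if p and t)
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

# Toy 4x4 masks (flattened row by row) standing in for a bile-duct
# prediction versus its pixel-wise annotation.
pred   = [0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 0, 0,  0, 0, 0, 0]
target = [0, 1, 1, 0,  0, 1, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0]
print(round(dice_coefficient(pred, target), 3))  # 2*3 / (4+3): 0.857
```

A DSC of 1.0 means perfect overlap and 0.0 means none, which is why the reported 0.728 for the bile duct indicates substantially better agreement than the 0.429 for the anterior wall.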
Affiliation(s)
- Namkee Oh
- Department of Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Bogeun Kim
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Taeyoung Kim
- Medical AI Research Center, Samsung Medical Center, Seoul, Republic of Korea
- Jinsoo Rhu
- Department of Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Jongman Kim
- Department of Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Gyu-Seong Choi
- Department of Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
8. Bangolo A, Wadhwani N, Nagesh VK, Dey S, Tran HHV, Aguilar IK, Auda A, Sidiqui A, Menon A, Daoud D, Liu J, Pulipaka SP, George B, Furman F, Khan N, Plumptre A, Sekhon I, Lo A, Weissman S. Impact of artificial intelligence in the management of esophageal, gastric and colorectal malignancies. Artif Intell Gastrointest Endosc 2024;5:90704. DOI: 10.37126/aige.v5.i2.90704. Received 12/12/2023; revised 01/28/2024; accepted 03/04/2024.
Abstract
The incidence of gastrointestinal malignancies has increased over the past decade at an alarming rate. Colorectal and gastric cancers are the third and fifth most commonly diagnosed cancers worldwide but are cited as the second and third leading causes of cancer mortality. Timely diagnosis and early institution of appropriate therapy can optimize patient outcomes. Artificial intelligence (AI)-assisted diagnostic, prognostic, and therapeutic tools can assist in expeditious diagnosis, treatment planning and response prediction, and post-surgical prognostication. AI can intercept neoplastic lesions in their primordial stages, flag suspicious and/or inconspicuous lesions with greater accuracy on radiologic, histopathological, and/or endoscopic analyses, and reduce over-dependence on clinicians. AI-based models have been shown to be on par with, and sometimes even to outperform, experienced gastroenterologists and radiologists. Convolutional neural networks (state-of-the-art deep learning models) are powerful computational models, invaluable to the field of precision oncology. These models not only reliably classify images but also accurately predict response to chemotherapy, tumor recurrence, metastasis, and survival rates post-treatment. In this systematic review, we analyze the available evidence on the diagnostic, prognostic, and therapeutic utility of artificial intelligence in gastrointestinal oncology.
Affiliation(s)
- Ayrton Bangolo
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Nikita Wadhwani
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Vignesh K Nagesh
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Shraboni Dey
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Hadrian Hoang-Vu Tran
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Izage Kianifar Aguilar
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Auda Auda
- Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Aman Sidiqui
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Aiswarya Menon
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Deborah Daoud
- Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- James Liu
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Sai Priyanka Pulipaka
- Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Blessy George
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Flor Furman
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Nareeman Khan
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Adewale Plumptre
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Imranjot Sekhon
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Abraham Lo
- Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Simcha Weissman
- Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
Collapse
|
9
|
Furube T, Takeuchi M, Kawakubo H, Maeda Y, Matsuda S, Fukuda K, Nakamura R, Kato M, Yahagi N, Kitagawa Y. Automated artificial intelligence-based phase-recognition system for esophageal endoscopic submucosal dissection (with video). Gastrointest Endosc 2024; 99:830-838. [PMID: 38185182 DOI: 10.1016/j.gie.2023.12.037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/15/2023] [Revised: 11/13/2023] [Accepted: 12/22/2023] [Indexed: 01/09/2024]
Abstract
BACKGROUND AND AIMS Endoscopic submucosal dissection (ESD) for superficial esophageal cancer is a multistep treatment involving several endoscopic processes. Although analyzing each phase separately is worthwhile, it is not realistic in practice owing to the considerable manpower required. To solve this problem, we aimed to establish a state-of-the-art artificial intelligence (AI)-based system, specifically an automated phase recognition system that can identify each endoscopic phase from video images. METHODS Ninety-four videos of ESD procedures for superficial esophageal cancer were evaluated in this single-center study. A deep neural network-based phase recognition system was developed to automatically recognize each of the endoscopic phases. The system was trained with videos that were annotated and verified by 2 GI endoscopists. RESULTS The overall accuracy of the AI model for automated phase recognition was 90%, and the average precision, recall, and F1 score were 91%, 90%, and 90%, respectively. Model predictions on two representative ESD videos indicated the usability of AI in clinical practice. CONCLUSIONS We demonstrated that an AI-based automated phase recognition system for esophageal ESD can be established with high accuracy. To the best of our knowledge, this is the first report on automated recognition of ESD treatment phases. Because this system enables a detailed analysis of phases, collecting large volumes of data in the future may help to identify quality indicators for treatment techniques and uncover unmet medical needs that necessitate the creation of new treatment methods and devices.
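Frame-level precision, recall, and F values like those reported here can be computed per phase from predicted and ground-truth labels. A minimal sketch with made-up phase names and sequences (not the study's data):

```python
def phase_metrics(true_phases, pred_phases, phases):
    """Per-phase precision, recall, and F1 from frame-level labels."""
    stats = {}
    for p in phases:
        tp = sum(t == p and y == p for t, y in zip(true_phases, pred_phases))
        fp = sum(t != p and y == p for t, y in zip(true_phases, pred_phases))
        fn = sum(t == p and y != p for t, y in zip(true_phases, pred_phases))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        stats[p] = (prec, rec, f1)
    return stats

# Hypothetical frame-level ground truth vs. model output.
true_seq = ["marking"] * 4 + ["injection"] * 3 + ["dissection"] * 5
pred_seq = ["marking"] * 3 + ["injection"] * 4 + ["dissection"] * 5
m = phase_metrics(true_seq, pred_seq, ["marking", "injection", "dissection"])
print(m["marking"])  # (1.0, 0.75, ...)
```

Averaging the per-phase values then gives the macro precision/recall/F figures quoted in such studies.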
Affiliation(s)
- Tasuku Furube, Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Masashi Takeuchi, Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Hirofumi Kawakubo, Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Yusuke Maeda, Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Satoru Matsuda, Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Kazumasa Fukuda, Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Rieko Nakamura, Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Motohiko Kato, Center for Diagnostic and Therapeutic Endoscopy, Keio University School of Medicine, Tokyo, Japan
- Naohisa Yahagi, Division of Research and Development for Minimally Invasive Treatment, Cancer Center, Graduate School of Medicine, Keio University School of Medicine, Tokyo, Japan
- Yuko Kitagawa, Department of Surgery, Keio University School of Medicine, Tokyo, Japan

10
Chen Z, Yang D, Li A, Sun L, Zhao J, Liu J, Liu L, Zhou X, Chen Y, Cai Y, Wu Z, Cheng K, Cai H, Tang M, Peng B, Wang X. Decoding surgical skill: an objective and efficient algorithm for surgical skill classification based on surgical gesture features -experimental studies. Int J Surg 2024; 110:1441-1449. [PMID: 38079605 PMCID: PMC10942222 DOI: 10.1097/js9.0000000000000975] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2023] [Accepted: 11/21/2023] [Indexed: 03/16/2024]
Abstract
BACKGROUND Variation in surgical skill leads to differences in patient outcomes, and identifying poorly skilled surgeons and providing constructive feedback contributes to surgical quality improvement. The aim of the study was to develop an algorithm for evaluating surgical skills in laparoscopic cholecystectomy based on the features of elementary functional surgical gestures (Surgestures). MATERIALS AND METHODS Seventy-five laparoscopic cholecystectomy videos were collected from 33 surgeons in five hospitals. The phases of mobilization of the hepatocystic triangle and gallbladder dissection from the liver bed in each video were annotated with 14 Surgestures. The videos were grouped into competent and incompetent based on the quantiles of the modified global operative assessment of laparoscopic skills (mGOALS). Surgeon-related information, clinical data, and intraoperative events were analyzed. Sixty-three Surgesture features were extracted to develop the surgical skill classification algorithm. The area under the receiver operating characteristic curve of the classification and the top features were evaluated. RESULTS Correlation analysis revealed that most perioperative factors had no significant correlation with mGOALS scores. The incompetent group had a higher probability of cholecystic vascular injury than the competent group (30.8% vs 6.1%, P = 0.004). The competent group demonstrated fewer inefficient Surgestures, a lower shift frequency, and a larger dissection-exposure ratio of Surgestures during the procedure. The area under the receiver operating characteristic curve of the classification algorithm reached 0.866. Different Surgesture features contributed variably to overall performance and specific skill items. CONCLUSION The computer algorithm accurately classified surgeons of different skill levels using objective Surgesture features, adding insight into the design of automatic laparoscopic surgical skill assessment tools with technical feedback.
Affiliation(s)
- Zixin Chen, Department of General Surgery, Division of Pancreatic Surgery; West China School of Medicine, West China Hospital of Sichuan University
- Dewei Yang, Chongqing University of Posts and Telecommunications, School of Advanced Manufacturing Engineering, Chongqing
- Ang Li, Department of General Surgery, Division of Pancreatic Surgery; Guang’an People’s Hospital, Guang’an
- Louzong Sun, Department of Hepatobiliary Surgery, Zigong First People’s Hospital, Zigong
- Jifan Zhao, Chengdu Withai Innovations Technology Company, Chengdu
- Jie Liu, Chengdu Withai Innovations Technology Company, Chengdu
- Linxun Liu, Department of General Surgery, Qinghai Provincial People’s Hospital, Xining, People’s Republic of China
- Xiaobo Zhou, School of Biomedical Informatics, McGovern Medical School, University of Texas Health Science Center, Houston, USA
- Yonghua Chen, Department of General Surgery, Division of Pancreatic Surgery
- Yunqiang Cai, Department of General Surgery, Division of Pancreatic Surgery
- Zhong Wu, Department of General Surgery, Division of Pancreatic Surgery
- Ke Cheng, Department of General Surgery, Division of Pancreatic Surgery
- He Cai, Department of General Surgery, Division of Pancreatic Surgery
- Ming Tang, Department of General Surgery, Division of Pancreatic Surgery; West China School of Medicine, West China Hospital of Sichuan University
- Bing Peng, Department of General Surgery, Division of Pancreatic Surgery
- Xin Wang, Department of General Surgery, Division of Pancreatic Surgery

11
Komatsu M, Kitaguchi D, Yura M, Takeshita N, Yoshida M, Yamaguchi M, Kondo H, Kinoshita T, Ito M. Automatic surgical phase recognition-based skill assessment in laparoscopic distal gastrectomy using multicenter videos. Gastric Cancer 2024; 27:187-196. [PMID: 38038811 DOI: 10.1007/s10120-023-01450-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/20/2023] [Accepted: 10/31/2023] [Indexed: 12/02/2023]
Abstract
BACKGROUND Gastric surgery involves numerous surgical phases; however, its steps can be clearly defined. Deep learning-based surgical phase recognition can promote the stylization of gastric surgery, with applications in automatic surgical skill assessment. This study aimed to develop a deep learning-based surgical phase recognition model using multicenter videos of laparoscopic distal gastrectomy and to examine the feasibility of automatic surgical skill assessment using the developed model. METHODS Surgical videos from 20 hospitals were used. Laparoscopic distal gastrectomy was defined and annotated into nine phases, and a deep learning-based image classification model was developed for phase recognition. We examined whether the developed model's outputs, including the number of frames in each phase and the adequacy of surgical field development during the phase of supra-pancreatic lymphadenectomy, correlated with the manually assigned skill assessment score. RESULTS The overall accuracy of phase recognition was 88.8%. Regarding surgical skill assessment based on the number of frames during the phases of lymphadenectomy of the left greater curvature and reconstruction, the number of frames in the high-score group was significantly lower than that in the low-score group (829 vs. 1,152, P < 0.01; 1,208 vs. 1,586, P = 0.01, respectively). The model's output score for the adequacy of surgical field development was significantly higher in the high-score group than in the low-score group (0.975 vs. 0.970, P = 0.04). CONCLUSION The developed model had high accuracy in phase recognition tasks and has potential for application in automatic surgical skill assessment systems.
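Per-phase frame counts like those compared above fall out of the model's frame-level predictions by run-length grouping. A minimal sketch with hypothetical phase labels and frame rate:

```python
from itertools import groupby

def phase_durations(frame_labels, fps=1.0):
    """Collapse frame-level phase predictions into (phase, seconds) segments."""
    return [(phase, len(list(run)) / fps) for phase, run in groupby(frame_labels)]

# Hypothetical classifier output at 10 frames/second.
labels = ["P1"] * 30 + ["P2"] * 90 + ["P1"] * 10 + ["P2"] * 20
segs = phase_durations(labels, fps=10.0)
print(segs)  # [('P1', 3.0), ('P2', 9.0), ('P1', 1.0), ('P2', 2.0)]
```

Summing the segments per phase gives the per-phase totals that can then be compared between skill groups.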
Affiliation(s)
- Masaru Komatsu, Gastric Surgery Division, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan; Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan; Course of Advanced Clinical Research of Cancer, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-Ward, Tokyo, 113-8421, Japan
- Daichi Kitaguchi, Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Masahiro Yura, Gastric Surgery Division, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Nobuyoshi Takeshita, Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Mitsumasa Yoshida, Gastric Surgery Division, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Masayuki Yamaguchi, Course of Advanced Clinical Research of Cancer, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-Ward, Tokyo, 113-8421, Japan
- Hibiki Kondo, Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Takahiro Kinoshita, Gastric Surgery Division, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Masaaki Ito, Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan; Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan

12
Jearanai S, Wangkulangkul P, Sae-Lim W, Cheewatanakornkul S. Development of a deep learning model for safe direct optical trocar insertion in minimally invasive surgery: an innovative method to prevent trocar injuries. Surg Endosc 2023; 37:7295-7304. [PMID: 37558826 DOI: 10.1007/s00464-023-10309-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Accepted: 07/12/2023] [Indexed: 08/11/2023]
Abstract
BACKGROUND Direct optical trocar insertion is a common procedure in laparoscopic minimally invasive surgery. However, misinterpretations of the abdominal wall anatomy can lead to severe complications. Artificial intelligence has shown promise in surgical endoscopy, particularly in the employment of deep learning models for anatomical landmark identification. This study aimed to integrate a deep learning model with an alarm system algorithm for the precise detection of abdominal wall layers during trocar placement. METHODS Annotated bounding boxes and assigned classes were based on the six layers of the abdominal wall: subcutaneous, anterior rectus sheath, rectus muscle, posterior rectus sheath, peritoneum, and abdominal cavity. The YOLOv8 object-detection model was trained on the dataset. The model was trained on still images and run for inference on laparoscopic videos to ensure real-time detection in the operating room. The alarm system was activated upon recognition of the peritoneum and abdominal cavity layers. We assessed the model's performance using mean average precision (mAP), precision, and recall. RESULTS A total of 3600 images were captured from 89 laparoscopic video cases. The proposed model was trained on 3000 images, validated on a set of 200 images, and tested on a separate set of 400 images. The results on the test set were 95.8% mAP, 89.8% precision, and 91.7% recall. The alarm system was validated and accepted by experienced surgeons at our institute. CONCLUSION We demonstrated that deep learning has the potential to assist surgeons during direct optical trocar insertion. During trocar insertion, the proposed model promptly detects precise landmark references in real time. The integration of this model with the alarm system enables timely reminders for surgeons to tilt the scope accordingly. Consequently, the implementation of the framework has the potential to mitigate complications associated with direct optical trocar placement, thereby enhancing surgical safety and outcomes.
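The alarm logic described, triggering once the deeper layers appear, reduces to a simple rule over the detector's per-frame output. A sketch assuming (class, confidence) pairs; the class names and confidence threshold are illustrative, not the paper's exact configuration:

```python
# Abdominal-wall layers that should trigger the alarm (illustrative names).
ALARM_CLASSES = {"peritoneum", "abdominal_cavity"}

def should_alarm(detections, threshold=0.5):
    """Return True if any detection of a deep layer meets the confidence cutoff.

    detections: iterable of (class_name, confidence) pairs for one frame.
    """
    return any(cls in ALARM_CLASSES and conf >= threshold
               for cls, conf in detections)

print(should_alarm([("rectus_muscle", 0.92)]))                       # False
print(should_alarm([("peritoneum", 0.81), ("subcutaneous", 0.40)]))  # True
```

In practice such a rule would typically be debounced over several consecutive frames before sounding the alarm, to suppress single-frame false positives.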
Affiliation(s)
- Supakool Jearanai, Minimally Invasive Surgery Unit, Department of Surgery, Faculty of Medicine, Prince of Songkla University, Songkhla, Thailand
- Piyanun Wangkulangkul, Minimally Invasive Surgery Unit, Department of Surgery, Faculty of Medicine, Prince of Songkla University, Songkhla, Thailand
- Wannipa Sae-Lim, Department of Computer Science, Faculty of Science, Prince of Songkla University, Songkhla, Thailand
- Siripong Cheewatanakornkul, Minimally Invasive Surgery Unit, Department of Surgery, Faculty of Medicine, Prince of Songkla University, Songkhla, Thailand

13
Kinoshita T, Komatsu M. Artificial Intelligence in Surgery and Its Potential for Gastric Cancer. J Gastric Cancer 2023; 23:400-409. [PMID: 37553128 PMCID: PMC10412972 DOI: 10.5230/jgc.2023.23.e27] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/15/2023] [Revised: 07/19/2023] [Accepted: 07/20/2023] [Indexed: 08/10/2023] Open
Abstract
Artificial intelligence (AI) has made significant progress in recent years, and many medical fields are attempting to introduce AI technology into clinical practice. Currently, much research is being conducted to evaluate whether AI can be incorporated into surgical procedures to make them safer and more efficient and, subsequently, to obtain better outcomes for patients. In this paper, we review basic AI research regarding surgery and discuss the potential for implementing AI technology in gastric cancer surgery. At present, research and development are focused on AI technologies that assist the surgeon's understanding and judgment during surgery, such as anatomical navigation. AI systems are also being developed to recognize which surgical phase is ongoing. Such surgical phase recognition systems are being considered for the effective storage of surgical videos and for education and, in the future, for use in systems that objectively evaluate the skill of surgeons. At this time, it is not considered practical to let AI make intraoperative decisions or move forceps automatically, also for ethical reasons. AI research on surgery still has various limitations, and it is desirable to develop practical systems that will truly benefit clinical practice in the future.
Affiliation(s)
- Takahiro Kinoshita, Gastric Surgery Division, National Cancer Center Hospital East, Kashiwa, Japan
- Masaru Komatsu, Gastric Surgery Division, National Cancer Center Hospital East, Kashiwa, Japan

14
Sone K, Tanimoto S, Toyohara Y, Taguchi A, Miyamoto Y, Mori M, Iriyama T, Wada-Hiraike O, Osuga Y. Evolution of a surgical system using deep learning in minimally invasive surgery (Review). Biomed Rep 2023; 19:45. [PMID: 37324165 PMCID: PMC10265572 DOI: 10.3892/br.2023.1628] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Accepted: 03/31/2023] [Indexed: 06/17/2023] Open
Abstract
Recently, artificial intelligence (AI) has been applied in various fields owing to the development of new learning methods, such as deep learning, and marked progress in computational processing speed. AI is also being applied in the medical field for medical image recognition and for omics analysis of genomes and other data. AI applications for videos of minimally invasive surgeries have likewise advanced, and studies on such applications are increasing. In the present review, studies that focused on the following topics were selected: i) organ and anatomy identification; ii) instrument identification; iii) procedure and surgical phase recognition; iv) surgery-time prediction; v) identification of an appropriate incision line; and vi) surgical education. The development of autonomous surgical robots is also progressing, with the Smart Tissue Autonomous Robot (STAR) and RAVEN systems being the most reported developments. STAR, in particular, is being used to recognize the surgical site from laparoscopic images and is in the process of establishing an automated suturing system, albeit in animal experiments. The present review also examines the possibility of fully autonomous surgical robots in the future.
Affiliation(s)
- Kenbun Sone, Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
- Saki Tanimoto, Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
- Yusuke Toyohara, Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
- Ayumi Taguchi, Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
- Yuichiro Miyamoto, Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
- Mayuyo Mori, Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
- Takayuki Iriyama, Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
- Osamu Wada-Hiraike, Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
- Yutaka Osuga, Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan

15
Furube T, Takeuchi M, Kawakubo H, Maeda Y, Matsuda S, Fukuda K, Nakamura R, Kitagawa Y. The relationship between the esophageal endoscopic submucosal dissection technical difficulty and its intraoperative process. Esophagus 2023; 20:264-271. [PMID: 36508068 DOI: 10.1007/s10388-022-00974-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Accepted: 11/28/2022] [Indexed: 02/03/2023]
Abstract
BACKGROUND Estimating the technical difficulty of esophageal endoscopic submucosal dissection (ESD) is important to reduce complications. Endoscopic duration is one of the factors related to technical difficulty. The relationship between esophageal ESD technical difficulty and its intraoperative process was analyzed as a first step toward automatic technical difficulty recognition using artificial intelligence. METHODS This study enrolled 75 patients with superficial esophageal cancer who underwent esophageal ESD. A technical difficulty score was established, consisting of three factors: total procedure duration, en bloc resection, and complications. Additionally, technical difficulty-related factors, namely perioperative factors including the intraoperative process, were investigated. RESULTS Eight patients (11%) were allocated to the high-difficulty group, whereas 67 patients (89%) were allocated to the low-difficulty group. The intraoperative process, reflected in the prolongation of each endoscopic phase, was significantly related to technical difficulty. The area under the curve (AUC) values were higher for all phase durations than for the clinical characteristics. The submucosal dissection phase (AUC 0.902; 95% confidence interval (CI) 0.752-1.000), the marking phase (AUC 0.827; 95% CI 0.703-0.951), and the early phase, defined as the duration from the start of marking to the end of submucosal injection (AUC 0.847; 95% CI 0.701-0.992), were significantly related to technical difficulty. CONCLUSIONS The intraoperative process, particularly the early phase, was strongly associated with esophageal ESD technical difficulty. This study demonstrated the potential for automatic evaluation of esophageal ESD technical difficulty when combined with an AI-based automatic phase evaluation system.
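Turning a single phase-duration feature into a difficulty classifier, as the AUC analysis here implies, amounts to choosing a cutoff. One common choice maximizes Youden's J (sensitivity + specificity - 1); a sketch with toy durations, not the study's data:

```python
def best_threshold(durations_high, durations_low):
    """Pick the duration cutoff maximizing Youden's J for a 'duration >= t
    predicts high difficulty' rule. Returns (threshold, J)."""
    best = (None, -1.0)
    for t in sorted(set(durations_high + durations_low)):
        sens = sum(d >= t for d in durations_high) / len(durations_high)
        spec = sum(d < t for d in durations_low) / len(durations_low)
        j = sens + spec - 1
        if j > best[1]:
            best = (t, j)
    return best

# Toy early-phase durations in minutes (illustrative only).
high = [42.0, 55.0, 61.0]       # high-difficulty cases
low = [18.0, 25.0, 30.0, 44.0]  # low-difficulty cases
t, j = best_threshold(high, low)
print(t, j)  # 42.0 0.75
```

Sweeping the same cutoff over all values and plotting sensitivity against 1 - specificity traces the ROC curve whose area the abstract reports.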
Affiliation(s)
- Tasuku Furube, Department of Surgery, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, 160-8582, Japan
- Masashi Takeuchi, Department of Surgery, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, 160-8582, Japan
- Hirofumi Kawakubo, Department of Surgery, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, 160-8582, Japan
- Yusuke Maeda, Department of Surgery, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, 160-8582, Japan
- Satoru Matsuda, Department of Surgery, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, 160-8582, Japan
- Kazumasa Fukuda, Department of Surgery, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, 160-8582, Japan
- Rieko Nakamura, Department of Surgery, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, 160-8582, Japan
- Yuko Kitagawa, Department of Surgery, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, 160-8582, Japan

16
Mansur A, Saleem Z, Elhakim T, Daye D. Role of artificial intelligence in risk prediction, prognostication, and therapy response assessment in colorectal cancer: current state and future directions. Front Oncol 2023; 13:1065402. [PMID: 36761957 PMCID: PMC9905815 DOI: 10.3389/fonc.2023.1065402] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Accepted: 01/09/2023] [Indexed: 01/26/2023] Open
Abstract
Artificial Intelligence (AI) is a branch of computer science that utilizes optimization, probabilistic, and statistical approaches to analyze and make predictions based on vast amounts of data. In recent years, AI has revolutionized the field of oncology and spearheaded novel approaches in the management of various cancers, including colorectal cancer (CRC). Notably, applications of AI to diagnose, prognosticate, and predict response to therapy in CRC are gaining traction and proving to be promising. There have also been several advancements in AI technologies to help predict metastases in CRC and in Computer-Aided Detection (CAD) systems to reduce miss rates for colorectal neoplasia. This article provides a comprehensive review of the role of AI in predicting risk, prognosis, and response to therapies among patients with CRC.
Affiliation(s)
- Arian Mansur, Harvard Medical School, Boston, MA, United States
- Tarig Elhakim, Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Dania Daye, Department of Radiology, Massachusetts General Hospital, Boston, MA, United States

17
Takeuchi M, Collins T, Ndagijimana A, Kawakubo H, Kitagawa Y, Marescaux J, Mutter D, Perretta S, Hostettler A, Dallemagne B. Automatic surgical phase recognition in laparoscopic inguinal hernia repair with artificial intelligence. Hernia 2022; 26:1669-1678. [PMID: 35536371 DOI: 10.1007/s10029-022-02621-x] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 04/21/2022] [Indexed: 11/26/2022]
Abstract
BACKGROUND Because of the complexity of the intra-abdominal anatomy in the posterior approach, a longer learning curve has been observed in laparoscopic transabdominal preperitoneal (TAPP) inguinal hernia repair. Consequently, automatic tools using artificial intelligence (AI) to monitor TAPP procedures and assess learning curves are required. The primary objective of this study was to establish a deep learning-based automated surgical phase recognition system for TAPP. A secondary objective was to investigate the relationship between surgical skills and phase duration. METHODS This study enrolled 119 patients who underwent the TAPP procedure. The surgical videos were annotated (delineated in time) and split into surgical phases (preparation, peritoneal flap incision, peritoneal flap dissection, hernia dissection, mesh deployment, mesh fixation, peritoneal flap closure, and additional closure). An AI model was trained to automatically recognize surgical phases from videos. The relationship between phase duration and surgical skills was also evaluated. RESULTS A fourfold cross-validation was used to assess the performance of the AI model. The accuracy was 88.81% and 85.82% in unilateral and bilateral cases, respectively. In unilateral hernia cases, the durations of peritoneal incision (p = 0.003) and hernia dissection (p = 0.014) detected via AI were significantly shorter for experts than for trainees. CONCLUSION An automated surgical phase recognition system was established for TAPP using deep learning with high accuracy. Our AI-based system can be useful for the automatic monitoring of surgery progress, improving OR efficiency, evaluating surgical skills, and video-based surgical education. Specific phase durations detected via the AI model were significantly associated with the surgeons' learning curve.
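The fourfold cross-validation used to estimate accuracy partitions the videos so that each one is held out exactly once. A minimal index-level sketch of such a split:

```python
def k_fold_splits(n_items, k=4):
    """Partition item indices into k folds and yield (train, validation)
    index lists, one per held-out fold."""
    folds = [list(range(i, n_items, k)) for i in range(k)]
    splits = []
    for held_out in range(k):
        val = folds[held_out]
        train = [i for f in range(k) if f != held_out for i in folds[f]]
        splits.append((sorted(train), sorted(val)))
    return splits

# 8 hypothetical videos, 4 folds: each video is validated on exactly once.
splits = k_fold_splits(8, k=4)
print(splits[0])  # ([1, 2, 3, 5, 6, 7], [0, 4])
```

Averaging the per-fold accuracies then yields the single cross-validated figure reported in studies like this one; for surgical video, splitting by patient (as here) rather than by frame avoids leaking frames from one video into both sets.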
Affiliation(s)
- M Takeuchi, IRCAD, Research Institute Against Digestive Cancer (IRCAD) France, 1, place de l'Hôpital, 67091, Strasbourg, France; Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- T Collins, IRCAD, Research Institute Against Digestive Cancer (IRCAD) France, 1, place de l'Hôpital, 67091, Strasbourg, France; IRCAD, Research Institute Against Digestive Cancer (IRCAD) Africa, Kigali, Rwanda
- A Ndagijimana, IRCAD, Research Institute Against Digestive Cancer (IRCAD) Africa, Kigali, Rwanda
- H Kawakubo, Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Y Kitagawa, Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- J Marescaux, IRCAD, Research Institute Against Digestive Cancer (IRCAD) France, 1, place de l'Hôpital, 67091, Strasbourg, France; IRCAD, Research Institute Against Digestive Cancer (IRCAD) Africa, Kigali, Rwanda
- D Mutter, IRCAD, Research Institute Against Digestive Cancer (IRCAD) France, 1, place de l'Hôpital, 67091, Strasbourg, France; Department of Digestive and Endocrine Surgery, University Hospital, Strasbourg, France
- S Perretta, IRCAD, Research Institute Against Digestive Cancer (IRCAD) France, 1, place de l'Hôpital, 67091, Strasbourg, France; Department of Digestive and Endocrine Surgery, University Hospital, Strasbourg, France
- A Hostettler, IRCAD, Research Institute Against Digestive Cancer (IRCAD) France, 1, place de l'Hôpital, 67091, Strasbourg, France; IRCAD, Research Institute Against Digestive Cancer (IRCAD) Africa, Kigali, Rwanda
- B Dallemagne, IRCAD, Research Institute Against Digestive Cancer (IRCAD) France, 1, place de l'Hôpital, 67091, Strasbourg, France; Department of Digestive and Endocrine Surgery, University Hospital, Strasbourg, France

18
Quero G, Mascagni P, Kolbinger FR, Fiorillo C, De Sio D, Longo F, Schena CA, Laterza V, Rosa F, Menghi R, Papa V, Tondolo V, Cina C, Distler M, Weitz J, Speidel S, Padoy N, Alfieri S. Artificial Intelligence in Colorectal Cancer Surgery: Present and Future Perspectives. Cancers (Basel) 2022; 14:3803. [PMID: 35954466 PMCID: PMC9367568 DOI: 10.3390/cancers14153803] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 07/29/2022] [Accepted: 08/03/2022] [Indexed: 02/05/2023] Open
Abstract
Artificial intelligence (AI) and computer vision (CV) are beginning to impact medicine. While evidence on the clinical value of AI-based solutions for the screening and staging of colorectal cancer (CRC) is mounting, CV and AI applications to enhance the surgical treatment of CRC are still at an early stage. This manuscript introduces key AI concepts to a surgical audience, illustrates the fundamental steps in developing CV for surgical applications, and provides a comprehensive overview of the state of the art of AI applications for the treatment of CRC. Notably, studies show that AI can be trained to automatically recognize surgical phases and actions with high accuracy, even in complex colorectal procedures such as transanal total mesorectal excision (TaTME). In addition, AI models have been trained to interpret fluorescent signals and recognize correct dissection planes during total mesorectal excision (TME), suggesting CV as a potentially valuable tool for intraoperative decision-making and guidance. Finally, AI could have a role in surgical training by providing automatic surgical skills assessment in the operating room. While promising, these proofs of concept require further development, validation on multi-institutional data, and clinical studies to confirm AI as a valuable tool to enhance CRC treatment.
Affiliation(s)
- Giuseppe Quero
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Pietro Mascagni
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
  - Institute of Image-Guided Surgery, IHU-Strasbourg, 67000 Strasbourg, France
- Fiona R. Kolbinger
  - Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Claudio Fiorillo
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Davide De Sio
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Fabio Longo
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Carlo Alberto Schena
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Vito Laterza
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Fausto Rosa
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Roberta Menghi
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Valerio Papa
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Vincenzo Tondolo
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Caterina Cina
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Marius Distler
  - Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Juergen Weitz
  - Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Stefanie Speidel
  - National Center for Tumor Diseases (NCT), Partner Site Dresden, 01307 Dresden, Germany
- Nicolas Padoy
  - Institute of Image-Guided Surgery, IHU-Strasbourg, 67000 Strasbourg, France
  - ICube, Centre National de la Recherche Scientifique (CNRS), University of Strasbourg, 67000 Strasbourg, France
- Sergio Alfieri
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
19
Takeuchi M, Kawakubo H, Saito K, Maeda Y, Matsuda S, Fukuda K, Nakamura R, Kitagawa Y. Automated Surgical-Phase Recognition for Robot-Assisted Minimally Invasive Esophagectomy Using Artificial Intelligence. Ann Surg Oncol 2022; 29:6847-6855. [PMID: 35763234] [DOI: 10.1245/s10434-022-11996-1]
Abstract
BACKGROUND Robot-assisted minimally invasive esophagectomy (RAMIE) is increasingly performed owing to its three-dimensional field of view, image stabilization, and flexible joint function, but the procedure demands proficiency from both the surgeon and the surgical team. This study aimed to establish an artificial intelligence (AI)-based automated surgical-phase recognition system for RAMIE by analyzing robotic surgical videos. METHODS This study enrolled 31 patients who underwent RAMIE. To train the AI for automated phase recognition, the videos were annotated into the following nine surgical phases: preparation, lower mediastinal dissection, upper mediastinal dissection, azygos vein division, subcarinal lymph node dissection (LND), right recurrent laryngeal nerve (RLN) LND, left RLN LND, esophageal transection, and post-dissection to completion of surgery. An additional phase ("no step") was used to indicate video sequences in which the camera was removed from the thoracic cavity. The patients were divided into two groups, an early period (20 patients) and a late period (11 patients), and the relationship between surgical-phase duration and surgical period was assessed. RESULTS Fourfold cross-validation was applied to evaluate the performance of the model, which achieved an accuracy of 84%. The preparation (p = 0.012), post-dissection to completion of surgery (p = 0.003), and "no step" (p < 0.001) phases predicted by the AI were significantly shorter in the late period than in the early period. CONCLUSIONS A highly accurate automated surgical-phase recognition system for RAMIE was established using deep learning. Specific phase durations were significantly associated with the surgical period at the authors' institution.
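The fourfold cross-validation above is typically done at the patient level, so that frames from one patient never appear in both the training and test folds. A minimal sketch of such a split is shown below; the fold assignment and patient IDs are illustrative, not the authors' code:

```python
# Illustrative sketch only: patient-level fourfold cross-validation,
# preventing frames of one patient from leaking between train and test.
from typing import List, Tuple

def patient_folds(patient_ids: List[str], n_folds: int = 4) -> List[List[str]]:
    """Round-robin assignment of whole patients to folds."""
    folds: List[List[str]] = [[] for _ in range(n_folds)]
    for i, pid in enumerate(patient_ids):
        folds[i % n_folds].append(pid)
    return folds

def train_test_split(folds: List[List[str]], held_out: int) -> Tuple[List[str], List[str]]:
    """One cross-validation round: fold `held_out` is the test set."""
    test = folds[held_out]
    train = [pid for k, fold in enumerate(folds) if k != held_out for pid in fold]
    return train, test

patients = [f"pt{i:02d}" for i in range(31)]  # 31 patients, as in the study
folds = patient_folds(patients)
train, test = train_test_split(folds, held_out=0)
# With 31 patients, fold sizes are 8, 8, 8, 7; each round trains on the rest.
```

The frame-level model is then trained once per round and its accuracies are averaged, which is what a single reported accuracy figure such as 84% usually summarizes.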
Affiliation(s)
- Masashi Takeuchi
  - Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Hirofumi Kawakubo
  - Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Kosuke Saito
  - Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Yusuke Maeda
  - Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Satoru Matsuda
  - Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Kazumasa Fukuda
  - Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Rieko Nakamura
  - Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Yuko Kitagawa
  - Department of Surgery, Keio University School of Medicine, Tokyo, Japan
20
Shinozuka K, Turuda S, Fujinaga A, Nakanuma H, Kawamura M, Matsunobu Y, Tanaka Y, Kamiyama T, Ebe K, Endo Y, Etoh T, Inomata M, Tokuyasu T. Artificial intelligence software available for medical devices: surgical phase recognition in laparoscopic cholecystectomy. Surg Endosc 2022; 36:7444-7452. [PMID: 35266049] [PMCID: PMC9485170] [DOI: 10.1007/s00464-022-09160-7]
Abstract
Background Surgical process modeling automatically identifies surgical phases, and further improvement in recognition accuracy is expected with deep learning. Surgical tool or time-series information has been used to improve the recognition accuracy of a model; however, it is difficult to collect this information continuously during surgery. The present study aimed to develop a deep convolutional neural network (CNN) model that correctly identifies the surgical phase during laparoscopic cholecystectomy (LC). Methods We divided LC into six surgical phases (P1–P6) and one redundant phase (P0). We prepared 115 LC videos and converted them to image frames at 3 fps. Three experienced doctors labeled the surgical phases in all image frames. Our deep CNN model was trained with 106 of the 115 annotated datasets and was evaluated with the remaining datasets. By relying on both the prediction probability and the prediction frequency over a fixed period, we aimed for highly accurate surgical-phase recognition in the operating room. Results Nine full LC videos were converted into image frames and fed to our deep CNN model. The average accuracy, precision, and recall were 0.970, 0.855, and 0.863, respectively. Conclusion The deep CNN model in this study successfully identified the six surgical phases as well as the redundant phase P0, which may increase the versatility of surgical process recognition models for clinical use. We believe this model can be used as artificial intelligence software for medical devices, and recognition accuracy is expected to improve further with advances in deep learning algorithms.
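The smoothing step described in the abstract (accepting a phase label only when it is both confident and frequent over a period) can be sketched as a sliding-window vote over per-frame CNN outputs. The window size and probability threshold below are assumed values for illustration, not those of the published system:

```python
# Hedged sketch (not the published implementation): smooth per-frame phase
# predictions by voting over a sliding window, counting only frames whose
# prediction probability clears a threshold.
from collections import Counter
from typing import List, Tuple

def smooth_phases(frames: List[Tuple[str, float]],
                  window: int = 9,
                  min_prob: float = 0.5) -> List[str]:
    """frames: (predicted_phase, probability) per frame at a fixed fps.
    Returns one smoothed label per frame."""
    out: List[str] = []
    for i in range(len(frames)):
        lo = max(0, i - window // 2)
        hi = min(len(frames), i + window // 2 + 1)
        # Count only confident predictions inside the window.
        votes = Counter(ph for ph, p in frames[lo:hi] if p >= min_prob)
        # Fall back to the raw prediction if nothing in the window is confident.
        out.append(votes.most_common(1)[0][0] if votes else frames[i][0])
    return out

# A spurious single-frame "P3" inside a run of "P2" is voted away:
raw = [("P2", 0.9)] * 4 + [("P3", 0.95)] + [("P2", 0.9)] * 4
assert smooth_phases(raw) == ["P2"] * 9
```

This kind of temporal filtering suppresses single-frame flicker between phases, which matters when the output is meant to drive intraoperative displays rather than offline analysis.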
Affiliation(s)
- Ken'ichi Shinozuka
  - Faculty of Information Engineering, Department of Information and Systems Engineering, Fukuoka Institute of Technology, 1-30-1 Wajiro higashi, Higashi-ku, Fukuoka, Fukuoka, 811-0295, Japan
- Sayaka Turuda
  - Faculty of Information Engineering, Department of Information and Systems Engineering, Fukuoka Institute of Technology, 1-30-1 Wajiro higashi, Higashi-ku, Fukuoka, Fukuoka, 811-0295, Japan
- Atsuro Fujinaga
  - Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, Oita, Japan
- Hiroaki Nakanuma
  - Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, Oita, Japan
- Masahiro Kawamura
  - Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, Oita, Japan
- Yusuke Matsunobu
  - Faculty of Information Engineering, Department of Information and Systems Engineering, Fukuoka Institute of Technology, 1-30-1 Wajiro higashi, Higashi-ku, Fukuoka, Fukuoka, 811-0295, Japan
- Yuki Tanaka
  - Customer Solutions Development, Platform Technology, Olympus Technologies Asia, Olympus Corporation, Tokyo, Japan
- Toshiya Kamiyama
  - Customer Solutions Development, Platform Technology, Olympus Technologies Asia, Olympus Corporation, Tokyo, Japan
- Kohei Ebe
  - Customer Solutions Development, Platform Technology, Olympus Technologies Asia, Olympus Corporation, Tokyo, Japan
- Yuichi Endo
  - Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, Oita, Japan
- Tsuyoshi Etoh
  - Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, Oita, Japan
- Masafumi Inomata
  - Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, Oita, Japan
- Tatsushi Tokuyasu
  - Faculty of Information Engineering, Department of Information and Systems Engineering, Fukuoka Institute of Technology, 1-30-1 Wajiro higashi, Higashi-ku, Fukuoka, Fukuoka, 811-0295, Japan