101
Podlasek J, Heesch M, Podlasek R, Kilisiński W, Filip R. Real-time deep learning-based colorectal polyp localization on clinical video footage achievable with a wide array of hardware configurations. Endosc Int Open 2021; 9:E741-E748. [PMID: 33937516] [PMCID: PMC8062241] [DOI: 10.1055/a-1388-6735] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5]
Abstract
Background and study aims Several computer-assisted polyp detection systems have been proposed, but they have various limitations, from utilizing outdated neural network architectures to a requirement for multi-graphics processing unit (GPU) processing, to validating on small or non-robust datasets. To address these problems, we developed a system based on a state-of-the-art convolutional neural network architecture able to detect polyps in real time on a single GPU, and tested it on both public datasets and full clinical examination recordings. Methods The study comprised 165 colonoscopy procedure recordings and 2678 still photos gathered retrospectively. The system was trained on 81,962 polyp frames in total and then tested on footage from 42 colonoscopies and the CVC-ClinicDB, CVC-ColonDB, Hyper-Kvasir, and ETIS-Larib public datasets. Clinical videos were evaluated for polyp detection and false-positive rates, whereas the public datasets were assessed for F1 score. The system was tested for runtime performance on a wide array of hardware. Results The performance on public datasets varied from an F1 score of 0.727 to 0.942. On full examination videos, it detected 94% of the polyps found by the endoscopist with a 3% false-positive rate and identified additional polyps that were missed during initial video assessment. The system's runtime fits within real-time constraints on all but one of the hardware configurations. Conclusions We have created a polyp detection system with a post-processing pipeline that works in real time on a wide array of hardware. The system does not require extensive computational power, which could help broaden the adoption of new commercially available systems.
Affiliation(s)
- Jeremi Podlasek: Department of Technology, moretho Ltd., Manchester, United Kingdom
- Mateusz Heesch: Department of Technology, moretho Ltd., Manchester, United Kingdom; Department of Robotics and Mechatronics, AGH University of Science and Technology, Kraków, Poland
- Robert Podlasek: Department of Surgery with the Trauma and Orthopedic Division, District Hospital in Strzyżów, Strzyżów, Poland
- Wojciech Kilisiński: Department of Gastroenterology with IBD Unit, Voivodship Hospital No 2 in Rzeszów, Rzeszów, Poland
- Rafał Filip: Department of Gastroenterology with IBD Unit, Voivodship Hospital No 2 in Rzeszów, Rzeszów, Poland; Faculty of Medicine, University of Rzeszów, Rzeszów, Poland
102
Ali S, Dmitrieva M, Ghatwary N, Bano S, Polat G, Temizel A, Krenzer A, Hekalo A, Guo YB, Matuszewski B, Gridach M, Voiculescu I, Yoganand V, Chavan A, Raj A, Nguyen NT, Tran DQ, Huynh LD, Boutry N, Rezvy S, Chen H, Choi YH, Subramanian A, Balasubramanian V, Gao XW, Hu H, Liao Y, Stoyanov D, Daul C, Realdon S, Cannizzaro R, Lamarque D, Tran-Nguyen T, Bailey A, Braden B, East JE, Rittscher J. Deep learning for detection and segmentation of artefact and disease instances in gastrointestinal endoscopy. Med Image Anal 2021; 70:102002. [PMID: 33657508] [DOI: 10.1016/j.media.2021.102002] [Citation(s) in RCA: 53] [Impact Index Per Article: 13.3]
Abstract
The Endoscopy Computer Vision Challenge (EndoCV) is a crowd-sourcing initiative to address pressing problems in developing reliable computer-aided detection and diagnosis systems for endoscopy, and to suggest a pathway for clinical translation of technologies. Whilst endoscopy is a widely used diagnostic and treatment tool for hollow organs, endoscopists face several core challenges, mainly: 1) the presence of multi-class artefacts that hinder visual interpretation, and 2) difficulty in identifying subtle precancerous precursors and cancer abnormalities. Artefacts often affect the robustness of deep learning methods applied to the gastrointestinal tract organs, as they can be confused with tissue of interest. The EndoCV2020 challenges were designed to address research questions within these remits. In this paper, we present a summary of the methods developed by the top 17 teams and provide an objective comparison of state-of-the-art methods and methods designed by the participants for two sub-challenges: i) artefact detection and segmentation (EAD2020), and ii) disease detection and segmentation (EDD2020). Multi-center, multi-organ, multi-class, and multi-modal clinical endoscopy datasets were compiled for both the EAD2020 and EDD2020 sub-challenges. The out-of-sample generalization ability of detection algorithms was also evaluated. Whilst most teams focused on accuracy improvements, only a few methods hold credibility for clinical usability. The best performing teams provided solutions to tackle class imbalance and variability in size, origin, modality, and occurrence by exploring data augmentation, data fusion, and optimal class thresholding techniques.
Affiliation(s)
- Sharib Ali: Institute of Biomedical Engineering and Big Data Institute, Old Road Campus, University of Oxford, Oxford, UK; Oxford NIHR Biomedical Research Centre, Oxford, UK
- Mariia Dmitrieva: Institute of Biomedical Engineering and Big Data Institute, Old Road Campus, University of Oxford, Oxford, UK
- Noha Ghatwary: Computer Engineering Department, Arab Academy for Science and Technology, Alexandria, Egypt
- Sophia Bano: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Gorkem Polat: Graduate School of Informatics, Middle East Technical University, Ankara, Turkey
- Alptekin Temizel: Graduate School of Informatics, Middle East Technical University, Ankara, Turkey
- Adrian Krenzer: Department of Artificial Intelligence and Knowledge Systems, University of Würzburg, Germany
- Amar Hekalo: Department of Artificial Intelligence and Knowledge Systems, University of Würzburg, Germany
- Yun Bo Guo: School of Engineering, University of Central Lancashire, UK
- Mourad Gridach: Computer Science HIT, Ibn Zohr University, Agadir, Morocco
- Vishnusai Yoganand: Mimyk Medical Simulations Pvt Ltd, Indian Institute of Science, Bengaluru, India
- Arnav Chavan: Indian Institute of Technology (ISM), Dhanbad, India
- Aryan Raj: Indian Institute of Technology (ISM), Dhanbad, India
- Nhan T Nguyen: Medical Imaging Department, Vingroup Big Data Institute (VinBDI), Hanoi, Vietnam
- Dat Q Tran: Medical Imaging Department, Vingroup Big Data Institute (VinBDI), Hanoi, Vietnam
- Le Duy Huynh: EPITA Research and Development Laboratory (LRDE), F-94270 Le Kremlin-Bicêtre, France
- Nicolas Boutry: EPITA Research and Development Laboratory (LRDE), F-94270 Le Kremlin-Bicêtre, France
- Shahadate Rezvy: School of Science and Technology, Middlesex University London, UK
- Haijian Chen: Department of Computer Science, School of Informatics, Xiamen University, China
- Yoon Ho Choi: Department of Health Sciences & Technology, Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Xiaohong W Gao: School of Science and Technology, Middlesex University London, UK
- Hongyu Hu: Shanghai Jiaotong University, Shanghai, China
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Christian Daul: CRAN UMR 7039, University of Lorraine, CNRS, Nancy, France
- Dominique Lamarque: Université de Versailles St-Quentin en Yvelines, Hôpital Ambroise Paré, France
- Terry Tran-Nguyen: Translational Gastroenterology Unit, Experimental Medicine Division, John Radcliffe Hospital, University of Oxford, Oxford, UK
- Adam Bailey: Translational Gastroenterology Unit, Experimental Medicine Division, John Radcliffe Hospital, University of Oxford, Oxford, UK; Oxford NIHR Biomedical Research Centre, Oxford, UK
- Barbara Braden: Translational Gastroenterology Unit, Experimental Medicine Division, John Radcliffe Hospital, University of Oxford, Oxford, UK; Oxford NIHR Biomedical Research Centre, Oxford, UK
- James E East: Translational Gastroenterology Unit, Experimental Medicine Division, John Radcliffe Hospital, University of Oxford, Oxford, UK; Oxford NIHR Biomedical Research Centre, Oxford, UK
- Jens Rittscher: Institute of Biomedical Engineering and Big Data Institute, Old Road Campus, University of Oxford, Oxford, UK
103
Jha D, Ali S, Hicks S, Thambawita V, Borgli H, Smedsrud PH, de Lange T, Pogorelov K, Wang X, Harzig P, Tran MT, Meng W, Hoang TH, Dias D, Ko TH, Agrawal T, Ostroukhova O, Khan Z, Atif Tahir M, Liu Y, Chang Y, Kirkerød M, Johansen D, Lux M, Johansen HD, Riegler MA, Halvorsen P. A comprehensive analysis of classification methods in gastrointestinal endoscopy imaging. Med Image Anal 2021; 70:102007. [PMID: 33740740] [DOI: 10.1016/j.media.2021.102007] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8]
Abstract
Gastrointestinal (GI) endoscopy has been an active field of research, motivated by the large number of highly lethal GI cancers. Early GI cancer precursors are often missed during endoscopic surveillance, and this high miss rate is a critical bottleneck. Lack of attentiveness during tiring procedures and the requirement for training are among the contributing factors. An automatic GI disease classification system can help reduce such risks by flagging suspicious frames and lesions. GI endoscopy involves surveillance of multiple organs, so there is a need for methods that generalize to various endoscopic findings. In this realm, we present a comprehensive analysis of the Medico GI challenges: the Medical Multimedia Task at MediaEval 2017, the Medico Multimedia Task at MediaEval 2018, and the BioMedia ACM MM Grand Challenge 2019. These challenges are an initiative to set up a benchmark for different computer vision methods applied to multi-class endoscopic images and to promote new approaches that could reliably be used in clinics. We report the performance of 21 participating teams over three consecutive years, provide a detailed analysis of the methods used by the participants, highlight the challenges and shortcomings of the current approaches, and assess their credibility for use in clinical settings. Our analysis revealed that the participants improved the maximum Matthews correlation coefficient (MCC) from 82.68% in the 2017 challenge to 93.98% in 2018 and 95.20% in 2019, with a significant increase in computational speed over consecutive years.
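For readers unfamiliar with the headline metric, the Matthews correlation coefficient is computed from the four confusion-matrix counts; a minimal sketch (the numbers in the example are illustrative, not taken from the challenges):

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from confusion-matrix counts.

    Ranges over [-1, 1]; returns 0 by convention when any marginal
    sum is zero and the denominator vanishes.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

# Example: 95 true positives, 90 true negatives, 10 false positives,
# 5 false negatives.
print(mcc(95, 90, 10, 5))  # ~0.851
```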
Affiliation(s)
- Debesh Jha: SimulaMet, Oslo, Norway; UiT The Arctic University of Norway, Tromsø, Norway
- Sharib Ali: Department of Engineering Science, University of Oxford, Oxford, UK; Oxford NIHR Biomedical Research Centre, Oxford, UK
- Steven Hicks: SimulaMet, Oslo, Norway; Oslo Metropolitan University, Oslo, Norway
- Hanna Borgli: SimulaMet, Oslo, Norway; University of Oslo, Oslo, Norway
- Pia H Smedsrud: SimulaMet, Oslo, Norway; University of Oslo, Oslo, Norway; Augere Medical AS, Oslo, Norway
- Thomas de Lange: SimulaMet, Oslo, Norway; Augere Medical AS, Oslo, Norway; Sahlgrenska University Hospital, Mölndal, Sweden; Bærum Hospital, Vestre Viken, Oslo, Norway
- Olga Ostroukhova: Research Institute of Multiprocessor Computation Systems, Russia
- Zeshan Khan: School of Computer Science, National University of Computer and Emerging Sciences, Karachi Campus, Pakistan
- Muhammad Atif Tahir: School of Computer Science, National University of Computer and Emerging Sciences, Karachi Campus, Pakistan
- Yang Liu: Hong Kong Baptist University, Hong Kong
- Yuan Chang: Beijing University of Posts and Telecommunications, China
- Dag Johansen: UiT The Arctic University of Norway, Tromsø, Norway
- Mathias Lux: Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria
- Pål Halvorsen: SimulaMet, Oslo, Norway; Oslo Metropolitan University, Oslo, Norway
104
Shen P, Li WZ, Li JX, Pei ZC, Luo YX, Mu JB, Li W, Wang XM. Real-time use of a computer-aided system for polyp detection during colonoscopy, an ambispective study. J Dig Dis 2021; 22:256-262. [PMID: 33742774] [DOI: 10.1111/1751-2980.12985] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5]
Abstract
OBJECTIVE This study aimed to evaluate ambispectively the effect of a real-time computer-aided detection (CADe) system on the number of polyps per colonoscopy (PPC), the number of adenomas per colonoscopy (APC), the polyp detection rate (PDR), and the adenoma detection rate (ADR). METHODS Eighty-five videos marked using the CADe system, together with the unmarked videos, were reviewed by two senior endoscopists. Polyps detected in the marked and unmarked videos were recounted in parallel. Additionally, 128 consecutive patients were enrolled for a prospective evaluation using a standard colonoscopy or the CADe monitor, alternating every 2 weeks. The PPC, APC, PDR, and ADR were compared between the two groups. RESULTS The total numbers of polyps reported in the unmarked and marked videos were 73 and 88, respectively (mean PPC 0.86 vs 1.04, P = 0.001). The proportion of polyps detected per colonoscopy increased by 20.5%. Among the 128 prospectively enrolled patients, 186 polyps were detected. The mean PPC was higher with the CADe colonoscopy than with the standard colonoscopy (1.66 vs 1.13, P = 0.039). The PDR of the CADe colonoscopy was also significantly higher than that of the standard colonoscopy (78.1% vs 56.3%, P = 0.008). CONCLUSION A real-time CADe system significantly increases the PDR and PPC, even in a setting with a high baseline rate of polyp detection.
Affiliation(s)
- Ping Shen: Graduate School, Tianjin Medical University, Tianjin, China; Tianjin Clinical Medicine Research Centre for Integrated Chinese and Western Medicine Acute Abdomen, Tianjin Hospital of Integrated Chinese and Western Medicine Nankai Hospital, Tianjin, China
- Wei Zhi Li: Tianjin Clinical Medicine Research Centre for Integrated Chinese and Western Medicine Acute Abdomen, Tianjin Hospital of Integrated Chinese and Western Medicine Nankai Hospital, Tianjin, China
- Jia Xin Li: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Zheng Cun Pei: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Yu Xuan Luo: Tianjin YuJin Artificial Intelligence Medical Technology Co., Ltd., Tianjin, China
- Jin Bao Mu: Tianjin YuJin Artificial Intelligence Medical Technology Co., Ltd., Tianjin, China
- Wen Li: Department of Endoscopy, Tianjin Union Medical Center, Tianjin, China
- Xi Mo Wang: Tianjin Clinical Medicine Research Centre for Integrated Chinese and Western Medicine Acute Abdomen, Tianjin Hospital of Integrated Chinese and Western Medicine Nankai Hospital, Tianjin, China
105
Cao C, Wang R, Yu Y, Zhang H, Yu Y, Sun C. Gastric polyp detection in gastroscopic images using deep neural network. PLoS One 2021; 16:e0250632. [PMID: 33909671] [PMCID: PMC8081222] [DOI: 10.1371/journal.pone.0250632] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8]
Abstract
This paper presents results on detecting gastric polyps in gastroscopic images with a deep learning object detection method. Gastric polyps vary in size, and the main difficulty is that small polyps are hard to distinguish from the background. We propose a feature extraction and fusion module and combine it with the YOLOv3 network to form our network. This method performs better than other methods in the detection of small polyps because it fuses the semantic information of high-level feature maps with low-level feature maps, which aids the detection of small polyps. In this work, we use a dataset of gastric polyps created by ourselves, containing 1433 training images and 508 validation images, on which we train and validate our network. In comparison with other polyp detection methods, our method shows a significant improvement in precision, recall rate, F1 score, and F2 score, reaching 91.6%, 86.2%, 88.8%, and 87.2%, respectively.
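The abstract does not spell out the fusion module's internals; as a generic illustration of the idea it credits (merging a coarse, semantically rich map into a fine-grained one, in the style of FPN/YOLOv3 necks), here is a minimal PyTorch sketch with illustrative channel sizes, not the paper's exact module:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuseBlock(nn.Module):
    """Upsample a coarse, semantically rich feature map and merge it
    with a fine-grained low-level map. Channel sizes are illustrative.
    """
    def __init__(self, high_ch: int, low_ch: int, out_ch: int):
        super().__init__()
        self.reduce = nn.Conv2d(high_ch, out_ch, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(out_ch + low_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1),
        )

    def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
        high = self.reduce(high)
        # Match the low-level map's spatial size before concatenation.
        high = F.interpolate(high, size=low.shape[2:], mode="nearest")
        return self.fuse(torch.cat([high, low], dim=1))

# A 13x13 semantic map fused into a 52x52 low-level map.
fused = FuseBlock(1024, 256, 256)(torch.randn(1, 1024, 13, 13),
                                  torch.randn(1, 256, 52, 52))
print(fused.shape)  # torch.Size([1, 256, 52, 52])
```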
Affiliation(s)
- Chanting Cao: Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Ruilin Wang: Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Yao Yu: Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Hui Zhang: Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Ying Yu: Beijing An Zhen Hospital, Beijing, China
- Changyin Sun: School of Automation, Southeast University, Nanjing, China
106
Ozyoruk KB, Gokceler GI, Bobrow TL, Coskun G, Incetan K, Almalioglu Y, Mahmood F, Curto E, Perdigoto L, Oliveira M, Sahin H, Araujo H, Alexandrino H, Durr NJ, Gilbert HB, Turan M. EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos. Med Image Anal 2021; 71:102058. [PMID: 33930829] [DOI: 10.1016/j.media.2021.102058] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5]
Abstract
Deep learning techniques hold promise for dense topography reconstruction and pose estimation methods for endoscopic videos. However, currently available datasets do not support effective quantitative benchmarking. In this paper, we introduce a comprehensive endoscopic SLAM dataset consisting of 3D point cloud data for six porcine organs, capsule and standard endoscopy recordings, and synthetically generated data, as well as recordings of a phantom colon made with a conventional endoscope in clinical use, with computed tomography (CT) scan ground truth. A Panda robotic arm, two commercially available capsule endoscopes, three conventional endoscopes with different camera properties, two high-precision 3D scanners, and a CT scanner were employed to collect data from eight ex-vivo porcine gastrointestinal (GI) tract organs and a silicone colon phantom model. In total, 35 sub-datasets are provided with 6D pose ground truth for the ex-vivo part: 18 sub-datasets for the colon, 12 for the stomach, and 5 for the small intestine, four of which contain polyp-mimicking elevations created by an expert gastroenterologist. To verify the applicability of this data to real clinical systems, we recorded a video sequence with a state-of-the-art colonoscope from a full-representation silicone colon phantom. Synthetic capsule endoscopy frames from the stomach, colon, and small intestine with both depth and pose annotations are included to facilitate the study of simulation-to-real transfer learning algorithms. Additionally, we propose Endo-SfMLearner, an unsupervised monocular depth and pose estimation method that combines residual networks with a spatial attention module to direct the network's focus toward distinguishable and highly textured tissue regions. The proposed approach uses a brightness-aware photometric loss to improve robustness under the fast frame-to-frame illumination changes commonly seen in endoscopic videos. To exemplify the use-case of the EndoSLAM dataset, the performance of Endo-SfMLearner is extensively compared with the state of the art: SC-SfMLearner, Monodepth2, and SfMLearner. The code and the link to the dataset are publicly available at https://github.com/CapsuleEndoscope/EndoSLAM. A video demonstrating the experimental setup and procedure is accessible as Supplementary Video 1.
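The abstract does not give the exact form of the brightness-aware photometric loss. One common way to make photometric consistency robust to illumination change is to affinely align the warped frame's brightness statistics before taking the residual; the sketch below assumes that variant and is not the paper's exact formulation:

```python
import torch

def brightness_aware_l1(warped: torch.Tensor, target: torch.Tensor,
                        eps: float = 1e-6) -> torch.Tensor:
    """L1 photometric loss after per-image affine brightness alignment.

    `warped` is the source frame warped into the target view. Aligning
    its mean/std to the target's before the residual makes the loss
    insensitive to global illumination shifts between frames.
    Shapes: (B, C, H, W). Illustrative variant only.
    """
    dims = (1, 2, 3)
    mu_w = warped.mean(dim=dims, keepdim=True)
    mu_t = target.mean(dim=dims, keepdim=True)
    std_w = warped.std(dim=dims, keepdim=True)
    std_t = target.std(dim=dims, keepdim=True)
    aligned = (warped - mu_w) / (std_w + eps) * std_t + mu_t
    return (aligned - target).abs().mean()

loss = brightness_aware_l1(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
print(loss.item())
```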
Affiliation(s)
- Taylor L Bobrow: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Gulfize Coskun: Institute of Biomedical Engineering, Bogazici University, Turkey
- Kagan Incetan: Institute of Biomedical Engineering, Bogazici University, Turkey
- Faisal Mahmood: Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Cancer Data Science, Dana Farber Cancer Institute, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Eva Curto: Institute for Systems and Robotics, University of Coimbra, Portugal
- Luis Perdigoto: Institute for Systems and Robotics, University of Coimbra, Portugal
- Marina Oliveira: Institute for Systems and Robotics, University of Coimbra, Portugal
- Hasan Sahin: Institute of Biomedical Engineering, Bogazici University, Turkey
- Helder Araujo: Institute for Systems and Robotics, University of Coimbra, Portugal
- Henrique Alexandrino: Faculty of Medicine, Clinical Academic Center of Coimbra, University of Coimbra, Coimbra, Portugal
- Nicholas J Durr: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Hunter B Gilbert: Department of Mechanical and Industrial Engineering, Louisiana State University, Baton Rouge, LA, USA
- Mehmet Turan: Institute of Biomedical Engineering, Bogazici University, Turkey
107
Liu X, Guo X, Liu Y, Yuan Y. Consolidated domain adaptive detection and localization framework for cross-device colonoscopic images. Med Image Anal 2021; 71:102052. [PMID: 33895616] [DOI: 10.1016/j.media.2021.102052] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8]
Abstract
Automatic polyp detection has proven crucial for improving diagnostic accuracy and reducing colorectal cancer mortality during the precancerous stage. However, the performance of deep neural networks may degrade severely when they are deployed on polyp data from a distinct domain. This domain distinction can be caused by different scanners, hospitals, or imaging protocols. In this paper, we propose a consolidated domain adaptive detection and localization framework that effectively bridges the domain gap between different colonoscopic datasets, consisting of two parts: pixel-level adaptation and hierarchical feature-level adaptation. For the pixel-level adaptation part, we propose a Gaussian Fourier Domain Adaptation (GFDA) method that samples matched source and target image pairs from Gaussian distributions and then unifies their styles via low-level spectrum replacement, which reduces the appearance-level domain discrepancy of cross-device polyp datasets without distorting their contents. The hierarchical feature-level adaptation part comprises a Hierarchical Attentive Adaptation (HAA) module to minimize the domain discrepancy in high-level semantics and an Iconic Concentrative Adaptation (ICA) module to perform reliable instance alignment. These two modules are regularized by a Generalized Consistency Regularizer (GCR) that keeps their domain predictions consistent. We further extend our framework to the polyp localization task and present a Centre Besiegement (CB) loss for better location optimization. Experimental results show that our framework outperforms other domain adaptation detectors by a large margin on the detection task, while achieving a state-of-the-art recall rate of 87.5% on the localization task. The source code is available at https://github.com/CityU-AIM-Group/ConsolidatedPolypDA.
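Setting GFDA's Gaussian pair sampling aside, the low-level spectrum replacement step resembles standard Fourier domain adaptation: keep the source image's phase (content) and splice in the target's low-frequency amplitudes (style). A minimal numpy sketch of that generic step, not of GFDA itself:

```python
import numpy as np

def fourier_style_transfer(src: np.ndarray, tgt: np.ndarray,
                           beta: float = 0.01) -> np.ndarray:
    """Replace the low-frequency amplitude spectrum of `src` with that
    of `tgt`, keeping the phase of `src` (its content) intact.
    `beta` controls the size of the swapped low-frequency band.
    """
    fft_src = np.fft.fft2(src, axes=(0, 1))
    fft_tgt = np.fft.fft2(tgt, axes=(0, 1))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)
    # Centre the spectra so the low frequencies sit in the middle.
    amp_src = np.fft.fftshift(amp_src, axes=(0, 1))
    amp_tgt = np.fft.fftshift(amp_tgt, axes=(0, 1))
    h, w = src.shape[:2]
    b = int(np.floor(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_tgt[ch - b:ch + b + 1, cw - b:cw + b + 1]
    amp_src = np.fft.ifftshift(amp_src, axes=(0, 1))
    out = np.fft.ifft2(amp_src * np.exp(1j * pha_src), axes=(0, 1))
    return np.real(out)

src = np.random.rand(128, 128, 3)
tgt = np.random.rand(128, 128, 3)
print(fourier_style_transfer(src, tgt, beta=0.05).shape)  # (128, 128, 3)
```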
Affiliation(s)
- Xinyu Liu: Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Xiaoqing Guo: Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Yajie Liu: Department of Radiation Oncology, Peking University Shenzhen Hospital, Shenzhen, China
- Yixuan Yuan: Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
109
Real-time automatic polyp detection in colonoscopy using feature enhancement module and spatiotemporal similarity correlation unit. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102503] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3]
110
Jha D, Ali S, Tomar NK, Johansen HD, Johansen D, Rittscher J, Riegler MA, Halvorsen P. Real-Time Polyp Detection, Localization and Segmentation in Colonoscopy Using Deep Learning. IEEE Access 2021; 9:40496-40510. [PMID: 33747684] [PMCID: PMC7968127] [DOI: 10.1109/access.2021.3063716] [Citation(s) in RCA: 85] [Impact Index Per Article: 21.3]
Abstract
Computer-aided detection, localisation, and segmentation methods can help improve colonoscopy procedures. Even though many methods have been built to tackle automatic detection and segmentation of polyps, benchmarking of state-of-the-art methods remains an open problem, owing to the growing number of researched computer vision methods that can be applied to polyp datasets. Benchmarking novel methods can give direction to the development of automated polyp detection and segmentation, and it ensures that results produced in the community are reproducible and fairly compared. In this paper, we benchmark several recent state-of-the-art methods using Kvasir-SEG, an open-access dataset of colonoscopy images, for polyp detection, localisation, and segmentation, evaluating both accuracy and speed. Whilst most methods in the literature have competitive accuracy, we show that the proposed ColonSegNet achieved a better trade-off, with an average precision of 0.8000, a mean IoU of 0.8100, and the fastest speed of 180 frames per second for the detection and localisation task. Likewise, ColonSegNet achieved a competitive Dice coefficient of 0.8206 and the best average speed of 182.38 frames per second for the segmentation task. Our comprehensive comparison with various state-of-the-art methods reveals the importance of benchmarking deep learning methods for automated real-time polyp identification and delineation, which could potentially transform current clinical practice and minimise miss-detection rates.
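For reference, the IoU and Dice scores quoted here can be computed directly from binary masks; a minimal sketch with made-up masks:

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """Intersection over union and Dice coefficient for boolean masks.

    The two always move together: Dice = 2*IoU / (1 + IoU).
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)
    return iou, dice

pred = np.zeros((64, 64)); pred[10:40, 10:40] = 1
gt = np.zeros((64, 64)); gt[15:45, 15:45] = 1
print(iou_and_dice(pred, gt))  # (~0.53, ~0.69)
```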
Affiliation(s)
- Debesh Jha: SimulaMet, 0167 Oslo, Norway; Department of Engineering Science, Big Data Institute, University of Oxford, Oxford OX3 7XF, U.K.
- Sharib Ali: Department of Engineering Science, Big Data Institute, University of Oxford, Oxford OX3 7XF, U.K.; Oxford NIHR Biomedical Research Centre, Oxford OX4 2PG, U.K.
- Håvard D. Johansen: Department of Computer Science, UiT – The Arctic University of Norway, 9037 Tromsø, Norway
- Dag Johansen: Department of Computer Science, UiT – The Arctic University of Norway, 9037 Tromsø, Norway
- Jens Rittscher: Department of Engineering Science, Big Data Institute, University of Oxford, Oxford OX3 7XF, U.K.; Oxford NIHR Biomedical Research Centre, Oxford OX4 2PG, U.K.
- Pål Halvorsen: SimulaMet, 0167 Oslo, Norway; Department of Computer Science, Oslo Metropolitan University, 0167 Oslo, Norway
111
Wang S, Cong Y, Zhu H, Chen X, Qu L, Fan H, Zhang Q, Liu M. Multi-Scale Context-Guided Deep Network for Automated Lesion Segmentation With Endoscopy Images of Gastrointestinal Tract. IEEE J Biomed Health Inform 2021; 25:514-525. [PMID: 32750912] [DOI: 10.1109/jbhi.2020.2997760] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0]
Abstract
Accurate lesion segmentation based on endoscopy images is a fundamental task for the automated diagnosis of gastrointestinal (GI) tract diseases. Previous studies usually use hand-crafted features to represent endoscopy images, while feature definition and lesion segmentation are treated as two standalone tasks. Due to the possible heterogeneity between features and segmentation models, these methods often result in sub-optimal performance. Several fully convolutional networks have recently been developed to jointly perform feature learning and model training for GI tract disease diagnosis. However, they generally ignore local spatial details of endoscopy images, as down-sampling operations (e.g., pooling and convolutional striding) may cause irreversible loss of image spatial information. To this end, we propose a multi-scale context-guided deep network (MCNet) for end-to-end lesion segmentation of GI tract endoscopy images, where both global and local contexts are captured as guidance for model training. Specifically, one global subnetwork is designed to extract the global structure and high-level semantic context of each input image. We then design two cascaded local subnetworks based on the output feature maps of the global subnetwork, aiming to capture both local appearance information and relatively high-level semantic information in a multi-scale manner. The feature maps learned by the three subnetworks are then fused for the subsequent task of lesion segmentation. We evaluated the proposed MCNet on 1,310 endoscopy images from the public EndoVis-Ab and CVC-ClinicDB datasets for abnormality segmentation and polyp segmentation, respectively. Experimental results demonstrate that MCNet achieves [Formula: see text] and [Formula: see text] mean intersection over union (mIoU) on the two datasets, respectively, outperforming several state-of-the-art approaches in automated lesion segmentation with GI tract endoscopy images.
112
Tran ST, Cheng CH, Nguyen TT, Le MH, Liu DG. TMD-Unet: Triple-Unet with Multi-Scale Input Features and Dense Skip Connection for Medical Image Segmentation. Healthcare (Basel) 2021; 9:54. [PMID: 33419018] [PMCID: PMC7825313] [DOI: 10.3390/healthcare9010054] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8]
Abstract
Deep learning is one of the most effective approaches for medical image processing applications, and network models are increasingly studied for medical image segmentation challenges. The encoder-decoder structure has achieved great success, in particular the Unet architecture, which serves as a baseline for medical image segmentation networks. Traditional Unet and Unet-based networks still have the limitation that they cannot fully exploit the output features of the convolutional units in each node. In this study, we propose a new network model named TMD-Unet, which has three main enhancements in comparison with Unet: (1) modifying the interconnection of the network node, (2) using dilated convolution instead of standard convolution, and (3) integrating multi-scale input features on the input side of the model and applying a dense skip connection instead of a regular skip connection. Our experiments were performed on seven datasets covering many different medical image modalities, including colonoscopy, electron microscopy (EM), dermoscopy, computed tomography (CT), and magnetic resonance imaging (MRI). The segmentation applications implemented in the paper include EM, nuclei, polyp, skin lesion, left atrium, spleen, and liver segmentation. The Dice scores of our proposed model reached 96.43% for liver segmentation, 95.51% for spleen segmentation, 92.65% for polyp segmentation, 94.11% for EM segmentation, 92.49% for nuclei segmentation, 91.81% for left atrium segmentation, and 87.27% for skin lesion segmentation. The experimental results show that the proposed model is superior to the popular models for all seven applications, demonstrating its high generality.
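Of the three enhancements, the dilated-convolution swap is easy to illustrate: with dilation d, a 3x3 kernel spans 2d+1 pixels, so the receptive field grows with no extra parameters or downsampling. A minimal PyTorch sketch follows (channel and dilation choices are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Two 3x3 dilated convolutions as a drop-in for a standard U-Net
    conv block. Padding equal to the dilation keeps the spatial size.
    """
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

out = DilatedBlock(3, 32)(torch.randn(1, 3, 128, 128))
print(out.shape)  # torch.Size([1, 32, 128, 128]) -- spatial size preserved
```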
Affiliation(s)
- Song-Toan Tran: Program of Electrical and Communications Engineering, Feng Chia University, Taichung 40724, Taiwan; Department of Electrical and Electronics, Tra Vinh University, Tra Vinh 87000, Vietnam
- Ching-Hwa Cheng: Department of Electronic Engineering, Feng Chia University, Taichung 40724, Taiwan
- Thanh-Tuan Nguyen: Program of Electrical and Communications Engineering, Feng Chia University, Taichung 40724, Taiwan
- Minh-Hai Le: Program of Electrical and Communications Engineering, Feng Chia University, Taichung 40724, Taiwan; Department of Electrical and Electronics, Tra Vinh University, Tra Vinh 87000, Vietnam
- Don-Gey Liu: Program of Electrical and Communications Engineering, Feng Chia University, Taichung 40724, Taiwan; Department of Electronic Engineering, Feng Chia University, Taichung 40724, Taiwan
114
Sánchez-Peralta LF, Pagador JB, Sánchez-Margallo FM. Artificial Intelligence for Colorectal Polyps in Colonoscopy. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_308-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5]
115
Chen H, Sung JJY. Potentials of AI in medical image analysis in Gastroenterology and Hepatology. J Gastroenterol Hepatol 2021; 36:31-38. [PMID: 33140875] [DOI: 10.1111/jgh.15327] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8]
Abstract
Artificial intelligence (AI) technology is advancing in a big wave that carries potentially huge impact for the field of medicine. Because gastroenterology and hepatology is a specialty that relies heavily on diagnostic imaging, endoscopy, and histopathology, AI technology promises to improve the quality and consistency of patient care. In this review, we elucidate the development of machine learning methods, especially the visual representation mechanisms of deep learning in recognition tasks. Various AI image-analysis applications in endoscopy, radiology, and pathology within gastroenterology and hepatology are covered, revealing the enormous potential of AI to assist diagnosis, prognosis, and treatment. We also discuss the promises as well as pitfalls of AI in medical image analysis and point out future research directions.
Affiliation(s)
- Hao Chen: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong
- Joseph J Y Sung: Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Shatin, Hong Kong
116
Artificial Intelligence in Medicine. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_163-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
117
Marzullo A, Moccia S, Calimeri F, De Momi E. AIM in Endoscopy Procedures. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_164-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
118
Mohapatra S, Swarnkar T, Mishra M, Al-Dabass D, Mascella R. Deep learning in gastroenterology. Handbook of Computational Intelligence in Biomedical Engineering and Healthcare 2021:121-149. [DOI: 10.1016/b978-0-12-822260-7.00001-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
119
Yang X, Wei Q, Zhang C, Zhou K, Kong L, Jiang W. Colon Polyp Detection and Segmentation Based on Improved MRCNN. IEEE Trans Instrum Meas 2021; 70:1-10. [DOI: 10.1109/tim.2020.3038011] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8]
120
Wittenberg T, Raithel M. Artificial Intelligence-Based Polyp Detection in Colonoscopy: Where Have We Been, Where Do We Stand, and Where Are We Headed? Visc Med 2020; 36:428-438. [PMID: 33447598] [PMCID: PMC7768101] [DOI: 10.1159/000512438] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0]
Abstract
BACKGROUND In the past, image-based computer-assisted diagnosis and detection systems were driven mainly by the field of radiology, and more specifically mammography. Nevertheless, with the availability of large image data collections (the "Big Data" phenomenon) combined with developments in artificial intelligence (AI), particularly so-called deep convolutional neural networks, computer-assisted detection of adenomas and polyps in real time during screening colonoscopy has become feasible. SUMMARY Against this backdrop, the scope of this contribution is a brief overview of the evolution of AI-based detection of adenomas and polyps during colonoscopy over the past 35 years: starting with the age of "handcrafted geometrical features" together with simple classification schemes, moving through the development and use of "texture-based features" and machine learning approaches, and ending with current developments in deep learning using convolutional neural networks. In parallel, the need for large-scale clinical data to develop such methods is discussed, up to the commercially available AI products for automated detection of polyps (adenomas and benign neoplastic lesions). Finally, we take a short look into the future of AI methods within colonoscopy. KEY MESSAGES Research on image-based lesion detection in colonoscopy data has a 35-year history. Milestones such as the Paris nomenclature, texture features, big data, and deep learning were essential for the development and availability of commercial AI-based systems for polyp detection.
121
de Almeida Thomaz V, Sierra-Franco CA, Raposo AB. Training data enhancements for improving colonic polyp detection using deep convolutional neural networks. Artif Intell Med 2020; 111:101988. [PMID: 33461694] [DOI: 10.1016/j.artmed.2020.101988] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6]
Abstract
BACKGROUND In recent years, the most relevant results in polyp detection have been achieved through deep learning techniques. However, the most common obstacles in this field are small datasets with reduced numbers of samples and a lack of data variability. This paper describes a method to reduce this limitation and improve polyp detection results using publicly available colonoscopy datasets. METHODS To address this issue, we increased the number and variety of images in the original dataset by adding polyps to its images. The developed algorithm performs a rigorous selection of the best region within the image to receive the polyp. This procedure preserves the realistic features of the images while creating more diverse samples for training purposes. Our method allows copying existing polyps to new non-polypoid target regions, and we also developed a strategy to generate new and more varied polyps through generative adversarial networks. Hence, the approach enriches the training data, automatically creating new samples with appropriate labels. RESULTS We applied the proposed data enhancement to a colonic polyp dataset and assessed its effectiveness through a Faster R-CNN detection model. Performance results show improvements in polyp detection while reducing the false-negative rate. The experimental results also show better recall metrics in comparison with both the original training set and other studies in the literature. CONCLUSION We demonstrate that our proposed method has the potential to increase the data variability and number of samples in a reduced polyp dataset, improving the polyp detection rate and recall values. These results open new possibilities for advancing the study and implementation of new methods to improve computer-assisted medical image analysis.
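The paper's automatic region selection is more rigorous than this, but the core copy-paste step (blending an existing polyp crop and its mask into a pre-validated location, and emitting the matching label) can be sketched as follows; the function name and the feathering choice are illustrative assumptions:

```python
import numpy as np

def paste_polyp(image: np.ndarray, polyp: np.ndarray, mask: np.ndarray,
                top: int, left: int, feather: float = 0.8) -> tuple:
    """Alpha-blend a polyp patch into `image` at (top, left) and return
    the augmented image plus the new binary mask for the label.

    `polyp` is an (h, w, 3) crop and `mask` its (h, w) binary mask. The
    target position is assumed pre-validated (inside the lumen, away
    from highlights); the paper selects it automatically.
    """
    h, w = mask.shape
    out = image.copy().astype(np.float32)
    alpha = (mask.astype(np.float32) * feather)[..., None]
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = alpha * polyp + (1 - alpha) * region
    new_mask = np.zeros(image.shape[:2], dtype=np.uint8)
    new_mask[top:top + h, left:left + w] = mask
    return out.astype(np.uint8), new_mask
```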
Affiliation(s)
- Victor de Almeida Thomaz: Pontifical Catholic University of Rio de Janeiro, Rua Marquês de São Vicente, 225, Gávea, Rio de Janeiro, Brazil
- Cesar A Sierra-Franco: Tecgraf Institute of Technical-Scientific Software Development, Rua Marquês de São Vicente, 225, Gávea, Rio de Janeiro, Brazil
- Alberto B Raposo: Tecgraf Institute of Technical-Scientific Software Development, Rua Marquês de São Vicente, 225, Gávea, Rio de Janeiro, Brazil
122
Qadir HA, Shin Y, Solhusvik J, Bergsland J, Aabakken L, Balasingham I. Toward real-time polyp detection using fully CNNs for 2D Gaussian shapes prediction. Med Image Anal 2020; 68:101897. [PMID: 33260111] [DOI: 10.1016/j.media.2020.101897] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6]
Abstract
To decrease the colon polyp miss rate during colonoscopy, a real-time detection system with high accuracy is needed. Recently, there have been many efforts to develop models for real-time polyp detection, but work is still required to develop real-time detection algorithms with reliable results. We use single-shot, feed-forward, fully convolutional neural networks (F-CNN) to develop an accurate real-time polyp detection system. F-CNNs are usually trained on binary masks for object segmentation. We propose the use of 2D Gaussian masks instead of binary masks to enable these models to detect different types of polyps more effectively and efficiently and to reduce the number of false positives. The experimental results showed that the proposed 2D Gaussian masks are efficient for detection of flat and small polyps with unclear boundaries between the background and the polyp. The masks provide a better training signal for discriminating polyps from polyp-like false positives. The proposed method achieved state-of-the-art results on two polyp datasets: on the ETIS-Larib dataset we achieved 86.54% recall, 86.12% precision, and an 86.33% F1-score, and on CVC-ColonDB we achieved 91% recall, 88.35% precision, and an 89.65% F1-score.
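As an illustration of the central idea, a binary polyp mask can be replaced by a 2D Gaussian whose centre and spread follow the polyp's extent; a numpy sketch, with a parameterization that is an assumption rather than the paper's exact choice:

```python
import numpy as np

def gaussian_mask(h: int, w: int, cy: float, cx: float,
                  sy: float, sx: float) -> np.ndarray:
    """2D Gaussian training target centred on a polyp at (cy, cx) with
    per-axis spreads (sy, sx), e.g. derived from its bounding box.
    Peaks at 1.0 at the centre and decays smoothly toward the boundary,
    unlike a hard binary mask.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-(((ys - cy) ** 2) / (2 * sy ** 2)
                    + ((xs - cx) ** 2) / (2 * sx ** 2)))

# A polyp with bounding box (y0, x0, y1, x1) = (40, 60, 80, 120):
y0, x0, y1, x1 = 40, 60, 80, 120
mask = gaussian_mask(256, 256, (y0 + y1) / 2, (x0 + x1) / 2,
                     (y1 - y0) / 4, (x1 - x0) / 4)
print(mask.max(), mask.shape)  # 1.0 (at the centre), (256, 256)
```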
Affiliation(s)
- Hemin Ali Qadir: Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway; OmniVision Technologies Norway AS, Oslo, Norway
- Younghak Shin: Department of Computer Engineering, Mokpo National University, Mokpo, Korea
- Lars Aabakken: Department of Transplantation Medicine, University of Oslo, Oslo, Norway
- Ilangko Balasingham: Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway
123
Asif M, Song H, Chen L, Yang J, Frangi AF. Intrinsic layer based automatic specular reflection detection in endoscopic images. Comput Biol Med 2020; 128:104106. [PMID: 33221640] [DOI: 10.1016/j.compbiomed.2020.104106] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4]
Abstract
Endoscopic images are used to observe the internal structure of the human body. Specular reflections (SR) are mostly a consequence of strong light and appear as bright regions in endoscopic images, which degrades the performance of minimally invasive surgery. In this study, we propose a novel method for automatic SR detection based on intrinsic image layer separation (IILS). The proposed method consists of three steps: normalization of the image, extraction of high-gradient areas, and separation of SR on the basis of a color model. An image melding technique is then utilized to reconstruct the reflected pixels. The experiments were conducted on 912 endoscopic images from CVC-EndoSceneStill. The results for accuracy, sensitivity, specificity, precision, Jaccard index, Dice coefficient, standard deviation, and pixel count difference show that the detection performance of the proposed method outperforms other state-of-the-art methods. The evaluation of the proposed IILS-based SR detection demonstrates that our method obtains better qualitative and quantitative results than other methods, and it can serve as a promising preprocessing step for further analysis of endoscopic images.
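The intrinsic-layer separation itself is more involved than this; as a simple color-model baseline for the same task, specular highlights are often flagged as nearly colorless, very bright pixels in HSV space. A sketch (thresholds are illustrative and dataset dependent, not the paper's method):

```python
import cv2
import numpy as np

def specular_mask(bgr: np.ndarray, sat_max: int = 40,
                  val_min: int = 230) -> np.ndarray:
    """Flag candidate specular-reflection pixels: nearly colorless
    (low saturation) and very bright (high value) in HSV space.
    Baseline only; the paper's intrinsic-layer method is different.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    s, v = hsv[..., 1], hsv[..., 2]
    mask = ((s <= sat_max) & (v >= val_min)).astype(np.uint8) * 255
    # Dilate slightly to cover highlight fringes before reconstruction.
    return cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)

# img = cv2.imread("frame.png"); m = specular_mask(img)
# restored = cv2.inpaint(img, m, 5, cv2.INPAINT_TELEA)
```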
Affiliation(s)
- Hong Song: Beijing Institute of Technology, Beijing, China
- Lei Chen: Beijing Institute of Technology, Beijing, China
- Jian Yang: Beijing Institute of Technology, Beijing, China
124
Pacal I, Karaboga D, Basturk A, Akay B, Nalbantoglu U. A comprehensive review of deep learning in colon cancer. Comput Biol Med 2020; 126:104003. [DOI: 10.1016/j.compbiomed.2020.104003] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6]
125
Chadebecq F, Vasconcelos F, Mazomenos E, Stoyanov D. Computer Vision in the Surgical Operating Room. Visc Med 2020; 36:456-462. [PMID: 33447601] [DOI: 10.1159/000511934] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4]
Abstract
Background Multiple types of surgical cameras are used in modern surgical practice and provide a rich visual signal that surgeons use to visualize the clinical site and make clinical decisions. This signal can also be used by artificial intelligence (AI) methods to support the identification of instruments, structures, or activities, both in real time during procedures and postoperatively for analytics and understanding of surgical processes. Summary In this paper, we provide a succinct perspective on the use of AI, and especially computer vision, to power solutions for the surgical operating room (OR). The synergy between data availability and technical advances in computational power and AI methodology has led to rapid developments in the field and promising advances. Key Messages With the increasing availability of surgical video sources and the convergence of technologies for video storage, processing, and understanding, we believe clinical solutions and products leveraging vision will become an important component of modern surgical capabilities. However, both technical and clinical challenges remain to be overcome before vision-based approaches can be used efficiently in the clinic.
Affiliation(s)
- François Chadebecq: Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
- Francisco Vasconcelos: Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
- Evangelos Mazomenos: Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
- Danail Stoyanov: Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
126
Sachdeva N, Klopukh M, Clair RS, Hahn WE. Using conditional generative adversarial networks to reduce the effects of latency in robotic telesurgery. J Robot Surg 2020; 15:635-641. [PMID: 33025374] [DOI: 10.1007/s11701-020-01149-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4]
Abstract
The introduction of surgical robots has brought about advancements in surgical procedures. The applications of remote telesurgery range from building medical clinics in underprivileged areas to placing robots abroad in military hot-spots where accessibility and diversity of medical experience may be limited. Poor wireless connectivity may result in a prolonged delay, referred to as latency, between a surgeon's input and the action a robot takes. In surgery, any micro-delay can injure a patient severely and, in some cases, result in fatality. One way to increase safety is to mitigate the effects of latency using deep-learning-aided computer vision. While current surgical robots use calibrated sensors to measure the position of the arms and tools, in this work we present a purely optical approach that measures the tool position in relation to the patient's tissues. This research aimed to produce a neural network that allows a robot to detect its own mechanical manipulator arms. A conditional generative adversarial network (cGAN) was trained on 1107 frames of a mock gastrointestinal robotic surgery from the 2015 EndoVis Instrument Challenge and corresponding hand-drawn labels for each frame. When run on new testing data, the network generated near-perfect labels of the input images, visually consistent with the hand-drawn labels, in 299 ms. These accurately generated labels can then be used as simplified identifiers for the robot to track its own controlled tools. These results show potential for conditional GANs as a reaction mechanism by which the robot can detect when its arms move outside the operating area within a patient. This system allows for more accurate monitoring of the position of surgical instruments in relation to the patient's tissue, increasing the safety measures that are integral to successful telesurgery systems.
Affiliation(s)
- Neil Sachdeva: Florida Atlantic University, Boca Raton, FL, USA; Pine Crest School, Science Research, Fort Lauderdale, USA
127
Lin D, Li Y, Nwe TL, Dong S, Oo ZM. RefineU-Net: Improved U-Net with progressive global feedbacks and residual attention guided local refinement for medical image segmentation. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2020.07.013] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6]
128
Abstract
Colorectal cancer is one of the leading causes of cancer death worldwide, but early diagnosis greatly improves survival rates. The success of deep learning has also benefited this clinical field. When a deep learning model is trained, it is optimized with respect to the selected loss function. In this work, we consider two networks (U-Net and LinkNet) and two backbones (VGG-16 and DenseNet121). We analyzed the influence of seven loss functions and used principal component analysis (PCA) to determine whether a PCA-based decomposition allows defining the coefficients of a non-redundant primal loss function that can outperform the individual loss functions and different linear combinations of them. The eigenloss is defined as a linear combination of the individual losses using the elements of the eigenvector as coefficients. Empirical results show that the proposed eigenloss improves on the general performance of the individual loss functions and outperforms other linear combinations when LinkNet is used, showing potential for its application to polyp segmentation problems.
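A sketch of the PCA step as described: evaluate the individual losses per training sample, run PCA on the resulting matrix, and take the leading eigenvector's elements as combination coefficients. The sign handling and normalization below are assumptions, not the paper's stated choices:

```python
import numpy as np

def eigenloss_coefficients(loss_matrix: np.ndarray) -> np.ndarray:
    """Derive linear-combination coefficients for several loss functions
    from the first principal component of their observed values.

    `loss_matrix` has shape (n_samples, n_losses): each row holds the
    individual loss values evaluated on one training sample.
    """
    centered = loss_matrix - loss_matrix.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    lead = eigvecs[:, np.argmax(eigvals)]
    lead = np.abs(lead)       # assumption: keep coefficients positive
    return lead / lead.sum()  # assumption: normalize to sum to 1

coeffs = eigenloss_coefficients(np.random.rand(100, 7))
# eigenloss(sample) = sum(coeffs[i] * loss_i(sample))
print(coeffs, coeffs.sum())
```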
129
Sánchez-Peralta LF, Bote-Curiel L, Picón A, Sánchez-Margallo FM, Pagador JB. Deep learning to find colorectal polyps in colonoscopy: A systematic literature review. Artif Intell Med 2020; 108:101923. [PMID: 32972656] [DOI: 10.1016/j.artmed.2020.101923] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8]
Abstract
Colorectal cancer has a high incidence rate worldwide, but its early detection significantly increases the survival rate. Colonoscopy is the gold-standard procedure for diagnosis and removal of colorectal lesions with the potential to evolve into cancer, and computer-aided detection systems can help gastroenterologists increase the adenoma detection rate, one of the main indicators of colonoscopy quality and a predictor of colorectal cancer prevention. The recent success of deep learning approaches in computer vision has also reached this field and has boosted the number of proposed methods for polyp detection, localization, and segmentation. Through a systematic search, 35 works were retrieved. The current systematic review provides an analysis of these methods, stating advantages and disadvantages of the different categories used; reviews seven publicly available datasets of colonoscopy images; analyses the metrics used for reporting; and identifies future challenges and recommendations. Convolutional neural networks are the most used architecture, together with an important presence of data augmentation strategies, mainly based on image transformations and the use of patches. End-to-end methods are preferred over hybrid methods, with a rising tendency. As for detection and localization tasks, the most used metric for reporting is the recall, while Intersection over Union is widely used in segmentation. One of the major concerns is the difficulty of fair comparison and reproducibility of methods. Despite the organization of challenges, there is still a need for a common validation framework based on a large, annotated, and publicly available database that also includes the most convenient metrics for reporting results. Finally, it is also important to highlight that future efforts should focus on proving the clinical value of deep-learning-based methods by increasing the adenoma detection rate.
Affiliation(s)
- Luis Bote-Curiel, Jesús Usón Minimally Invasive Surgery Centre, Ctra. N-521, km 41.8, 10071 Cáceres, Spain
- Artzai Picón, Tecnalia, Parque Científico y Tecnológico de Bizkaia, C/ Astondo bidea, Edificio 700, 48160 Derio, Spain
- J Blas Pagador, Jesús Usón Minimally Invasive Surgery Centre, Ctra. N-521, km 41.8, 10071 Cáceres, Spain
130

Fast colonic polyp detection using a Hamilton–Jacobi approach to non-dominated sorting. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.102035]
131

Patel K, Li K, Tao K, Wang Q, Bansal A, Rastogi A, Wang G. A comparative study on polyp classification using convolutional neural networks. PLoS One 2020; 15:e0236452. [PMID: 32730279 PMCID: PMC7392235 DOI: 10.1371/journal.pone.0236452]
Abstract
Colorectal cancer is the third most common cancer diagnosed in both men and women in the United States. Most colorectal cancers start as a growth on the inner lining of the colon or rectum, called a polyp. Not all polyps are cancerous, but some can develop into cancer. Early detection and recognition of the type of polyp are critical to prevent cancer and change outcomes. However, visual classification of polyps is challenging due to the varying illumination conditions of endoscopy, variation in texture and appearance, and overlapping morphology between polyp types. More importantly, evaluation of polyp patterns by gastroenterologists is subjective, leading to poor agreement among observers. Deep convolutional neural networks have proven very successful in object classification across various object categories. In this work, we compare the performance of state-of-the-art general object classification models for polyp classification. We trained a total of six CNN models end-to-end using a dataset of 157 video sequences covering two types of polyps: hyperplastic and adenomatous. Our results demonstrate that state-of-the-art CNN models can classify polyps with an accuracy comparable to or better than that reported among gastroenterologists. The results of this study can guide future research in polyp classification.
Affiliation(s)
- Krushi Patel, School of Engineering, University of Kansas, Lawrence, KS, United States of America
- Kaidong Li, School of Engineering, University of Kansas, Lawrence, KS, United States of America
- Ke Tao, The First Hospital of Jilin University, Changchun, China
- Quan Wang, The First Hospital of Jilin University, Changchun, China
- Ajay Bansal, The University of Kansas Medical Center, Kansas City, KS, United States of America
- Amit Rastogi, The University of Kansas Medical Center, Kansas City, KS, United States of America
- Guanghui Wang, School of Engineering, University of Kansas, Lawrence, KS, United States of America
132

Wang W, Tian J, Zhang C, Luo Y, Wang X, Li J. An improved deep learning approach and its applications on colonic polyp images detection. BMC Med Imaging 2020; 20:83. [PMID: 32698839 PMCID: PMC7374886 DOI: 10.1186/s12880-020-00482-3]
Abstract
Background Colonic polyps are more likely to be cancerous, especially those with a large diameter, occurring in large numbers, or showing atypical hyperplasia. If colonic polyps are not treated at an early stage, they are likely to develop into colon cancer. Colonoscopy is easily limited by the operator's experience; factors such as inexperience and visual fatigue directly affect diagnostic accuracy. In cooperation with Hunan Children's Hospital, we proposed an improved deep learning approach with global average pooling (GAP) for assisted diagnosis in colonoscopy. Our approach can prompt endoscopists in real time to pay attention to polyps that might otherwise be missed, improving the detection rate, reducing missed diagnoses, and improving the efficiency of medical diagnosis. Methods We selected colonoscopy images from the gastrointestinal endoscopy room of Hunan Children's Hospital to form the colonic polyp datasets and applied deep learning-based image classification to them. The classic networks we used are VGGNets and ResNets; by introducing global average pooling, we derived the improved approaches VGGNets-GAP and ResNets-GAP. Results The accuracies of all models on the datasets exceed 98%, with TPR and TNR above 96% and 98%, respectively. In addition, the VGGNets-GAP networks not only achieve high classification accuracy but also have far fewer parameters than the original VGGNets. Conclusions The experimental results show that the proposed approach is effective for the automatic detection of colonic polyps. Our method is innovative in two respects: (1) it improves the detection accuracy of colonic polyps, and (2) it reduces memory consumption and makes the model lightweight. Compared with the original VGG networks, the parameters of our VGG19-GAP network are greatly reduced.
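The architectural change the abstract describes, swapping the fully connected head of a VGG network for global average pooling, is easy to sketch. The following PyTorch snippet is a minimal, hypothetical illustration; the class name, the number of classes, and the input size are assumptions rather than the authors' published implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class VGG19GAP(nn.Module):
    """VGG19 convolutional trunk with the fully connected head replaced by
    global average pooling plus one linear classifier (a sketch of the
    VGGNets-GAP idea, not the authors' code)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = models.vgg19(weights=None).features  # conv layers only
        self.gap = nn.AdaptiveAvgPool2d(1)     # global average pooling to 1x1
        self.fc = nn.Linear(512, num_classes)  # VGG19 ends with 512 channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = self.gap(x).flatten(1)
        return self.fc(x)

model = VGG19GAP(num_classes=2)  # e.g. polyp vs. normal
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```

Most of VGG19's parameters live in its fully connected head, so removing that head is what makes the GAP variant so much lighter, consistent with the parameter reduction the abstract reports.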
Affiliation(s)
- Wei Wang, School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Jinge Tian, School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Chengwen Zhang, School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Yanhong Luo, Hunan Children's Hospital, Changsha, 410000, China
- Xin Wang, School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Ji Li, School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
133

Guo Y, Bernal J, Matuszewski BJ. Polyp Segmentation with Fully Convolutional Deep Neural Networks - Extended Evaluation Study. J Imaging 2020; 6:69. [PMID: 34460662 PMCID: PMC8321061 DOI: 10.3390/jimaging6070069]
Abstract
Analysis of colonoscopy images plays a significant role in the early detection of colorectal cancer. Automated tissue segmentation can be useful for two of the most relevant clinical target applications, lesion detection and classification, thereby providing important means to make both processes more accurate and robust. To automate video colonoscopy analysis, computer vision and machine learning methods have been utilized and shown to enhance polyp detectability and segmentation objectivity. This paper describes a polyp segmentation algorithm based on fully convolutional network models, originally developed for the Endoscopic Vision Gastrointestinal Image Analysis (GIANA) polyp segmentation challenges. The key contribution of the paper is an extended evaluation of the proposed architecture, comparing it against established image segmentation benchmarks using several metrics with cross-validation on the GIANA training dataset. Different experiments are described, including examination of various network configurations, values of design parameters, data augmentation approaches, and polyp characteristics. The reported results demonstrate the significance of data augmentation and of careful selection of the method's design parameters. The proposed method delivers state-of-the-art results with near real-time performance. The described solution was instrumental in securing the top spot in the polyp segmentation sub-challenge at the 2017 GIANA challenge and second place in the standard-resolution segmentation task at the 2018 GIANA challenge.
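The abstract credits data augmentation with much of the performance gain. As a concrete, hypothetical example of what such a pipeline can look like for endoscopic frames (the specific transforms and their parameters are assumptions for illustration, not the paper's recipe):

```python
from torchvision import transforms

# A hypothetical augmentation pipeline for colonoscopy frames; the choice of
# transforms and their parameters are assumptions for illustration only.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),                     # mirror left-right
    transforms.RandomVerticalFlip(),                       # mirror top-bottom
    transforms.RandomRotation(degrees=90),                 # arbitrary orientation
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # illumination changes
    transforms.RandomResizedCrop(288, scale=(0.8, 1.0)),   # scale/crop variation
    transforms.ToTensor(),
])
```

For segmentation, the geometric transforms must be applied identically to the image and its ground-truth mask, which is why real pipelines usually operate on (image, mask) pairs rather than on images alone.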
Affiliation(s)
- Yunbo Guo, Computer Vision and Machine Learning (CVML) Group, School of Engineering, University of Central Lancashire, Preston PR1 2HE, UK
- Jorge Bernal, Image Sequence Evaluation laboratory, Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona, Spain
- Bogdan J. Matuszewski, Computer Vision and Machine Learning (CVML) Group, School of Engineering, University of Central Lancashire, Preston PR1 2HE, UK
134

Thambawita V, Jha D, Hammer HL, Johansen HD, Johansen D, Halvorsen P, Riegler MA. An Extensive Study on Cross-Dataset Bias and Evaluation Metrics Interpretation for Machine Learning Applied to Gastrointestinal Tract Abnormality Classification. ACM Trans Comput Healthc 2020. [DOI: 10.1145/3386295]
Abstract
Precise and efficient automated identification of gastrointestinal (GI) tract diseases can help doctors treat more patients and improve the rate of disease detection and identification. Currently, automatic analysis of diseases in the GI tract is a hot topic in both computer science and medical journals. Nevertheless, the evaluation of such automatic analyses is often incomplete or simply wrong. Algorithms are often only tested on small and biased datasets, and cross-dataset evaluations are rarely performed. A clear understanding of evaluation metrics and of how machine learning models behave across datasets is crucial to bring research in the field to a new quality level. Toward this goal, we present comprehensive evaluations of five distinct machine learning models, using global features and deep neural networks, that can classify 16 different key types of GI tract conditions, including pathological findings, anatomical landmarks, polyp removal conditions, and normal findings, from images captured by common GI tract examination instruments. In our evaluation, we introduce performance hexagons built from six performance metrics (recall, precision, specificity, accuracy, F1-score, and the Matthews correlation coefficient) to demonstrate how to determine the real capabilities of models rather than evaluating them shallowly. Furthermore, we perform cross-dataset evaluations using different datasets for training and testing. With these cross-dataset evaluations, we demonstrate the challenge of actually building a generalizable model that could be used across different hospitals. Our experiments clearly show that more sophisticated performance metrics and evaluation methods are needed to obtain reliable models, rather than depending on evaluations of splits of the same dataset; that is, the performance metrics should always be interpreted together rather than relying on a single metric.
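All six metrics that make up such a performance hexagon can be computed from a single confusion matrix. A minimal sketch for the binary case follows; the labels are placeholder data, and for the paper's 16-class setting the sklearn scores would need an averaging mode such as average="macro", with specificity computed per class.

```python
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             matthews_corrcoef, precision_score, recall_score)

y_true = [0, 1, 1, 0, 1, 0, 1, 1]  # placeholder ground-truth labels
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]  # placeholder predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
hexagon = {
    "recall":      recall_score(y_true, y_pred),
    "precision":   precision_score(y_true, y_pred),
    "specificity": tn / (tn + fp),  # sklearn has no direct specificity scorer
    "accuracy":    accuracy_score(y_true, y_pred),
    "f1":          f1_score(y_true, y_pred),
    "mcc":         matthews_corrcoef(y_true, y_pred),
}
for name, value in hexagon.items():
    print(f"{name}: {value:.3f}")
```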
Affiliation(s)
- Debesh Jha, SimulaMet and UiT The Arctic University of Norway, Tromsø, Norway
- Dag Johansen, UiT The Arctic University of Norway, Tromsø, Norway
- Pål Halvorsen, SimulaMet and Oslo Metropolitan University, Oslo, Norway
135

Bisschops R, Dinis-Ribeiro M. Resect and discard: Is it ready or time to shift gear? Endosc Int Open 2020; 8:E924-E926. [PMID: 32617396 PMCID: PMC7297617 DOI: 10.1055/a-1178-9711]
Affiliation(s)
- Raf Bisschops, University Hospital Gasthuisberg, Gastroenterology, Belgium
136

Development of a Deep Learning-Based Algorithm to Detect the Distal End of a Surgical Instrument. Appl Sci (Basel) 2020. [DOI: 10.3390/app10124245]
Abstract
This work aims to develop an algorithm that detects the distal end of a surgical instrument using deep learning-based object detection. We employed nine video recordings of carotid endarterectomies for training and testing. We obtained regions of interest (ROIs; 32 × 32 pixels) at the distal end of the surgical instrument in the video images as supervised data and applied data augmentation to these ROIs. We employed a You Only Look Once version 2 (YOLOv2)-based convolutional neural network as the network model for training. The detectors were validated to evaluate average detection precision. The proposed algorithm uses the central coordinates of the bounding boxes predicted by YOLOv2. Using the test data, we calculated the detection rate. The average precision (AP) for the ROIs without data augmentation was 0.4272 ± 0.108. The AP with data augmentation, 0.7718 ± 0.0824, was significantly higher. The detection rates, counting predictions whose calculated center coordinates fell within central regions of 8 × 8 pixels and 16 × 16 pixels, were 0.6100 ± 0.1014 and 0.9653 ± 0.0177, respectively. We expect the proposed algorithm to be useful for the analysis of surgical records.
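The detection-rate criterion described here, counting a prediction as correct when the predicted center falls within a small pixel tolerance of the annotated tip, reduces to a simple box test. A minimal, hypothetical sketch follows; the coordinate convention and function names are assumptions.

```python
def center_within_tolerance(pred_center, true_center, box_size=16):
    """Return True if the predicted center (x, y) lies inside a square of
    side `box_size` pixels centered on the annotated instrument tip."""
    half = box_size / 2.0
    dx = abs(pred_center[0] - true_center[0])
    dy = abs(pred_center[1] - true_center[1])
    return dx <= half and dy <= half

def detection_rate(preds, truths, box_size=16):
    """Fraction of annotated tips matched within the pixel tolerance."""
    hits = sum(center_within_tolerance(p, t, box_size)
               for p, t in zip(preds, truths))
    return hits / len(truths)

# Example: a prediction 5 px right and 3 px below the annotation.
print(center_within_tolerance((105, 203), (100, 200), box_size=16))  # True
print(center_within_tolerance((105, 203), (100, 200), box_size=8))   # False
```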
137

Guo X, Yuan Y. Semi-supervised WCE image classification with adaptive aggregated attention. Med Image Anal 2020; 64:101733. [PMID: 32574987 DOI: 10.1016/j.media.2020.101733]
Abstract
Accurate abnormality classification in wireless capsule endoscopy (WCE) images is crucial for early gastrointestinal (GI) tract cancer diagnosis and treatment, but it remains challenging due to limited annotated data, large intra-class variance, and high inter-class similarity. To tackle these difficulties, we propose a novel semi-supervised learning method with an Adaptive Aggregated Attention (AAA) module for automatic WCE image classification. First, a novel deformation-field-based image preprocessing strategy is proposed to remove the black background and circular boundaries in WCE images. We then propose a synergic network to learn discriminative image features, consisting of two branches: an abnormal region estimator (the first branch) and an abnormal information distiller (the second branch). The first branch utilizes the proposed AAA module to capture global dependencies and incorporate context information to highlight the most meaningful regions, while the second branch focuses on these attention regions for accurate and robust abnormality classification. Finally, the two branches are jointly optimized by minimizing the proposed discriminative angular (DA) loss and a Jensen-Shannon divergence (JS) loss on labeled as well as unlabeled data. Comprehensive experiments have been conducted on the public CAD-CAP WCE dataset. The proposed method achieves 93.17% overall accuracy in fourfold cross-validation, verifying its effectiveness for WCE image classification. The source code is available at https://github.com/Guo-Xiaoqing/SSL_WCE.
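The Jensen-Shannon divergence term that lets such a method learn from unlabeled images is straightforward to write down. A minimal PyTorch sketch follows; the clamping constant and the use of raw logits as inputs are assumptions, and this is the generic JS divergence rather than the authors' exact loss.

```python
import torch
import torch.nn.functional as F

def js_divergence(p_logits: torch.Tensor, q_logits: torch.Tensor) -> torch.Tensor:
    """Jensen-Shannon divergence between two predicted class distributions,
    e.g. the outputs of two network branches on the same unlabeled image."""
    p = F.softmax(p_logits, dim=1)
    q = F.softmax(q_logits, dim=1)
    m = 0.5 * (p + q)

    def kl(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # KL(a || b), clamped for numerical stability (assumed epsilon).
        return (a * (a.clamp_min(1e-8).log() - b.clamp_min(1e-8).log())).sum(dim=1)

    return (0.5 * (kl(p, m) + kl(q, m))).mean()

# Example: two branches disagreeing mildly on a batch of 4 images, 3 classes.
branch_a = torch.randn(4, 3)
branch_b = torch.randn(4, 3)
print(js_divergence(branch_a, branch_b))
```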
Affiliation(s)
- Xiaoqing Guo, Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Yixuan Yuan, Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
138

Mostafiz R, Rahman MM, Uddin MS. Gastrointestinal polyp classification through empirical mode decomposition and neural features. SN Appl Sci 2020. [DOI: 10.1007/s42452-020-2944-4]
139

Ciuti G, Skonieczna-Żydecka K, Marlicz W, Iacovacci V, Liu H, Stoyanov D, Arezzo A, Chiurazzi M, Toth E, Thorlacius H, Dario P, Koulaouzidis A. Frontiers of Robotic Colonoscopy: A Comprehensive Review of Robotic Colonoscopes and Technologies. J Clin Med 2020; 9:1648. [PMID: 32486374 PMCID: PMC7356873 DOI: 10.3390/jcm9061648]
Abstract
Flexible colonoscopy remains the prime means of screening for colorectal cancer (CRC) and the gold standard of all population-based screening pathways around the world. Almost 60% of CRC deaths could be prevented with screening. However, colonoscopy attendance rates are affected by discomfort, fear of pain, and embarrassment or loss of control during the procedure. Moreover, the emergence and global threat of new communicable diseases might seriously affect the functioning of contemporary centres performing gastrointestinal endoscopy. Innovative solutions are needed: artificial intelligence (AI) and physical robotics will contribute decisively to the future of healthcare services. The translation of robotic technologies from traditional surgery to minimally invasive endoscopic interventions is an emerging field, mainly challenged by the tough requirements for miniaturization. Pioneering approaches to robotic colonoscopy were reported in the nineties, with the appearance of inchworm-like devices. Since then, robotic colonoscopes with assistive functionalities have become commercially available. Research prototypes promise enhanced accessibility and flexibility for future therapeutic interventions, even via autonomous or robot-assisted agents such as robotic capsules. Furthermore, pairing such endoscopic systems with AI-enabled image analysis and recognition methods promises enhanced diagnostic yield. Drawing on a multidisciplinary team of engineers and endoscopists, this paper provides a contemporary and highly pictorial critical review of robotic colonoscopes, giving clinicians and researchers a glimpse of the major changes and challenges that lie ahead.
Affiliation(s)
- Gastone Ciuti, The BioRobotics Institute, Scuola Superiore Sant’Anna, 56025 Pisa, Italy; Department of Excellence in Robotics & AI, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Karolina Skonieczna-Żydecka, Department of Human Nutrition and Metabolomics, Pomeranian Medical University in Szczecin, 71-460 Szczecin, Poland
- Wojciech Marlicz, Department of Gastroenterology, Pomeranian Medical University in Szczecin, 71-252 Szczecin, Poland; Endoklinika sp. z o.o., 70-535 Szczecin, Poland
- Veronica Iacovacci, The BioRobotics Institute, Scuola Superiore Sant’Anna, 56025 Pisa, Italy; Department of Excellence in Robotics & AI, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Hongbin Liu, School of Biomedical Engineering & Imaging Sciences, Faculty of Life Sciences and Medicine, King’s College London, London SE1 7EH, UK
- Danail Stoyanov, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Alberto Arezzo, Department of Surgical Sciences, University of Torino, 10126 Torino, Italy
- Marcello Chiurazzi, The BioRobotics Institute, Scuola Superiore Sant’Anna, 56025 Pisa, Italy; Department of Excellence in Robotics & AI, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Ervin Toth, Department of Gastroenterology, Skåne University Hospital, Lund University, 20502 Malmö, Sweden
- Henrik Thorlacius, Department of Clinical Sciences, Section of Surgery, Lund University, 20502 Malmö, Sweden
- Paolo Dario, The BioRobotics Institute, Scuola Superiore Sant’Anna, 56025 Pisa, Italy; Department of Excellence in Robotics & AI, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
140

Yang J, Chang L, Li S, He X, Zhu T. WCE polyp detection based on novel feature descriptor with normalized variance locality-constrained linear coding. Int J Comput Assist Radiol Surg 2020; 15:1291-1302. [PMID: 32447521 DOI: 10.1007/s11548-020-02190-3]
Abstract
PURPOSE Wireless capsule endoscopy (WCE) has become an effective tool for detecting digestive tract diseases. To further improve the accuracy and efficiency of computer-aided diagnosis systems in the detection of intestinal polyps, a novel algorithm for WCE polyp detection is proposed in this paper. METHODS First, exploiting the rich color information of endoscopic images, a novel local color texture feature called the histogram of local color difference (LCDH) is proposed for describing endoscopic images. A codebook acquisition method based on positive samples is also proposed, generating more balanced visual words from the LCDH features. Furthermore, building on the locality-constrained linear coding (LLC) algorithm, a normalized variance regularization term is introduced to form the NVLLC algorithm, which accounts for the degree of dispersion between the k nearest visual words and the features in the approximate coding phase. The final image representations are obtained using the spatial pyramid matching model. Finally, a support vector machine is employed to classify the polyp images. RESULTS A WCE dataset comprising 500 polyp and 500 normal images was adopted to evaluate the proposed method. Experimental results indicate that the classification accuracy, sensitivity and specificity reached 96.00%, 95.80% and 96.20%, respectively, outperforming traditional methods. CONCLUSION A novel method for WCE polyp detection is developed using the LCDH feature descriptor and the NVLLC coding scheme; it achieves promising performance and can be applied in computer-assisted clinical diagnosis of intestinal diseases.
Affiliation(s)
- Jianjun Yang, College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, People's Republic of China
- Liping Chang, College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, People's Republic of China
- Sheng Li, College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, People's Republic of China
- Xiongxiong He, College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, People's Republic of China
- Tingwei Zhu, College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, People's Republic of China
141

Real-time detection of colon polyps during colonoscopy using deep learning: systematic validation with four independent datasets. Sci Rep 2020; 10:8379. [PMID: 32433506 PMCID: PMC7239848 DOI: 10.1038/s41598-020-65387-1]
Abstract
We developed and validated a deep-learning algorithm for polyp detection. We used YOLOv2 to develop the algorithm for automatic polyp detection on 8,075 images (503 polyps). We validated the algorithm using three datasets: A, 1,338 images with 1,349 polyps; B, the open, public CVC-Clinic database with 612 polyp images; and C, 7 colonoscopy videos with 26 polyps. To reduce the number of false positives in the video analysis, median filtering was applied. We then tested the algorithm's performance on 15 unaltered colonoscopy videos (dataset D). For datasets A and B, the per-image polyp detection sensitivity was 96.7% and 90.2%, respectively. For the video study (dataset C), the per-image polyp detection sensitivity was 87.7%. False-positive rates were 12.5% without a median filter and 6.3% with a median filter with a window size of 13. For dataset D, the sensitivity and false-positive rate were 89.3% and 8.3%, respectively. The algorithm detected all 38 polyps that the endoscopists detected, plus 7 additional polyps. The operation speed was 67.16 frames per second. The automatic polyp detection algorithm exhibited good performance, as evidenced by its high detection sensitivity and rapid processing. Our algorithm may help endoscopists improve polyp detection.
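The temporal median filter used here to suppress single-frame false positives is a one-line operation on the per-frame detection signal. A minimal sketch under assumed inputs (synthetic confidences and a hypothetical decision threshold; only the window size of 13 comes from the abstract):

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(0)

# Hypothetical per-frame detection confidences: mostly background noise with
# a sustained detection between frames 100 and 140.
confidences = rng.random(300) * 0.3
confidences[100:140] = 0.9

# Median filtering with an odd window (the study reports window size 13)
# removes isolated spikes while sustained detections survive.
smoothed = medfilt(confidences, kernel_size=13)
detected = smoothed > 0.5  # assumed decision threshold

print(detected[:50].sum(), detected[100:140].sum())  # 0 spurious, 40 kept
```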
142

Poon CCY, Jiang Y, Zhang R, Lo WWY, Cheung MSH, Yu R, Zheng Y, Wong JCT, Liu Q, Wong SH, Mak TWC, Lau JYW. AI-doscopist: a real-time deep-learning-based algorithm for localising polyps in colonoscopy videos with edge computing devices. NPJ Digit Med 2020; 3:73. [PMID: 32435701 PMCID: PMC7235017 DOI: 10.1038/s41746-020-0281-z]
Abstract
We have designed a deep-learning model, an "Artificial Intelligent Endoscopist (a.k.a. AI-doscopist)", to localise colonic neoplasia during colonoscopy. This study aims to evaluate the agreement between endoscopists and AI-doscopist on colorectal neoplasm localisation. AI-doscopist was pre-trained on 1.2 million non-medical images and fine-tuned on 291,090 colonoscopy and non-medical images. The colonoscopy images were obtained from six databases, classified into 13 categories, with the polyps' locations marked image-by-image by the smallest bounding boxes. Seven categories of non-medical images, believed to share some common features with colorectal polyps, were downloaded from an online search engine. Written informed consent was obtained from 144 patients who underwent colonoscopy, and their full colonoscopy videos were prospectively recorded for evaluation. A total of 128 suspicious lesions were resected or biopsied for histological confirmation. When evaluated image-by-image on the 144 full colonoscopies, the specificity of AI-doscopist was 93.3%. AI-doscopist was able to localise 124 of the 128 polyps (polyp-based sensitivity 96.9%). Furthermore, after reviewing the suspected regions highlighted by AI-doscopist in a 102-patient cohort, an endoscopist had high confidence in recognizing four missed polyps in three patients who were not diagnosed with any lesion during their original colonoscopies. In summary, AI-doscopist can localise 96.9% of the polyps resected by the endoscopists. If AI-doscopist were used in real time, it could potentially help endoscopists detect one more patient with a polyp in every 20 to 33 colonoscopies.
Affiliation(s)
- Carmen C. Y. Poon, Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Yuqi Jiang, Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Ruikai Zhang, Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Winnie W. Y. Lo, Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Maggie S. H. Cheung, Division of Vascular and General Surgery, Department of Surgery, Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Ruoxi Yu, Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Yali Zheng, Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China; College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, People’s Republic of China
- John C. T. Wong, Division of Gastroenterology and Hepatology, Department of Medicine and Therapeutics, Institute of Digestive Disease, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Qing Liu, Department of Electrical and Electronic Engineering, Xi’an Jiaotong-Liverpool University, Suzhou, People’s Republic of China
- Sunny H. Wong, Division of Gastroenterology and Hepatology, Department of Medicine and Therapeutics, Institute of Digestive Disease, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Tony W. C. Mak, Division of Colorectal Surgery, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- James Y. W. Lau, Division of Vascular and General Surgery, Department of Surgery, Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
143

Sánchez-Montes C, Bernal J, García-Rodríguez A, Córdova H, Fernández-Esparrach G. Review of computational methods for the detection and classification of polyps in colonoscopy imaging. Gastroenterol Hepatol 2020; 43:222-232. [PMID: 32143918 DOI: 10.1016/j.gastrohep.2019.11.004]
Abstract
Computer-aided diagnosis (CAD) is a tool with great potential to help endoscopists in the tasks of detecting and histologically classifying colorectal polyps. In recent years, different technologies have been described and their potential utility has been increasingly evidenced, generating great expectations among scientific societies. However, most of these works are retrospective and use images of varying quality and characteristics that are analysed offline. This review aims to familiarise gastroenterologists with computational methods and with the particularities of endoscopic imaging that have an impact on image analysis. Finally, the publicly available image databases needed to compare and confirm the results obtained with different methods are presented.
Affiliation(s)
- Cristina Sánchez-Montes, Unidad de Endoscopia Digestiva, Hospital Universitari i Politècnic La Fe, Grupo de Investigación de Endoscopia Digestiva, IIS La Fe, Valencia, Spain
- Jorge Bernal, Centro de Visión por Computador, Departamento de Ciencias de la Computación, Universidad Autónoma de Barcelona, Barcelona, Spain
- Ana García-Rodríguez, Unidad de Endoscopia, Servicio de Gastroenterología, Hospital Clínic, IDIBAPS, CIBEREHD, Universidad de Barcelona, Barcelona, Spain
- Henry Córdova, Unidad de Endoscopia, Servicio de Gastroenterología, Hospital Clínic, IDIBAPS, CIBEREHD, Universidad de Barcelona, Barcelona, Spain
- Gloria Fernández-Esparrach, Unidad de Endoscopia, Servicio de Gastroenterología, Hospital Clínic, IDIBAPS, CIBEREHD, Universidad de Barcelona, Barcelona, Spain
144

Leenhardt R, Li C, Le Mouel JP, Rahmi G, Saurin JC, Cholet F, Boureille A, Amiot X, Delvaux M, Duburque C, Leandri C, Gérard R, Lecleire S, Mesli F, Nion-Larmurier I, Romain O, Sacher-Huvelin S, Simon-Shane C, Vanbiervliet G, Marteau P, Histace A, Dray X. CAD-CAP: a 25,000-image database serving the development of artificial intelligence for capsule endoscopy. Endosc Int Open 2020; 8:E415-E420. [PMID: 32118115 PMCID: PMC7035135 DOI: 10.1055/a-1035-9088]
Abstract
Background and study aims Capsule endoscopy (CE) is the preferred method for small bowel (SB) exploration. With a mean of 50,000 SB frames per video, SB-CE reading is time-consuming and tedious (30 to 60 minutes per video). We describe a large, multicenter database named CAD-CAP (Computer-Assisted Diagnosis for CAPsule Endoscopy), which aims to serve the development of CAD tools for CE reading. Materials and methods Twelve French endoscopy centers were involved. All available third-generation SB-CE videos (Pillcam, Medtronic) were retrospectively selected from these centers and deidentified. Any pathological frame was extracted and included in the database. Manual segmentation of findings within these frames was performed by two pre-med students trained and supervised by an expert reader. All frames were then classified by type and clinical relevance by a panel of three expert readers. An automated extraction process was also developed to create a dataset of normal, proofread, control images from normal, complete SB-CE videos. Results A total of 4,174 SB-CE videos were included. Of these, 1,480 videos (35%) containing at least one pathological finding were selected. Findings from 5,184 frames (with their short video sequences) were extracted and delimited: 718 frames with fresh blood, 3,097 frames with vascular lesions, and 1,369 frames with inflammatory and ulcerative lesions. Twenty thousand normal frames were extracted from 206 normal SB-CE videos. CAD-CAP has already been used to develop automated tools for angiectasia detection and for two international challenges on medical computerized analysis.
Affiliation(s)
- Cynthia Li, Drexel University, College of Arts & Sciences, Philadelphia, Pennsylvania, United States
- Jean-Philippe Le Mouel, Gastroenterology, Amiens University Hospital, Université de Picardie Jules Verne, Amiens, France
- Gabriel Rahmi, Georges Pompidou European Hospital, APHP, Department of Gastroenterology and Endoscopy, Paris, France
- Jean Christophe Saurin, Department of Endoscopy and Gastroenterology, Pavillon L, Hôpital Edouard Herriot, Lyon, France
- Franck Cholet, Digestive Endoscopy Unit, University Hospital, Brest, France
- Arnaud Boureille, Department of Hepato-Gastroenterology, Institut des Maladies de l'Appareil Digestif, Nantes, France
- Xavier Amiot, Tenon Hospital, Gastroenterology Department, Paris, France
- Michel Delvaux, CHU Strasbourg, Gastroenterology Department, Strasbourg, France
- Chloé Leandri, Cochin Hospital Gastroenterology Department, Paris, France
- Romain Gérard, CHRU Lille, Gastroenterology Department, Lille, France
- Farida Mesli, CHU Henri Mondor, Gastroenterology Department, Creteil, France
- Olivier Romain, ETIS, Université de Cergy-Pontoise, ENSEA, CNRS, Cergy-Pontoise Cedex, France
- Sylvie Sacher-Huvelin, Department of Hepato-Gastroenterology, Institut des Maladies de l'Appareil Digestif, Nantes, France
- Camille Simon-Shane, ETIS, Université de Cergy-Pontoise, ENSEA, CNRS, Cergy-Pontoise Cedex, France
- Aymeric Histace, ETIS, Université de Cergy-Pontoise, ENSEA, CNRS, Cergy-Pontoise Cedex, France
- Xavier Dray, Sorbonne University, Endoscopy Unit; ETIS, Université de Cergy-Pontoise, ENSEA, CNRS, Cergy-Pontoise Cedex, France. Corresponding author: Pr Xavier Dray, Hôpital Saint-Antoine, Endoscopy Unit, 184 Rue du Faubourg Saint-Antoine, 75012 Paris, France
145

Hoerter N, Gross SA, Liang PS. Artificial Intelligence and Polyp Detection. Curr Treat Options Gastroenterol 2020; 18:120-136. [PMID: 31960282 PMCID: PMC7371513 DOI: 10.1007/s11938-020-00274-2]
Abstract
PURPOSE OF REVIEW This review highlights the history, recent advances, and ongoing challenges of artificial intelligence (AI) technology in colonic polyp detection. RECENT FINDINGS Hand-crafted AI algorithms have recently given way to convolutional neural networks with the ability to detect polyps in real-time. The first randomized controlled trial comparing an AI system to standard colonoscopy found a 9% increase in adenoma detection rate, but the improvement was restricted to polyps smaller than 10 mm and the results need validation. As this field rapidly evolves, important issues to consider include standardization of outcomes, dataset availability, real-world applications, and regulatory approval. SUMMARY AI has shown great potential for improving colonic polyp detection while requiring minimal training for endoscopists. The question of when AI will enter endoscopic practice depends on whether the technology can be integrated into existing hardware and an assessment of its added value for patient care.
Affiliation(s)
- Peter S Liang, NYU Langone Health, New York, NY, USA; VA New York Harbor Health Care System, New York, NY, USA
146

Jha D, Smedsrud PH, Riegler MA, Halvorsen P, de Lange T, Johansen D, Johansen HD. Kvasir-SEG: A Segmented Polyp Dataset. MultiMedia Modeling 2020. [DOI: 10.1007/978-3-030-37734-2_37]
147

Moccia S, Romeo L, Migliorelli L, Frontoni E, Zingaretti P. Supervised CNN Strategies for Optical Image Segmentation and Classification in Interventional Medicine. Intelligent Systems Reference Library 2020. [DOI: 10.1007/978-3-030-42750-4_8]
148

Le Berre C, Sandborn WJ, Aridhi S, Devignes MD, Fournier L, Smaïl-Tabbone M, Danese S, Peyrin-Biroulet L. Application of Artificial Intelligence to Gastroenterology and Hepatology. Gastroenterology 2020; 158:76-94.e2. [PMID: 31593701 DOI: 10.1053/j.gastro.2019.08.058]
Abstract
Since 2010, substantial progress has been made in artificial intelligence (AI) and its application to medicine. In gastroenterology, AI is being explored for endoscopic analysis of lesions, for detection of cancer, and to facilitate the analysis of inflammatory lesions or gastrointestinal bleeding during wireless capsule endoscopy. AI is also being tested to assess liver fibrosis and to differentiate patients with pancreatic cancer from those with pancreatitis. AI might also be used to establish prognoses for patients or predict their response to treatments based on multiple factors. We review the ways in which AI may help physicians make a diagnosis or establish a prognosis and discuss its limitations, noting that further randomized controlled studies will be required before health authorities can approve AI techniques.
Affiliation(s)
- Catherine Le Berre, Institut des Maladies de l'Appareil Digestif, Nantes University Hospital, France; Institut National de la Santé et de la Recherche Médicale U954 and Department of Gastroenterology, Nancy University Hospital, University of Lorraine, France
- Sabeur Aridhi, University of Lorraine, Le Centre National de la Recherche Scientifique, Inria, Laboratoire Lorrain de Recherche en Informatique et ses Applications, Nancy, France
- Marie-Dominique Devignes, University of Lorraine, Le Centre National de la Recherche Scientifique, Inria, Laboratoire Lorrain de Recherche en Informatique et ses Applications, Nancy, France
- Laure Fournier, Université Paris-Descartes, Institut National de la Santé et de la Recherche Médicale, Unité Mixte de Recherche S970, Assistance Publique-Hôpitaux de Paris, Paris, France
- Malika Smaïl-Tabbone, University of Lorraine, Le Centre National de la Recherche Scientifique, Inria, Laboratoire Lorrain de Recherche en Informatique et ses Applications, Nancy, France
- Silvio Danese, Inflammatory Bowel Disease Center and Department of Biomedical Sciences, Humanitas Clinical and Research Center, Humanitas University, Milan, Italy
- Laurent Peyrin-Biroulet, Institut National de la Santé et de la Recherche Médicale U954 and Department of Gastroenterology, Nancy University Hospital, University of Lorraine, France
149

Ibtehaz N, Rahman MS. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw 2020; 121:74-87. [DOI: 10.1016/j.neunet.2019.08.025]
150

Vercauteren T, Unberath M, Padoy N, Navab N. CAI4CAI: The Rise of Contextual Artificial Intelligence in Computer Assisted Interventions. Proc IEEE 2020; 108:198-214. [PMID: 31920208 PMCID: PMC6952279 DOI: 10.1109/jproc.2019.2946993]
Abstract
Data-driven computational approaches have evolved to enable the extraction of information from medical images with a reliability, accuracy and speed that are already transforming their interpretation and exploitation in clinical practice. While similar benefits are longed for in the field of interventional imaging, this ambition is challenged by much higher heterogeneity. Clinical workflows within interventional suites and operating theatres are extremely complex and typically rely on poorly integrated intra-operative devices, sensors, and support infrastructures. Taking stock of some of the most exciting developments in machine learning and artificial intelligence for computer assisted interventions, we highlight the crucial need to take context and human factors into account in order to address these challenges. Contextual artificial intelligence for computer assisted intervention, or CAI4CAI, arises as an emerging opportunity feeding into the broader field of surgical data science. Central challenges being addressed in CAI4CAI include how to integrate the ensemble of prior knowledge and instantaneous sensory information from experts, sensors and actuators; how to create and communicate a faithful and actionable shared representation of the surgery among a mixed human-AI actor team; and how to design interventional systems and associated cognitive shared-control schemes for online uncertainty-aware collaborative decision making, ultimately producing more precise and reliable interventions.
Affiliation(s)
- Tom Vercauteren, School of Biomedical Engineering & Imaging Sciences, King’s College London, London WC2R 2LS, UK
- Mathias Unberath, Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Nicolas Padoy, ICube institute, CNRS, IHU Strasbourg, University of Strasbourg, 67081 Strasbourg, France
- Nassir Navab, Fakultät für Informatik, Technische Universität München, 80333 Munich, Germany