1. Tomasini C, Rodriguez-Puigvert J, Polanco D, Viñuales M, Riazuelo L, Murillo AC. Automated vision-based assistance tools in bronchoscopy: stenosis severity estimation. Int J Comput Assist Radiol Surg 2025. doi:10.1007/s11548-025-03398-x. PMID: 40372596.
Abstract
PURPOSE: Subglottic stenosis is a narrowing of the subglottis, the airway between the vocal cords and the trachea. Its severity is typically evaluated by estimating the percentage of obstructed airway. This estimate can be obtained from CT data or through visual inspection by experts exploring the region. However, visual inspections are inherently subjective, leading to less consistent and less robust diagnoses. No public methods or datasets are currently available for automated evaluation of this condition from bronchoscopy video.
METHODS: We propose a pipeline for automated estimation of subglottic stenosis severity during bronchoscopy exploration, without requiring the physician to traverse the stenosed region. Our approach exploits the physical effect of illumination decline in endoscopy to segment and track the lumen and to obtain a 3D model of the airway. This 3D model is obtained from a single frame and is used to measure the airway narrowing.
RESULTS: Our pipeline is the first to enable automated and robust measurement of subglottic stenosis severity from bronchoscopy images. The results are consistent with ground-truth estimates from CT scans and with expert estimates, and show reliable repeatability across multiple estimations on the same patient. The evaluation is performed on our new Subglottic Stenosis Dataset of real bronchoscopy procedure data.
CONCLUSION: We demonstrate how to automate the evaluation of subglottic stenosis severity using only bronchoscopy. Our approach can assist with and shorten diagnosis and monitoring procedures, providing automated and repeatable estimations with less exploration time, and spares patients radiation exposure since no CT is required. Additionally, we release the first public benchmark for subglottic stenosis severity assessment.
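As a concrete illustration of the severity metric described above (the percentage of obstructed airway), the following minimal sketch computes percent obstruction from lumen cross-sectional areas. The area values and helper function are illustrative assumptions and are not taken from the paper, which derives the measurement from its single-frame 3D airway model.

```python
# Minimal sketch: percent airway obstruction from lumen cross-sectional areas.
# The area values and the helper name are illustrative assumptions, not from the paper.

def percent_obstruction(stenotic_area_mm2: float, healthy_area_mm2: float) -> float:
    """Percentage of the airway that is obstructed at the stenotic level,
    relative to a healthy reference cross-section."""
    if healthy_area_mm2 <= 0:
        raise ValueError("healthy reference area must be positive")
    return 100.0 * (1.0 - stenotic_area_mm2 / healthy_area_mm2)

# Example: a lumen narrowed from 180 mm^2 to 45 mm^2 corresponds to 75% obstruction.
print(f"{percent_obstruction(45.0, 180.0):.1f}%")
```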
2. Göbel B, Reiterer A, Möller K. Image-Based 3D Reconstruction in Laparoscopy: A Review Focusing on the Quantitative Evaluation by Applying the Reconstruction Error. J Imaging 2024;10:180. doi:10.3390/jimaging10080180. PMID: 39194969.
Abstract
Image-based 3D reconstruction enables laparoscopic applications such as image-guided navigation and (autonomous) robot-assisted interventions, which require high accuracy. The purpose of this review is to present the accuracy of different techniques and to identify the most promising ones. A systematic literature search of PubMed and Google Scholar covering 2015 to 2023 was conducted, following the framework of "Review articles: purpose, process, and structure". Articles were included when they presented a quantitative evaluation (root mean squared error and mean absolute error) of the reconstruction error (Euclidean distance between the real and reconstructed surface). The search yielded 995 articles, which were reduced to 48 after applying exclusion criteria. From these, a reconstruction error data set could be generated for the techniques of stereo vision, Shape-from-Motion, Simultaneous Localization and Mapping, deep learning, and structured light. The reconstruction error varies from below one millimeter to more than ten millimeters, with deep learning and Simultaneous Localization and Mapping delivering the best results under intraoperative conditions. The high variance stems from differing experimental conditions. In conclusion, submillimeter accuracy is challenging, but promising image-based 3D reconstruction techniques could be identified. For future research, we recommend computing the reconstruction error for comparison purposes and using ex vivo/in vivo organs as reference objects for realistic experiments.
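Since the review compares techniques by the reconstruction error (Euclidean distance between the real and reconstructed surface) summarized as RMSE and MAE, a minimal sketch of those two metrics is shown below. It assumes point-wise correspondences in a common reference frame; in practice, evaluations typically require registration and nearest-neighbor matching first. All values are synthetic.

```python
# Minimal sketch of the reconstruction-error metrics compared in the review:
# point-wise Euclidean distances between reconstructed and reference surfaces,
# summarized as RMSE and MAE. Point arrays and noise level are illustrative.
import numpy as np

def reconstruction_errors(reconstructed: np.ndarray, reference: np.ndarray) -> tuple[float, float]:
    """Return (RMSE, MAE) of point-wise Euclidean distances.
    Both arrays are (N, 3), assumed to be in correspondence and in the same frame."""
    d = np.linalg.norm(reconstructed - reference, axis=1)
    rmse = float(np.sqrt(np.mean(d ** 2)))
    mae = float(np.mean(d))
    return rmse, mae

# Toy example: a synthetic surface perturbed by 1 mm of noise.
rng = np.random.default_rng(0)
reference = rng.uniform(0, 50, size=(1000, 3))            # mm
reconstructed = reference + rng.normal(0, 1.0, (1000, 3))  # mm
print(reconstruction_errors(reconstructed, reference))
```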
Affiliation(s)
- Birthe Göbel
  - Department of Sustainable Systems Engineering-INATECH, University of Freiburg, Emmy-Noether-Street 2, 79110 Freiburg im Breisgau, Germany
  - KARL STORZ SE & Co. KG, Dr.-Karl-Storz-Street 34, 78532 Tuttlingen, Germany
- Alexander Reiterer
  - Department of Sustainable Systems Engineering-INATECH, University of Freiburg, Emmy-Noether-Street 2, 79110 Freiburg im Breisgau, Germany
  - Fraunhofer Institute for Physical Measurement Techniques IPM, 79110 Freiburg im Breisgau, Germany
- Knut Möller
  - Institute of Technical Medicine-ITeM, Furtwangen University (HFU), 78054 Villingen-Schwenningen, Germany
  - Mechanical Engineering, University of Canterbury, Christchurch 8140, New Zealand
3. Azagra P, Sostres C, Ferrández Á, Riazuelo L, Tomasini C, Barbed OL, Morlana J, Recasens D, Batlle VM, Gómez-Rodríguez JJ, Elvira R, López J, Oriol C, Civera J, Tardós JD, Murillo AC, Lanas A, Montiel JMM. Endomapper dataset of complete calibrated endoscopy procedures. Sci Data 2023;10:671. doi:10.1038/s41597-023-02564-7. PMID: 37789003. PMCID: PMC10547713.
Abstract
Computer-assisted systems are becoming widely used in medicine. In endoscopy, most research focuses on the automatic detection of polyps or other pathologies, while localization and navigation of the endoscope are still performed entirely manually by physicians. To broaden this research and bring spatial Artificial Intelligence to endoscopy, data from complete procedures are needed. This paper introduces the Endomapper dataset, the first collection of complete endoscopy sequences acquired during regular medical practice, making secondary use of medical data. Its main purpose is to facilitate the development and evaluation of Visual Simultaneous Localization and Mapping (VSLAM) methods on real endoscopy data. The dataset contains more than 24 hours of video. It is the first endoscopic dataset that includes endoscope calibration as well as the original calibration videos. Meta-data and annotations associated with the dataset range from anatomical landmarks, procedure labeling, segmentations, and reconstructions to simulated sequences with ground truth and same-patient procedures. The software used in this paper is publicly available.
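One reason the included endoscope calibration matters for VSLAM is that projecting 3D map points into the image requires the camera intrinsics. The sketch below uses a simple pinhole model with a single radial distortion term purely for illustration; the actual camera model and parameter values in the Endomapper dataset may differ (wide-angle endoscopes typically require fisheye models), and all numbers here are assumptions.

```python
# Minimal, illustrative projection of a 3D point using camera intrinsics.
# Pinhole model with one radial distortion coefficient; not the dataset's actual model.
import numpy as np

def project_point(X_cam: np.ndarray, fx: float, fy: float,
                  cx: float, cy: float, k1: float = 0.0) -> np.ndarray:
    """Project a 3D point in camera coordinates (Z > 0) to pixel coordinates."""
    x, y = X_cam[0] / X_cam[2], X_cam[1] / X_cam[2]   # normalized image coordinates
    r2 = x * x + y * y
    x_d, y_d = x * (1 + k1 * r2), y * (1 + k1 * r2)   # radial distortion
    return np.array([fx * x_d + cx, fy * y_d + cy])

# Illustrative intrinsics (not from the dataset).
print(project_point(np.array([0.05, -0.02, 0.30]),
                    fx=700.0, fy=700.0, cx=640.0, cy=512.0, k1=-0.3))
```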
Affiliation(s)
- Pablo Azagra
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Carlos Sostres
  - Digestive Disease Service, Hospital Clínico Universitario Lozano Blesa, Zaragoza, Spain
  - Department of Medicine, Universidad de Zaragoza, Zaragoza, Spain
  - Instituto de Investigación Sanitaria Aragón (IIS Aragón), Zaragoza, Spain
  - Centro de Investigación Biomédica en Red, Enfermedades Hepáticas y Digestivas (CIBEREHD), Madrid, Spain
- Ángel Ferrández
  - Digestive Disease Service, Hospital Clínico Universitario Lozano Blesa, Zaragoza, Spain
  - Department of Medicine, Universidad de Zaragoza, Zaragoza, Spain
  - Instituto de Investigación Sanitaria Aragón (IIS Aragón), Zaragoza, Spain
  - Centro de Investigación Biomédica en Red, Enfermedades Hepáticas y Digestivas (CIBEREHD), Madrid, Spain
- Luis Riazuelo
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Clara Tomasini
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- O León Barbed
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Javier Morlana
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- David Recasens
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Víctor M Batlle
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Juan J Gómez-Rodríguez
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Richard Elvira
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Julia López
  - Digestive Disease Service, Hospital Clínico Universitario Lozano Blesa, Zaragoza, Spain
- Cristina Oriol
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Javier Civera
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Juan D Tardós
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Ana C Murillo
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Angel Lanas
  - Digestive Disease Service, Hospital Clínico Universitario Lozano Blesa, Zaragoza, Spain
  - Department of Medicine, Universidad de Zaragoza, Zaragoza, Spain
  - Instituto de Investigación Sanitaria Aragón (IIS Aragón), Zaragoza, Spain
  - Centro de Investigación Biomédica en Red, Enfermedades Hepáticas y Digestivas (CIBEREHD), Madrid, Spain
- José M M Montiel
  - Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
4. Bardozzo F, Collins T, Forgione A, Hostettler A, Tagliaferri R. StaSiS-Net: a stacked and siamese disparity estimation network for depth reconstruction in modern 3D laparoscopy. Med Image Anal 2022;77:102380. doi:10.1016/j.media.2022.102380.
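The cited network estimates stereo disparity for depth reconstruction. As a minimal sketch of the geometric step that follows disparity estimation (not of the StaSiS-Net model itself), depth for a rectified stereo pair follows from Z = f * B / d; the focal length, baseline, and disparity values below are illustrative assumptions.

```python
# Minimal sketch: recovering depth from a disparity map for a rectified stereo pair.
# This is the geometric step downstream of disparity estimation, not the network itself.
# Focal length, baseline, and disparity values are illustrative assumptions.
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray, focal_px: float, baseline_mm: float) -> np.ndarray:
    """Z = f * B / d, with invalid (non-positive) disparities mapped to NaN."""
    depth = np.full_like(disparity_px, np.nan, dtype=float)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_mm / disparity_px[valid]
    return depth

disparity = np.array([[12.0, 8.5], [0.0, 20.0]])                           # pixels
print(depth_from_disparity(disparity, focal_px=1000.0, baseline_mm=4.0))   # depth in mm
```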