1
Bano S, Casella A, Vasconcelos F, Qayyum A, Benzinou A, Mazher M, Meriaudeau F, Lena C, Cintorrino IA, De Paolis GR, Biagioli J, Grechishnikova D, Jiao J, Bai B, Qiao Y, Bhattarai B, Gaire RR, Subedi R, Vazquez E, Płotka S, Lisowska A, Sitek A, Attilakos G, Wimalasundera R, David AL, Paladini D, Deprest J, De Momi E, Mattos LS, Moccia S, Stoyanov D. Placental vessel segmentation and registration in fetoscopy: Literature review and MICCAI FetReg2021 challenge findings. Med Image Anal 2024; 92:103066. [PMID: 38141453 PMCID: PMC11162867 DOI: 10.1016/j.media.2023.103066]
Abstract
Fetoscopic laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to restore a physiological blood exchange between the twins. It is particularly challenging for the surgeon due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility caused by amniotic fluid turbidity, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation of pathological anastomoses, resulting in persistent TTTS. Computer-assisted intervention (CAI) can provide TTTS surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data for designing, developing and testing CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, organized as part of the MICCAI2021 Endoscopic Vision (EndoVis) challenge, we released the first large-scale multi-center TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus and background classes, from 18 in-vivo TTTS fetoscopy procedures, together with 18 short video clips with an average length of 411 frames, for developing placental scene segmentation and frame-registration techniques for mosaicking. Seven teams participated, and their models were assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips. For the segmentation task, the baseline was the top performer overall (aggregated mIoU of 0.6763) and the best on the vessel class (mIoU of 0.5817), while team RREB was the best on the tool (mIoU of 0.6335) and fetus (mIoU of 0.5178) classes. For the registration task, the baseline outperformed team SANO, with an overall mean 5-frame SSIM of 0.9348. Qualitatively, team SANO performed better in planar scenarios, while the baseline was better in non-planar scenarios. Detailed analysis showed that no single team performed best on all 6 test fetoscopic videos. The challenge provided an opportunity to create generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis, and the release of multi-center fetoscopic data, we provide a benchmark for future research in this field.
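For readers implementing the per-class mIoU metric used to rank the segmentation task, a minimal sketch follows. It assumes integer label maps with the four challenge classes; the class-id ordering (0=background, 1=vessel, 2=tool, 3=fetus) and the average-over-frames-then-classes aggregation are assumptions, not taken from the abstract.

```python
import numpy as np

def per_class_iou(pred, gt, num_classes=4):
    """IoU per class for one frame of a semantic segmentation.

    pred, gt: integer label maps of identical shape. The class ids here
    (0=background, 1=vessel, 2=tool, 3=fetus) are an assumed ordering.
    Classes absent from both maps get NaN so they can be skipped later.
    """
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious[c] = inter / union
    return ious

def mean_iou(preds, gts, num_classes=4):
    """Average per-class IoU over all frames, then over classes."""
    per_frame = np.array([per_class_iou(p, g, num_classes)
                          for p, g in zip(preds, gts)])
    class_miou = np.nanmean(per_frame, axis=0)   # one score per class
    return class_miou, np.nanmean(class_miou)    # (per-class, aggregated)
```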
Affiliation(s)
- Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK.
- Alessandro Casella
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Italy; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK
- Moona Mazher
- Department of Computer Engineering and Mathematics, University Rovira i Virgili, Spain
- Chiara Lena
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Gaia Romana De Paolis
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Jessica Biagioli
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Bizhe Bai
- Medical Computer Vision and Robotics Group, Department of Mathematical and Computational Sciences, University of Toronto, Canada
- Yanyan Qiao
- Shanghai MicroPort MedBot (Group) Co., Ltd, China
- Binod Bhattarai
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK
- Ronast Subedi
- NepAL Applied Mathematics and Informatics Institute for Research, Nepal
- Szymon Płotka
- Sano Center for Computational Medicine, Poland; Quantitative Healthcare Analysis Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Arkadiusz Sitek
- Sano Center for Computational Medicine, Poland; Center for Advanced Medical Computing and Simulation, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- George Attilakos
- Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospital, UK; EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK
- Ruwan Wimalasundera
- Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospital, UK; EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK
- Anna L David
- Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospital, UK; EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK; Department of Development and Regeneration, University Hospital Leuven, Belgium
- Dario Paladini
- Department of Fetal and Perinatal Medicine, Istituto "Giannina Gaslini", Italy
- Jan Deprest
- EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK; Department of Development and Regeneration, University Hospital Leuven, Belgium
- Elena De Momi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Italy
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK
2
van der Schot AM, Sikkel E, Spaanderman MEA, Vandenbussche FP. Computer-assisted fetal laser surgery in the treatment of twin-to-twin transfusion syndrome: recent trends and prospects. Prenat Diagn 2022; 42:1225-1234. [PMID: 35983630 PMCID: PMC9541851 DOI: 10.1002/pd.6225]
Abstract
Fetal laser surgery has emerged as the preferred treatment for twin-to-twin transfusion syndrome (TTTS). However, the limited field of view of the fetoscope and the complexity of the procedure make the treatment challenging, so preoperative planning and intraoperative guidance solutions have been proposed to cope with these challenges. This review surveys the literature on computer-assisted software solutions for TTTS. The solutions are classified by the pre- or intraoperative phase of the procedure in which they apply, and further categorized by the hardware and software approaches they employ. In addition, the review evaluates the current maturity of these technologies using the technology readiness level and enumerates the aspects necessary to bring them into clinical practice.
Affiliation(s)
- Esther Sikkel
- Department of Obstetrics & Gynecology, Radboudumc/Amalia Children's Hospital, Nijmegen, the Netherlands
- Marc Erich August Spaanderman
- Department of Obstetrics & Gynecology, Radboudumc/Amalia Children's Hospital, Nijmegen, the Netherlands; Department of Obstetrics & Gynecology, Maastricht UMC+, Maastricht, the Netherlands
3
Kim DT, Cheng CH, Liu DG, Liu KCJ, Huang WSW. Designing a New Endoscope for Panoramic-View with Focus-Area 3D-Vision in Minimally Invasive Surgery. J Med Biol Eng 2019. [DOI: 10.1007/s40846-019-00503-9]
Abstract
Purpose
Minimally invasive surgery (MIS) has shown advantages over traditional surgery. However, the MIS technique faces two major challenges: the limited field of view (FOV) and the lack of depth perception provided by the standard monocular endoscope. In this study, we therefore propose a new endoscope for panoramic view with focus-area 3D vision (3DMISPE), designed to provide surgeons with a broad field of view together with real-time 3D images of the surgical area.
Method
The proposed system consisted of two endoscopic cameras fixed to each other. In contrast to our previous study, the video-stitching algorithm was based on stereo-vision synthesis theory, so it supported 3D reconstruction and image stitching at the same time: the 3D surface was reconstructed by calculating the disparity of the overlap region, and the two-view images from both cameras were stitched into a single image.
Results
The experimental results demonstrated that the proposed method could combine the two endoscopes' FOVs into one wider FOV. In addition, the overlap region could be synthesized for 3D display to provide information about depth and distance, with an error of about 1 mm. The system achieved a frame rate of up to 11.3 fps on a single Intel i5-4590 CPU and 17.6 fps on a computer with an additional Nvidia GeForce GTX1060 GPU. Furthermore, the proposed stitching method was about 1.4 times faster than that in our previous report. It also improved stitched-image quality by significantly reducing alignment errors, or "ghosting", compared to the SURF-based stitching method employed in our previous study.
Conclusion
The proposed system offers doctors a broad field of view while still providing a 3D surface image of the focus area in real time. It promises to address existing limitations in laparoscopic surgery such as the limited FOV and the lack of depth perception.
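As an illustration of the disparity-based depth recovery described in the Method section, here is a minimal sketch using OpenCV block matching on a rectified stereo pair. The file names, calibration values (focal length, baseline), and matcher settings are placeholders, and the generic StereoBM matcher stands in for the authors' stereo-vision synthesis pipeline.

```python
import cv2
import numpy as np

# Load the two (rectified) endoscope views; the paths are placeholders.
left = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_view.png", cv2.IMREAD_GRAYSCALE)

# Block-matching disparity over the region shared by both cameras.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to px

# Depth from disparity: Z = f * B / d, with focal length f (px) and
# baseline B (mm) taken from a prior calibration -- values here are made up.
f_px, baseline_mm = 700.0, 4.0
valid = disparity > 0
depth_mm = np.zeros_like(disparity)
depth_mm[valid] = f_px * baseline_mm / disparity[valid]
```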
4
Performance Improvement for Two-Lens Panoramic Endoscopic System during Minimally Invasive Surgery. J Healthc Eng 2019. [DOI: 10.1155/2019/2097284]
Abstract
One of the major challenges for minimally invasive surgery (MIS) is the limited field of view (FOV) of the endoscope. A previous study by the authors designed an MIS panoramic endoscope (MISPE) that gives the physician a broad field of view, but that approach was limited in performance and quality because it encountered difficulty in the presence of smoke, specular reflections, or a change in viewpoint. This study proposes a novel algorithm that increases the MISPE's performance. The method calculates the disparity for the region overlapped by the two cameras to allow image stitching, and the homography matrix is re-estimated frame by frame, so the stitched videos are more stable for MIS. The experimental results show that the revised MISPE has a FOV that is 55% greater and that the system operates stably in real time, at a frame rate of 26.7 fps on a single-CPU computer. The proposed stitching method is 1.55 times faster than the previous method, and the stitched image it produces is as similar to the ground truth as that obtained with the SURF-based stitching method used in the previous study.
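The frame-by-frame homography estimation at the heart of such two-view stitching can be sketched with a generic feature-based pipeline. The detector choice (ORB rather than the SURF baseline), match count, and RANSAC threshold below are assumptions; the paper's own alignment is derived from the two-camera overlap disparity rather than this generic matcher.

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b, detector=None):
    """Estimate a homography mapping img_b into img_a's frame and composite.

    A generic feature-based sketch (ORB + RANSAC), re-run per frame so the
    homography tracks viewpoint changes in the video.
    """
    detector = detector or cv2.ORB_create(2000)
    kp_a, des_a = detector.detectAndCompute(img_a, None)
    kp_b, des_b = detector.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:200]

    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))  # room for the second view
    canvas[:h, :w] = img_a  # overwrite the overlap with the reference view
    return canvas
```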
5
Speed Improvement in Image Stitching for Panoramic Dynamic Images during Minimally Invasive Surgery. J Healthc Eng 2018; 2018:3654210. [PMID: 30631411 PMCID: PMC6304838 DOI: 10.1155/2018/3654210]
Abstract
Minimally invasive surgery (MIS) minimizes the surgical incisions that need to be made and hence reduces the physical trauma of the surgical process. The ultimate goal is to reduce postoperative pain and blood loss, limit scarring, and thus accelerate recovery, which is of great interest to both the surgeon and the patient. However, a major problem with MIS is that the surgeon's field of vision is very narrow. We had previously developed and tested an MIS panoramic endoscope (MISPE) that provides the surgeon with a broader field of view, but one issue with the MISPE was its low video-stitching rate. In this paper, we therefore propose using a region of interest in combination with a downsizing technique to improve the MISPE's image-stitching performance. Experimental results confirm that with the proposed method the image size can be increased by more than 160% while the image resolution also improves. For instance, we achieved performance improvements of 10× (CPU) and 23× (GPU) compared to the original method.
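A minimal sketch of the region-of-interest-plus-downsizing idea: estimate the homography on a cropped, downscaled copy of each frame, then map it back to full resolution so warping still happens on the original frames. The ROI coordinates, scale factor, and feature detector below are hypothetical; only the crop-downsize-rescale structure reflects the described technique.

```python
import cv2
import numpy as np

def fast_homography(frame_a, frame_b, roi, scale=0.5):
    """Estimate a full-resolution homography from a downsized ROI only.

    roi = (x, y, w, h) crops both frames to the expected overlap; `scale`
    shrinks them before feature matching, which is where the speedup comes
    from. The small-image homography is then conjugated back to full size.
    """
    x, y, w, h = roi
    small_a = cv2.resize(frame_a[y:y + h, x:x + w], None, fx=scale, fy=scale)
    small_b = cv2.resize(frame_b[y:y + h, x:x + w], None, fx=scale, fy=scale)

    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(small_a, None)
    kp_b, des_b = orb.detectAndCompute(small_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_b, des_a)

    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H_small, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # Undo the crop + scale: H_full = T^-1 @ S^-1 @ H_small @ S @ T,
    # where T translates into the ROI and S applies the downscale.
    S = np.diag([scale, scale, 1.0])
    T = np.array([[1, 0, -x], [0, 1, -y], [0, 0, 1]], dtype=np.float64)
    return np.linalg.inv(T) @ np.linalg.inv(S) @ H_small @ S @ T
```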
6
Sadda P, Imamoglu M, Dombrowski M, Papademetris X, Bahtiyar MO, Onofrey J. Deep-learned placental vessel segmentation for intraoperative video enhancement in fetoscopic surgery. Int J Comput Assist Radiol Surg 2018; 14:227-235. [PMID: 30484115 DOI: 10.1007/s11548-018-1886-4]
Abstract
Introduction
Twin-to-twin transfusion syndrome (TTTS) is a potentially lethal condition that affects pregnancies in which twins share a single placenta. The definitive treatment for TTTS is fetoscopic laser photocoagulation, a procedure in which placental blood vessels are selectively cauterized. One challenge in this procedure is the difficulty of quickly identifying placental blood vessels amid the many artifacts in the endoscopic video that the surgeon uses for navigation. We propose using deep-learned segmentations of blood vessels to create masks that can be recombined with the original fetoscopic video frame in such a way that the location of placental blood vessels is discernible at a glance.
Methods
In a process approved by an institutional review board, intraoperative videos were acquired from ten fetoscopic laser photocoagulation surgeries performed at Yale New Haven Hospital. A total of 345 video frames were selected from these videos at regularly spaced time intervals. The video frames were segmented once by an expert human rater (a clinician) and once by a novice but trained human rater (an undergraduate student). The segmentations were used to train a fully convolutional neural network of 25 layers.
Results
The neural network produced segmentations with high similarity to the ground-truth segmentations of the expert human rater (sensitivity = 92.15% ± 10.69%) and significantly more accurate than those of the novice human rater (sensitivity = 56.87% ± 21.64%; p < 0.01).
Conclusion
A convolutional neural network can be trained to segment placental blood vessels with near-human accuracy and can exceed the accuracy of novice human raters. Recombining these segmentations with the original fetoscopic video frames can produce enhanced frames in which blood vessels are easily detectable. This has significant implications for aiding fetoscopic surgeons, especially trainees who are not yet at an expert level.
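The recombination step, blending a predicted vessel mask back onto the fetoscopic frame, can be sketched as below. The probability map, threshold, tint colour, and blending weight are illustrative choices, and the segmentation network itself (a 25-layer fully convolutional network in the paper) is treated as a black box.

```python
import cv2
import numpy as np

def enhance_frame(frame_bgr, vessel_prob, threshold=0.5, alpha=0.4):
    """Blend a deep-learned vessel probability map back onto a video frame.

    vessel_prob: float map in [0, 1] from the segmentation network (a
    stand-in here). Vessel pixels are tinted green so their location is
    visible at a glance while the underlying anatomy stays readable.
    """
    mask = vessel_prob >= threshold
    overlay = frame_bgr.copy()
    overlay[mask] = (0, 255, 0)  # green highlight on vessel pixels
    return cv2.addWeighted(overlay, alpha, frame_bgr, 1.0 - alpha, 0.0)
```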
Affiliation(s)
- Metehan Imamoglu
- Yale University School of Medicine, New Haven, USA; Department of Obstetrics and Gynecology, Yale University School of Medicine, New Haven, USA; Yale Fetal Care Center, New Haven, USA
- Michael Dombrowski
- Yale University School of Medicine, New Haven, USA; Department of Obstetrics and Gynecology, Yale University School of Medicine, New Haven, USA; Yale Fetal Care Center, New Haven, USA
- Xenophon Papademetris
- Yale University School of Medicine, New Haven, USA; Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, USA; Department of Biomedical Engineering, Yale University School of Medicine, New Haven, USA
- Mert O Bahtiyar
- Yale University School of Medicine, New Haven, USA; Department of Obstetrics and Gynecology, Yale University School of Medicine, New Haven, USA; Yale Fetal Care Center, New Haven, USA
- John Onofrey
- Yale University School of Medicine, New Haven, USA; Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, USA