1. Liu S, Fan J, Yang Y, Xiao D, Ai D, Song H, Wang Y, Yang J. Monocular endoscopy images depth estimation with multi-scale residual fusion. Comput Biol Med 2024; 169:107850. [PMID: 38145602] [DOI: 10.1016/j.compbiomed.2023.107850]
Abstract
BACKGROUND Monocular depth estimation plays a fundamental role in clinical endoscopic surgery. However, the coherent illumination, smooth surfaces, and texture-less nature of endoscopy images present significant challenges to traditional depth estimation methods, which struggle to perceive depth accurately in such settings. METHOD To overcome these challenges, this paper proposes a novel multi-scale residual fusion method for estimating the depth of monocular endoscopy images. Specifically, we address the issue of coherent illumination by leveraging image frequency domain component space transformation, thereby enhancing the stability of the scene's light source. Moreover, we employ an image radiation intensity attenuation model to estimate the initial depth map. Finally, to refine the accuracy of depth estimation, we utilize a multi-scale residual fusion optimization technique. RESULTS To evaluate the performance of the proposed method, extensive experiments were conducted on public datasets. The structural similarity measures for continuous frames in three distinct clinical data scenes reached 0.94, 0.82, and 0.84, respectively, demonstrating the effectiveness of the approach in capturing the intricate details of endoscopy images. Furthermore, the depth estimation accuracy reached 89.3% and 91.2% on the two models' data, respectively, underscoring the robustness of the method. CONCLUSIONS Overall, the promising results obtained on public datasets highlight the significant potential of the method for clinical applications, facilitating reliable depth estimation and enhancing the quality of endoscopic surgical procedures.
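As an illustration of the intensity-attenuation idea at the core of the method, a minimal sketch follows. It assumes an idealized light source co-located with the camera and an inverse-square falloff, so depth scales as sqrt(k / I); the paper's actual attenuation model, constants, and frequency-domain preprocessing are not reproduced here.

```python
import numpy as np

def initial_depth_from_intensity(gray, k=1.0, eps=1e-6):
    """Rough initial depth map from image brightness.

    Assumes a light source co-located with the camera whose
    reflected intensity falls off as I ~ k / d**2, giving
    d ~ sqrt(k / I). Illustrative stand-in only, not the
    paper's radiation intensity attenuation model.
    """
    intensity = gray.astype(np.float64) / 255.0
    return np.sqrt(k / (intensity + eps))

# Brighter pixels map to smaller (closer) depths.
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
depth = initial_depth_from_intensity(frame)
```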
Affiliation(s)
- Shiyuan Liu: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China; China Center for Information Industry Development, Beijing 100081, China
- Jingfan Fan: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yun Yang: Department of General Surgery, Beijing Friendship Hospital, Capital Medical University, National Clinical Research Center for Digestive Diseases, Beijing 100050, China
- Deqiang Xiao: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song: School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
- Yongtian Wang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
2. Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023; 23:9872. [PMID: 38139718] [PMCID: PMC10748263] [DOI: 10.3390/s23249872]
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed a wide range of methods and achieved breakthroughs in functionality. However, given the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Further equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could enhance their performance in IGS several-fold. The goal of this narrative review is to organize the key components of IGS, in the aspects of medical image processing and visualization, with new perspectives and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. We hope that this survey will shed light on the future of IGS in the face of challenges and opportunities for research in medical image processing and visualization.
Affiliation(s)
- Zhefan Lin: School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China; ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang: School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China; ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
3. Long Z, Chi Y, Yu X, Jiang Z, Yang D. ArthroNavi framework: stereo endoscope-guided instrument localization for arthroscopic minimally invasive surgeries. J Biomed Opt 2023; 28:106002. [PMID: 37841507] [PMCID: PMC10576396] [DOI: 10.1117/1.jbo.28.10.106002]
Abstract
Significance As an example of a minimally invasive arthroscopic surgical procedure, arthroscopic osteochondral autograft transplantation (OAT) is a common option for repairing focal cartilage defects in the knee joints. Arthroscopic OAT offers considerable benefits to patients, such as less post-operative pain and shorter hospital stays. However, performing OAT arthroscopically is an extremely demanding task because the osteochondral graft harvester must remain perpendicular to the cartilage surface to avoid differences in angulation. Aim We present a practical ArthroNavi framework for instrument pose localization that combines a self-developed stereo endoscope with electromagnetic computation, equipping surgeons with surgical navigation assistance that eases the operational constraints of arthroscopic OAT surgery. Approach A prototype of a stereo endoscope specifically suited to texture-less scenes is described in detail. The proposed framework employs the semi-global matching algorithm, integrated with the marching cubes method, for real-time processing of the 3D point cloud. To address issues of initialization and occlusion, a display method based on patient tracking coordinates is proposed for robust intra-operative navigation. A geometrical constraint method that utilizes the 3D point cloud is used to compute the instrument pose. Finally, a hemisphere tabulation method is presented for evaluating pose accuracy. Results Experimental results show that our endoscope achieves 3D shape measurement with an accuracy of <730 μm. The mean pose localization error is 15.4 deg (range 10.3 deg to 21.3 deg; standard deviation 3.08 deg) with the ArthroNavi method, which is within the same order of magnitude as that achieved by experienced surgeons using a freehand technique. Conclusions The effectiveness of the proposed ArthroNavi has been validated on a phantom femur. This framework may provide a new computer-aided option for arthroscopic OAT surgery.
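The abstract names a generic stereo pipeline: semi-global matching to a disparity map, then back-projection to a 3D point cloud. The OpenCV sketch below shows that pipeline in its textbook form; it is not the authors' implementation, and the matcher parameters, file names, and reprojection matrix Q are placeholders that would normally come from calibrating the stereo endoscope.

```python
import numpy as np
import cv2

# Placeholder reprojection matrix; in practice it comes from
# cv2.stereoRectify() on the calibrated endoscope pair.
Q = np.float32([[1, 0, 0, -320],      # -cx
                [0, 1, 0, -240],      # -cy
                [0, 0, 0, 700],       # focal length in pixels
                [0, 0, 1 / 4.0, 0]])  # 1 / baseline

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)   # placeholder files
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; small blocks suit texture-less tissue.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                             blockSize=5, P1=8 * 5 * 5, P2=32 * 5 * 5,
                             uniquenessRatio=10, speckleWindowSize=100)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

# Back-project valid disparities into a 3D point cloud.
points = cv2.reprojectImageTo3D(disparity, Q)
cloud = points[disparity > 0]
```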
Affiliation(s)
- Zhongjie Long: Beijing Information Science & Technology University, School of Electromechanical Engineering, Beijing, China
- Yongting Chi: Beijing Information Science & Technology University, School of Electromechanical Engineering, Beijing, China
- Xiaotong Yu: Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China
- Zhouxiang Jiang: Beijing Information Science & Technology University, School of Electromechanical Engineering, Beijing, China
- Dejin Yang: Beijing Jishuitan Hospital, Capital Medical School, 4th Clinical College of Peking University, Department of Orthopedics, Beijing, China
4. van der Schot AM, Sikkel E, Spaanderman MEA, Vandenbussche FP. Computer-assisted fetal laser surgery in the treatment of twin-to-twin transfusion syndrome: recent trends and prospects. Prenat Diagn 2022; 42:1225-1234. [PMID: 35983630] [PMCID: PMC9541851] [DOI: 10.1002/pd.6225]
Abstract
Fetal laser surgery has emerged as the preferred treatment for twin-to-twin transfusion syndrome (TTTS). However, the limited field of view of the fetoscope and the complexity of the procedure make the treatment challenging. Preoperative planning and intraoperative guidance solutions have therefore been proposed to cope with these challenges. This review surveys the literature on computer-assisted software solutions for TTTS. These solutions are classified by the pre- or intraoperative phase of the procedure in which they apply and further categorized by the hardware and software approaches discussed. In addition, the review evaluates the current maturity of these technologies using the technology readiness level and enumerates the aspects necessary to bring them into clinical practice.
Affiliation(s)
- Esther Sikkel: Department of Obstetrics & Gynecology, Radboudumc/Amalia Children's Hospital, Nijmegen, the Netherlands
- Marc Erich August Spaanderman: Department of Obstetrics & Gynecology, Radboudumc/Amalia Children's Hospital, Nijmegen, the Netherlands; Department of Obstetrics & Gynecology, Maastricht UMC+, Maastricht, the Netherlands
5. Yang W, Knorr F, Latka I, Vogt M, Hofmann GO, Popp J, Schie IW. Real-time molecular imaging of near-surface tissue using Raman spectroscopy. Light Sci Appl 2022; 11:90. [PMID: 35396506] [PMCID: PMC8993924] [DOI: 10.1038/s41377-022-00773-0]
Abstract
The steady progress in medical diagnosis and treatment of diseases largely hinges on the continued development and improvement of modern imaging modalities. Raman spectroscopy has attracted increasing attention for clinical applications because it is label-free and non-invasive and delivers molecular fingerprinting information about a sample. In combination with fiber optic probes, it also allows easy access to different body parts of a patient. However, image acquisition with fiber optic probes has not previously been possible. Here, we introduce a fiber optic probe-based Raman imaging system for real-time visualization of molecular data and chemical boundaries, both on a computer screen and in the physical world. The approach is built around a computer vision-based positional tracking system in conjunction with photometric stereo and augmented and mixed chemical reality, enabling molecular imaging and direct visualization of molecular boundaries on three-dimensional surfaces. The proposed approach achieves a spatial resolution of 0.5 mm in the transverse plane and a topology resolution of 0.6 mm, with a spectral sampling frequency of 10 Hz, and can image large tissue areas in a few minutes, making it highly suitable for clinical tissue-boundary demarcation. A variety of biological samples, including distributions of pharmaceutical compounds, a brain-tumor phantom, and various types of sarcoma, have been characterized, showing that the system enables rapid and intuitive assessment of molecular boundaries.
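The surface-topology step pairs positional tracking with photometric stereo. As a grounding example only, the numpy sketch below implements textbook Lambertian photometric stereo, recovering per-pixel normals and albedo from k images under known light directions; real tissue only approximates the Lambertian assumption, and the paper's pipeline is more elaborate.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Per-pixel surface normals and albedo via least squares.

    images:     (k, h, w) grayscale stack under k known lights.
    light_dirs: (k, 3) unit light-direction vectors.
    Solves I = L @ (albedo * n) for each pixel, the classical
    Lambertian model.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1).astype(np.float64)        # (k, h*w)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / (albedo + 1e-8)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```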
Affiliation(s)
- Wei Yang: Leibniz Institute of Photonic Technology Jena, Albert-Einstein-Straße 9, 07745 Jena, Germany
- Florian Knorr: Leibniz Institute of Photonic Technology Jena, Albert-Einstein-Straße 9, 07745 Jena, Germany
- Ines Latka: Leibniz Institute of Photonic Technology Jena, Albert-Einstein-Straße 9, 07745 Jena, Germany
- Matthias Vogt: Department of Trauma, Hand and Reconstructive Surgery, University Hospital Jena, Am Klinikum 1, 07747 Jena, Germany
- Gunther O Hofmann: Department of Trauma, Hand and Reconstructive Surgery, University Hospital Jena, Am Klinikum 1, 07747 Jena, Germany
- Jürgen Popp: Leibniz Institute of Photonic Technology Jena, Albert-Einstein-Straße 9, 07745 Jena, Germany; Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich-Schiller University Jena, Helmholtzweg 4, 07743 Jena, Germany
- Iwan W Schie: Leibniz Institute of Photonic Technology Jena, Albert-Einstein-Straße 9, 07745 Jena, Germany; Department of Medical Engineering and Biotechnology, University of Applied Sciences Jena, Carl-Zeiss-Promenade 2, 07745 Jena, Germany
6. Lian J, Zhang M, Jiang N, Bi W, Dong X. Feature Extraction of Kidney Tissue Image Based on Ultrasound Image Segmentation. J Healthc Eng 2021; 2021:9915697. [PMID: 33986943] [PMCID: PMC8093061] [DOI: 10.1155/2021/9915697]
Abstract
Kidney tissue images are affected by interference from surrounding tissue, which makes it difficult to extract image features and to judge lesion characteristics and types by automated feature recognition. To improve the efficiency and accuracy of feature extraction from kidney tissue images, this paper adapts analysis methods developed for ultrasonic cardiac images and proposes a feature extraction method based on ultrasound image segmentation. The study combines the optical flow method and the speckle tracking algorithm to select the best image tracking method, and optimizes algorithm speed using the full search method and the two-dimensional logarithmic search method. The performance of the proposed method is verified through comparative experiments, and the data are analyzed with statistical methods. The results show that the proposed algorithm is effective.
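For context on the two search strategies the abstract names, the sketch below contrasts exhaustive full search with the two-dimensional logarithmic search for block-matching speckle tracking, under the conventional sum-of-absolute-differences cost. It is an illustrative reconstruction, not the paper's code; the block size and search range are arbitrary defaults.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-size blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def get_block(img, y, x, bs):
    """bs x bs block at (y, x), or None if it falls outside the image."""
    if y < 0 or x < 0 or y + bs > img.shape[0] or x + bs > img.shape[1]:
        return None
    return img[y:y + bs, x:x + bs]

def full_search(prev, curr, y, x, bs=16, r=8):
    """Exhaustive search over every displacement in a (2r+1)^2 window."""
    block = curr[y:y + bs, x:x + bs]
    best_cost, best = np.inf, (0, 0)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            cand = get_block(prev, y + dy, x + dx, bs)
            if cand is None:
                continue
            cost = sad(block, cand)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

def log_search(prev, curr, y, x, bs=16, r=8):
    """2D logarithmic search: probe a '+' pattern, halve the step."""
    block = curr[y:y + bs, x:x + bs]
    cy = cx = 0
    step = max(r // 2, 1)
    while step >= 1:
        best_cost, best = np.inf, (cy, cx)
        for dy, dx in ((0, 0), (-step, 0), (step, 0), (0, -step), (0, step)):
            cand = get_block(prev, y + cy + dy, x + cx + dx, bs)
            if cand is None:
                continue
            cost = sad(block, cand)
            if cost < best_cost:
                best_cost, best = cost, (cy + dy, cx + dx)
        if best == (cy, cx):
            step //= 2  # no improvement: refine the step
        else:
            cy, cx = best
    return cy, cx
```

The full search is optimal within the window but costs (2r+1)^2 block comparisons per point; the logarithmic search reaches a comparable minimum in a handful of probes, which is why it is the usual speed optimization.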
Affiliation(s)
- Jie Lian: Department of Ultrasound, Harbin Medical University Fourth Hospital, Harbin 150001, Heilongjiang, China
- Mingyu Zhang: Department of Cardiology, Harbin Medical University Fourth Hospital, Harbin 150001, Heilongjiang, China
- Na Jiang: Department of Ultrasound, Harbin Medical University Fourth Hospital, Harbin 150001, Heilongjiang, China
- Wei Bi: Department of Ultrasound, Harbin Medical University Fourth Hospital, Harbin 150001, Heilongjiang, China
- Xiaoqiu Dong: Department of Ultrasound, Harbin Medical University Fourth Hospital, Harbin 150001, Heilongjiang, China
7. Yang L, Kobayashi E. Review on vision-based tracking in surgical navigation. IET Cyber-Systems and Robotics 2020. [DOI: 10.1049/iet-csr.2020.0013]
Affiliation(s)
- Liangjing Yang: Zhejiang University/University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining, People's Republic of China; School of Mechanical Engineering, Zhejiang University, Hangzhou, People's Republic of China; Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, USA
- Etsuko Kobayashi: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan; Institute of Advanced Biomedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
8. Daunizeau L, Nguyen A, Le Garrec M, Chapelon JY, N'Djin WA. Robot-assisted ultrasound navigation platform for 3D HIFU treatment planning: initial evaluation for conformal interstitial ablation. Comput Biol Med 2020; 124:103941. [PMID: 32818742] [DOI: 10.1016/j.compbiomed.2020.103941]
Abstract
Interstitial ultrasound-guided high intensity focused ultrasound (USgHIFU) therapy has the potential to deliver ablative treatments that conform to the target tumor. In this study, a robot-assisted US-navigation platform was developed for 3D US guidance and planning of conformal HIFU ablations. The platform was used to evaluate a conformal therapeutic strategy associated with an interstitial dual-mode USgHIFU catheter prototype (64-element linear array, measured central frequency f = 6.5 MHz) developed for the treatment of hepatocellular carcinoma (HCC). The platform included a 3D navigation environment communicating in real time with an open research dual-mode US scanner/HIFU generator and a robotic arm on which the USgHIFU catheter was mounted. 3D US navigation was evaluated in vitro for guiding and planning conformal HIFU ablations using a tumor-mimic model in porcine liver. Tumor-mimic volumes were then used as targets for evaluating conformal HIFU treatment planning in simulation. Eight tumor-mimics (ovoid- or disc-shaped; sizes: 3-29 cm3) were created and visualized in liver using interstitial 2D US imaging. Robot-assisted spatial manipulation of these images and real-time 3D navigation allowed reconstruction of 3D B-mode US images for accurate tumor-mimic volume estimation (relative error: 4 ± 5%). Sectorial and full-revolution HIFU scanning (angular sectors: 88-360°) could both produce conformal ablations of the tumor volumes provided their radii remained ≤24 mm. The presented US navigation-guided HIFU procedure demonstrated advantages for developing conformal interstitial therapies in standard operating rooms. Moreover, the modularity of the developed platform makes it potentially useful for developing other HIFU approaches.
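The reported 4 ± 5% relative error concerns tumor-mimic volume estimation from the reconstructed 3D B-mode images. Assuming that step reduces, in the standard way, to voxel counting over a binary segmentation (the paper's reconstruction itself is more involved), a minimal sketch:

```python
import numpy as np

def tumor_volume_cm3(mask, voxel_mm):
    """Volume of a binary 3D segmentation: voxel count x voxel volume.

    mask:     boolean (z, y, x) array from the reconstructed 3D B-mode image.
    voxel_mm: (dz, dy, dx) voxel spacing in millimetres.
    """
    voxel_cm3 = float(np.prod(voxel_mm)) / 1000.0  # mm^3 -> cm^3
    return mask.sum() * voxel_cm3

def relative_error(estimated, reference):
    """Relative volume error, the metric quoted in the abstract."""
    return abs(estimated - reference) / reference
```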
Affiliation(s)
- L Daunizeau: LabTAU, INSERM, Centre Léon Bérard, Université Lyon 1, Univ Lyon, F-69003 Lyon, France
- A Nguyen: LabTAU, INSERM, Centre Léon Bérard, Université Lyon 1, Univ Lyon, F-69003 Lyon, France
- M Le Garrec: LabTAU, INSERM, Centre Léon Bérard, Université Lyon 1, Univ Lyon, F-69003 Lyon, France
- J Y Chapelon: LabTAU, INSERM, Centre Léon Bérard, Université Lyon 1, Univ Lyon, F-69003 Lyon, France
- W A N'Djin: LabTAU, INSERM, Centre Léon Bérard, Université Lyon 1, Univ Lyon, F-69003 Lyon, France
9. Liu X, Sinha A, Ishii M, Hager GD, Reiter A, Taylor RH, Unberath M. Dense Depth Estimation in Monocular Endoscopy With Self-Supervised Learning Methods. IEEE Trans Med Imaging 2020; 39:1438-1447. [PMID: 31689184] [PMCID: PMC7289272] [DOI: 10.1109/tmi.2019.2950936]
Abstract
We present a self-supervised approach to training convolutional neural networks for dense depth estimation from monocular endoscopy data without a priori modeling of anatomy or shading. Our method requires only monocular endoscopic videos and a multi-view stereo method, e.g., structure from motion, to supervise learning in a sparse manner. Consequently, it requires neither manual labeling nor patient computed tomography (CT) scans in the training and application phases. In a cross-patient experiment using CT scans as ground truth, the proposed method achieved submillimeter mean residual error. In a comparison with recent self-supervised depth estimation methods designed for natural video, evaluated on in vivo sinus endoscopy data, the proposed approach outperforms the previous methods by a large margin. The source code for this work is publicly available online at https://github.com/lppllppl920/EndoscopyDepthEstimation-Pytorch.
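The authors' code is public at the repository above. As a conceptual illustration only (not the paper's loss), the PyTorch sketch below captures the general idea of supervising a dense depth network at sparse structure-from-motion points: SfM depth is defined only up to scale, so the prediction is scale-aligned per image before the sparse residual is penalized.

```python
import torch

def sparse_sfm_loss(pred_depth, sfm_depth, mask, eps=1e-6):
    """Scale-aligned sparse supervision for monocular depth.

    pred_depth: (B, 1, H, W) network output.
    sfm_depth:  (B, 1, H, W) sparse depths from structure from motion,
                valid only where mask == 1 and known only up to scale.
    """
    m = mask.float()
    # Least-squares scale aligning the prediction to the SfM points.
    scale = (sfm_depth * pred_depth * m).sum(dim=(1, 2, 3)) / \
            ((pred_depth ** 2 * m).sum(dim=(1, 2, 3)) + eps)
    aligned = scale.view(-1, 1, 1, 1) * pred_depth
    # Penalize the residual at the sparse points only.
    return ((aligned - sfm_depth).abs() * m).sum() / (m.sum() + eps)
```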
10. Recent Trends, Technical Concepts and Components of Computer-Assisted Orthopedic Surgery Systems: A Comprehensive Review. Sensors (Basel) 2019; 19:5199. [PMID: 31783631] [PMCID: PMC6929084] [DOI: 10.3390/s19235199]
Abstract
Computer-assisted orthopedic surgery (CAOS) systems have become one of the most important and challenging types of system in clinical orthopedics, as they enable precise treatment of musculoskeletal diseases using modern clinical navigation systems and surgical tools. This paper presents a comprehensive review of recent trends in and capabilities of CAOS systems. There are three types of surgical planning systems: systems based on volumetric images (computed tomography (CT), magnetic resonance imaging (MRI), or ultrasound images); systems that utilize 2D or 3D fluoroscopic images; and systems that utilize kinetic information about the joints and morphological information about the target bones. The review focuses on three fundamental aspects of CAOS systems: their essential components, the types of CAOS systems, and the mechanical tools used in them. We also outline the possibilities for using ultrasound computer-assisted orthopedic surgery (UCAOS) systems as an alternative to conventionally used CAOS systems.
11. Yang W, Mondol AS, Stiebing C, Marcu L, Popp J, Schie IW. Raman ChemLighter: fiber optic Raman probe imaging in combination with augmented chemical reality. J Biophotonics 2019; 12:e201800447. [PMID: 30848073] [DOI: 10.1002/jbio.201800447]
Abstract
Raman spectroscopy with a fiber optic probe combines non-contact, label-free molecular fingerprinting with high mechanical flexibility for biomedical, clinical, and industrial applications. Inherently, fiber optic Raman probes provide information from a single point only, and the acquisition of images is not straightforward. For many applications, however, it is crucial to determine the molecular distribution across the sample and provide imaging information. Here, we propose an approach to Raman imaging with a handheld fiber optic probe, built around computer vision-based assessment of positional information with simultaneous acquisition of spectroscopic information. Combining this implementation with real-time data processing and analysis makes it possible not only to perform fiber-based Raman imaging but also to render an augmented chemical reality image of the molecular distribution of the sample surface in real time. We experimentally demonstrate that, using our approach, borders between different biomolecular compounds can be determined and distinguished in a short time. Because the method can be transferred to other optical probes and other spectroscopic techniques, the implementation is expected to have a large impact on clinical, biomedical, and industrial applications.
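Conceptually, probe-based imaging of this kind reduces to binning single-point spectra into an image grid at the tracked probe position. The sketch below illustrates only that bookkeeping; the grid size, channel count, and incoming (position, spectrum) stream are placeholders, not the authors' implementation.

```python
import numpy as np

class ProbeImageAccumulator:
    """Accumulate single-point probe readings into a 2D chemical map."""

    def __init__(self, shape, n_channels):
        self.sums = np.zeros(shape + (n_channels,))
        self.counts = np.zeros(shape, dtype=np.int64)

    def add(self, xy_px, spectrum):
        """Bin one spectrum at the tracked pixel position (x, y)."""
        x, y = xy_px
        if 0 <= y < self.sums.shape[0] and 0 <= x < self.sums.shape[1]:
            self.sums[y, x] += spectrum
            self.counts[y, x] += 1

    def image(self):
        """Mean spectrum per visited pixel (NaN where never visited)."""
        c = np.where(self.counts > 0, self.counts, 1)[..., None]
        img = self.sums / c
        img[self.counts == 0] = np.nan
        return img

# Usage: feed (tracked position, spectrum) pairs as they arrive.
acc = ProbeImageAccumulator((480, 640), n_channels=1024)
acc.add((320, 240), np.random.rand(1024))
chemical_map = acc.image()
```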
Affiliation(s)
- Wei Yang: Leibniz Institute of Photonic Technology Jena, Jena, Germany
- Clara Stiebing: Leibniz Institute of Photonic Technology Jena, Jena, Germany
- Laura Marcu: Department of Biomedical Engineering, University of California Davis, Davis, California
- Jürgen Popp: Leibniz Institute of Photonic Technology Jena, Jena, Germany; Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich-Schiller University Jena, Jena, Germany
- Iwan W Schie: Leibniz Institute of Photonic Technology Jena, Jena, Germany
12. Song E, Yu F, Liu H, Cheng N, Li Y, Jin L, Hung CC. A Novel Endoscope System for Position Detection and Depth Estimation of the Ureter. J Med Syst 2016; 40:266. [DOI: 10.1007/s10916-016-0607-1]
13. Gorpas D, Ma D, Bec J, Yankelevich DR, Marcu L. Real-Time Visualization of Tissue Surface Biochemical Features Derived From Fluorescence Lifetime Measurements. IEEE Trans Med Imaging 2016; 35:1802-1811. [PMID: 26890641] [PMCID: PMC5131727] [DOI: 10.1109/tmi.2016.2530621]
Abstract
Fiber-based fluorescence lifetime imaging has shown great potential for intraoperative diagnosis and guidance of surgical procedures. Here we describe a novel method addressing a significant challenge for the practical implementation of this technique: the real-time display of quantified biochemical or functional tissue properties superimposed on the interrogated area. Specifically, an aiming beam (450 nm) generated by a continuous-wave laser was merged with the pulsed fluorescence excitation light in a single delivery/collection fiber and then imaged and segmented using a color-based algorithm. We demonstrate that this approach enables continuous delineation of the interrogated location and dynamic augmentation of the acquired frames with the corresponding fluorescence decay parameters. The method was evaluated on a fluorescence phantom and fresh tissue samples. Current results demonstrate that 34 frames per second can be achieved when augmenting videos at 640 × 512 pixel resolution. We also show that the spatial resolution of the fluorescence lifetime map depends on the tissue optical properties, the scanning speed, and the frame rate. The Dice similarity coefficient between the fluorescence phantom and the reconstructed maps was estimated to be as high as 93%. The reported method could become a valuable tool for augmenting the surgeon's field of view with diagnostic information derived from the analysis of fluorescence lifetime data in real time using handheld, automated, or endoscopic scanning systems. The method also provides a means of keeping tissue light exposure within safety limits, and this study provides a framework for using an aiming beam with other point-spectroscopy applications.
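The color-based segmentation of the 450 nm aiming beam is the step that localizes each measurement within the video frame. One plausible form of such a step, shown below with OpenCV, is HSV thresholding followed by largest-blob selection, together with the Dice coefficient used in the evaluation; the threshold values are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
import cv2

# Illustrative HSV range for a blue 450 nm aiming-beam spot.
BLUE_LO = np.array([100, 80, 80])
BLUE_HI = np.array([130, 255, 255])

def segment_aiming_beam(frame_bgr):
    """Binary mask of the aiming-beam spot via HSV color thresholding."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, BLUE_LO, BLUE_HI)
    # Keep the largest connected blob; the beam is a single spot.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n > 1:
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        mask = (labels == largest).astype(np.uint8) * 255
    return mask

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a > 0, b > 0
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-8)
```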
Affiliation(s)
- Dimitris Gorpas: Department of Biomedical Engineering, University of California Davis, CA 95616, USA
- Dinglong Ma: Department of Biomedical Engineering, University of California Davis, CA 95616, USA
- Julien Bec: Department of Biomedical Engineering, University of California Davis, CA 95616, USA
- Diego R. Yankelevich: Department of Biomedical Engineering and Department of Electrical and Computer Engineering, University of California Davis, CA 95616, USA
- Laura Marcu: Department of Biomedical Engineering, University of California Davis, CA 95616, USA
14. Yang L, Wang J, Ando T, Kubota A, Yamashita H, Sakuma I, Chiba T, Kobayashi E. Self-contained image mapping of placental vasculature in 3D ultrasound-guided fetoscopy. Surg Endosc 2015; 30:4136-4149. [DOI: 10.1007/s00464-015-4690-z]
15. Yang L, Wang J, Ando T, Kubota A, Yamashita H, Sakuma I, Chiba T, Kobayashi E. Towards scene adaptive image correspondence for placental vasculature mosaic in computer assisted fetoscopic procedures. Int J Med Robot 2015; 12:375-386. [PMID: 26443691] [DOI: 10.1002/rcs.1700]
Abstract
BACKGROUND Visualization of the vast placental vasculature is crucial in fetoscopic laser photocoagulation for twin-to-twin transfusion syndrome treatment. However, vasculature mosaicking is challenging due to the fluctuating imaging conditions during fetoscopic surgery. METHOD A scene-adaptive, feature-based approach to image correspondence in free-hand endoscopic placental video is proposed. It contributes to existing techniques by introducing a failure detection method based on statistical attributes of the feature distribution, and an updating mechanism that self-tunes parameters to recover from registration failures. RESULTS Validations on endoscopic image sequences of a phantom and a monkey placenta demonstrate mismatch recovery. In two 100-frame sequences, automatic self-tuned results improved by 8% over manual experience-based tuning, with only a slight 2.5% deterioration relative to exhaustive tuning (the gold standard). CONCLUSION This scene-adaptive image correspondence approach, which is not restricted to a fixed set of generalized parameters, is suitable for applications with dynamically changing imaging conditions.
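The failure detector and self-tuning loop admit a concrete reading. The sketch below is one assumed form, not the paper's method: declare a registration failed when the surviving matches are too few or spatially collapsed, then relax the detector threshold and retry. The statistics, threshold values, and the detect_and_match callback are all hypothetical.

```python
import numpy as np

def matches_ok(matched_pts, min_matches=20, min_spread_px=30.0):
    """Failure detector from statistics of the matched-feature distribution.

    matched_pts: (N, 2) image coordinates of inlier matches. Fails when
    matches are too few or clustered in a small region (spread measured
    as mean distance to the centroid).
    """
    if len(matched_pts) < min_matches:
        return False
    spread = np.linalg.norm(
        matched_pts - matched_pts.mean(axis=0), axis=1).mean()
    return spread >= min_spread_px

def self_tune(detect_and_match, threshold, lo=5.0, factor=0.7):
    """Relax the detector threshold until matching recovers."""
    pts = detect_and_match(threshold)  # hypothetical user-supplied pipeline
    while not matches_ok(pts) and threshold > lo:
        threshold *= factor            # accept weaker features and retry
        pts = detect_and_match(threshold)
    return threshold, pts
```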
Affiliation(s)
- Liangjing Yang: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Junchen Wang: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Takehiro Ando: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Akihiro Kubota: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Hiromasa Yamashita: Clinical Research Center, National Center for Child Health and Development, Tokyo, Japan
- Ichiro Sakuma: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Toshio Chiba: Clinical Research Center, National Center for Child Health and Development, Tokyo, Japan
- Etsuko Kobayashi: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan