1. Jong LJS, Veluponnar D, Geldof F, Sanders J, Guimaraes MDS, Vrancken Peeters MJTFD, van Duijnhoven F, Sterenborg HJCM, Dashtbozorg B, Ruers TJM. Toward real-time margin assessment in breast-conserving surgery with hyperspectral imaging. Sci Rep 2025; 15:9556. [PMID: 40108280; PMCID: PMC11923364; DOI: 10.1038/s41598-025-94526-9]
Abstract
Margin assessment in breast-conserving surgery (BCS) remains a critical challenge, with 20-25% of cases resulting in inadequate tumor resection, increasing the risk of local recurrence and the need for additional treatment. In this study, we evaluate the diagnostic performance of hyperspectral imaging (HSI) as a non-invasive technique for assessing resection margins in ex vivo lumpectomy specimens. A dataset of over 200 lumpectomy specimens was collected using two hyperspectral cameras, and a classification algorithm was developed to distinguish between healthy and tumor tissue within margins of 0 and 2 mm. The proposed approach achieved its highest diagnostic performance at a 0 mm margin, with a sensitivity of 92%, specificity of 78%, accuracy of 83%, Matthews correlation coefficient of 68%, and an area under the curve of 89%. The entire resection surface could be imaged and evaluated within 10 minutes, providing a rapid and non-invasive alternative to conventional margin assessment techniques. These findings represent a significant advancement toward real-time intraoperative margin assessment, highlighting the potential of HSI to enhance surgical precision and reduce re-excision rates in BCS.
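The abstract does not detail the classification algorithm; purely as an illustrative sketch, the snippet below shows a generic pixel-wise healthy-versus-tumor classifier trained on labeled spectra from a hyperspectral cube. The random-forest model, array shapes, and synthetic data are assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: pixel-wise healthy-vs-tumor classification of a
# hyperspectral cube (H x W x bands). Not the pipeline used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, matthews_corrcoef

def train_pixel_classifier(cube, mask):
    """cube: (H, W, B) reflectance; mask: (H, W) labels, 1 = tumor, 0 = healthy."""
    X = cube.reshape(-1, cube.shape[-1])          # one spectrum per pixel
    y = mask.ravel()
    clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
    clf.fit(X, y)
    return clf

def evaluate(clf, cube, mask):
    X = cube.reshape(-1, cube.shape[-1])
    y = mask.ravel()
    prob = clf.predict_proba(X)[:, 1]
    pred = (prob >= 0.5).astype(int)
    return {"auc": roc_auc_score(y, prob), "mcc": matthews_corrcoef(y, pred)}

# Example with synthetic data standing in for a resection-surface image
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 100))                  # 100 spectral bands assumed
mask = (rng.random((64, 64)) > 0.7).astype(int)   # synthetic tumor labels
clf = train_pixel_classifier(cube, mask)
print(evaluate(clf, cube, mask))
```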
Affiliation(s)
- Lynn-Jade S Jong
- Image-Guided Surgery, Department of Surgery, Netherlands Cancer Institute, Plesmanlaan 121, Amsterdam, 1066 CX, The Netherlands
- Faculty of Science and Technology, University of Twente, Drienerlolaan 5, Enschede, 7522 NB, The Netherlands
- Dinusha Veluponnar
- Image-Guided Surgery, Department of Surgery, Netherlands Cancer Institute, Plesmanlaan 121, Amsterdam, 1066 CX, The Netherlands
- Faculty of Science and Technology, University of Twente, Drienerlolaan 5, Enschede, 7522 NB, The Netherlands
- Freija Geldof
- Image-Guided Surgery, Department of Surgery, Netherlands Cancer Institute, Plesmanlaan 121, Amsterdam, 1066 CX, The Netherlands
- Joyce Sanders
- Department of Pathology, Netherlands Cancer Institute, Plesmanlaan 121, Amsterdam, 1066 CX, The Netherlands
- Marcos Da Silva Guimaraes
- Department of Pathology, Netherlands Cancer Institute, Plesmanlaan 121, Amsterdam, 1066 CX, The Netherlands
- Frederieke van Duijnhoven
- Image-Guided Surgery, Department of Surgery, Netherlands Cancer Institute, Plesmanlaan 121, Amsterdam, 1066 CX, The Netherlands
- Henricus J C M Sterenborg
- Image-Guided Surgery, Department of Surgery, Netherlands Cancer Institute, Plesmanlaan 121, Amsterdam, 1066 CX, The Netherlands
- Behdad Dashtbozorg
- Image-Guided Surgery, Department of Surgery, Netherlands Cancer Institute, Plesmanlaan 121, Amsterdam, 1066 CX, The Netherlands
- Theo J M Ruers
- Image-Guided Surgery, Department of Surgery, Netherlands Cancer Institute, Plesmanlaan 121, Amsterdam, 1066 CX, The Netherlands
- Faculty of Science and Technology, University of Twente, Drienerlolaan 5, Enschede, 7522 NB, The Netherlands
2. Liu J, Zhang H, Tian JH, Su Y, Chen Y, Wang Y. R2D2-GAN: Robust Dual Discriminator Generative Adversarial Network for Microscopy Hyperspectral Image Super-Resolution. IEEE Trans Med Imaging 2024; 43:4064-4074. [PMID: 38861434; DOI: 10.1109/tmi.2024.3412033]
Abstract
High-resolution microscopy hyperspectral (HS) images can provide highly detailed spatial and spectral information, enabling the identification and analysis of biological tissues at a microscale level. Recently, significant efforts have been devoted to enhancing the resolution of HS images by leveraging high spatial resolution multispectral (MS) images. However, the inherent hardware constraints lead to a significant distribution gap between HS and MS images, posing challenges for image super-resolution within biomedical domains. This discrepancy may arise from various factors, including variations in camera imaging principles (e.g., snapshot and push-broom imaging), shooting positions, and the presence of noise interference. To address these challenges, we introduced a unique unsupervised super-resolution framework named R2D2-GAN. This framework utilizes a generative adversarial network (GAN) to efficiently merge the two data modalities and improve the resolution of microscopy HS images. Traditionally, supervised approaches have relied on intuitive and sensitive loss functions, such as mean squared error (MSE). Our method, trained in a real-world unsupervised setting, benefits from exploiting consistent information across the two modalities. It employs a game-theoretic strategy and dynamic adversarial loss, rather than relying solely on fixed training strategies for reconstruction loss. Furthermore, we have augmented our proposed model with a central consistency regularization (CCR) module, aiming to further enhance the robustness of the R2D2-GAN. Our experimental results show that the proposed method is accurate and robust for super-resolution images. We specifically tested our proposed method on both a real and a synthetic dataset, obtaining promising results in comparison to other state-of-the-art methods.
3. Leung JH, Karmakar R, Mukundan A, Lin WS, Anwar F, Wang HC. Technological Frontiers in Brain Cancer: A Systematic Review and Meta-Analysis of Hyperspectral Imaging in Computer-Aided Diagnosis Systems. Diagnostics (Basel) 2024; 14:1888. [PMID: 39272675; PMCID: PMC11394276; DOI: 10.3390/diagnostics14171888]
Abstract
Brain cancer is a substantial factor in cancer-related mortality, and timely identification of the disease remains difficult. The precision of diagnoses is significantly dependent on the proficiency of radiologists and neurologists. Although computer-aided diagnosis (CAD) algorithms hold potential for early detection, the majority of current research is hindered by modest sample sizes. This meta-analysis aims to comprehensively assess the diagnostic test accuracy (DTA) of CAD models specifically designed for the detection of brain cancer utilizing hyperspectral imaging (HSI) technology. We employ the QUADAS-2 criteria to select seven papers and classify the proposed methodologies according to the artificial intelligence method, cancer type, and publication year. To evaluate heterogeneity and diagnostic performance, we utilize Deeks' funnel plot, the forest plot, and accuracy charts. The results of our research suggest that there is no notable variation among the investigations. The CAD techniques that have been examined exhibit a notable level of precision in the automated detection of brain cancer. However, the absence of external validation hinders their potential implementation in real-time clinical settings. This highlights the necessity for additional studies to validate the CAD models for wider clinical applicability.
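For readers unfamiliar with diagnostic test accuracy (DTA) pooling, the sketch below illustrates the idea with naive aggregation of per-study confusion counts; the counts are synthetic placeholders, and the bivariate random-effects model normally used in such meta-analyses is not reproduced here.

```python
# Illustrative sketch of pooling diagnostic test accuracy across studies.
# The counts below are synthetic placeholders, not data from the review.
import numpy as np

# Per-study confusion counts: (true_pos, false_neg, true_neg, false_pos)
studies = np.array([
    [40, 5, 60, 8],
    [25, 3, 30, 4],
    [55, 9, 70, 10],
])

tp, fn, tn, fp = studies.T
sens = tp / (tp + fn)                 # per-study sensitivity
spec = tn / (tn + fp)                 # per-study specificity

# Naive pooled estimates (sum of counts across studies)
pooled_sens = tp.sum() / (tp.sum() + fn.sum())
pooled_spec = tn.sum() / (tn.sum() + fp.sum())

print("per-study sensitivity:", np.round(sens, 3))
print("per-study specificity:", np.round(spec, 3))
print(f"pooled sensitivity: {pooled_sens:.3f}, pooled specificity: {pooled_spec:.3f}")
```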
Affiliation(s)
- Joseph-Hang Leung
- Department of Radiology, Ditmanson Medical Foundation Chia-yi Christian Hospital, Chia Yi 60002, Taiwan
- Riya Karmakar
- Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan
- Arvind Mukundan
- Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan
- Wen-Shou Lin
- Neurology Division, Department of Internal Medicine, Kaohsiung Armed Forces General Hospital, 2, Zhongzheng 1st. Rd., Lingya District, Kaohsiung City 80284, Taiwan
- Fathima Anwar
- Faculty of Allied Health Sciences, The University of Lahore, 1-Km Defense Road, Lahore 54590, Punjab, Pakistan
- Hsiang-Chen Wang
- Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan
- Department of Medical Research, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, No. 2, Minsheng Road, Dalin, Chia Yi 62247, Taiwan
- Department of Technology Development, Hitspectra Intelligent Technology Co., Ltd., 8F.11-1, No. 25, Chenggong 2nd Rd., Qianzhen Dist., Kaohsiung City 80661, Taiwan
4. Fan Y, Gao E, Liu S, Guo R, Dong G, Tang X, Liao H, Gao T. RMAP-ResNet: Segmentation of brain tumor OCT images using residual multicore attention pooling networks for intelligent minimally invasive theranostics. Biomed Signal Process Control 2024; 90:105805. [DOI: 10.1016/j.bspc.2023.105805]
5. Schmidt VM, Zelger P, Wöss C, Fodor M, Hautz T, Schneeberger S, Huck CW, Arora R, Brunner A, Zelger B, Schirmer M, Pallua JD. Handheld hyperspectral imaging as a tool for the post-mortem interval estimation of human skeletal remains. Heliyon 2024; 10:e25844. [PMID: 38375262; PMCID: PMC10875450; DOI: 10.1016/j.heliyon.2024.e25844]
Abstract
In forensic medicine, estimating the post-mortem interval (PMI) of human skeletal remains can be challenging. Following death, bones undergo a series of chemical and physical transformations due to their interactions with the surrounding environment. Post-mortem changes have been assessed using various methods, but estimating the PMI of skeletal remains could still be improved. We propose a new methodology with a handheld hyperspectral imaging (HSI) system based on the first results from 104 human skeletal remains with PMIs ranging between 1 day and 2000 years. To differentiate between forensic and archaeological bone material, a convolutional neural network analyzed 65,000 distinct diagnostic spectra: the classification accuracy was 0.58, 0.62, 0.73, 0.81, and 0.98 for PMIs of 0-2 weeks, 2 weeks-6 months, 6 months-1 year, 1-10 years, and >100 years, respectively. In conclusion, HSI can be used in forensic medicine to distinguish bone materials >100 years old from those <10 years old with an accuracy of 98%. The model has adequate predictive performance, and handheld HSI could serve as a novel approach to objectively and accurately determine the PMI of human skeletal remains.
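As a rough illustration of classifying individual spectra into PMI classes, the following sketch defines a small 1-D convolutional network; the band count, architecture, and training data are assumptions and do not reproduce the network used in the study.

```python
# Hypothetical sketch: a small 1-D CNN that maps a single reflectance spectrum
# to one of five post-mortem interval (PMI) classes. Architecture, band count,
# and training setup are illustrative assumptions, not the study's network.
import torch
import torch.nn as nn

N_BANDS = 224          # assumed number of spectral bands
N_CLASSES = 5          # PMI classes as listed in the abstract

class SpectrumCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (N_BANDS // 4), N_CLASSES)

    def forward(self, x):            # x: (batch, 1, N_BANDS)
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One illustrative training step on random data
model = SpectrumCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

spectra = torch.randn(32, 1, N_BANDS)            # batch of 32 spectra
labels = torch.randint(0, N_CLASSES, (32,))
loss = loss_fn(model(spectra), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```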
Affiliation(s)
- Verena-Maria Schmidt
- Institute of Forensic Medicine, Medical University of Innsbruck, Muellerstraße 44, 6020 Innsbruck, Austria
- Philipp Zelger
- University Clinic for Hearing, Voice and Speech Disorders, Medical University of Innsbruck, Anichstrasse 35, 6020 Innsbruck, Austria
- Claudia Wöss
- Institute of Forensic Medicine, Medical University of Innsbruck, Muellerstraße 44, 6020 Innsbruck, Austria
- Margot Fodor
- OrganLifeTM, Department of Visceral, Transplant and Thoracic Surgery, Medical University of Innsbruck, Innsbruck, Austria
- Theresa Hautz
- OrganLifeTM, Department of Visceral, Transplant and Thoracic Surgery, Medical University of Innsbruck, Innsbruck, Austria
- Stefan Schneeberger
- OrganLifeTM, Department of Visceral, Transplant and Thoracic Surgery, Medical University of Innsbruck, Innsbruck, Austria
- Christian Wolfgang Huck
- Institute of Analytical Chemistry and Radiochemistry, University of Innsbruck, 6020 Innsbruck, Austria
- Rohit Arora
- Department of Orthopaedics and Traumatology, Medical University of Innsbruck, Anichstraße 35, 6020 Innsbruck, Austria
- Andrea Brunner
- Institute of Pathology, Neuropathology, and Molecular Pathology, Medical University of Innsbruck, Muellerstrasse 44, 6020 Innsbruck, Austria
- Bettina Zelger
- Institute of Pathology, Neuropathology, and Molecular Pathology, Medical University of Innsbruck, Muellerstrasse 44, 6020 Innsbruck, Austria
- Michael Schirmer
- Department of Internal Medicine, Clinic II, Medical University of Innsbruck, Anichstrasse 35, 6020 Innsbruck, Austria
- Johannes Dominikus Pallua
- Department of Orthopaedics and Traumatology, Medical University of Innsbruck, Anichstraße 35, 6020 Innsbruck, Austria
6. Kim BS, Cho M, Chung GE, Lee J, Kang HY, Yoon D, Cho WS, Lee JC, Bae JH, Kong HJ, Kim S. Density clustering-based automatic anatomical section recognition in colonoscopy video using deep learning. Sci Rep 2024; 14:872. [PMID: 38195632; PMCID: PMC10776865; DOI: 10.1038/s41598-023-51056-6]
Abstract
Recognizing anatomical sections during colonoscopy is crucial for diagnosing colonic diseases and generating accurate reports. While recent studies have endeavored to identify anatomical regions of the colon using deep learning, the deformable anatomical characteristics of the colon pose challenges for establishing a reliable localization system. This study presents a system utilizing 100 colonoscopy videos, combining density clustering and deep learning. Cascaded CNN models are employed to estimate the appendix orifice (AO), the flexures, and "outside of the body," sequentially. Subsequently, the DBSCAN algorithm is applied to identify anatomical sections. Clustering-based analysis integrates clinical knowledge and context based on the anatomical section within the model. We address challenges posed by colonoscopy images through non-informative frame removal during preprocessing. The image data are labeled by clinicians, and the system deduces section correspondence stochastically. The model categorizes the colon into three sections: right (cecum and ascending colon), middle (transverse colon), and left (descending colon, sigmoid colon, rectum). We estimated the appearance time of anatomical boundaries with an average error of 6.31 s for the AO, 9.79 s for the hepatic flexure (HF), 27.69 s for the splenic flexure (SF), and 3.26 s for outside of the body. The proposed method can facilitate future advancements toward AI-based automatic reporting, offering time-saving efficacy and standardization.
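The density-clustering step can be illustrated with a minimal sketch: DBSCAN groups per-frame landmark detections so that isolated false positives are discarded and a single appearance time is reported. The frame rate, eps, and detections below are assumptions for demonstration only.

```python
# Illustrative sketch: clustering per-frame landmark detections with DBSCAN to
# obtain a single robust appearance time for an anatomical landmark.
import numpy as np
from sklearn.cluster import DBSCAN

FPS = 30.0  # assumed video frame rate

# Suppose a frame-level CNN flags frames as containing the appendix orifice (AO);
# isolated false positives should be rejected, dense runs of frames kept.
ao_frames = np.concatenate([np.arange(900, 960),      # a dense, genuine detection run
                            np.array([120, 2400])])   # spurious single-frame hits

labels = DBSCAN(eps=5, min_samples=10).fit_predict(ao_frames.reshape(-1, 1))

# Keep the largest non-noise cluster and report its earliest frame as the boundary
valid = labels >= 0
if valid.any():
    biggest = np.bincount(labels[valid]).argmax()
    onset_frame = ao_frames[labels == biggest].min()
    print(f"estimated AO appearance time: {onset_frame / FPS:.1f} s")
else:
    print("no dense detection cluster found")
```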
Grants
- 1711179421, RS-2021-KD000006 the Korea Medical Device Development Fund grant funded by the Korean government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health and Welfare, and the Ministry of Food and Drug Safety)
- IITP-2023-2018-0-01833 the Ministry of Science and ICT, Korea under the Information Technology Research Center (ITRC) support program
Affiliation(s)
- Byeong Soo Kim
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
- Minwoo Cho
- Innovative Medical Technology Research Institute, Seoul National University Hospital, Seoul, 03080, Korea
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, 03080, Korea
- Department of Medicine, Seoul National University College of Medicine, Seoul, 03080, Korea
- Goh Eun Chung
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, Korea
- Jooyoung Lee
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, Korea
- Hae Yeon Kang
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, Korea
- Dan Yoon
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
- Woo Sang Cho
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
- Jung Chan Lee
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, 03080, Korea
- Institute of Bioengineering, Seoul National University, Seoul, 08826, Republic of Korea
- Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul, 03080, Korea
- Jung Ho Bae
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, Korea
- Hyoun-Joong Kong
- Innovative Medical Technology Research Institute, Seoul National University Hospital, Seoul, 03080, Korea
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, 03080, Korea
- Department of Medicine, Seoul National University College of Medicine, Seoul, 03080, Korea
- Medical Big Data Research Center, Seoul National University College of Medicine, Seoul, 03087, Korea
- Sungwan Kim
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, 03080, Korea
- Institute of Bioengineering, Seoul National University, Seoul, 08826, Republic of Korea
- Artificial Intelligence Institute, Seoul National University, Research Park Building 942, 2 Fl., Seoul, 08826, Korea
7. Zhang C, Zhang Z, Yu D, Cheng Q, Shan S, Li M, Mou L, Yang X, Ma X. Unsupervised band selection of medical hyperspectral images guided by data gravitation and weak correlation. Comput Methods Programs Biomed 2023; 240:107721. [PMID: 37506601; DOI: 10.1016/j.cmpb.2023.107721]
Abstract
BACKGROUND AND OBJECTIVE Medical hyperspectral images (MHSIs) are used for a contact-free examination of patients without harmful radiation. However, high-dimensionality images contain large amounts of data that are sparsely distributed in a high-dimensional space, which leads to the "curse of dimensionality" (called Hughes' phenomenon) and increases the complexity and cost of data processing and storage. Hence, there is a need for spectral dimensionality reduction before the clinical application of MHSIs. Some dimensionality-reducing strategies have been proposed; however, they distort the data within MHSIs. METHODS To compress dimensionality without destroying the original data structure, we propose a method that involves data gravitation and weak correlation-based ranking (DGWCR) for removing bands of noise from MHSIs while clustering signal-containing bands. Band clustering is done by using the connection centre evolution (CCE) algorithm and selecting the most representative bands in each cluster based on the composite force. The bands within the clusters are ranked using the new entropy-containing matrix, and a global ranking of bands is obtained by applying an S-shaped strategy. The source code is available at https://www.github.com/zhangchenglong1116/DGWCR. RESULTS Upon feeding the reduced-dimensional images into various classifiers, the experimental results demonstrated that the small number of bands selected by the proposed DGWCR consistently achieved higher classification accuracy than the original data. Unlike other reference methods (e.g. the latest deep-learning-based strategies), DGWCR chooses the spectral bands with the least redundancy and greatest discrimination. CONCLUSION In this study, we present a method for efficient band selection for MHSIs that alleviates the "curse of dimensionality". Experiments were validated with three MHSIs in the human brain, and they outperformed several other band selection methods, demonstrating the clinical potential of DGWCR.
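As a loose illustration of ranking bands by high information content and weak correlation with already-selected bands, the sketch below implements a generic greedy criterion; it is not the DGWCR algorithm released at the URL above, and the data and scoring choices are assumptions.

```python
# Illustrative band-selection sketch: pick bands that are informative (high
# variance) yet weakly correlated with the bands already chosen.
import numpy as np

def select_bands(cube, k=10):
    """cube: (H, W, B) hyperspectral image; returns indices of k selected bands."""
    X = cube.reshape(-1, cube.shape[-1]).astype(float)   # pixels x bands
    variance = X.var(axis=0)                              # proxy for information content
    corr = np.abs(np.corrcoef(X, rowvar=False))           # band-to-band correlation

    selected = [int(np.argmax(variance))]                 # start with the most informative band
    while len(selected) < k:
        redundancy = corr[:, selected].max(axis=1)        # similarity to already-chosen bands
        score = variance * (1.0 - redundancy)             # informative yet weakly correlated
        score[selected] = -np.inf
        selected.append(int(np.argmax(score)))
    return sorted(selected)

rng = np.random.default_rng(0)
cube = rng.random((50, 50, 60))                           # synthetic 60-band image
print("selected bands:", select_bands(cube, k=8))
```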
Affiliation(s)
- Chenglong Zhang
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
- Zhimin Zhang
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
- Dexin Yu
- Radiology Department, Qilu Hospital of Shandong University, Jinan 250000, China
- Qiyuan Cheng
- Medical Engineering Department, Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan 250021, China
- Shihao Shan
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
- Mengjiao Li
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
- Lichao Mou
- Chair of Data Science in Earth Observation, Technical University of Munich (TUM), Munich, 80333, Germany
- Xiaoli Yang
- School of Control Science and Engineering, Shandong University, Jinan 250061, China; Weifang Xinli Superconducting Magnet Technology Co., Ltd, Weifang 261005, China
- Xiaopeng Ma
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
8. Gao H, Wang M, Sun X, Cao X, Li C, Liu Q, Xu P. Unsupervised dimensionality reduction of medical hyperspectral imagery in tensor space. Comput Methods Programs Biomed 2023; 240:107724. [PMID: 37506600; DOI: 10.1016/j.cmpb.2023.107724]
Abstract
BACKGROUND AND OBJECTIVES Compared with traditional RGB images, medical hyperspectral imagery (HSI) has numerous continuous narrow spectral bands, which can provide rich information for cancer diagnosis. However, the abundant spectral bands also contain a large amount of redundancy information and increase computational complexity. Thus, dimensionality reduction (DR) is essential in HSI analysis. All vector-based DR methods ignore the cubic nature of HSI resulting from vectorization. To overcome the disadvantage of vector-based DR methods, tensor-based techniques have been developed by employing multi-linear algebra. METHODS To fully exploit the structure features of medical HSI and enhance computational efficiency, a novel method called unsupervised dimensionality reduction via tensor-based low-rank collaborative graph embedding (TLCGE) is proposed. TLCGE introduces entropy rate superpixel (ERS) segmentation algorithm to generate superpixels. Then, a low-rank collaborative graph weight matrix is constructed on each superpixel, greatly improving the efficiency and robustness of the proposed method. After that, TLCGE reduces dimensions in tensor space to well preserve intrinsic structure of HSI. RESULTS The proposed TLCGE is tested on cholangiocarcinoma microscopic hyperspectral data sets. To further demonstrate the effectiveness of the proposed algorithm, other machine learning DR methods are used for comparison. Experimental results on cholangiocarcinoma microscopic hyperspectral data sets validate the effectiveness of the proposed TLCGE. CONCLUSIONS The proposed TLCGE is a tensor-based DR method, which can maintain the intrinsic 3-D data structure of medical HSI. By imposing the low-rank and sparse constraints on the objective function, the proposed TLCGE can fully explore the local and global structures within each superpixel. The computational efficiency of the proposed TLCGE is better than other tensor-based DR methods, which can be used as a preprocessing step in real medical HSI classification or segmentation.
Affiliation(s)
- Hongmin Gao
- College of Computer and Information, Hohai University, Nanjing 211100, China
- Meiling Wang
- College of Computer and Information, Hohai University, Nanjing 211100, China
- Xinyu Sun
- Department of Hematology, Nanjing Drum Tower Hospital Clinical College of Nanjing Medical University, China
- Xueying Cao
- College of Computer and Information, Hohai University, Nanjing 211100, China
- Chenming Li
- College of Computer and Information, Hohai University, Nanjing 211100, China
- Qin Liu
- Department of Oncology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, China
- Peipei Xu
- Department of Hematology, Nanjing Drum Tower Hospital Clinical College of Nanjing Medical University, China; Department of Hematology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, China
9. Puustinen S, Vrzáková H, Hyttinen J, Rauramaa T, Fält P, Hauta-Kasari M, Bednarik R, Koivisto T, Rantala S, von Und Zu Fraunberg M, Jääskeläinen JE, Elomaa AP. Hyperspectral Imaging in Brain Tumor Surgery-Evidence of Machine Learning-Based Performance. World Neurosurg 2023; 175:e614-e635. [PMID: 37030483; DOI: 10.1016/j.wneu.2023.03.149]
Abstract
BACKGROUND Hyperspectral imaging (HSI) has the potential to enhance surgical tissue detection and diagnostics. Definite utilization of intraoperative HSI guidance demands validated machine learning and public datasets that currently do not exist. Moreover, current imaging conventions are dispersed, and evidence-based paradigms for neurosurgical HSI have not been declared. METHODS We presented the rationale and a detailed clinical paradigm for establishing microneurosurgical HSI guidance. In addition, a systematic literature review was conducted to summarize the current indications and performance of neurosurgical HSI systems, with an emphasis on machine learning-based methods. RESULTS The published data comprised a few case series or case reports aiming to classify tissues during glioma operations. For a multitissue classification problem, the highest overall accuracy of 80% was obtained using deep learning. Our HSI system was capable of intraoperative data acquisition and visualization with minimal disturbance to glioma surgery. CONCLUSIONS In a limited number of publications, neurosurgical HSI has demonstrated unique capabilities in contrast to the established imaging techniques. Multidisciplinary work is required to establish communicable HSI standards and clinical impact. Our HSI paradigm endorses systematic intraoperative HSI data collection, which aims to facilitate the related standards, medical device regulations, and value-based medical imaging systems.
Affiliation(s)
- Sami Puustinen
- University of Eastern Finland, Faculty of Health Sciences, School of Medicine, Kuopio, Finland; Kuopio University Hospital, Eastern Finland Microsurgery Center, Kuopio, Finland.
- Hana Vrzáková
- Kuopio University Hospital, Eastern Finland Microsurgery Center, Kuopio, Finland; University of Eastern Finland, Faculty of Science and Forestry, School of Computing, Joensuu, Finland
- Joni Hyttinen
- University of Eastern Finland, Faculty of Science and Forestry, School of Computing, Joensuu, Finland
- Tuomas Rauramaa
- Kuopio University Hospital, Department of Clinical Pathology, Kuopio, Finland
- Pauli Fält
- University of Eastern Finland, Faculty of Science and Forestry, School of Computing, Joensuu, Finland
- Markku Hauta-Kasari
- University of Eastern Finland, Faculty of Science and Forestry, School of Computing, Joensuu, Finland
- Roman Bednarik
- University of Eastern Finland, Faculty of Science and Forestry, School of Computing, Joensuu, Finland
- Timo Koivisto
- Kuopio University Hospital, Department of Neurosurgery, Kuopio, Finland
- Susanna Rantala
- Kuopio University Hospital, Department of Neurosurgery, Kuopio, Finland
- Mikael von Und Zu Fraunberg
- Oulu University Hospital, Department of Neurosurgery, Oulu, Finland; University of Oulu, Faculty of Medicine, Research Unit of Clinical Medicine, Oulu, Finland
- Antti-Pekka Elomaa
- University of Eastern Finland, Faculty of Health Sciences, School of Medicine, Kuopio, Finland; Kuopio University Hospital, Eastern Finland Microsurgery Center, Kuopio, Finland; Kuopio University Hospital, Department of Neurosurgery, Kuopio, Finland
10. Wu Y, Xu Z, Yang W, Ning Z, Dong H. Review on the Application of Hyperspectral Imaging Technology of the Exposed Cortex in Cerebral Surgery. Front Bioeng Biotechnol 2022; 10:906728. [PMID: 35711634; PMCID: PMC9196632; DOI: 10.3389/fbioe.2022.906728]
Abstract
The study of brain science is vital to human health. The application of hyperspectral imaging in biomedical fields has grown dramatically in recent years due to its unique optical imaging method and multidimensional information acquisition. Hyperspectral imaging technology can acquire two-dimensional spatial information and one-dimensional spectral information of biological samples simultaneously, covering the ultraviolet, visible and infrared spectral ranges with high spectral resolution, which can provide diagnostic information about the physiological, morphological and biochemical components of tissues and organs. This technology also presents finer spectral features for brain imaging studies, and further provides more auxiliary information for cerebral disease research. This paper reviews recent advances of hyperspectral imaging in cerebral diagnosis. Firstly, the experimental setup, image acquisition and pre-processing, and analysis methods of hyperspectral technology are introduced. Secondly, the latest research progress and applications of hyperspectral imaging in brain tissue metabolism, hemodynamics, and brain cancer diagnosis in recent years are summarized briefly. Finally, the limitations of applying hyperspectral imaging to cerebral disease diagnosis are analyzed, and future development directions are proposed.
Affiliation(s)
- Yue Wu
- Research Center for Intelligent Sensing Systems, Zhejiang Lab, Hangzhou, China
- Zhongyuan Xu
- Research Center for Intelligent Sensing Systems, Zhejiang Lab, Hangzhou, China
- Wenjian Yang
- Research Center for Intelligent Sensing Systems, Zhejiang Lab, Hangzhou, China
- Zhiqiang Ning
- Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences (CAS), Hefei, China; Science Island Branch, Graduate School of USTC, Hefei, China
- Hao Dong
- Research Center for Sensing Materials and Devices, Zhejiang Lab, Hangzhou, China
11. Liao L, Chen W, Xiao J, Wang Z, Lin CW, Satoh S. Unsupervised Foggy Scene Understanding via Self Spatial-Temporal Label Diffusion. IEEE Trans Image Process 2022; 31:3525-3540. [PMID: 35533162; DOI: 10.1109/tip.2022.3172208]
Abstract
Understanding foggy image sequences in driving scenes is critical for autonomous driving, but it remains a challenging task due to the difficulty of collecting and annotating real-world images of adverse weather. Recently, the self-training strategy has been considered a powerful solution for unsupervised domain adaptation, which iteratively adapts the model from the source domain to the target domain by generating target pseudo labels and re-training the model. However, the selection of confident pseudo labels inevitably suffers from the conflict between sparsity and accuracy, both of which lead to suboptimal models. To tackle this problem, we exploit the characteristics of the foggy image sequence of driving scenes to densify the confident pseudo labels. Specifically, based on the two discoveries of local spatial similarity and adjacent temporal correspondence of the sequential image data, we propose a novel Target-Domain driven pseudo label Diffusion (TDo-Dif) scheme. It employs superpixels and optical flows to identify the spatial similarity and temporal correspondence, respectively, and then diffuses the confident but sparse pseudo labels within a superpixel or a temporal corresponding pair linked by the flow. Moreover, to ensure the feature similarity of the diffused pixels, we introduce a local spatial similarity loss and a temporal contrastive loss in the model re-training stage. Experimental results show that our TDo-Dif scheme helps the adaptive model achieve 51.92% and 53.84% mean intersection-over-union (mIoU) on two publicly available natural foggy datasets (Foggy Zurich and Foggy Driving), which exceeds the state-of-the-art unsupervised domain adaptive semantic segmentation methods. The proposed method can also be applied to non-sequential images in the target domain by considering only spatial similarity.
12. Seidlitz S, Sellner J, Odenthal J, Özdemir B, Studier-Fischer A, Knödler S, Ayala L, Adler TJ, Kenngott HG, Tizabi M, Wagner M, Nickel F, Müller-Stich BP, Maier-Hein L. Robust deep learning-based semantic organ segmentation in hyperspectral images. Med Image Anal 2022; 80:102488. [DOI: 10.1016/j.media.2022.102488]
13. Tukra S, Lidströmer N, Ashrafian H, Giannarou S. AI in Surgical Robotics. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_323]
14. Adu K, Yu Y, Cai J, Mensah PK, Owusu-Agyemang K. MLAF-CapsNet: Multi-lane atrous feature fusion capsule network with contrast limited adaptive histogram equalization for brain tumor classification from MRI images. J Intell Fuzzy Syst 2021. [DOI: 10.3233/jifs-202261]
Abstract
Convolutional neural networks (CNNs) have recently displayed remarkable performance in automatic classification and medical image diagnosis. However, CNNs fail to recognize images that have been rotated or differently oriented, which limits their performance. This paper presents a new capsule network (CapsNet) based framework known as the multi-lane atrous feature fusion capsule network (MLAF-CapsNet) for brain tumor type classification. The MLAF-CapsNet consists of atrous and CLAHE layers, where the atrous layers increase receptive fields and maintain spatial representation, whereas the contrast limited adaptive histogram equalization (CLAHE) layer is used as a base layer that applies an improved adaptive histogram equalization (AHE) to enhance the input images. The proposed method is evaluated using whole-brain tumor and segmented tumor datasets. The efficiency performance on the two datasets is explored and compared. The experimental results of the MLAF-CapsNet show better accuracies (93.40% and 96.60%) and precisions (94.21% and 96.55%) in feature extraction based on the original images from the two datasets than the traditional CapsNet (78.93% and 97.30%). Based on the augmentation of the two datasets, the proposed method achieved the best accuracy (98.48% and 98.82%) and precisions (98.88% and 98.58%) in extracting features compared to the traditional CapsNet. Our results indicate that the proposed method can successfully improve brain tumor classification and support radiologists in medical diagnostics.
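The CLAHE base layer mentioned above can be illustrated with a minimal OpenCV sketch; the clip limit and tile size are assumed values, and the paper's improved AHE variant is not reproduced here.

```python
# Minimal sketch of CLAHE preprocessing using OpenCV. Parameters are assumptions.
import cv2
import numpy as np

def clahe_enhance(image_gray, clip_limit=2.0, tile_grid_size=(8, 8)):
    """Apply contrast limited adaptive histogram equalization to a grayscale image."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    return clahe.apply(image_gray)

# Example on a synthetic low-contrast 8-bit image standing in for an MRI slice
rng = np.random.default_rng(0)
mri_slice = rng.normal(120, 10, size=(256, 256)).clip(0, 255).astype(np.uint8)
enhanced = clahe_enhance(mri_slice)
print("input contrast:", mri_slice.std(), "enhanced contrast:", enhanced.std())
```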
Affiliation(s)
- Kwabena Adu
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yongbin Yu
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Jingye Cai
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Patrick Kwabena Mensah
- Department of Computer Science and Informatics, University of Energy and Natural Resources, Sunyani, Ghana
- Kwabena Owusu-Agyemang
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
15. Lv M, Li W, Tao R, Lovell NH, Yang Y, Tu T, Li W. Spatial-Spectral Density Peaks-Based Discriminant Analysis for Membranous Nephropathy Classification Using Microscopic Hyperspectral Images. IEEE J Biomed Health Inform 2021; 25:3041-3051. [PMID: 33434138; DOI: 10.1109/jbhi.2021.3050483]
Abstract
The traditional differential diagnosis of membranous nephropathy (MN) mainly relies on clinical symptoms, serological examination and optical renal biopsy. However, there is a probability of false positives in the optical inspection results, and it is unable to detect the change of biochemical components, which poses an obstacle to pathogenic mechanism analysis. Microscopic hyperspectral imaging can reveal detailed component information of immune complexes, but the high dimensionality of microscopic hyperspectral image brings difficulties and challenges to image processing and disease diagnosis. In this paper, a novel classification framework, including spatial-spectral density peaks-based discriminant analysis (SSDP), is proposed for intelligent diagnosis of MN using a microscopic hyperspectral pathological dataset. SSDP constructs a set of graphs describing intrinsic structure of MHSI in both spatial and spectral domains by employing density peak clustering. In the process of graph embedding, low-dimensional features with important diagnostic information in the immune complex are obtained by compacting the spatial-spectral local intra-class pixels while separating the spectral inter-class pixels. For the MN recognition task, a support vector machine (SVM) is used to classify pixels in the low-dimensional space. Experimental validation data employ two types of MN that are difficult to distinguish with optical microscope, including primary MN and hepatitis B virus-associated MN. Experimental results show that the proposed SSDP achieves a sensitivity of 99.36%, which has potential clinical value for automatic diagnosis of MN.
16. Trajanovski S, Shan C, Weijtmans PJC, de Koning SGB, Ruers TJM. Tongue Tumor Detection in Hyperspectral Images Using Deep Learning Semantic Segmentation. IEEE Trans Biomed Eng 2021; 68:1330-1340. [PMID: 32976092; DOI: 10.1109/tbme.2020.3026683]
Abstract
OBJECTIVE The utilization of hyperspectral imaging (HSI) for real-time tumor segmentation during surgery has recently received much attention, but it remains a very challenging task. METHODS In this work, we propose semantic segmentation methods and compare them with other relevant deep learning algorithms for tongue tumor segmentation. To the best of our knowledge, this is the first work using deep learning semantic segmentation for tumor detection in HSI data that uses channel selection, accounts for more spatial tissue context, and performs a global comparison between the prediction map and the annotation per sample. RESULTS AND CONCLUSION On a clinical data set with tongue squamous cell carcinoma, our best method obtains very strong results, with an average dice coefficient and area under the ROC curve of [Formula: see text] and [Formula: see text], respectively, on the original spatial image size. The results show that a very good performance can be achieved even with a limited amount of data. We demonstrate that important information regarding the tumor decision is encoded in various channels, but some channel selection and filtering is beneficial over the full spectra. Moreover, we use both the visible (VIS) and near-infrared (NIR) spectrum, rather than only the commonly used VIS spectrum; although the VIS spectrum is generally of higher significance, we demonstrate that the NIR spectrum is crucial for tumor capturing in some cases. SIGNIFICANCE HSI technology augmented with accurate deep learning algorithms has huge potential to be a promising alternative to digital pathology or a supportive tool for doctors in real-time surgeries.
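The two headline metrics quoted in this abstract, the Dice coefficient and the area under the ROC curve, can be computed for a pixel-wise prediction map as in the sketch below; the prediction and annotation arrays are synthetic stand-ins, not data from the study.

```python
# Sketch of evaluating a pixel-wise tumor prediction map with Dice and ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    return (2.0 * np.logical_and(pred, true).sum() + eps) / (pred.sum() + true.sum() + eps)

rng = np.random.default_rng(0)
probability_map = rng.random((128, 128))          # stand-in for network output per pixel
annotation = rng.random((128, 128)) > 0.8         # stand-in for ground-truth tumor mask

print("Dice:", dice_coefficient(probability_map > 0.5, annotation))
print("AUC:", roc_auc_score(annotation.ravel(), probability_map.ravel()))
```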
17. Tukra S, Lidströmer N, Ashrafian H, Giannarou S. AI in Surgical Robotics. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_323-1]
18. Melit Devassy B, George S, Nussbaum P. Unsupervised Clustering of Hyperspectral Paper Data Using t-SNE. J Imaging 2020; 6:29. [PMID: 34460731; PMCID: PMC8321027; DOI: 10.3390/jimaging6050029]
Abstract
For a suspected forgery that involves the falsification of a document or its contents, the investigator will primarily analyze the document’s paper and ink in order to establish the authenticity of the subject under investigation. As a non-destructive and contactless technique, Hyperspectral Imaging (HSI) is gaining popularity in the field of forensic document analysis. HSI returns more information compared to conventional three channel imaging systems due to the vast number of narrowband images recorded across the electromagnetic spectrum. As a result, HSI can provide better classification results. In this publication, we present results of an approach known as the t-Distributed Stochastic Neighbor Embedding (t-SNE) algorithm, which we have applied to HSI paper data analysis. Even though t-SNE has been widely accepted as a method for dimensionality reduction and visualization of high dimensional data, its usefulness has not yet been evaluated for the classification of paper data. In this research, we present a hyperspectral dataset of paper samples, and evaluate the clustering quality of the proposed method both visually and quantitatively. The t-SNE algorithm shows exceptional discrimination power when compared to traditional PCA with k-means clustering, in both visual and quantitative evaluations.
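A minimal sketch of the comparison described above, embedding spectra with t-SNE and with PCA and then scoring k-means clusters, is given below; the synthetic spectra and parameter choices are assumptions, not the paper's dataset or evaluation protocol.

```python
# Sketch: embed high-dimensional spectra with t-SNE and PCA, cluster with
# k-means, and compare cluster quality via the silhouette score.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Three synthetic "paper types", 100 spectra each, 186 assumed spectral bands
spectra = np.vstack([rng.normal(loc=m, scale=0.05, size=(100, 186)) for m in (0.2, 0.5, 0.8)])

embeddings = {
    "t-SNE": TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(spectra),
    "PCA": PCA(n_components=2, random_state=0).fit_transform(spectra),
}

for name, emb in embeddings.items():
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(emb)
    print(f"{name}: silhouette = {silhouette_score(emb, clusters):.3f}")
```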
19. Huang Q, Li W, Zhang B, Li Q, Tao R, Lovell NH. Blood Cell Classification Based on Hyperspectral Imaging With Modulated Gabor and CNN. IEEE J Biomed Health Inform 2020; 24:160-170. [DOI: 10.1109/jbhi.2019.2905623]
20. Vercauteren T, Unberath M, Padoy N, Navab N. CAI4CAI: The Rise of Contextual Artificial Intelligence in Computer Assisted Interventions. Proc IEEE 2020; 108:198-214. [PMID: 31920208; PMCID: PMC6952279; DOI: 10.1109/jproc.2019.2946993]
Abstract
Data-driven computational approaches have evolved to enable extraction of information from medical images with a reliability, accuracy and speed which is already transforming their interpretation and exploitation in clinical practice. While similar benefits are longed for in the field of interventional imaging, this ambition is challenged by a much higher heterogeneity. Clinical workflows within interventional suites and operating theatres are extremely complex and typically rely on poorly integrated intra-operative devices, sensors, and support infrastructures. Taking stock of some of the most exciting developments in machine learning and artificial intelligence for computer assisted interventions, we highlight the crucial need to take context and human factors into account in order to address these challenges. Contextual artificial intelligence for computer assisted intervention, or CAI4CAI, arises as an emerging opportunity feeding into the broader field of surgical data science. Central challenges being addressed in CAI4CAI include how to integrate the ensemble of prior knowledge and instantaneous sensory information from experts, sensors and actuators; how to create and communicate a faithful and actionable shared representation of the surgery among a mixed human-AI actor team; how to design interventional systems and associated cognitive shared control schemes for online uncertainty-aware collaborative decision making ultimately producing more precise and reliable interventions.
Affiliation(s)
- Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London WC2R 2LS, U.K.
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Nicolas Padoy
- ICube institute, CNRS, IHU Strasbourg, University of Strasbourg, 67081 Strasbourg, France
- Nassir Navab
- Fakultät für Informatik, Technische Universität München, 80333 Munich, Germany
21. Ma L, Lu G, Wang D, Qin X, Chen ZG, Fei B. Adaptive deep learning for head and neck cancer detection using hyperspectral imaging. Vis Comput Ind Biomed Art 2019; 2:18. [PMID: 32190408; PMCID: PMC7055573; DOI: 10.1186/s42492-019-0023-8]
Abstract
It can be challenging to detect tumor margins during surgery for complete resection. The purpose of this work is to develop a novel learning method that adaptively learns the difference between tumor and benign tissue for cancer detection on hyperspectral images in an animal model. Specifically, an auto-encoder network is trained on the wavelength bands of hyperspectral images to extract deep information and create a pixel-wise prediction of cancerous and benign pixels. According to the output hypothesis of each pixel, misclassified pixels are adaptively reweighted so that they can be reclassified in the right prediction direction, and the auto-encoder network is trained again on these updated pixels. The learner can adaptively improve its ability to identify cancerous and benign tissue by focusing on the misclassified pixels, and thus improve detection performance. The adaptive deep learning method highlighting the tumor region proved to be accurate in detecting the tumor boundary on hyperspectral images and achieved a sensitivity of 92.32% and a specificity of 91.31% in our animal experiments. This adaptive learning method on hyperspectral imaging has the potential to provide a noninvasive tool for tumor detection, especially for tumors whose margins are indistinct and irregular.
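The adaptive idea of emphasizing misclassified pixels between training rounds can be sketched as follows; a logistic regression stands in for the auto-encoder network, and all data and weighting choices are assumptions rather than the authors' method.

```python
# Illustrative sketch of adaptive reweighting: increase the weight of
# misclassified pixels and retrain, so the learner focuses on hard pixels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((5000, 91))                       # pixel spectra (91 assumed bands)
y = (X[:, :10].mean(axis=1) > 0.5).astype(int)   # synthetic tumor / benign labels

weights = np.ones(len(y))
for round_ in range(3):
    clf = LogisticRegression(max_iter=500)
    clf.fit(X, y, sample_weight=weights)
    misclassified = clf.predict(X) != y
    weights[misclassified] *= 1.5                # emphasize hard pixels in the next round
    print(f"round {round_}: error rate = {misclassified.mean():.4f}")
```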
Affiliation(s)
- Ling Ma
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30322 USA
- College of Software, Nankai University, Tianjin, 300350 People’s Republic of China
- Guolan Lu
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30322 USA
- Dongsheng Wang
- Department of Hematology and Medical Oncology, Emory University, Atlanta, GA 30322 USA
- Xulei Qin
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30322 USA
- Zhuo Georgia Chen
- Department of Hematology and Medical Oncology, Emory University, Atlanta, GA 30322 USA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30322 USA
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX 75080 USA
- Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX 75390 USA
22. Halicek M, Fabelo H, Ortega S, Little JV, Wang X, Chen AY, Callico GM, Myers L, Sumer BD, Fei B. Hyperspectral imaging for head and neck cancer detection: specular glare and variance of the tumor margin in surgical specimens. J Med Imaging (Bellingham) 2019; 6:035004. [PMID: 31528662; DOI: 10.1117/1.jmi.6.3.035004]
Abstract
Head and neck squamous cell carcinoma (SCC) is primarily managed by surgical cancer resection. Recurrence rates after surgery can be as high as 55% if residual cancer is present. Hyperspectral imaging (HSI) is evaluated for detection of SCC in ex-vivo surgical specimens. Several machine learning methods are investigated, including convolutional neural networks (CNNs) and a spectral-spatial classification framework based on support vector machines. Quantitative results demonstrate that additional data preprocessing and unsupervised segmentation can improve CNN results to achieve optimal performance. The methods are trained in two paradigms, with and without specular glare. Classifying regions that include specular glare degrades the overall results, but the combination of the CNN probability maps and unsupervised segmentation using a majority voting method produces an area under the curve value of 0.81 [0.80, 0.83]. As the wavelengths of light used in HSI can penetrate to different depths in biological tissue, cancer margins may change with depth and create uncertainty in the ground truth. Through serial histological sectioning, the variance in the cancer margin with depth is investigated and paired with qualitative classification heat maps using the methods proposed for the testing group of SCC patients. The results determined that the validity of the top section alone as the ground truth may be limited to 1 to 2 mm. The study of specular glare and margin variation provided a better understanding of the potential of HSI for use in the operating room.
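The fusion of CNN probability maps with unsupervised segmentation by majority voting can be sketched as below; SLIC superpixels are used as a generic unsupervised segmentation, and the image and probability map are synthetic stand-ins rather than the paper's data.

```python
# Sketch: fuse a pixel-wise cancer probability map with an unsupervised
# segmentation by majority voting inside each segment.
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)
rgb_image = rng.random((128, 128, 3))            # stand-in for an HSI pseudo-RGB rendering
probability_map = rng.random((128, 128))         # stand-in for CNN output per pixel

segments = slic(rgb_image, n_segments=200, compactness=10, start_label=0)

# Majority vote: label a whole segment as cancer if most of its pixels exceed 0.5
fused = np.zeros_like(probability_map, dtype=int)
for seg_id in np.unique(segments):
    region = segments == seg_id
    fused[region] = int((probability_map[region] > 0.5).mean() > 0.5)

print("cancer-labeled fraction:", fused.mean())
```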
Affiliation(s)
- Martin Halicek
- University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States
- Emory University and Georgia Institute of Technology, Department of Biomedical Engineering, Atlanta, Georgia, United States
- Himar Fabelo
- University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States
- University of Las Palmas de Gran Canaria, Institute for Applied Microelectronics, Las Palmas, Spain
- Samuel Ortega
- University of Las Palmas de Gran Canaria, Institute for Applied Microelectronics, Las Palmas, Spain
- James V Little
- Emory University School of Medicine, Department of Pathology and Laboratory Medicine, Atlanta, Georgia, United States
- Xu Wang
- Emory University School of Medicine, Department of Hematology and Medical Oncology, Atlanta, Georgia, United States
- Amy Y Chen
- Emory University School of Medicine, Department of Otolaryngology, Atlanta, Georgia, United States
- Gustavo Marrero Callico
- University of Las Palmas de Gran Canaria, Institute for Applied Microelectronics, Las Palmas, Spain
- Larry Myers
- University of Texas Southwestern Medical Center, Department of Otolaryngology, Dallas, Texas, United States
- Baran D Sumer
- University of Texas Southwestern Medical Center, Department of Otolaryngology, Dallas, Texas, United States
- Baowei Fei
- University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States
- University of Texas Southwestern Medical Center, Advanced Imaging Research Center, Dallas, Texas, United States
- University of Texas Southwestern Medical Center, Department of Radiology, Dallas, Texas, United States
23. Shapey J, Xie Y, Nabavi E, Bradford R, Saeed SR, Ourselin S, Vercauteren T. Intraoperative multispectral and hyperspectral label-free imaging: A systematic review of in vivo clinical studies. J Biophotonics 2019; 12:e201800455. [PMID: 30859757; PMCID: PMC6736677; DOI: 10.1002/jbio.201800455]
Abstract
Multispectral and hyperspectral imaging (HSI) are emerging optical imaging techniques with the potential to transform the way surgery is performed, but it is not clear whether current systems are capable of delivering real-time tissue characterization and surgical guidance. We conducted a systematic review of surgical in vivo label-free multispectral and HSI systems that have been assessed intraoperatively in adult patients, published over a 10-year period to May 2018. We analysed 14 studies including 8 different HSI systems. Current in vivo HSI systems generate an intraoperative tissue oxygenation map or enable tumour detection. Intraoperative tissue oxygenation measurements may help to predict those patients at risk of postoperative complications, and in vivo intraoperative tissue characterization may be performed with high specificity and sensitivity. All systems utilized a line-scanning or wavelength-scanning method, but the spectral range and number of spectral bands employed varied significantly between studies and according to the system's clinical aim. The time to acquire a hyperspectral cube dataset ranged between 5 and 30 seconds. No safety concerns were reported in any studies. A small number of studies have demonstrated the capabilities of intraoperative in vivo label-free HSI, but further work is needed to fully integrate it into the current surgical workflow.
Affiliation(s)
- Jonathan Shapey
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Yijing Xie
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Eli Nabavi
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Robert Bradford
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Shakeel R Saeed
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- The Ear Institute, University College London, London, UK
- The Royal National Throat, Nose and Ear Hospital, London, UK
- Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
24
|
Maktabi M, Köhler H, Ivanova M, Jansen-Winkeln B, Takoh J, Niebisch S, Rabe SM, Neumuth T, Gockel I, Chalopin C. Tissue classification of oncologic esophageal resectates based on hyperspectral data. Int J Comput Assist Radiol Surg 2019; 14:1651-1661. [PMID: 31222672 DOI: 10.1007/s11548-019-02016-x] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Accepted: 06/11/2019] [Indexed: 01/02/2023]
Abstract
PURPOSE Esophageal carcinoma is the eighth most common cancer worldwide. Esophageal resection with gastric pull-up is a potentially curative therapeutic option. After this procedure, the specimen is examined by the pathologist to confirm complete removal of the cancer. An intraoperative analysis of the resectate would be less time-consuming and would therefore improve patient safety. METHODS Hyperspectral imaging (HSI) is a relatively new modality that has shown promising results for the detection of tumors. Automatic approaches could support the surgeon in the visualization of tumor margins. We therefore evaluated four supervised classification algorithms, random forest, support vector machines (SVM), multilayer perceptron, and k-nearest neighbors, to differentiate malignant from healthy tissue based on HSI recordings of esophago-gastric resectates from 11 patients. RESULTS The best performance was obtained with the SVM, which detected cancerous tissue with 63% sensitivity and 69% specificity. In a leave-one-patient-out cross-validation, the classification performance varied considerably depending on the patient data used. Data classification and visualization were completed in less than 1 s. CONCLUSION In this work, we successfully tested several classification algorithms for the automatic detection of esophageal carcinoma in resected tissue. A larger data set and a combination of several methods would probably increase the performance. Moreover, the implementation of software tools for intraoperative tumor boundary visualization will further support the surgeon during oncologic operations.
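The comparison described in this abstract, four off-the-shelf classifiers evaluated with leave-one-patient-out cross-validation on per-pixel spectra, could be sketched roughly as follows. This is a minimal illustration, not the study's implementation: the data arrays, band count, and classifier settings are placeholders.

```python
# Minimal sketch (not the authors' code): leave-one-patient-out cross-validation
# of four classifiers on per-pixel HSI spectra. X, y, and patient_ids are
# synthetic placeholders standing in for annotated hyperspectral recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.random((3000, 100))              # placeholder spectra: 3000 pixels, 100 bands
y = rng.integers(0, 2, 3000)             # placeholder labels: 0 = healthy, 1 = tumor
patient_ids = rng.integers(0, 11, 3000)  # 11 patients, as in the study

classifiers = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}

logo = LeaveOneGroupOut()
for name, clf in classifiers.items():
    sens, spec = [], []
    for train_idx, test_idx in logo.split(X, y, groups=patient_ids):
        clf.fit(X[train_idx], y[train_idx])
        y_pred = clf.predict(X[test_idx])
        # Sensitivity = recall of the tumor class, specificity = recall of the healthy class.
        sens.append(recall_score(y[test_idx], y_pred, pos_label=1, zero_division=0))
        spec.append(recall_score(y[test_idx], y_pred, pos_label=0, zero_division=0))
    print(f"{name}: sensitivity {np.mean(sens):.2f}, specificity {np.mean(spec):.2f}")
```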
Collapse
Affiliation(s)
- Marianne Maktabi
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany.
| | - Hannes Köhler
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
| | - Margarita Ivanova
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
| | - Boris Jansen-Winkeln
- Department of Visceral, Transplant, Thoracic and Vascular Surgery, University Hospital of Leipzig, Leipzig, Germany
| | - Jonathan Takoh
- Department of Visceral, Transplant, Thoracic and Vascular Surgery, University Hospital of Leipzig, Leipzig, Germany
| | - Stefan Niebisch
- Department of Visceral, Transplant, Thoracic and Vascular Surgery, University Hospital of Leipzig, Leipzig, Germany
| | - Sebastian M Rabe
- Department of Visceral, Transplant, Thoracic and Vascular Surgery, University Hospital of Leipzig, Leipzig, Germany
| | - Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
| | - Ines Gockel
- Department of Visceral, Transplant, Thoracic and Vascular Surgery, University Hospital of Leipzig, Leipzig, Germany
| | - Claire Chalopin
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
| |
Collapse
|
25
|
Abstract
INTRODUCTION Anastomotic insufficiency (AI) remains the most feared surgical complication in gastrointestinal surgery and is closely associated with a prolonged inpatient hospital stay and significant postoperative mortality. Hyperspectral imaging (HSI) is a relatively new medical imaging procedure that has proven promising for tissue identification as well as for the analysis of tissue oxygenation and water content. Until now, no data exist on the in vivo HSI analysis of gastrointestinal anastomoses. METHODS Intraoperative images were obtained using the TIVITA™ tissue system HSI camera from Diaspective Vision GmbH (Pepelow, Germany). In 47 patients who underwent gastrointestinal surgery with esophageal, gastric, pancreatic, small bowel or colorectal anastomoses, 97 assessable recordings were generated. Parameters obtained at the sites of the anastomoses included tissue oxygenation (StO2), the tissue hemoglobin index (THI), the near-infrared (NIR) perfusion index, and the tissue water index (TWI). RESULTS Obtaining and analyzing the intraoperative images with this non-invasive imaging system proved practicable and consistently delivered good results. A NIR gradient along and across the anastomosis was observed, and analysis of the tissue water and oxygenation content showed specific changes at the site of anastomosis. CONCLUSION The HSI method provides a non-contact, non-invasive, intraoperative imaging procedure without the use of a contrast medium, which enables a real-time analysis of physiological anastomotic parameters and may contribute to determining the "ideal" anastomotic region. In light of this, establishing this methodology in the field of visceral surgery, enabling the generation of normal or cut-off values for different gastrointestinal anastomotic types, is an obvious necessity.
Collapse
|
26
|
Halicek M, Fabelo H, Ortega S, Callico GM, Fei B. In-Vivo and Ex-Vivo Tissue Analysis through Hyperspectral Imaging Techniques: Revealing the Invisible Features of Cancer. Cancers (Basel) 2019; 11:E756. [PMID: 31151223 PMCID: PMC6627361 DOI: 10.3390/cancers11060756] [Citation(s) in RCA: 109] [Impact Index Per Article: 18.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2019] [Revised: 05/20/2019] [Accepted: 05/24/2019] [Indexed: 12/27/2022] Open
Abstract
In contrast to conventional optical imaging modalities, hyperspectral imaging (HSI) is able to capture much more information from a given scene, both within and beyond the visual spectral range (from 400 to 700 nm). This imaging modality is based on the principle that each material responds differently to light reflection, absorption, and scattering across the electromagnetic spectrum. Owing to these properties, it is possible to differentiate and identify the different materials/substances present in a given scene by their spectral signature. Over the last two decades, HSI has demonstrated the potential to become a powerful tool to study and identify several diseases in the medical field, being a non-contact, non-ionizing, and label-free imaging modality. In this review, the use of HSI as an imaging tool for the analysis and detection of cancer is presented. The basic concepts related to this technology are detailed. The most relevant, state-of-the-art studies that can be found in the literature using HSI for cancer analysis, both in vivo and ex vivo, are presented and summarized. Lastly, we discuss the current limitations of this technology in the field of cancer detection, together with some insights into possible future steps in the improvement of this technology.
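As a minimal illustration of the "identify a material by its spectral signature" principle discussed in this review, the sketch below applies a generic spectral angle mapper, assigning each pixel to the reference spectrum it is most parallel to. The cube and reference signatures are synthetic placeholders; this is a common baseline, not a method taken from the review.

```python
# Illustrative only: spectral-angle-mapper matching of pixel spectra to
# reference signatures. All data here are random placeholders.
import numpy as np

def spectral_angle(cube, reference):
    """Angle (radians) between every pixel spectrum and a reference spectrum."""
    flat = cube.reshape(-1, cube.shape[-1])                      # (H*W, bands)
    num = flat @ reference
    denom = np.linalg.norm(flat, axis=1) * np.linalg.norm(reference) + 1e-12
    return np.arccos(np.clip(num / denom, -1.0, 1.0)).reshape(cube.shape[:2])

rng = np.random.default_rng(1)
cube = rng.random((64, 64, 120))          # placeholder HSI cube: 64x64 pixels, 120 bands
signatures = {"tumor": rng.random(120), "healthy": rng.random(120)}

# Assign each pixel to the reference signature with the smallest spectral angle.
angles = np.stack([spectral_angle(cube, s) for s in signatures.values()], axis=0)
label_map = np.array(list(signatures.keys()))[np.argmin(angles, axis=0)]
print(label_map.shape, np.unique(label_map))
```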
Collapse
Affiliation(s)
- Martin Halicek
- Department of Bioengineering, The University of Texas at Dallas, 800 W. Campbell Road, Richardson, TX 75080, USA.
- Department of Biomedical Engineering, Emory University and The Georgia Institute of Technology, 1841 Clifton Road NE, Atlanta, GA 30329, USA.
| | - Himar Fabelo
- Department of Bioengineering, The University of Texas at Dallas, 800 W. Campbell Road, Richardson, TX 75080, USA.
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain.
| | - Samuel Ortega
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain.
| | - Gustavo M Callico
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain.
| | - Baowei Fei
- Department of Bioengineering, The University of Texas at Dallas, 800 W. Campbell Road, Richardson, TX 75080, USA.
- Advanced Imaging Research Center, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, USA.
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, USA.
| |
Collapse
|
27
|
Halicek M, Little JV, Wang X, Chen AY, Fei B. Optical biopsy of head and neck cancer using hyperspectral imaging and convolutional neural networks. JOURNAL OF BIOMEDICAL OPTICS 2019; 24:1-9. [PMID: 30891966 PMCID: PMC6975184 DOI: 10.1117/1.jbo.24.3.036007] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/08/2018] [Accepted: 01/14/2019] [Indexed: 05/21/2023]
Abstract
For patients undergoing surgical resection of squamous cell carcinoma (SCCa), cancer-free surgical margins are essential for a good prognosis. We developed a method that uses hyperspectral imaging (HSI), a noncontact optical imaging modality, and convolutional neural networks (CNNs) to perform an optical biopsy of ex-vivo, surgical gross-tissue specimens collected from 21 patients undergoing surgical cancer resection. Using a cross-validation paradigm with data from different patients, the CNN can distinguish SCCa from normal aerodigestive tract tissues with an area under the receiver operating characteristic curve (AUC) of 0.82. Additionally, normal tissue from the upper aerodigestive tract can be subclassified into squamous epithelium, muscle, and gland with an average AUC of 0.94. After separately training on thyroid tissue, the CNN can differentiate between thyroid carcinoma and normal thyroid with an AUC of 0.95, 92% accuracy, 92% sensitivity, and 92% specificity. Moreover, the CNN can discriminate medullary thyroid carcinoma from benign multinodular goiter (MNG) with an AUC of 0.93. Classical-type papillary thyroid carcinoma is differentiated from MNG with an AUC of 0.91. Our preliminary results demonstrate that an HSI-based optical biopsy method using CNNs can provide multicategory diagnostic information for normal and cancerous head-and-neck tissue; more patient data are needed to fully investigate the potential and reliability of the proposed technique.
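A minimal PyTorch sketch of the patch-based CNN idea (train on patches from some patients, evaluate on patches from held-out patients) is given below. The architecture, patch size, and band count are assumptions for illustration and do not reproduce the paper's network.

```python
# Minimal sketch, not the paper's network: a small CNN classifying hyperspectral
# patches (here 25x25 pixels x 91 bands, both assumed) as cancer vs. normal,
# with patients kept disjoint between training and testing.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_bands=91):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)   # SCCa vs. normal

    def forward(self, x):                     # x: (batch, bands, H, W)
        return self.classifier(self.features(x).flatten(1))

# Placeholder data: 40 patches from "training patients", 10 from a held-out patient.
x_train, y_train = torch.rand(40, 91, 25, 25), torch.randint(0, 2, (40,))
x_test = torch.rand(10, 91, 25, 25)

model = PatchCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                            # a few toy epochs
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()

probs = torch.softmax(model(x_test), dim=1)[:, 1]   # per-patch cancer probability
print(probs.shape)
```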
Collapse
Affiliation(s)
- Martin Halicek
- University of Texas at Dallas, Department of Bioengineering, Richardson, Texas, United States
- Emory University and Georgia Institute of Technology, Department of Biomedical Engineering, Atlanta, Georgia, United States
| | - James V. Little
- Emory University School of Medicine, Department of Pathology and Laboratory Medicine, Atlanta, Georgia, United States
| | - Xu Wang
- Emory University School of Medicine, Department of Hematology and Medical Oncology, Atlanta, Georgia, United States
| | - Amy Y. Chen
- Emory University School of Medicine, Department of Otolaryngology, Atlanta, Georgia, United States
| | - Baowei Fei
- University of Texas at Dallas, Department of Bioengineering, Richardson, Texas, United States
- Emory University School of Medicine, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- University of Texas Southwestern Medical Center, Advanced Imaging Research Center, Dallas, Texas, United States
- University of Texas Southwestern Medical Center, Department of Radiology, Dallas, Texas, United States
| |
Collapse
|
28
|
Ravì D, Szczotka AB, Pereira SP, Vercauteren T. Adversarial training with cycle consistency for unsupervised super-resolution in endomicroscopy. Med Image Anal 2019; 53:123-131. [PMID: 30769327 PMCID: PMC6873642 DOI: 10.1016/j.media.2019.01.011] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2018] [Revised: 12/31/2018] [Accepted: 01/18/2019] [Indexed: 11/20/2022]
Abstract
In recent years, endomicroscopy has become increasingly used for diagnostic purposes and interventional guidance. It can provide intraoperative aid for real-time tissue characterization and can help to perform visual investigations aimed, for example, at discovering epithelial cancers. Due to physical constraints on the acquisition process, endomicroscopy images still have a low number of informative pixels, which hampers their quality. Post-processing techniques, such as Super-Resolution (SR), are a potential solution to increase the quality of these images. SR techniques are often supervised, requiring aligned pairs of low-resolution (LR) and high-resolution (HR) image patches to train a model. However, in our domain, the lack of HR images hinders the collection of such pairs and makes supervised training unsuitable. For this reason, we propose an unsupervised SR framework based on an adversarial deep neural network with a physically inspired cycle consistency, designed to impose some acquisition properties on the super-resolved images. Our framework can exploit HR images, regardless of the domain they come from, to transfer the quality of the HR images to the initial LR images. This property can be particularly useful in all situations where pairs of LR/HR images are not available during training. Our quantitative analysis, validated using a database of 238 endomicroscopy video sequences from 143 patients, shows the ability of the pipeline to produce convincing super-resolved images. A Mean Opinion Score (MOS) study also confirms this quantitative image quality assessment.
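The cycle-consistency idea can be sketched as follows: a generator super-resolves the LR input, a fixed, physically inspired degradation maps the result back to LR, and the loss penalizes the mismatch with the original input. The adversarial term that pulls outputs toward the unpaired HR domain is reduced to a placeholder here, so this is a conceptual sketch rather than the published model.

```python
# Conceptual sketch only (not the paper's architecture): cycle-consistent
# unsupervised super-resolution with the adversarial term left as a stub.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Tiny SR generator: x2 bilinear upsampling plus a learned residual."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, lr):
        sr = F.interpolate(lr, scale_factor=2, mode="bilinear", align_corners=False)
        return sr + self.body(sr)

def degrade(sr):
    """Stand-in for a physically inspired LR forward model: x2 average pooling."""
    return F.avg_pool2d(sr, kernel_size=2)

gen = Generator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
lr_batch = torch.rand(4, 1, 64, 64)     # placeholder unpaired LR endomicroscopy frames

for _ in range(3):                       # toy training steps
    sr = gen(lr_batch)
    cycle_loss = F.l1_loss(degrade(sr), lr_batch)   # re-degraded SR should match the LR input
    adv_loss = torch.tensor(0.0)                    # adversarial HR-domain term omitted here
    loss = cycle_loss + adv_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
print(sr.shape)   # (4, 1, 128, 128)
```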
Collapse
Affiliation(s)
- Daniele Ravì
- Centre for Medical Image Computing, University College London, United Kingdom.
| | - Agnieszka Barbara Szczotka
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, United Kingdom
| | - Stephen P Pereira
- Institute for Liver and Digestive Health, University College London, United Kingdom
| | - Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King's College London, United Kingdom
| |
Collapse
|
29
|
Halicek M, Fabelo H, Ortega S, Little JV, Wang X, Chen AY, Callico GM, Myers LL, Sumer BD, Fei B. Cancer Detection Using Hyperspectral Imaging and Evaluation of the Superficial Tumor Margin Variance with Depth. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2019; 10951:109511A. [PMID: 32489227 PMCID: PMC7265739 DOI: 10.1117/12.2512985] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
Head and neck squamous cell carcinoma (SCCa) is primarily managed by surgical resection. Recurrence rates after surgery can be as high as 55% if residual cancer is present. In this study, hyperspectral imaging (HSI) is evaluated for the detection of SCCa in ex-vivo surgical specimens. Several methods are investigated, including convolutional neural networks (CNNs) and a spectral-spatial variant of support vector machines. Quantitative results demonstrate that additional processing and unsupervised filtering can improve CNN results to achieve optimal performance. When classifying regions that include specular glare, the average AUC is increased from 0.73 [0.71, 0.75 (95% confidence interval)] to 0.81 [0.80, 0.83] through the unsupervised filtering and majority voting method described. The wavelengths of light used in HSI penetrate to different depths in biological tissue, while the cancer margin may change with depth and create uncertainty in the ground truth. Through serial histological sectioning, the variation of the cancer margin with depth is also investigated and paired with qualitative classification heat maps produced with the proposed methods for the SCCa patients in the testing group.
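One plausible reading of the unsupervised-filtering-plus-majority-voting step is sketched below: glare pixels are excluded by a simple intensity threshold and the remaining per-pixel predictions are smoothed by a local majority vote. The threshold, window size, and probability maps are assumptions, not the paper's values or exact method.

```python
# Hedged sketch of the general idea only: mask uninformative (glare) pixels,
# then take a local majority vote over the remaining per-pixel predictions.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(2)
prob_map = rng.random((128, 128))        # placeholder per-pixel cancer probabilities
intensity = rng.random((128, 128))       # placeholder broadband reflectance image

glare_mask = intensity > 0.95            # crude specular-glare detector (assumed threshold)
votes = (prob_map > 0.5).astype(float)
votes[glare_mask] = np.nan               # glare pixels do not vote

# Majority vote in a 15x15 neighbourhood, ignoring masked pixels.
valid = (~np.isnan(votes)).astype(float)
vote_mean = uniform_filter(np.nan_to_num(votes), size=15)   # window mean with NaNs as 0
valid_frac = uniform_filter(valid, size=15)                  # fraction of valid pixels per window
smoothed = np.divide(vote_mean, valid_frac,
                     out=np.zeros_like(vote_mean), where=valid_frac > 0)
final_label = smoothed > 0.5             # True where the local majority votes "cancer"
print(final_label.mean())
```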
Collapse
Affiliation(s)
- Martin Halicek
- Department of Bioengineering, University of Texas at Dallas, Dallas, TX, USA
- Georgia Inst. of Tech. & Emory Univ., Dept. of Biomedical Engineering, Atlanta, GA
- Medical College of Georgia, Augusta University, Augusta, GA
| | - Himar Fabelo
- Department of Bioengineering, University of Texas at Dallas, Dallas, TX, USA
- Institute for Applied Microelectronics, University of Las Palmas de Gran Canaria, Spain
| | - Samuel Ortega
- Institute for Applied Microelectronics, University of Las Palmas de Gran Canaria, Spain
| | - James V Little
- Emory Univ. School of Medicine, Dept. of Pathology & Laboratory Medicine, Atlanta, GA
| | - Xu Wang
- Emory Univ. School of Medicine, Dept. of Hematology & Medical Oncology, Atlanta, GA
| | - Amy Y Chen
- Emory University School of Medicine, Dept. of Otolaryngology, Atlanta, GA
| | | | - Larry L Myers
- University of Texas Southwestern Medical Center, Dept. of Otolaryngology, Dallas, TX
| | - Baran D Sumer
- University of Texas Southwestern Medical Center, Dept. of Otolaryngology, Dallas, TX
| | - Baowei Fei
- Department of Bioengineering, University of Texas at Dallas, Dallas, TX, USA
- Univ. of Texas Southwestern Medical Center, Advanced Imaging Research Center, Dallas, TX
- University of Texas Southwestern Medical Center, Department of Radiology, Dallas, TX
| |
Collapse
|
30
|
Fan Y, Sun Y, Chang W, Zhang X, Tang J, Zhang L, Liao H. Bioluminescence imaging and two-photon microscopy guided laser ablation of GBM decreases tumor burden. Am J Cancer Res 2018; 8:4072-4085. [PMID: 30128037 PMCID: PMC6096384 DOI: 10.7150/thno.25357] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2018] [Accepted: 05/03/2018] [Indexed: 11/25/2022] Open
Abstract
Brain tumor delineation and treatment are the main concerns of neurosurgeons in neurosurgical operations. Bridging the gap between imaging/diagnosis and treatment would provide great convenience for neurosurgeons. Here, we developed an optical theranostics platform that helps to delineate the boundaries of, and quantitatively analyze, glioblastoma multiforme (GBM) tumors with bioluminescence imaging (BLI) to guide laser ablation, and we imaged the GBM cells with two-photon microscopy (TPM) to visualize the laser ablation zone in vivo. Methods: Laser ablation, using the method of coupled ablation path planning under BLI guidance, was implemented in vivo for mouse brain tumors. The mapping relationship between semi-quantitative BLI and the laser ablation path was built through the quantitative tumor burden and reflected in the coupled ablation path planning. BLI was used to evaluate the laser ablation treatment quantitatively and qualitatively with the appropriate laser and laser-tissue parameters, which were measured after treatment. Furthermore, histopathological analysis of the brain tissue was conducted to compare the TPM images before and after laser ablation and to evaluate the results of in vivo laser ablation. Local recurrences were measured in three separate cohorts, and the weights of all of the mice were recorded during the experiment. Results: Our in vivo BLI data show that tumor cell numbers were significantly attenuated after treatment with the optical theranostics platform, the delineation of GBM margins provided clear views to guide the laser resection, and the in vivo fluorescence intensity of the GBMs allowed quantitative analysis of their rapid progression. The laser-tissue parameters under the guidance of multimodality imaging ranged between 0.1 mm and 1.0 mm. The accuracy of the laser ablation reached the submillimeter level, and the resection ratio exceeded 99% under BLI guidance. The histopathological sections were compared with the TPM images, and the results demonstrated that these images coincided closely. The weight index and local recurrence results demonstrated that the therapeutic effect of the optical theranostics platform was significant. Conclusion: We propose an optical multimodality imaging-guided laser ablation theranostics platform for the treatment of GBMs in an intravital mouse model. The experimental results demonstrate that the integration of multimodality imaging can precisely guide laser ablation for the treatment of GBMs. This preclinical research provides a possibility for the precision treatment of GBMs and also offers some theoretical support for clinical research.
Collapse
|
31
|
Fabelo H, Ortega S, Ravi D, Kiran BR, Sosa C, Bulters D, Callicó GM, Bulstrode H, Szolna A, Piñeiro JF, Kabwama S, Madroñal D, Lazcano R, J-O’Shanahan A, Bisshopp S, Hernández M, Báez A, Yang GZ, Stanciulescu B, Salvador R, Juárez E, Sarmiento R. Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations. PLoS One 2018; 13:e0193721. [PMID: 29554126 PMCID: PMC5858847 DOI: 10.1371/journal.pone.0193721] [Citation(s) in RCA: 76] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2017] [Accepted: 02/06/2018] [Indexed: 11/18/2022] Open
Abstract
Surgery for brain cancer is a major challenge in neurosurgery. The diffuse infiltration of these tumors into the surrounding normal brain makes their accurate identification by the naked eye difficult. Since surgery is the common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients. However, the identification of the tumor boundaries during surgery is challenging. Hyperspectral imaging is a non-contact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents the development of a novel classification method that takes into account the spatial and spectral characteristics of the hyperspectral images to help neurosurgeons accurately determine the tumor boundaries in surgical time during the resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. The proposed algorithm consists of a hybrid framework that combines both supervised and unsupervised machine learning methods. Firstly, a supervised pixel-wise classification using a Support Vector Machine classifier is performed. The generated classification map is spatially homogenized using a one-band representation of the HS cube, employing the Fixed Reference t-Stochastic Neighbors Embedding dimensionality reduction algorithm, and performing a K-Nearest Neighbors filtering. The information generated by the supervised stage is combined with a segmentation map obtained via unsupervised clustering employing a Hierarchical K-Means algorithm. The fusion is performed using a majority voting approach that associates each cluster with a certain class. To evaluate the proposed approach, five in vivo hyperspectral images of the surface of the brain affected by glioblastoma, from five different patients, have been used. The final classification maps obtained have been analyzed and validated by specialists. These preliminary results are promising, showing an accurate delineation of the tumor area.
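The fusion stage described in this abstract, where each unsupervised cluster is relabelled with the majority class of the supervised pixel-wise map inside it, can be sketched as follows. A plain KMeans stands in for the hierarchical K-means, and the SVM map with its t-SNE/KNN spatial filtering is represented by a placeholder class map; none of the numbers are the paper's.

```python
# Simplified sketch of the supervised/unsupervised fusion by per-cluster
# majority voting. All inputs are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
H, W, B = 60, 60, 50
cube = rng.random((H, W, B))                    # placeholder in-vivo HSI cube
supervised_map = rng.integers(0, 4, (H, W))     # placeholder pixel-wise classes (0-3)

# Unsupervised segmentation of the same cube (stand-in for hierarchical K-means).
clusters = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(
    cube.reshape(-1, B)).reshape(H, W)

# Majority-voting fusion: one class per cluster.
fused = np.zeros_like(supervised_map)
for c in np.unique(clusters):
    members = clusters == c
    counts = np.bincount(supervised_map[members], minlength=4)
    fused[members] = np.argmax(counts)          # class winning the vote in cluster c
print(fused.shape, np.unique(fused))
```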
Collapse
Affiliation(s)
- Himar Fabelo
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), Las Palmas de Gran Canaria, Spain
| | - Samuel Ortega
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), Las Palmas de Gran Canaria, Spain
| | - Daniele Ravi
- The Hamlyn Centre, Imperial College London (ICL), London, United Kingdom
| | - B. Ravi Kiran
- Laboratoire CRISTAL, Université Lille 3, Villeneuve-d’Ascq, France
| | - Coralia Sosa
- Department of Neurosurgery, University Hospital Doctor Negrin, Las Palmas de Gran Canaria, Spain
| | - Diederik Bulters
- Wessex Neurological Centre, University Hospital Southampton, Tremona Road, Southampton, United Kingdom
| | - Gustavo M. Callicó
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), Las Palmas de Gran Canaria, Spain
| | - Harry Bulstrode
- Department of Neurosurgery, Addenbrookes Hospital, University of Cambridge, Cambridge, United Kingdom
| | - Adam Szolna
- Department of Neurosurgery, University Hospital Doctor Negrin, Las Palmas de Gran Canaria, Spain
| | - Juan F. Piñeiro
- Department of Neurosurgery, University Hospital Doctor Negrin, Las Palmas de Gran Canaria, Spain
| | - Silvester Kabwama
- Wessex Neurological Centre, University Hospital Southampton, Tremona Road, Southampton, United Kingdom
| | - Daniel Madroñal
- Centre of Software Technologies and Multimedia Systems (CITSEM), Universidad Politecnica de Madrid (UPM), Madrid, Spain
| | - Raquel Lazcano
- Centre of Software Technologies and Multimedia Systems (CITSEM), Universidad Politecnica de Madrid (UPM), Madrid, Spain
| | - Aruma J-O’Shanahan
- Department of Neurosurgery, University Hospital Doctor Negrin, Las Palmas de Gran Canaria, Spain
| | - Sara Bisshopp
- Department of Neurosurgery, University Hospital Doctor Negrin, Las Palmas de Gran Canaria, Spain
| | - María Hernández
- Department of Neurosurgery, University Hospital Doctor Negrin, Las Palmas de Gran Canaria, Spain
| | - Abelardo Báez
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), Las Palmas de Gran Canaria, Spain
| | - Guang-Zhong Yang
- The Hamlyn Centre, Imperial College London (ICL), London, United Kingdom
| | - Bogdan Stanciulescu
- Ecole Nationale Supérieure des Mines de Paris (ENSMP), MINES ParisTech, Paris, France
| | - Rubén Salvador
- Centre of Software Technologies and Multimedia Systems (CITSEM), Universidad Politecnica de Madrid (UPM), Madrid, Spain
| | - Eduardo Juárez
- Centre of Software Technologies and Multimedia Systems (CITSEM), Universidad Politecnica de Madrid (UPM), Madrid, Spain
| | - Roberto Sarmiento
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), Las Palmas de Gran Canaria, Spain
| |
Collapse
|
32
|
Xie Y, Thom M, Ebner M, Wykes V, Desjardins A, Miserocchi A, Ourselin S, McEvoy AW, Vercauteren T. Wide-field spectrally resolved quantitative fluorescence imaging system: toward neurosurgical guidance in glioma resection. JOURNAL OF BIOMEDICAL OPTICS 2017; 22:1-14. [PMID: 29139243 PMCID: PMC6742512 DOI: 10.1117/1.jbo.22.11.116006] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2017] [Accepted: 10/26/2017] [Indexed: 05/03/2023]
Abstract
In high-grade glioma surgery, tumor resection is often guided by intraoperative fluorescence imaging. 5-aminolevulinic acid-induced protoporphyrin IX (PpIX) provides fluorescent contrast between normal brain tissue and glioma tissue, thus achieving improved tumor delineation and prolonged patient survival compared with conventional white-light-guided resection. However, commercially available fluorescence imaging systems rely solely on visual assessment of fluorescence patterns by the surgeon, which makes the resection more subjective than necessary. We developed a wide-field spectrally resolved fluorescence imaging system utilizing a Generation II scientific CMOS camera and an improved computational model for the precise reconstruction of the PpIX concentration map. In our model, the tissue's optical properties and illumination geometry, which distort the fluorescent emission spectra, are considered. We demonstrate that the CMOS-based system can detect low PpIX concentration at short camera exposure times, while providing high-pixel resolution wide-field images. We show that total variation regularization improves the contrast-to-noise ratio of the reconstructed quantitative concentration map by approximately twofold. Quantitative comparison between the estimated PpIX concentration and tumor histopathology was also investigated to further evaluate the system.
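As a rough illustration of how total variation (TV) regularization can improve the contrast-to-noise ratio (CNR) of a concentration map, the sketch below TV-denoises a synthetic noisy map and reports the CNR before and after. The phantom geometry, noise level, and regularization weight are assumptions and are unrelated to the paper's reconstruction model.

```python
# Illustration only (not the paper's reconstruction): TV denoising of a noisy
# synthetic concentration map and the CNR metric used to quantify improvement.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(4)
true_map = np.zeros((128, 128))
true_map[40:90, 40:90] = 1.0                      # "fluorescent" region of a synthetic phantom
noisy = true_map + rng.normal(0, 0.4, true_map.shape)

denoised = denoise_tv_chambolle(noisy, weight=0.2)  # TV regularization strength is assumed

def cnr(img, signal_mask, background_mask):
    """CNR = |mean(signal) - mean(background)| / std(background)."""
    return abs(img[signal_mask].mean() - img[background_mask].mean()) / img[background_mask].std()

signal = true_map > 0.5
background = ~signal
print(f"CNR before TV: {cnr(noisy, signal, background):.2f}, "
      f"after TV: {cnr(denoised, signal, background):.2f}")
```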
Collapse
Affiliation(s)
- Yijing Xie
- University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Maria Thom
- University College London, Institute of Neurology, Department of Neuropathology, London, United Kingdom
| | - Michael Ebner
- University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Victoria Wykes
- University College London, Institute of Neurology, National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Adrien Desjardins
- University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Anna Miserocchi
- University College London, Institute of Neurology, National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Sebastien Ourselin
- University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Andrew W. McEvoy
- University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
- University College London, Institute of Neurology, National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Tom Vercauteren
- University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| |
Collapse
|