1. Cheng W, Li L, Chen J, Chen Z, Li J, Liu S, Zhang N, Gu F, Wang W, Wang W, Yang B, Liang L. In vivo lacrimal gland imaging artefact assessment based on swept-source optical coherence tomography for dry eye disease. Br J Ophthalmol 2025;109:554-560. [PMID: 39486885] [DOI: 10.1136/bjo-2024-325864]
Abstract
BACKGROUND: This study aimed to characterise imaging artefacts in the lacrimal gland using swept-source optical coherence tomography (SS-OCT) in patients with dry eye disease (DED) and healthy participants, and to identify risk factors for these artefacts. METHODS: In total, 151 eyes, including 104 from patients with DED and 47 from non-DED participants, were analysed. Demographic data collection, comprehensive ocular examinations and SS-OCT imaging of the palpebral lobe of the lacrimal gland were performed. Artefacts were classified into distinct categories with different severities. Univariate and multivariate logistic regression analyses were performed to evaluate the association of age, gender, best-corrected visual acuity, intraocular pressure (IOP) and the presence of DED with the presence of artefacts. RESULTS: Eight artefact types and their severity grading were defined by analysing 1208 lacrimal SS-OCT images. The three most prevalent artefacts were defocus (75.83%), cliff (67.47%) and Z-off (58.44%). The presence of artefacts was significantly associated with the presence of DED (OR=9.13; 95% CI, 2.39 to 34.88; p=0.001) and higher IOP (OR=1.34; 95% CI, 1.14 to 1.58; p<0.001). Furthermore, multivariate logistic analyses showed that lower tear film breakup time (OR=0.71; 95% CI, 0.55 to 0.92; p=0.009) and higher meibum quality score (OR=2.86; 95% CI, 1.49 to 5.48; p=0.002) were significantly associated with higher odds of artefact presence. CONCLUSIONS: DED eyes had more SS-OCT image artefacts than normal eyes. Stringent, standardised image quality control should be implemented before further image analysis when using SS-OCT to assess lacrimal gland images.
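The odds ratios above come from standard logistic regression. As a rough, hedged illustration only (not the authors' analysis code; column names and data are simulated placeholders), such odds ratios and 95% CIs can be obtained in Python with statsmodels:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    # Simulated stand-in data: one row per eye (placeholder columns, not the study's data).
    df = pd.DataFrame({
        "artefact_present": rng.binomial(1, 0.6, 151),
        "dry_eye": rng.binomial(1, 0.7, 151),
        "iop_mmhg": rng.normal(15, 3, 151),
    })

    X = sm.add_constant(df[["dry_eye", "iop_mmhg"]])   # multivariable model
    fit = sm.Logit(df["artefact_present"], X).fit(disp=0)

    odds_ratios = np.exp(fit.params)                   # OR = exp(beta)
    ci = np.exp(fit.conf_int())                        # 95% CI on the OR scale
    print(pd.concat([odds_ratios, ci], axis=1))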
Affiliation(s)
- Weijing Cheng, Longyue Li, Juejing Chen, Ziyan Chen, Jing Li, Siyi Liu, Nuan Zhang, Feng Gu, Wenhui Wang, Wei Wang, Boyu Yang, Lingyi Liang
- All authors: Sun Yat-Sen University Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Guangzhou, Guangdong, China
2. Bellemo V, Haindl R, Pramanik M, Liu L, Schmetterer L, Liu X. Complex conjugate removal in optical coherence tomography using phase aware generative adversarial network. J Biomed Opt 2025;30:026001. [PMID: 39963188] [PMCID: PMC11831228] [DOI: 10.1117/1.jbo.30.2.026001]
Abstract
Significance: Current methods for complex conjugate removal (CCR) in frequency-domain optical coherence tomography (FD-OCT) often require additional hardware components, which increase system complexity and cost. A software-based solution would provide a more efficient and cost-effective alternative. Aim: We aim to develop a deep learning approach that effectively removes complex conjugate artifacts (CCAs) from OCT scans without the need for extra hardware components. Approach: We introduce a deep learning method that employs generative adversarial networks to eliminate CCAs from OCT scans. Our model leverages both conventional intensity images and phase images from the OCT scans to enhance the artifact removal process. Results: Our CCR-generative adversarial network models successfully converted conventional OCT scans with CCAs into artifact-free scans across various samples, including phantoms, human skin, and mouse eyes imaged in vivo with a phase-stable swept-source OCT prototype. The inclusion of phase images significantly improved the performance of the deep learning models in removing CCAs. Conclusions: Our method provides a low-cost, data-driven, software-based solution that enhances FD-OCT imaging capabilities by removing CCAs.
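For context, the complex conjugate artifact arises because the detected spectral interferogram is real-valued, so its inverse Fourier transform is Hermitian-symmetric and every reflector appears twice, mirrored about zero delay. A minimal numpy illustration of that mirroring with a toy single-reflector fringe (not the authors' phase-aware GAN pipeline):

    import numpy as np

    n = 2048
    k = 2 * np.pi * np.arange(n) / n        # wavenumber samples (arbitrary units)
    z0 = 200                                # reflector depth, in depth-pixels

    fringe = np.cos(k * z0)                 # real-valued spectral interferogram
    a_scan = np.abs(np.fft.ifft(fringe))    # depth profile via inverse FFT

    peaks = np.argsort(a_scan)[-2:]         # two strongest depth bins
    print(np.sort(peaks))                   # depth bin 200 plus its conjugate mirror at 1848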
Affiliation(s)
- Valentina Bellemo
- Nanyang Technological University, School of Chemistry, Chemical Engineering and Biotechnology, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering, Singapore, Singapore
- Richard Haindl
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Manojit Pramanik
- Iowa State University, Department of Electrical and Computer Engineering, Ames, Iowa, United States
- Linbo Liu
- Guangzhou National Laboratory, Guangzhou, Guangdong, China
- Leopold Schmetterer
- Nanyang Technological University, School of Chemistry, Chemical Engineering and Biotechnology, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering, Singapore, Singapore
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Duke-NUS Medical School, Ophthalmology and Visual Sciences Academic Clinical Program, Singapore, Singapore
- Institute of Clinical and Experimental Ophthalmology, Basel, Switzerland
- Medical University of Vienna, Department of Clinical Pharmacology, Vienna, Austria
- Xinyu Liu
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering, Singapore, Singapore
- Duke-NUS Medical School, Ophthalmology and Visual Sciences Academic Clinical Program, Singapore, Singapore
- Peking University, Institute of Medical Technology, Beijing, China
3. Li K, Yang J, Liang W, Li X, Zhang C, Chen L, Wu C, Zhang X, Xu Z, Wang Y, Meng L, Zhang Y, Chen Y, Zhou SK. O-PRESS: Boosting OCT axial resolution with Prior guidance, Recurrence, and Equivariant Self-Supervision. Med Image Anal 2025;99:103319. [PMID: 39270466] [DOI: 10.1016/j.media.2024.103319]
Abstract
Optical coherence tomography (OCT) is a noninvasive technology that enables real-time imaging of tissue microanatomy. The axial resolution of OCT is intrinsically constrained by the spectral bandwidth of the employed light source while maintaining a fixed center wavelength for a specific application. Physically extending this bandwidth faces strong limitations and incurs substantial cost. We present a novel computational approach, called O-PRESS, for boosting the axial resolution of OCT with Prior guidance, a Recurrent mechanism, and Equivariant Self-Supervision. Diverging from conventional deconvolution methods that rely on physical models or data-driven techniques, our method seamlessly integrates OCT modeling and deep learning, enabling real-time axial-resolution enhancement exclusively from measurements, without the need for paired images. Our approach addresses the two primary tasks of resolution enhancement and noise reduction in a single treatment. Both tasks are executed in a self-supervised manner, with equivariant imaging and free-space priors guiding their respective processes. Experimental evaluations, encompassing both quantitative metrics and visual assessments, consistently verify the efficacy and superiority of our approach, which exhibits performance on par with fully supervised methods. Importantly, the robustness of our model is affirmed, showcasing its dual capability to enhance axial resolution while concurrently improving the signal-to-noise ratio.
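A hedged sketch of the equivariant self-supervision idea in isolation (illustrative only; O-PRESS combines it with prior guidance and a recurrent architecture, and this toy network is a placeholder): the restoration network should commute with simple geometric transforms of its input, which yields a training signal without paired ground truth.

    import torch

    def equivariance_loss(net, y):
        """Penalise the network for not commuting with a horizontal flip."""
        t = lambda x: torch.flip(x, dims=[-1])          # transform T
        return torch.mean((net(t(y)) - t(net(y))) ** 2)

    # Placeholder network and a fake batch of low-resolution B-scans.
    net = torch.nn.Sequential(
        torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(16, 1, 3, padding=1),
    )
    y = torch.randn(4, 1, 64, 64)
    loss = equivariance_loss(net, y)
    loss.backward()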
Affiliation(s)
- Kaiyan Li
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China (USTC), Hefei Anhui, 230026, China; Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou Jiangsu, 215123, China
- Jingyuan Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Wenxuan Liang
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China (USTC), Hefei Anhui, 230026, China; Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou Jiangsu, 215123, China; School of Physical Sciences, University of Science and Technology of China, Hefei Anhui, 230026, China
- Xingde Li
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, 21287, USA
- Chenxi Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Lulu Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Chan Wu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Xiao Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Zhiyan Xu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Yueling Wang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Lihui Meng
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Yue Zhang
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China (USTC), Hefei Anhui, 230026, China; Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou Jiangsu, 215123, China
- Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China.
- S Kevin Zhou
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China (USTC), Hefei Anhui, 230026, China; Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou Jiangsu, 215123, China; Key Laboratory of Precision and Intelligent Chemistry, USTC, Hefei Anhui, 230026, China; Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing, 100190, China.
4. Viqar M, Sahin E, Stoykova E, Madjarova V. Reconstruction of Optical Coherence Tomography Images from Wavelength Space Using Deep Learning. Sensors (Basel) 2024;25:93. [PMID: 39796883] [PMCID: PMC11723098] [DOI: 10.3390/s25010093]
Abstract
Conventional Fourier-domain Optical Coherence Tomography (FD-OCT) systems depend on resampling into the wavenumber (k) domain to extract the depth profile. This either necessitates additional hardware resources or amplifies the existing computational complexity. Moreover, OCT images also suffer from speckle noise due to the technique's inherent reliance on low-coherence interferometry. We propose a streamlined and computationally efficient approach based on Deep Learning (DL) which enables reconstructing speckle-reduced OCT images directly from the wavelength (λ) domain. For reconstruction, two encoder-decoder-style networks, namely a Spatial Domain Convolutional Neural Network (SD-CNN) and a Fourier Domain CNN (FD-CNN), are used sequentially. The SD-CNN exploits the highly degraded images obtained by Fourier transforming the λ-domain fringes to reconstruct the deteriorated morphological structures along with suppression of unwanted noise. The FD-CNN leverages this output to enhance the image quality further by optimization in the Fourier domain (FD). We quantitatively and visually demonstrate the efficacy of the method in obtaining high-quality OCT images. Furthermore, we illustrate the computational complexity reduction by harnessing the power of DL models. We believe that this work lays the framework for further innovations in the realm of OCT image reconstruction.
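For reference, the conventional pipeline that this work sidesteps interpolates the fringes from a uniform wavelength grid onto a uniform wavenumber grid (k = 2π/λ) before the inverse Fourier transform. A generic numpy/scipy sketch of that resampling step (illustrative values, not the authors' system parameters):

    import numpy as np
    from scipy.interpolate import interp1d

    def resample_to_k(fringe, lam):
        """Resample a spectral fringe from a uniform wavelength grid to a uniform k grid."""
        k = 2 * np.pi / lam                               # k decreases as lambda increases
        order = np.argsort(k)                             # sort so k is increasing
        k_uniform = np.linspace(k.min(), k.max(), k.size)
        return interp1d(k[order], fringe[order], kind="cubic")(k_uniform)

    lam = np.linspace(1250e-9, 1350e-9, 2048)             # hypothetical swept band (metres)
    z = 0.5e-3                                            # toy reflector depth (metres)
    fringe = np.cos(2 * (2 * np.pi / lam) * z)            # toy fringe: cos(2*k*z)
    a_scan = np.abs(np.fft.ifft(resample_to_k(fringe, lam)))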
Affiliation(s)
- Maryam Viqar
- Faculty of Information Technology and Communication Sciences, Tampere University, 33720 Tampere, Finland
- Institute of Optical Materials and Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
- Erdem Sahin
- Faculty of Information Technology and Communication Sciences, Tampere University, 33720 Tampere, Finland
- Elena Stoykova
- Institute of Optical Materials and Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
- Violeta Madjarova
- Institute of Optical Materials and Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
5. Wang M, Mao J, Su H, Ling Y, Zhou C, Su Y. Physics-guided deep learning-based real-time image reconstruction of Fourier-domain optical coherence tomography. Biomed Opt Express 2024;15:6619-6637. [PMID: 39553872] [PMCID: PMC11563334] [DOI: 10.1364/boe.538756]
Abstract
In this paper, we introduce a physics-guided deep learning approach for high-quality, real-time Fourier-domain optical coherence tomography (FD-OCT) image reconstruction. Unlike traditional supervised deep learning methods, the proposed method employs unsupervised learning. It leverages the underlying OCT imaging physics to guide the neural networks, which could thus generate high-quality images and provide a physically sound solution to the original problem. Evaluations on synthetic and experimental datasets demonstrate the superior performance of our proposed physics-guided deep learning approach. The method achieves the highest image quality metrics compared to the inverse discrete Fourier transform (IDFT), the optimization-based methods, and several state-of-the-art methods based on deep learning. Our method enables real-time frame rates of 232 fps for synthetic images and 87 fps for experimental images, which represents significant improvements over existing techniques. Our physics-guided deep learning-based approach could offer a promising solution for FD-OCT image reconstruction, which potentially paves the way for leveraging the power of deep learning in real-world OCT imaging applications.
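A hedged sketch of the general physics-guided idea described above (not the authors' exact formulation or network): instead of supervising with ground-truth images, the loss enforces consistency between the measured fringes and the fringes re-synthesised from the network's image estimate through the known FD-OCT forward model, here idealised as a Fourier transform.

    import torch

    def physics_guided_loss(net, fringes):
        """Unsupervised data-consistency loss: forward-model the estimate back to fringes."""
        est = net(fringes.unsqueeze(1))                    # (batch, 2, n_k): real/imag channels
        x = torch.complex(est[:, 0], est[:, 1])            # complex A-line estimate
        resynth = torch.fft.fft(x, dim=-1).real            # idealised forward model
        return torch.mean((resynth - fringes) ** 2)

    net = torch.nn.Conv1d(1, 2, kernel_size=5, padding=2)  # toy stand-in for the real network
    fringes = torch.randn(8, 2048)                         # measured spectral interferograms
    loss = physics_guided_loss(net, fringes)
    loss.backward()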
Affiliation(s)
- Mengyuan Wang
- Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jianing Mao
- Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China
- Hang Su
- Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yuye Ling
- Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China
- Chuanqing Zhou
- College of Medical Instrument, Shanghai University of Medicine and Health Sciences, Shanghai, China
- Yikai Su
- State Key Lab of Advanced Optical Communication Systems and Networks, Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China
6. Fanous MJ, Casteleiro Costa P, Işıl Ç, Huang L, Ozcan A. Neural network-based processing and reconstruction of compromised biophotonic image data. Light Sci Appl 2024;13:231. [PMID: 39237561] [PMCID: PMC11377739] [DOI: 10.1038/s41377-024-01544-9]
Abstract
In recent years, the integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of e.g., cost, speed, and form-factor, followed by compensating for the resulting defects through the utilization of deep learning models trained on a large amount of ideal, superior or alternative data. This strategic approach has found increasing popularity due to its potential to enhance various aspects of biophotonic imaging. One of the primary motivations for employing this strategy is the pursuit of higher temporal resolution or increased imaging speed, critical for capturing fine dynamic biological processes. Additionally, this approach offers the prospect of simplifying hardware requirements and complexities, thereby making advanced imaging standards more accessible in terms of cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function (PSF), signal-to-noise ratio (SNR), sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim to not only recuperate them through the application of deep learning networks, but also bolster in return other crucial parameters, such as the field of view (FOV), depth of field (DOF), and space-bandwidth product (SBP). Throughout this article, we discuss various biophotonic methods that have successfully employed this strategic approach. These techniques span a wide range of applications and showcase the versatility and effectiveness of deep learning in the context of compromised biophotonic data. Finally, by offering our perspectives on the exciting future possibilities of this rapidly evolving concept, we hope to motivate our readers from various disciplines to explore novel ways of balancing hardware compromises with compensation via artificial intelligence (AI).
Affiliation(s)
- Michael John Fanous
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Paloma Casteleiro Costa
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Çağatay Işıl
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA.
- Bioengineering Department, University of California, Los Angeles, CA, USA.
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA.
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, USA.
7. Song K, Bian Y, Zeng F, Liu Z, Han S, Li J, Tian J, Li K, Shi X, Xiao L. Photon-level single-pixel 3D tomography with masked attention network. Opt Express 2024;32:4387-4399. [PMID: 38297641] [DOI: 10.1364/oe.510706]
Abstract
Tomography plays an important role in characterizing the three-dimensional structure of samples in specialized scenarios. In this paper, a masked attention network is presented to eliminate interference from different layers of the sample, substantially enhancing the resolution of photon-level single-pixel tomographic imaging. Simulation and experimental results demonstrate that the axial and lateral resolution of the imaging system can be improved by about 3 and 2 times, respectively, at a sampling rate of 3.0%. The scheme is expected to be seamlessly integrated into various tomography systems, which is conducive to promoting tomographic imaging in biology, medicine, and materials science.
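The masked attention named in the title can be read as standard scaled dot-product attention with a mask that blocks contributions from unwanted positions (here, returns attributed to other sample layers). A generic, hedged sketch of the operation (not the authors' network; shapes and masking pattern are placeholders):

    import torch
    import torch.nn.functional as F

    def masked_attention(q, k, v, mask):
        """Scaled dot-product attention; `mask` is True where attention is blocked."""
        scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
        scores = scores.masked_fill(mask, float("-inf"))
        return F.softmax(scores, dim=-1) @ v

    q = k = v = torch.randn(1, 16, 32)                    # (batch, tokens, feature dim)
    mask = torch.zeros(1, 16, 16, dtype=torch.bool)
    mask[:, :, 8:] = True                                 # e.g. suppress tokens from another layer
    out = masked_attention(q, k, v, mask)                 # (1, 16, 32)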
8. Salimi M, Roshanfar M, Tabatabaei N, Mosadegh B. Machine Learning-Assisted Short-Wave InfraRed (SWIR) Techniques for Biomedical Applications: Towards Personalized Medicine. J Pers Med 2023;14:33. [PMID: 38248734] [PMCID: PMC10817559] [DOI: 10.3390/jpm14010033]
Abstract
Personalized medicine transforms healthcare by adapting interventions to individuals' unique genetic, molecular, and clinical profiles. To maximize diagnostic and/or therapeutic efficacy, personalized medicine requires advanced imaging devices and sensors for accurate assessment and monitoring of individual patient conditions or responses to therapeutics. In the field of biomedical optics, short-wave infrared (SWIR) techniques offer an array of capabilities that hold promise to significantly enhance diagnostics, imaging, and therapeutic interventions. SWIR techniques provide in vivo information that was previously inaccessible by making use of their capacity to penetrate biological tissues with reduced attenuation, enabling researchers and clinicians to delve deeper into anatomical structures, physiological processes, and molecular interactions. Combining SWIR techniques with machine learning (ML), a powerful tool for analyzing information, holds the potential to provide unprecedented accuracy for disease detection, precision in treatment guidance, and correlations of complex biological features, opening the way for data-driven personalized medicine. Despite numerous biomedical demonstrations that utilize cutting-edge SWIR techniques, the clinical potential of this approach has remained significantly underexplored. This paper demonstrates how the synergy between SWIR imaging and ML is reshaping biomedical research and clinical applications. As the paper showcases the growing significance of SWIR imaging techniques that are empowered by ML, it calls for continued collaboration between researchers, engineers, and clinicians to boost the translation of this technology into clinics, ultimately bridging the gap between cutting-edge technology and its potential for personalized medicine.
Affiliation(s)
- Majid Roshanfar
- Department of Mechanical Engineering, Concordia University, Montreal, QC H3G 1M8, Canada;
- Nima Tabatabaei
- Department of Mechanical Engineering, York University, Toronto, ON M3J 1P3, Canada;
- Bobak Mosadegh
- Dalio Institute of Cardiovascular Imaging, Department of Radiology, Weill Cornell Medicine, New York, NY 10021, USA
9. Li X, Dong Z, Liu H, Kang-Mieler JJ, Ling Y, Gan Y. Frequency-aware optical coherence tomography image super-resolution via conditional generative adversarial neural network. Biomed Opt Express 2023;14:5148-5161. [PMID: 37854579] [PMCID: PMC10581809] [DOI: 10.1364/boe.494557]
Abstract
Optical coherence tomography (OCT) has stimulated a wide range of medical image-based diagnosis and treatment in fields such as cardiology and ophthalmology. Such applications can be further facilitated by deep learning-based super-resolution technology, which improves the capability of resolving morphological structures. However, existing deep learning-based methods focus only on the spatial distribution and disregard frequency fidelity in image reconstruction, leading to a frequency bias. To overcome this limitation, we propose a frequency-aware super-resolution framework that integrates three critical frequency-based modules (i.e., frequency transformation, frequency skip connection, and frequency alignment) and a frequency-based loss function into a conditional generative adversarial network (cGAN). We conducted a large-scale quantitative study on an existing coronary OCT dataset to demonstrate the superiority of our proposed framework over existing deep learning frameworks. In addition, we confirmed the generalizability of our framework by applying it to fish corneal images and rat retinal images, demonstrating its capability to super-resolve morphological details in eye imaging.
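A hedged sketch of the kind of frequency-based loss term described above, as a generic Fourier-magnitude penalty added to the usual spatial and adversarial terms (the paper's actual modules and loss are more elaborate):

    import torch

    def frequency_loss(sr, hr):
        """Penalise spectral discrepancies between super-resolved and reference images."""
        return torch.mean(torch.abs(torch.fft.fft2(sr) - torch.fft.fft2(hr)))

    def generator_loss(sr, hr, adv_term, w_freq=0.1):
        """Spatial L1 term + adversarial term + frequency-domain term."""
        return torch.mean(torch.abs(sr - hr)) + adv_term + w_freq * frequency_loss(sr, hr)

    sr = torch.randn(2, 1, 128, 128, requires_grad=True)   # toy generator output
    hr = torch.randn(2, 1, 128, 128)                       # toy reference patch
    loss = generator_loss(sr, hr, adv_term=torch.tensor(0.0))
    loss.backward()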
Affiliation(s)
- Xueshen Li
- Department of Biomedical Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA
- Zhenxing Dong
- Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, Minhang District, 200240, China
- Hongshan Liu
- Department of Biomedical Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA
- Jennifer J. Kang-Mieler
- Department of Biomedical Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA
- Yuye Ling
- Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, Minhang District, 200240, China
- Yu Gan
- Department of Biomedical Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA
10. Lee W, Nam HS, Seok JY, Oh WY, Kim JW, Yoo H. Deep learning-based image enhancement in optical coherence tomography by exploiting interference fringe. Commun Biol 2023;6:464. [PMID: 37117279] [PMCID: PMC10147647] [DOI: 10.1038/s42003-023-04846-7]
Abstract
Optical coherence tomography (OCT), an interferometric imaging technique, provides non-invasive, high-speed, highly sensitive volumetric biological imaging in vivo. However, systemic features inherent in the basic operating principle of OCT limit aspects of its imaging performance such as spatial resolution and signal-to-noise ratio. Here, we propose a deep learning-based OCT image enhancement framework that exploits raw interference fringes to achieve further enhancement over currently obtainable optimized images. The proposed framework for enhancing spatial resolution and reducing speckle noise in OCT images consists of two separate models: an A-scan-based network (NetA) and a B-scan-based network (NetB). NetA utilizes spectrograms obtained via short-time Fourier transform of raw interference fringes to enhance the axial resolution of A-scans. NetB was introduced to enhance lateral resolution and reduce speckle noise in B-scan images. The individually trained networks were applied sequentially. We demonstrate the versatility and capability of the proposed framework by visually and quantitatively validating its robust performance. Comparative studies suggest that deep learning utilizing interference fringes can outperform the existing methods. Furthermore, we demonstrate the advantages of the proposed method by comparing our outcomes with multi-B-scan averaged images and contrast-adjusted images. We expect that the proposed framework will be a versatile technology that can improve the functionality of OCT.
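The A-scan network above takes spectrograms of the raw fringes as input. A minimal scipy sketch of computing such a short-time Fourier transform from a single interference fringe (window and segment lengths are placeholders, not the paper's settings):

    import numpy as np
    from scipy.signal import stft

    n_k = 2048
    m = np.arange(n_k)
    fringe = np.cos(2 * np.pi * 300 * m / n_k) + 0.1 * np.random.randn(n_k)  # toy raw fringe

    # STFT over the spectral axis: each column is a local spectrum; frequency maps to depth.
    freqs, positions, spectrogram = stft(fringe, nperseg=256, noverlap=192)
    print(spectrogram.shape)    # (frequency bins, spectral-window positions)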
Affiliation(s)
- Woojin Lee
- Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
- Hyeong Soo Nam
- Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
- Jae Yeon Seok
- Department of Pathology, Yongin Severance Hospital, Yonsei University College of Medicine, 363 Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, Republic of Korea
- Wang-Yuhl Oh
- Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
- Jin Won Kim
- Multimodal Imaging and Theranostic Lab, Cardiovascular Center, Korea University Guro Hospital, 148 Gurodong-ro, Guro-gu, Seoul, 08308, Republic of Korea
- Hongki Yoo
- Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea.
11. Liu F, Liu J, Chen Q, Wang X, Liu C. SiamHAS: Siamese Tracker with Hierarchical Attention Strategy for Aerial Tracking. Micromachines (Basel) 2023;14:893. [PMID: 37421126] [DOI: 10.3390/mi14040893]
Abstract
Siamese network-based trackers that use modern deep feature-extraction networks without taking full advantage of the different levels of features are prone to tracking drift in aerial scenarios such as target occlusion, scale variation, and low-resolution target tracking. This imperfect utilization of features also lowers accuracy in challenging visual-tracking scenarios. To improve the performance of existing Siamese trackers in the above-mentioned challenging scenes, we propose a Siamese tracker based on Transformer multi-level feature enhancement with a hierarchical attention strategy. The saliency of the extracted features is enhanced by the Transformer multi-level enhancement process, and the hierarchical attention strategy makes the tracker adaptively attend to target-region information, improving tracking performance in challenging aerial scenarios. We conducted extensive experiments with qualitative and quantitative analyses on the UAV123, UAV20L, and OTB100 datasets. The experimental results show that our SiamHAS performs favorably against several state-of-the-art trackers in these challenging scenarios.
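At the core of any Siamese tracker is a cross-correlation between template (exemplar) features and search-region features, producing a response map whose peak localises the target; the paper's Transformer-based multi-level enhancement and hierarchical attention refine the features feeding this step. A generic, hedged sketch of the correlation itself (feature shapes are placeholders):

    import torch
    import torch.nn.functional as F

    def siamese_response(template_feat, search_feat):
        """Cross-correlate template features over search features (single sample)."""
        # The template feature map acts as a convolution kernel over the search features.
        return F.conv2d(search_feat.unsqueeze(0), template_feat.unsqueeze(0))

    template_feat = torch.randn(256, 6, 6)        # exemplar-branch features
    search_feat = torch.randn(256, 22, 22)        # search-branch features
    response = siamese_response(template_feat, search_feat)   # (1, 1, 17, 17)
    peak = torch.argmax(response)                 # flat index of the strongest response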
Affiliation(s)
- Faxue Liu
- Changchun Institute of Optics, Fine Mechanics and Physics (CIOMP), Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Jinghong Liu
- Changchun Institute of Optics, Fine Mechanics and Physics (CIOMP), Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Qiqi Chen
- Changchun Institute of Optics, Fine Mechanics and Physics (CIOMP), Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Xuan Wang
- Changchun Institute of Optics, Fine Mechanics and Physics (CIOMP), Chinese Academy of Sciences, Changchun 130033, China
- Chenglong Liu
- Changchun Institute of Optics, Fine Mechanics and Physics (CIOMP), Chinese Academy of Sciences, Changchun 130033, China
12. Ling Y, Dong Z, Li X, Gan Y, Su Y. Deep learning empowered highly compressive SS-OCT via learnable spectral-spatial sub-sampling. Opt Lett 2023;48:1910-1913. [PMID: 37221797] [DOI: 10.1364/ol.484500]
Abstract
With the rapid advances in light source technology, the A-line imaging rate of swept-source optical coherence tomography (SS-OCT) has increased greatly over the past three decades. The bandwidths of data acquisition, data transfer, and data storage, which can easily reach several hundred megabytes per second, are now considered major bottlenecks for modern SS-OCT system design. To address these issues, various compression schemes have been previously proposed. However, most current methods focus on enhancing the capability of the reconstruction algorithm and can only provide a data compression ratio (DCR) up to 4 without impairing the image quality. In this Letter, we propose a novel design paradigm in which the sub-sampling pattern for interferogram acquisition is jointly optimized with the reconstruction algorithm in an end-to-end manner. To validate the idea, we retrospectively apply the proposed method to an ex vivo human coronary optical coherence tomography (OCT) dataset. The proposed method could reach a maximum DCR of ∼62.5 with a peak signal-to-noise ratio (PSNR) of 24.2 dB, while a DCR of ∼27.78 could yield a visually pleasing image with a PSNR of ∼24.6 dB. We believe the proposed system could be a viable remedy for the ever-growing data issue in SS-OCT.
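The two figures of merit quoted above are simple to compute. A hedged sketch with placeholder arrays (not the coronary dataset or the paper's reconstruction):

    import numpy as np

    def data_compression_ratio(n_full, n_kept):
        """DCR: size of the fully sampled acquisition over the sub-sampled one."""
        return n_full / n_kept

    def psnr_db(reference, reconstruction, peak=1.0):
        """Peak signal-to-noise ratio in dB."""
        mse = np.mean((reference - reconstruction) ** 2)
        return 10 * np.log10(peak ** 2 / mse)

    ref = np.random.rand(512, 512)
    rec = np.clip(ref + 0.01 * np.random.randn(512, 512), 0, 1)
    print(data_compression_ratio(512 * 2048, 512 * 2048 // 28))   # DCR of ~28
    print(psnr_db(ref, rec))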
13. Li X, Cao S, Liu H, Yao X, Brott BC, Litovsky SH, Song X, Ling Y, Gan Y. Multi-Scale Reconstruction of Undersampled Spectral-Spatial OCT Data for Coronary Imaging Using Deep Learning. IEEE Trans Biomed Eng 2022;69:3667-3677. [PMID: 35594212] [PMCID: PMC10000308] [DOI: 10.1109/tbme.2022.3175670]
Abstract
Coronary artery disease (CAD) is a cardiovascular condition with high morbidity and mortality. Intravascular optical coherence tomography (IVOCT) has been considered an optimal imaging modality for the diagnosis and treatment of CAD. Constrained by the Nyquist theorem, dense sampling in IVOCT attains the high resolving power needed to delineate cellular structures/features; hence there is a trade-off between high spatial resolution and fast scanning rate for coronary imaging. In this paper, we propose a viable spectral-spatial acquisition method that down-scales the sampling process in both the spectral and spatial domains while maintaining high quality in image reconstruction. The down-scaling schedule boosts data acquisition speed without any hardware modifications. Additionally, we propose a unified multi-scale reconstruction framework, namely the Multiscale-Spectral-Spatial-Magnification Network (MSSMN), to resolve highly down-scaled (compressed) OCT images with flexible magnification factors. We incorporate the proposed methods into spectral-domain OCT (SD-OCT) imaging of human coronary samples with clinical features such as stents and calcified lesions. Our experimental results demonstrate that spectral-spatial down-scaled data can be reconstructed better than data down-scaled solely in either the spectral or the spatial domain. Moreover, we observe better reconstruction performance using MSSMN than with existing reconstruction methods. Our acquisition method and multi-scale reconstruction framework, in combination, may allow faster SD-OCT inspection with high resolution during coronary intervention.
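In its simplest form, the spectral-spatial acquisition idea amounts to keeping only every s-th spectral sample and every a-th A-line of the raw interferogram matrix, shrinking the data in both domains at once; the paper's actual sampling schedule and MSSMN reconstruction are of course more involved. A toy sketch:

    import numpy as np

    def spectral_spatial_downscale(raw_bscan, aline_step=2, spectral_step=2):
        """Down-scale raw B-scan data in both the spatial (A-line) and spectral domains.

        raw_bscan: 2D array of shape (n_alines, n_spectral_samples).
        """
        return raw_bscan[::aline_step, ::spectral_step]

    raw = np.random.randn(500, 2048)                      # toy raw B-scan
    compressed = spectral_spatial_downscale(raw, 2, 2)    # 4x less data to acquire/transfer
    print(raw.shape, "->", compressed.shape)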
14. Noise Reduction of OCT Images Based on the External Patch Prior Guided Internal Clustering and Morphological Analysis. Photonics 2022. [DOI: 10.3390/photonics9080543]
Abstract
Optical coherence tomography (OCT) is widely used in biomedical imaging. However, noise severely affects diagnosing and identifying diseased tissues on OCT images. Here, a noise reduction method based on external patch prior guided internal clustering and morphological analysis (E2PGICMA) is developed to remove the noise from OCT images. The external patch prior guided internal clustering algorithm is used to reduce speckle noise, and the morphological analysis algorithm is applied to the background for contrast enhancement. OCT images of in vivo normal skin tissues were analyzed to remove noise using the proposed method. The estimated standard deviations of the noise were chosen as different values for evaluating the quantitative metrics. The visual quality improvement includes better preservation of textures and fine detail. The denoising effects of different methods were compared, and quantitative and qualitative evaluations of the proposed method were conducted. The results demonstrated that the SNR, PSNR, and XCOR were higher than those of the other noise-reduction methods, reaching 15.05 dB, 27.48 dB, and 0.9959, respectively. Furthermore, the presented method's noise reduction ratio (NRR) reached 0.8999. The proposed method can efficiently remove both background and speckle noise, outperforming existing state-of-the-art OCT despeckling methods.
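The reported metrics can be reproduced from common definitions. A hedged sketch of SNR, PSNR, and normalised cross-correlation (XCOR) on placeholder images (definitions vary between papers; these are generic forms, not necessarily the exact ones used here):

    import numpy as np

    def snr_db(signal_region, background_region):
        """Mean signal over background standard deviation, in dB."""
        return 20 * np.log10(np.mean(signal_region) / np.std(background_region))

    def psnr_db(reference, denoised):
        mse = np.mean((reference - denoised) ** 2)
        return 10 * np.log10(reference.max() ** 2 / mse)

    def xcor(a, b):
        """Normalised cross-correlation coefficient between two images."""
        a, b = a - a.mean(), b - b.mean()
        return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

    noisy = np.random.rand(256, 256)
    denoised = noisy + 0.01 * np.random.randn(256, 256)
    print(xcor(noisy, denoised), psnr_db(noisy, denoised))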
15. Rawat S, Wendoloski J, Wang A. cGAN-assisted imaging through stationary scattering media. Opt Express 2022;30:18145-18155. [PMID: 36221621] [DOI: 10.1364/oe.450321]
Abstract
Analyzing images taken through scattering media is challenging, owing to speckle decorrelations from perturbations in the media. For in-line imaging modalities, which are appealing because they are compact, require no moving parts, and are robust, negating the effects of such scattering becomes particularly challenging. Here we explore the use of conditional generative adversarial networks (cGANs) to mitigate the effects of the additional scatterers in in-line geometries, including digital holographic microscopy. Using light scattering simulations and experiments on objects of interest with and without additional scatterers, we find that cGANs can be quickly trained with minuscule datasets and can also efficiently learn the one-to-one statistical mapping between the cross-domain input-output image pairs. Importantly, the output images are faithful enough to enable quantitative feature extraction. We also show that with rapid training using only 20 image pairs, it is possible to negate this undesired scattering to accurately localize diffraction-limited impulses with high spatial accuracy, therefore transforming a shift variant system to a linear shift invariant (LSI) system.
16. Rapid Vehicle Detection in Aerial Images under the Complex Background of Dense Urban Areas. Remote Sensing 2022. [DOI: 10.3390/rs14092088]
Abstract
Vehicle detection in aerial remote sensing images under the complex background of urban areas has always received great attention in the field of remote sensing; however, remote sensing images usually cover a large area, the vehicles are small, and the background is complex. Therefore, compared with object detection in ground-view images, vehicle detection in aerial images remains a challenging problem. In this paper, we propose a single-scale rapid convolutional neural network (SSRD-Net). In the proposed framework, we design a global relational (GR) block to enhance the fusion of local and global features; moreover, we adjust the image segmentation method to unify the vehicle size in the input image, thus simplifying the model structure and improving the detection speed. We further introduce an aerial remote sensing image dataset with rotating bounding boxes (RO-ARS), which has complex backgrounds such as snow, clouds, and fog scenes. We also design a data augmentation method to obtain more images with clouds and fog. Finally, we evaluate the performance of the proposed model on several datasets; the experimental results show that the recall and precision are improved compared with existing methods.
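One concrete step mentioned above, adjusting the image segmentation so vehicles appear at a consistent scale in the network input, amounts to tiling the large aerial frame into fixed-size crops before detection. A generic, hedged sketch of such tiling (tile size and overlap are placeholders, not the paper's settings):

    import numpy as np

    def tile_image(frame, tile=512, stride=448):
        """Cut a large aerial frame into overlapping fixed-size tiles for the detector."""
        tiles, offsets = [], []
        h, w = frame.shape[:2]
        for y in range(0, max(h - tile, 0) + 1, stride):
            for x in range(0, max(w - tile, 0) + 1, stride):
                tiles.append(frame[y:y + tile, x:x + tile])
                offsets.append((y, x))     # needed to map detections back to frame coordinates
        return tiles, offsets

    frame = np.zeros((2048, 2048, 3), dtype=np.uint8)   # toy aerial frame
    tiles, offsets = tile_image(frame)
    print(len(tiles), tiles[0].shape)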
17. Sun T. Light People: Professor Aydogan Ozcan. Light Sci Appl 2021;10:208. [PMID: 34611128] [PMCID: PMC8491441] [DOI: 10.1038/s41377-021-00643-1]
Abstract
In 2016, the news that Google's artificial intelligence (AI) program AlphaGo, based on the principles of deep learning, defeated Lee Sedol, the former world Go champion and famous 9-dan player from Korea, caused a sensation in both the AI and Go communities and carried epoch-making significance for the development of deep learning. Deep learning is a complex machine learning algorithm that uses multiple layers of artificial neural networks to automatically analyze signals or data. At present, deep learning has penetrated our daily life, in applications such as face recognition and speech recognition. Scientists have also made many remarkable achievements based on deep learning. Professor Aydogan Ozcan of the University of California, Los Angeles (UCLA) led his team in research on deep learning algorithms, which provided new ideas for exploring optical computational imaging and sensing technology, and introduced image generation and reconstruction methods that brought major technological innovations to related fields. Optical designs and devices are moving from being physically driven to being data-driven. We are much honored to have Aydogan Ozcan, Fellow of the National Academy of Inventors and Chancellor's Professor at UCLA, unscramble his latest scientific research results and his foresight for the future development of related fields, and share his journey of pursuing optics, his indissoluble relationship with Light: Science & Applications (LSA), and his experience in talent cultivation.
Affiliation(s)
- Tingting Sun
- Light Publishing Group, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, 3888 Dong Nan Hu Road, Changchun, 130033, China.