1
Liao J, Zhang T, Li C, Huang Z. Sub-Second Optical Coherence Tomography Angiography Protocol for Intraoral Imaging Using an Efficient Super-Resolution Network. Journal of Biophotonics 2025:e70050. [PMID: 40254547 DOI: 10.1002/jbio.70050]
Abstract
This study introduces a fast optical coherence tomography angiography (OCTA) protocol for intraoral imaging, based on a 200 kHz swept-source optical coherence tomography system and an efficient Intraoral Micro-Angiography Super-Resolution Transformer (IMAST) model. The protocol reduces acquisition time to ~0.3 s by lowering the spatial sampling density, thereby minimizing motion artifacts while maintaining the field of view and image quality. The IMAST model combines a transformer-based architecture with convolutional operations to reconstruct high-resolution intraoral OCTA images from the reduced-resolution scans. Experimental results from various intraoral sites and conditions demonstrate the model's robustness and its superior image-quality enhancement compared with existing deep-learning methods. In addition, IMAST offers advantages in model complexity, inference time, and computational cost, underscoring its suitability for clinical environments. These findings support the potential of our approach for noninvasive oral disease diagnosis, reducing patient discomfort and facilitating early detection of malignancies, thus serving as a valuable tool for oral assessment.
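As a rough illustration of the general super-resolution step described above, and not the published IMAST architecture, the following minimal PyTorch sketch upsamples an undersampled en face OCTA patch with a small convolutional body followed by pixel-shuffle upsampling. The layer widths, the scale factor of 4, and the class name are assumptions chosen for illustration only.

```python
# Minimal super-resolution sketch (illustrative only, not the published IMAST).
import torch
import torch.nn as nn

class TinySR(nn.Module):
    """Toy CNN that upsamples a single-channel en face OCTA image by `scale`."""
    def __init__(self, scale: int = 4, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # PixelShuffle rearranges channels into spatial resolution (sub-pixel convolution).
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, scale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.upsample(self.body(x))

if __name__ == "__main__":
    lr = torch.rand(1, 1, 100, 100)    # undersampled en face OCTA patch
    sr = TinySR(scale=4)(lr)           # -> (1, 1, 400, 400)
    print(sr.shape)
```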
Affiliation(s)
- Jinpeng Liao
- Centre of Medical Engineering and Technology (CMET), University of Dundee, Dundee, Scotland, UK
- Healthcare Engineering, School of Physics and Engineering Technology, University of York, York, England, UK
- Tianyu Zhang
- Centre of Medical Engineering and Technology (CMET), University of Dundee, Dundee, Scotland, UK
- Healthcare Engineering, School of Physics and Engineering Technology, University of York, York, England, UK
- Chunhui Li
- Centre of Medical Engineering and Technology (CMET), University of Dundee, Dundee, Scotland, UK
- Zhihong Huang
- Healthcare Engineering, School of Physics and Engineering Technology, University of York, York, England, UK
2
Liao J, Zhang T, Li C, Huang Z. LS-Net: lightweight segmentation network for dermatological epidermal segmentation in optical coherence tomography imaging. Biomedical Optics Express 2024; 15:5723-5738. [PMID: 39421780 PMCID: PMC11482159 DOI: 10.1364/boe.529662]
Abstract
Optical coherence tomography (OCT) can be an important tool for non-invasive dermatological evaluation, providing useful data on epidermal integrity for diagnosing skin diseases. Despite these benefits, OCT's utility is limited by the challenge of accurate, fast epidermal segmentation arising from the morphological diversity of skin. To address this, we introduce a lightweight segmentation network (LS-Net), a novel deep learning model that combines the robust local feature extraction of convolutional neural networks with the long-range information processing of Vision Transformers. LS-Net uses a depth-wise convolutional transformer for enhanced spatial contextualization and a squeeze-and-excitation block for feature recalibration, ensuring precise segmentation while maintaining computational efficiency. Our network outperforms existing methods, demonstrating high segmentation accuracy (mean Dice: 0.9624; mean IoU: 0.9468) with significantly reduced computational demands (1.131 G floating-point operations). We further validate LS-Net on our acquired dataset, showing its effectiveness across various skin sites (e.g., face, palm) under realistic clinical conditions. This model promises to enhance the diagnostic capabilities of OCT, making it a valuable tool for dermatological practice.
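The Dice and IoU figures quoted above follow the standard overlap definitions for binary masks; the sketch below computes both with NumPy. The exact averaging protocol used in the paper (per image, per dataset) is an assumption and may differ.

```python
# Dice and IoU for binary segmentation masks (standard definitions; the paper's
# exact evaluation protocol, e.g. how scores are averaged, may differ).
import numpy as np

def dice_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8):
    """pred, target: boolean arrays of the same shape (epidermis = True)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

if __name__ == "__main__":
    a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
    b = np.zeros((64, 64), bool); b[12:42, 12:42] = True
    print(dice_iou(a, b))
```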
Affiliation(s)
- Jinpeng Liao
- University of Dundee, School of Science and Engineering, Dundee, United Kingdom
- Tianyu Zhang
- University of Dundee, School of Science and Engineering, Dundee, United Kingdom
- Chunhui Li
- University of Dundee, School of Science and Engineering, Dundee, United Kingdom
- Zhihong Huang
- University of Dundee, School of Science and Engineering, Dundee, United Kingdom
3
Zhu L, Li J, Hu Y, Zhu R, Zeng S, Rong P, Zhang Y, Gu X, Wang Y, Zhang Z, Yang L, Ren Q, Lu Y. Choroidal Optical Coherence Tomography Angiography: Noninvasive Choroidal Vessel Analysis via Deep Learning. Health Data Science 2024; 4:0170. [PMID: 39257642 PMCID: PMC11383389 DOI: 10.34133/hds.0170]
Abstract
Background: The choroid is the most vascularized structure in the human eye and is associated with numerous retinal and choroidal diseases. However, the vessel distribution of the choroidal sublayers has yet to be effectively explored due to the lack of suitable tools for visualization and analysis. Methods: In this paper, we present a novel choroidal angiography strategy to more effectively evaluate vessels within the choroidal sublayers in the clinic. Our approach utilizes a segmentation model to extract choroidal vessels from OCT B-scans layer by layer. Furthermore, we ensure that the model, trained on B-scans with high choroidal quality, can proficiently handle the low-quality B-scans commonly collected in clinical practice for reconstructing vessel distributions. By treating this process as a cross-domain segmentation task, we propose an ensemble discriminative mean teacher structure to address the specificities inherent in this cross-domain segmentation process. The proposed structure can select representative samples with minimal label noise for self-training and enhance the adaptation strength of adversarial training. Results: Experiments demonstrate the effectiveness of the proposed structure, achieving a Dice score of 77.28 for choroidal vessel segmentation. This validates our strategy of providing satisfactory choroidal angiography noninvasively, supporting the analysis of choroidal vessel distribution in patients with choroidal diseases. We observed that patients with central serous chorioretinopathy have significantly (P < 0.05) lower vascular indexes in all choroidal sublayers than healthy individuals, especially in the region beyond the central fovea of the macula (larger than 6 mm). Conclusions: We release the code and training set of the proposed method as the first noninvasive mechanism to assist clinical analysis of choroidal vessels.
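A vascular index of the kind compared between groups above is commonly defined as the vessel area divided by the total choroidal (sublayer) area; whether this matches the paper's exact definition is an assumption. The NumPy sketch below computes that ratio from a vessel mask and a sublayer mask.

```python
# Choroidal vascularity index as (vessel area) / (choroidal sublayer area).
# This is one common definition; the paper's index may be computed differently.
import numpy as np

def vascular_index(vessel_mask: np.ndarray, layer_mask: np.ndarray) -> float:
    """Both inputs are boolean maps of the same shape for one choroidal sublayer."""
    vessels_in_layer = np.logical_and(vessel_mask, layer_mask).sum()
    layer_area = layer_mask.sum()
    return float(vessels_in_layer) / float(layer_area) if layer_area else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layer = np.ones((256, 256), bool)
    vessels = rng.random((256, 256)) > 0.7   # toy segmentation output
    print(f"vascularity index: {vascular_index(vessels, layer):.3f}")
```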
Affiliation(s)
- Lei Zhu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, Peking University, Beijing 100871, China
- Department of Ophthalmology, Peking University First Hospital, Beijing 100034, China
- Junmeng Li
- Department of Ophthalmology, Peking University First Hospital, Beijing 100034, China
- Yicheng Hu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, Peking University, Beijing 100871, China
- Department of Ophthalmology, Peking University First Hospital, Beijing 100034, China
- Ruilin Zhu
- Department of Ophthalmology, Peking University First Hospital, Beijing 100034, China
- Shuang Zeng
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, Peking University, Beijing 100871, China
- Department of Ophthalmology, Peking University First Hospital, Beijing 100034, China
- Pei Rong
- Department of Ophthalmology, Peking University First Hospital, Beijing 100034, China
- Yadi Zhang
- Department of Ophthalmology, Peking University First Hospital, Beijing 100034, China
- Xiaopeng Gu
- Department of Ophthalmology, Peking University First Hospital, Beijing 100034, China
- Yuwei Wang
- Department of Ophthalmology, Peking University First Hospital, Beijing 100034, China
- Zhiyue Zhang
- Department of Ophthalmology, Peking University First Hospital, Beijing 100034, China
- Liu Yang
- Department of Ophthalmology, Peking University First Hospital, Beijing 100034, China
- Qiushi Ren
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, Peking University, Beijing 100871, China
- National Biomedical Imaging Center, Peking University, Beijing 100871, China
- Yanye Lu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, Peking University, Beijing 100871, China
- Department of Ophthalmology, Peking University First Hospital, Beijing 100034, China
4
Yao B, Jin L, Hu J, Liu Y, Yan Y, Li Q, Lu Y. Noise-imitation learning: unpaired speckle noise reduction for optical coherence tomography. Phys Med Biol 2024; 69:185003. [PMID: 39151463 DOI: 10.1088/1361-6560/ad708c]
Abstract
Objective. Optical coherence tomography (OCT) is widely used in clinical practice for its non-invasive, high-resolution imaging capabilities. However, speckle noise inherent to its low-coherence principle can degrade image quality and compromise diagnostic accuracy. While deep learning methods have shown promise in reducing speckle noise, obtaining well-registered image pairs remains challenging, leading to the development of unpaired methods. Despite their potential, existing unpaired methods suffer from redundancy in network structures or interaction mechanisms. Therefore, a more streamlined method for unpaired OCT denoising is essential. Approach. In this work, we propose a novel unpaired method for OCT image denoising, referred to as noise-imitation learning (NIL). NIL comprises three primary modules: the noise extraction module, which extracts noise features by denoising noisy images; the noise imitation module, which synthesizes noisy images and generates fake clean images; and the adversarial learning module, which differentiates between real and fake clean images through adversarial training. The complexity of NIL is significantly lower than that of previous unpaired methods, utilizing only one generator and one discriminator for training. Main results. By efficiently fusing unpaired images and employing adversarial training, NIL can extract more speckle noise information to enhance denoising performance. Building on NIL, we propose an OCT image denoising pipeline, NIL-NAFNet. This pipeline achieved PSNR, SSIM, and RMSE values of 31.27 dB, 0.865, and 7.00, respectively, on the PKU37 dataset. Extensive experiments suggest that our method outperforms state-of-the-art unpaired methods both qualitatively and quantitatively. Significance. These findings indicate that the proposed NIL is a simple yet effective method for unpaired OCT speckle noise reduction. The OCT denoising pipeline based on NIL demonstrates exceptional performance and efficiency. By addressing speckle noise without requiring well-registered image pairs, this method can enhance image quality and diagnostic accuracy in clinical practice.
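The PSNR and RMSE values reported above follow their usual definitions for intensity images; the sketch below computes both in NumPy. The 8-bit data range of 255 and the preprocessing are assumptions, not details confirmed by the entry.

```python
# PSNR and RMSE as typically reported for 8-bit OCT images (data range 255).
# The exact data range / preprocessing behind the PKU37 figures is an assumption.
import numpy as np

def rmse(x: np.ndarray, y: np.ndarray) -> float:
    return float(np.sqrt(np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)))

def psnr(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    e = rmse(x, y)
    return float("inf") if e == 0 else 20.0 * np.log10(data_range / e)

if __name__ == "__main__":
    clean = np.random.default_rng(1).integers(0, 256, (128, 128)).astype(np.float64)
    noisy = np.clip(clean + np.random.default_rng(2).normal(0, 5, clean.shape), 0, 255)
    print(f"RMSE={rmse(noisy, clean):.2f}  PSNR={psnr(noisy, clean):.2f} dB")
```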
Affiliation(s)
- Bin Yao
- Institute of Microelectronics of the Chinese Academy of Sciences, Beijing 100029, People's Republic of China
- University of Chinese Academy of Sciences, Beijing 101408, People's Republic of China
- Lujia Jin
- China Mobile Research Institute, Beijing 100032, People's Republic of China
- Jiakui Hu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, People's Republic of China
- National Biomedical Imaging Center, Peking University, Beijing 100871, People's Republic of China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, People's Republic of China
- Yuzhao Liu
- Institute of Microelectronics of the Chinese Academy of Sciences, Beijing 100029, People's Republic of China
- University of Chinese Academy of Sciences, Beijing 101408, People's Republic of China
- Yuepeng Yan
- Institute of Microelectronics of the Chinese Academy of Sciences, Beijing 100029, People's Republic of China
- University of Chinese Academy of Sciences, Beijing 101408, People's Republic of China
- Qing Li
- Institute of Microelectronics of the Chinese Academy of Sciences, Beijing 100029, People's Republic of China
- University of Chinese Academy of Sciences, Beijing 101408, People's Republic of China
- Yanye Lu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, People's Republic of China
- National Biomedical Imaging Center, Peking University, Beijing 100871, People's Republic of China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, People's Republic of China
5
Rashidi M, Kalenkov G, Green DJ, Mclaughlin RA. Improved microvascular imaging with optical coherence tomography using 3D neural networks and a channel attention mechanism. Sci Rep 2024; 14:17809. [PMID: 39090263 PMCID: PMC11294560 DOI: 10.1038/s41598-024-68296-9]
Abstract
Skin microvasculature is vital for human cardiovascular health and thermoregulation, but its imaging and analysis present significant challenges. Statistical methods such as speckle decorrelation in optical coherence tomography angiography (OCTA) often require multiple co-located B-scans, leading to lengthy acquisitions prone to motion artefacts. Deep learning has shown promise in enhancing accuracy and reducing measurement time by leveraging local information. However, both statistical and deep learning methods typically focus solely on processing individual 2D B-scans, neglecting contextual information from neighbouring B-scans. This limitation compromises spatial context and disregards the 3D features within tissue, potentially affecting OCTA image accuracy. In this study, we propose a novel approach utilising 3D convolutional neural networks (CNNs) to address this limitation. By considering the 3D spatial context, these 3D CNNs mitigate information loss, preserving fine details and boundaries in OCTA images. Our method reduces the required number of B-scans while enhancing accuracy, thereby increasing clinical applicability. This advancement holds promise for improving clinical practice and deepening understanding of the skin microvascular dynamics crucial for cardiovascular health and thermoregulation.
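To make the 3D-context idea concrete, the following PyTorch sketch applies 3D convolutions across a small stack of co-located or neighbouring B-scans and collapses the scan axis into a single angiography B-scan. The layer widths, stack depth, and the averaging step are illustrative assumptions and do not reproduce the paper's network or its channel attention mechanism.

```python
# Minimal 3D-CNN sketch: a small stack of neighbouring/repeated B-scans in,
# one angiography B-scan out. Sizes and the reduction step are illustrative.
import torch
import torch.nn as nn

class Tiny3DOCTA(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_bscans, depth, width); collapse the scan axis by averaging
        return self.net(x).mean(dim=2)

if __name__ == "__main__":
    stack = torch.rand(1, 1, 4, 256, 256)   # 4 co-located/neighbouring B-scans
    print(Tiny3DOCTA()(stack).shape)        # -> (1, 1, 256, 256)
```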
Affiliation(s)
- Mohammad Rashidi
- Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, SA, 5005, Australia
- Institute for Photonics and Advanced Sensing, The University of Adelaide, Adelaide, SA, 5005, Australia
- Georgy Kalenkov
- Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, SA, 5005, Australia
- Institute for Photonics and Advanced Sensing, The University of Adelaide, Adelaide, SA, 5005, Australia
- Daniel J Green
- School of Human Sciences (Exercise and Sport Sciences), The University of Western Australia, Crawley, WA, 6009, Australia
- Robert A Mclaughlin
- Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, SA, 5005, Australia
- Institute for Photonics and Advanced Sensing, The University of Adelaide, Adelaide, SA, 5005, Australia
- School of Engineering, The University of Western Australia, Crawley, WA, 6009, Australia
6
Yao B, Jin L, Hu J, Liu Y, Yan Y, Li Q, Lu Y. PSCAT: a lightweight transformer for simultaneous denoising and super-resolution of OCT images. Biomedical Optics Express 2024; 15:2958-2976. [PMID: 38855701 PMCID: PMC11161353 DOI: 10.1364/boe.521453]
Abstract
Optical coherence tomography (OCT), owing to its non-invasive nature, has demonstrated tremendous potential in clinical practice and has become a prevalent diagnostic method. Nevertheless, the inherent speckle noise and low sampling rate in OCT imaging often limit image quality. In this paper, we propose a lightweight Transformer to efficiently reconstruct high-quality images from the noisy, low-resolution OCT images acquired in short scans. Our method, PSCAT, employs spatial window self-attention and channel attention in parallel within the Transformer block to aggregate features from both the spatial and channel dimensions. It explores the potential of the Transformer for denoising and super-resolution in OCT, reducing computational costs and increasing the speed of image processing. To effectively assist in restoring high-frequency details, we introduce a hybrid loss function defined in both the spatial and frequency domains. Extensive experiments demonstrate that PSCAT has fewer network parameters and lower computational costs than state-of-the-art methods while delivering competitive performance both qualitatively and quantitatively.
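A hybrid spatial- plus frequency-domain loss of the kind mentioned above can be sketched as below, using an L1 penalty on pixel values and on FFT magnitudes. The weighting factor and the specific choice of L1 on magnitudes are assumptions for illustration, not the exact loss defined in the paper.

```python
# Hybrid spatial + frequency-domain loss sketch (weights and terms are assumptions).
import torch
import torch.nn.functional as F

def hybrid_loss(pred: torch.Tensor, target: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """pred, target: (batch, 1, H, W) image tensors."""
    spatial = F.l1_loss(pred, target)
    freq = F.l1_loss(torch.abs(torch.fft.fft2(pred)), torch.abs(torch.fft.fft2(target)))
    return spatial + alpha * freq

if __name__ == "__main__":
    p = torch.rand(2, 1, 64, 64, requires_grad=True)
    t = torch.rand(2, 1, 64, 64)
    loss = hybrid_loss(p, t)
    loss.backward()   # gradients flow through the FFT magnitude term
    print(float(loss))
```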
Affiliation(s)
- Bin Yao
- Institute of Microelectronics of the Chinese Academy of Sciences, Beijing 100029, China
- University of Chinese Academy of Sciences, Beijing 101408, China
- Lujia Jin
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- National Biomedical Imaging Center, Peking University, Beijing 100871, China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Jiakui Hu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- National Biomedical Imaging Center, Peking University, Beijing 100871, China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Yuzhao Liu
- Institute of Microelectronics of the Chinese Academy of Sciences, Beijing 100029, China
- University of Chinese Academy of Sciences, Beijing 101408, China
- Yuepeng Yan
- Institute of Microelectronics of the Chinese Academy of Sciences, Beijing 100029, China
- University of Chinese Academy of Sciences, Beijing 101408, China
- Qing Li
- Institute of Microelectronics of the Chinese Academy of Sciences, Beijing 100029, China
- University of Chinese Academy of Sciences, Beijing 101408, China
- Yanye Lu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- National Biomedical Imaging Center, Peking University, Beijing 100871, China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
7
Liao J, Zhang T, Zhang Y, Li C, Huang Z. VET: Vasculature Extraction Transformer for Single-Scan Optical Coherence Tomography Angiography. IEEE Trans Biomed Eng 2024; 71:1179-1190. [PMID: 37930903 DOI: 10.1109/tbme.2023.3330681]
Abstract
Optical coherence tomography angiography (OCTA) is a non-invasive imaging modality for analyzing skin microvasculature, enabling non-invasive diagnosis and treatment monitoring. Traditional OCTA algorithms require at least two repeated scans to generate microvasculature images, and image quality is highly dependent on the number of repeated scans (e.g., 4-8). Nevertheless, a higher repetition count increases data acquisition time, causing patient discomfort and more unpredictable motion artifacts, which can result in potential misdiagnosis. To address these limitations, we proposed a vasculature extraction pipeline based on a novel vasculature extraction transformer (VET) to generate OCTA images from a single OCT scan. Distinct from the standard Vision Transformer, VET utilizes convolutional projection to better learn the spatial relationships between image patches. This study recruited 15 healthy participants, and OCT scans were performed at five different skin sites: palm, arm, face, neck, and lip. Our results show that, in comparison to OCTA images obtained by speckle variance OCTA (peak signal-to-noise ratio (PSNR): 16.13) and eigen-decomposition OCTA (PSNR: 17.08) using four repeated OCT scans, OCTA images extracted by the proposed pipeline exhibit better PSNR (18.03) while reducing data acquisition time by 75%. Visual comparisons show that the proposed pipeline outperforms traditional OCTA algorithms, particularly in imaging of the lip and face, where artifacts are commonly encountered. This study is the first to demonstrate that the VET can efficiently extract high-quality vasculature images from a single, rapid OCT scan, a capability that significantly enhances diagnostic accuracy for patients and streamlines the imaging process.
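The speckle variance OCTA baseline cited in the comparison above derives flow contrast from the per-pixel intensity variance across repeated B-scans at the same location; a minimal NumPy version is sketched below. This is the classical baseline, not the VET network itself, and the preprocessing (e.g., log scaling) used in the paper is an assumption left out here.

```python
# Classical speckle-variance OCTA baseline: flow contrast is the per-pixel
# intensity variance across N repeated B-scans at the same location.
import numpy as np

def speckle_variance(bscans: np.ndarray) -> np.ndarray:
    """bscans: (n_repeats, depth, width) OCT intensity B-scans -> variance image."""
    return np.var(bscans.astype(np.float64), axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    repeats = rng.random((4, 300, 500))     # 4 repeated B-scans (toy data)
    sv = speckle_variance(repeats)
    print(sv.shape, sv.mean())
```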
8
Zhang J, Zou H. Insights into artificial intelligence in myopia management: from a data perspective. Graefes Arch Clin Exp Ophthalmol 2024; 262:3-17. [PMID: 37231280 PMCID: PMC10212230 DOI: 10.1007/s00417-023-06101-5]
Abstract
Given the high incidence and prevalence of myopia, the current healthcare system is struggling to handle the task of myopia management, a burden worsened by home quarantine during the COVID-19 pandemic. The use of artificial intelligence (AI) in ophthalmology is thriving, yet its application to myopia remains limited. AI can serve as a solution for the myopia pandemic, with application potential in early identification, risk stratification, progression prediction, and timely intervention. The datasets used for developing AI models are the foundation of such systems and determine the upper limit of their performance. Data generated from clinical practice in managing myopia can be categorized into clinical data and imaging data, and different AI methods can be used for their analysis. In this review, we comprehensively survey the current application status of AI in myopia, with an emphasis on the data modalities used for developing AI models. We propose that establishing large, high-quality public datasets, enhancing models' capability to handle multimodal input, and exploring novel data modalities could be of great significance for the further application of AI in myopia.
Affiliation(s)
- Juzhao Zhang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Haidong Zou
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Eye Diseases Prevention & Treatment Center, Shanghai Eye Hospital, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
9
Liao J, Zhang T, Li C, Huang Z. U-shaped fusion convolutional transformer based workflow for fast optical coherence tomography angiography generation in lips. Biomedical Optics Express 2023; 14:5583-5601. [PMID: 38021117 PMCID: PMC10659781 DOI: 10.1364/boe.502085]
Abstract
Oral disorders, including oral cancer, pose substantial diagnostic challenges due to late-stage diagnosis, invasive biopsy procedures, and the limitations of existing non-invasive imaging techniques. Optical coherence tomography angiography (OCTA) shows potential for delivering non-invasive, real-time, high-resolution vasculature images. However, the quality of OCTA images is often compromised by motion artifacts and noise, necessitating more robust and reliable image reconstruction approaches. To address these issues, we propose a novel model, a U-shaped fusion convolutional transformer (UFCT), for the reconstruction of high-quality, low-noise OCTA images from two repeated OCT scans. UFCT integrates the strengths of convolutional neural networks (CNNs) and transformers, proficiently capturing both local and global image features. According to qualitative and quantitative analyses under normal and pathological conditions, the proposed pipeline outperforms traditional OCTA generation methods when only two repeated B-scans are performed. We further provide a comparative study with various CNN and transformer models and conduct ablation studies to validate the effectiveness of the proposed strategies. Based on these results, the UFCT model holds the potential to significantly enhance the clinical workflow in oral medicine by facilitating early detection, reducing the need for invasive procedures, and improving overall patient outcomes.
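For context, a conventional two-repeat OCTA signal can be formed as the amplitude decorrelation between the two co-located B-scans that UFCT takes as input; a common formulation is sketched below. Whether the paper uses this particular decorrelation expression as its traditional baseline is an assumption.

```python
# Conventional two-repeat amplitude-decorrelation contrast (one common formulation;
# the exact traditional baseline used in the paper may differ).
import numpy as np

def decorrelation(a1: np.ndarray, a2: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """a1, a2: co-located OCT amplitude B-scans; returns per-pixel decorrelation in [0, 1]."""
    a1 = a1.astype(np.float64)
    a2 = a2.astype(np.float64)
    return 1.0 - (2.0 * a1 * a2) / (a1 ** 2 + a2 ** 2 + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    scan1 = rng.random((300, 500))
    scan2 = scan1 + 0.05 * rng.random((300, 500))   # mostly static tissue
    print(decorrelation(scan1, scan2).mean())
```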
Affiliation(s)
- Jinpeng Liao
- School of Science and Engineering, University of Dundee, DD1 4HN, Scotland, United Kingdom
- Tianyu Zhang
- School of Science and Engineering, University of Dundee, DD1 4HN, Scotland, United Kingdom
- Chunhui Li
- School of Science and Engineering, University of Dundee, DD1 4HN, Scotland, United Kingdom
- Zhihong Huang
- School of Science and Engineering, University of Dundee, DD1 4HN, Scotland, United Kingdom
10
Liao J, Yang S, Zhang T, Li C, Huang Z. A hand-held optical coherence tomography angiography scanner based on angiography reconstruction transformer networks. Journal of Biophotonics 2023; 16:e202300100. [PMID: 37264544 DOI: 10.1002/jbio.202300100]
Abstract
Optical coherence tomography angiography (OCTA) has successfully demonstrated its viability for clinical applications in dermatology. Due to the high optical scattering of skin, extracting high-quality OCTA images from skin tissue requires at least six repeated scans, while motion artifacts from the patient and the freely moving hand-held probe can lead to low-quality OCTA images. Our deep-learning-based scan pipeline enables fast, high-quality OCTA imaging with 0.3-s data acquisition. We utilize a fast scanning protocol with a 60 μm/pixel spatial sampling interval and introduce the angiography reconstruction transformer (ART) for 4× super-resolution of low-transverse-resolution OCTA images. ART outperforms state-of-the-art networks in OCTA image super-resolution with a smaller network size. ART can restore microvessels while reducing processing time by 85% and maintaining improvements in structural similarity and peak signal-to-noise ratio. This study demonstrates that ART can achieve fast and flexible skin OCTA imaging while maintaining image quality.
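The 0.3-s acquisition figure follows directly from the A-scan rate and the sampling grid; the back-of-envelope arithmetic is shown below. The 200 kHz rate and the 300 x 200 grid are illustrative assumptions (the entry states only the 0.3-s acquisition and the 60 μm/pixel spacing).

```python
# Back-of-envelope acquisition-time arithmetic behind a ~0.3 s volumetric scan.
# The 200 kHz A-scan rate and 300 x 200 sampling grid are assumptions for illustration.
a_scan_rate_hz = 200_000      # swept-source A-scan rate (assumed)
n_fast = 300                  # A-scans per B-scan (assumed)
n_slow = 200                  # B-scan locations (assumed)

total_a_scans = n_fast * n_slow
acquisition_time_s = total_a_scans / a_scan_rate_hz
print(f"{total_a_scans} A-scans -> {acquisition_time_s:.2f} s")   # 0.30 s
```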
Affiliation(s)
- Jinpeng Liao
- School of Science and Engineering, University of Dundee, Scotland, UK
- Shufan Yang
- School of Computing, Engineering and Built Environment, Edinburgh Napier University, Edinburgh, UK
- Research Department of Orthopaedics and Musculoskeletal Science, University College London, UK
- Tianyu Zhang
- School of Science and Engineering, University of Dundee, Scotland, UK
- Chunhui Li
- School of Science and Engineering, University of Dundee, Scotland, UK
- Zhihong Huang
- School of Science and Engineering, University of Dundee, Scotland, UK
11
Ong CJT, Wong MYZ, Cheong KX, Zhao J, Teo KYC, Tan TE. Optical Coherence Tomography Angiography in Retinal Vascular Disorders. Diagnostics (Basel) 2023; 13:diagnostics13091620. [PMID: 37175011 PMCID: PMC10178415 DOI: 10.3390/diagnostics13091620]
Abstract
Traditionally, abnormalities of the retinal vasculature and perfusion in retinal vascular disorders, such as diabetic retinopathy and retinal vascular occlusions, have been visualized with dye-based fluorescein angiography (FA). Optical coherence tomography angiography (OCTA) is a newer, alternative modality for imaging the retinal vasculature, which has some advantages over FA, such as its dye-free, non-invasive nature and depth resolution. The depth resolution of OCTA allows for characterization of the retinal microvasculature in distinct anatomic layers, and commercial OCTA platforms also provide automated quantitative vascular and perfusion metrics. Quantitative and qualitative OCTA analysis in various retinal vascular disorders has facilitated the detection of pre-clinical vascular changes, greater understanding of known clinical signs, and the development of imaging biomarkers to prognosticate and guide treatment. With further technological improvements, such as a greater field of view and better image-quality processing algorithms, it is likely that OCTA will play an integral role in the study and management of retinal vascular disorders. Artificial intelligence methods, in particular deep learning, show promise in refining the insights to be gained from the use of OCTA in retinal vascular disorders. This review aims to summarize the current literature on this imaging modality in relation to common retinal vascular disorders.
Affiliation(s)
- Charles Jit Teng Ong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Mark Yu Zheng Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Kai Xiong Cheong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Jinzhi Zhao
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Kelvin Yi Chong Teo
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (EYE ACP), Duke-NUS Medical School, Singapore 169857, Singapore
- Tien-En Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (EYE ACP), Duke-NUS Medical School, Singapore 169857, Singapore
12
Gan W, Sun Y, Eldeniz C, Liu J, An H, Kamilov US. Deformation-Compensated Learning for Image Reconstruction Without Ground Truth. IEEE Transactions on Medical Imaging 2022; 41:2371-2384. [PMID: 35344490 PMCID: PMC9497435 DOI: 10.1109/tmi.2022.3163018]
Abstract
Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets. Recent work on Noise2Noise (N2N) has shown the potential of using multiple noisy measurements of the same object as an alternative to having a ground-truth. However, existing N2N-based methods are not suitable for learning from the measurements of an object undergoing nonrigid deformation. This paper addresses this issue by proposing the deformation-compensated learning (DeCoLearn) method for training deep reconstruction networks by compensating for object deformations. A key component of DeCoLearn is a deep registration module, which is jointly trained with the deep reconstruction network without any ground-truth supervision. We validate DeCoLearn on both simulated and experimentally collected magnetic resonance imaging (MRI) data and show that it significantly improves imaging quality.
13
Li Y, Zheng F, Foo LL, Wong QY, Ting D, Hoang QV, Chong R, Ang M, Wong CW. Advances in OCT Imaging in Myopia and Pathologic Myopia. Diagnostics (Basel) 2022; 12:diagnostics12061418. [PMID: 35741230 PMCID: PMC9221645 DOI: 10.3390/diagnostics12061418]
Abstract
Advances in imaging with optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA) technology, including the development of swept-source OCT/OCTA and widefield or ultra-widefield systems, have greatly improved the understanding, diagnosis, and treatment of myopia and myopia-related complications. Anterior segment OCT is useful for imaging the anterior segment of myopes, providing the basis for implantable collamer lens optimization or detecting intraocular lens decentration in highly myopic patients. OCT has enhanced imaging of vitreous properties and measurement of choroidal thickness in myopic eyes. Widefield OCT systems have greatly improved the visualization of peripheral retinal lesions and have enabled the evaluation of wide staphyloma and ocular curvature. Based on OCT imaging, a new classification system and guidelines for the management of myopic traction maculopathy have been proposed; different dome-shaped macula morphologies have been described; and myopia-related abnormalities in the optic nerve and peripapillary region have been demonstrated. OCTA can quantitatively evaluate the retinal microvasculature and choriocapillaris, which is useful for the early detection of myopic choroidal neovascularization and the evaluation of anti-vascular endothelial growth factor therapy in these patients. In addition, the application of artificial intelligence to OCT/OCTA imaging in myopia has achieved promising results.
Affiliation(s)
- Yong Li
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore 169856, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore 169857, Singapore
- Feihui Zheng
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore 169856, Singapore
- Li Lian Foo
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore 169856, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore 169857, Singapore
- Qiu Ying Wong
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore 169856, Singapore
- Daniel Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore 169856, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore 169857, Singapore
- Quan V. Hoang
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore 169856, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore 169857, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 119077, Singapore
- Department of Ophthalmology, Columbia University, New York, NY 10027, USA
- Rachel Chong
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore 169856, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore 169857, Singapore
- Marcus Ang
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore 169856, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore 169857, Singapore
- Chee Wai Wong
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore 169856, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore 169857, Singapore
14
Jiang Z, Huang Z, You Y, Geng M, Meng X, Qiu B, Zhu L, Gao M, Wang J, Zhou C, Ren Q, Lu Y. Rethinking the neighborhood information for deep learning-based optical coherence tomography angiography. Med Phys 2022; 49:3705-3716. [DOI: 10.1002/mp.15618]
Affiliation(s)
- Zhe Jiang
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
- Zhiyu Huang
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
- Yunfei You
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
- Mufeng Geng
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
- Xiangxi Meng
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Nuclear Medicine, Peking University Cancer Hospital & Institute, Beijing 100142, China
- Bin Qiu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
- Lei Zhu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
- Mengdi Gao
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
- Jing Wang
- Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310058, China
- Chuanqing Zhou
- College of Medical Instrument, Shanghai University of Medicine and Health Sciences, Shanghai 201318, China
- Qiushi Ren
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
- Yanye Lu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
15
Wu D, Kim K, Li Q. Low-dose CT reconstruction with Noise2Noise network and testing-time fine-tuning. Med Phys 2021; 48:7657-7672. [PMID: 34791655 PMCID: PMC11216369 DOI: 10.1002/mp.15101]
Abstract
PURPOSE Deep learning-based image denoising and reconstruction methods have demonstrated promising performance for low-dose CT imaging in recent years. However, most existing deep learning-based low-dose CT reconstruction methods require normal-dose images for training. Sometimes such clean images do not exist, for example in dynamic CT imaging or for very large patients. The purpose of this work is to develop a low-dose CT image reconstruction algorithm based on deep learning that does not need clean images for training. METHODS In this paper, we proposed a novel reconstruction algorithm in which the image prior is expressed via a Noise2Noise network whose weights are fine-tuned along with the image during the iterative reconstruction. The Noise2Noise network builds a self-consistent loss by splitting the projection data and mapping the corresponding filtered backprojection (FBP) results to each other with a deep neural network. The network weights are optimized along with the image to be reconstructed under an alternating optimization scheme, so no clean image is needed for network training, and the testing-time fine-tuning leads to an optimization tailored to each reconstruction. RESULTS We used the 2016 Low-dose CT Challenge dataset to validate the feasibility of the proposed method. We compared its performance to several existing iterative reconstruction algorithms that do not need clean training data, including total variation, non-local means, convolutional sparse coding, and Noise2Noise denoising. The proposed Noise2Noise reconstruction achieved better RMSE, SSIM, and texture preservation than the other methods, and its performance is robust across the different noise levels, hyperparameters, and network structures used in the reconstruction. Furthermore, the proposed method achieved competitive results without any pre-training of the network at all, that is, with randomly initialized network weights at testing time. The proposed iterative reconstruction algorithm also shows empirical convergence with and without network pre-training. CONCLUSIONS The proposed Noise2Noise reconstruction method can achieve promising image quality in low-dose CT image reconstruction. The method works both with and without pre-training, and only noisy data are required for pre-training.
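The core Noise2Noise principle behind this method is that a network trained to map one noisy realization to another independent noisy realization of the same object learns to denoise without clean targets. The PyTorch sketch below shows only that principle at the image level; the paper's projection-data splitting, FBP operators, and testing-time fine-tuning within iterative reconstruction are not reproduced, and the toy network and noise model are assumptions.

```python
# Core Noise2Noise principle, sketched at the image level: a denoiser trained to map
# one noisy realization to an independent noisy realization of the same object.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optim = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

clean = torch.rand(8, 1, 64, 64)                 # unknown at training time
noisy_a = clean + 0.1 * torch.randn_like(clean)  # two independent noisy copies
noisy_b = clean + 0.1 * torch.randn_like(clean)

for _ in range(5):                               # a few toy training steps
    optim.zero_grad()
    loss = nn.functional.mse_loss(denoiser(noisy_a), noisy_b)
    loss.backward()
    optim.step()
print(float(loss))
```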
Affiliation(s)
- Dufan Wu
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Kyungsang Kim
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA