1
Yang Z, Woodward MA, Niziol LM, Pawar M, Prajna NV, Krishnamoorthy A, Wang Y, Lu MC, Selvaraj S, Farsiu S. Self-knowledge distillation-empowered directional connectivity transformer for microbial keratitis biomarkers segmentation on slit-lamp photography. Med Image Anal 2025;102:103533. PMID: 40117989; PMCID: PMC12004389; DOI: 10.1016/j.media.2025.103533.
Abstract
The lack of standardized, objective tools for measuring biomarker morphology poses a significant obstacle to managing microbial keratitis (MK). Previous studies have demonstrated that robust segmentation benefits MK diagnosis, management, and estimation of visual outcomes. However, despite exciting advances, current methods cannot accurately detect biomarker boundaries or differentiate overlapping regions in challenging cases. In this work, we propose a novel self-knowledge distillation-empowered directional connectivity transformer, called SDCTrans. We utilize the directional connectivity modeling framework to improve biomarker boundary detection. The transformer backbone and the hierarchical self-knowledge distillation scheme in this framework enhance directional representation learning. We also propose an efficient segmentation head design to effectively segment overlapping regions. This is the first work to successfully incorporate directional connectivity modeling into a transformer. Trained and tested on a new large-scale MK dataset, SDCTrans accurately and robustly segments crucial biomarkers in three types of slit-lamp biomicroscopy images. Through comprehensive experiments, we demonstrate the superiority of the proposed SDCTrans over current state-of-the-art models. We also show that SDCTrans matches, if not outperforms, expert human graders in MK biomarker identification and visual acuity outcome estimation. Experiments on skin lesion images are included as an illustrative example of SDCTrans' utility in other segmentation tasks. The new MK dataset and code are available at https://github.com/Zyun-Y/SDCTrans.
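The directional connectivity modeling above builds on plain pixel connectivity from digital topology: each pixel is related to its 8 neighbors, one map per direction. As a rough, hypothetical sketch of that underlying concept (function and constant names are ours, not the authors' implementation):

```python
# Hypothetical sketch of 8-neighbor pixel connectivity, the digital-topology
# concept that connectivity-based segmentation networks build on.

# The eight neighbor offsets (dy, dx): E, NE, N, NW, W, SW, S, SE.
DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
              (0, -1), (1, -1), (1, 0), (1, 1)]

def connectivity_maps(mask):
    """For a binary mask (list of lists of 0/1), return one map per
    direction: 1 where the pixel and its neighbor in that direction are
    both foreground, else 0 (out-of-bounds counts as background)."""
    h, w = len(mask), len(mask[0])
    maps = []
    for dy, dx in DIRECTIONS:
        m = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                ny, nx = y + dy, x + dx
                if mask[y][x] and 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                    m[y][x] = 1
        maps.append(m)
    return maps
```

A network predicting such per-direction maps, rather than a single label map, is what lets boundary consistency be supervised explicitly.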
Affiliation(s)
- Ziyun Yang
- Duke University, Department of Biomedical Engineering, Durham, NC 27705, USA
- Maria A Woodward
- University of Michigan, Department of Ophthalmology and Visual Sciences, Ann Arbor, MI 48105, USA
- Leslie M Niziol
- University of Michigan, Department of Ophthalmology and Visual Sciences, Ann Arbor, MI 48105, USA
- Mercy Pawar
- University of Michigan, Department of Ophthalmology and Visual Sciences, Ann Arbor, MI 48105, USA
- Yiqing Wang
- Duke University, Department of Biomedical Engineering, Durham, NC 27705, USA
- Ming-Chen Lu
- University of Michigan, Department of Ophthalmology and Visual Sciences, Ann Arbor, MI 48105, USA
- Sina Farsiu
- Duke University, Department of Biomedical Engineering, Durham, NC 27705, USA
2
Hill C, Malone J, Liu K, Ng SPY, MacAulay C, Poh C, Lane P. Three-Dimension Epithelial Segmentation in Optical Coherence Tomography of the Oral Cavity Using Deep Learning. Cancers (Basel) 2024;16:2144. PMID: 38893263; PMCID: PMC11172075; DOI: 10.3390/cancers16112144.
Abstract
This paper aims to simplify the application of optical coherence tomography (OCT) for examining subsurface morphology in the oral cavity and to reduce barriers to the adoption of OCT as a biopsy guidance device, by developing automated software tools for analyzing the large volumes of data collected during OCT. Imaging and corresponding histopathology were acquired in-clinic using a wide-field endoscopic OCT system. An annotated dataset (n = 294 images) from 60 patients (34 male and 26 female) was assembled to train four unique neural networks. A deep learning pipeline was built using convolutional and modified U-Net models to detect the imaging field of view (network 1), detect artifacts (network 2), identify the tissue surface (network 3), and identify the presence and location of the epithelial-stromal boundary (network 4). The areas under the curve of the image and artifact detection networks were 1.00 and 0.94, respectively. The Dice similarity scores for the surface and epithelial-stromal boundary segmentation networks were 0.98 and 0.83, respectively. Deep learning (DL) techniques can identify the location of, and variations in, the epithelial surface and the epithelial-stromal boundary in OCT images of the oral mucosa. Segmentation results can be synthesized into accessible en face maps for easier visualization of changes.
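The Dice similarity scores reported above (0.98 and 0.83) compare a predicted mask against a manual annotation as 2|A∩B| / (|A| + |B|). A minimal illustration of that metric (the function name is ours, not from the paper):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks given as
    flat 0/1 sequences: 2 * |intersection| / (|pred| + |truth|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 2.0 * inter / total if total else 1.0

# e.g. dice([1, 1, 0, 1], [1, 0, 0, 1]) -> 2*2 / (3+2) = 0.8
```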
Affiliation(s)
- Chloe Hill
- Department of Integrative Oncology, British Columbia Cancer Research Institute, 675 W 10th Ave., Vancouver, BC V5Z 1L3, Canada
- School of Engineering Science, Simon Fraser University, 8888 University Drive, Burnaby, BC V5A 1S6, Canada
- Jeanie Malone
- Department of Integrative Oncology, British Columbia Cancer Research Institute, 675 W 10th Ave., Vancouver, BC V5Z 1L3, Canada
- School of Biomedical Engineering, University of British Columbia, 2222 Health Sciences Mall, Vancouver, BC V6T 1Z3, Canada
- Kelly Liu
- Department of Integrative Oncology, British Columbia Cancer Research Institute, 675 W 10th Ave., Vancouver, BC V5Z 1L3, Canada
- School of Biomedical Engineering, University of British Columbia, 2222 Health Sciences Mall, Vancouver, BC V6T 1Z3, Canada
- Faculty of Dentistry, University of British Columbia, 2199 Wesbrook Mall, Vancouver, BC V6T 1Z3, Canada
- Samson Pak-Yan Ng
- Faculty of Dentistry, University of British Columbia, 2199 Wesbrook Mall, Vancouver, BC V6T 1Z3, Canada
- Calum MacAulay
- Department of Integrative Oncology, British Columbia Cancer Research Institute, 675 W 10th Ave., Vancouver, BC V5Z 1L3, Canada
- Department of Pathology and Laboratory Medicine, University of British Columbia, 2211 Wesbrook Mall, Vancouver, BC V6T 1Z7, Canada
- Catherine Poh
- Department of Integrative Oncology, British Columbia Cancer Research Institute, 675 W 10th Ave., Vancouver, BC V5Z 1L3, Canada
- Faculty of Dentistry, University of British Columbia, 2199 Wesbrook Mall, Vancouver, BC V6T 1Z3, Canada
- Pierre Lane
- Department of Integrative Oncology, British Columbia Cancer Research Institute, 675 W 10th Ave., Vancouver, BC V5Z 1L3, Canada
- School of Engineering Science, Simon Fraser University, 8888 University Drive, Burnaby, BC V5A 1S6, Canada
- School of Biomedical Engineering, University of British Columbia, 2222 Health Sciences Mall, Vancouver, BC V6T 1Z3, Canada
3
Wang J, Chen C, You W, Jiao Y, Liu X, Jiang X, Lu W. Honeycomb effect elimination in differential phase fiber-bundle-based endoscopy. Opt Express 2024;32:20682-20694. PMID: 38859444; DOI: 10.1364/oe.526033.
Abstract
Fiber-bundle-based endoscopy, with its ultrathin probe and micrometer-level resolution, has become a widely adopted imaging modality for in vivo imaging. However, the fiber bundles introduce a significant honeycomb effect, arising primarily from the multi-core structure and crosstalk between adjacent fiber cores, which superimposes a honeycomb pattern on the original image. To tackle this issue, we propose an iteration-free spatial pixel shifting (SPS) algorithm designed to suppress the honeycomb effect and enhance real-time imaging performance. The process creates three additional sub-images by shifting the original image by one pixel at 0, 45, and 90 degrees. These four sub-images are then used to compute differential maps in the x and y directions. By performing spiral integration on these differential maps, we reconstruct a honeycomb-free image with improved detail. Our simulations and experimental results, conducted on a self-built fiber-bundle-based endoscopy system, demonstrate the effectiveness of the SPS algorithm. SPS significantly improves the image quality of reflective objects and unlabeled transparent scattering objects, laying a solid foundation for biomedical endoscopic applications.
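The shift-then-differentiate-then-integrate structure of SPS can be loosely illustrated as follows. This is a deliberately simplified sketch under our own assumptions, not the published algorithm: the 45-degree shift and the spiral integration are replaced by plain row/column cumulative sums, and all names are ours.

```python
# Loose illustration of the shift/differentiate/integrate idea: build a
# one-pixel-shifted copy of the image per axis, form x/y differential
# maps, then re-integrate by cumulative summation.

def shift(img, dy, dx):
    """Shift a 2-D list of numbers by (dy, dx), padding with edge values."""
    h, w = len(img), len(img[0])
    return [[img[min(max(y - dy, 0), h - 1)][min(max(x - dx, 0), w - 1)]
             for x in range(w)] for y in range(h)]

def reconstruct(img):
    """Differentiate against shifted copies, then integrate back."""
    h, w = len(img), len(img[0])
    sx = shift(img, 0, 1)   # horizontal (0-degree) one-pixel shift
    sy = shift(img, 1, 0)   # vertical (90-degree) one-pixel shift
    dmx = [[img[y][x] - sx[y][x] for x in range(w)] for y in range(h)]
    dmy = [[img[y][x] - sy[y][x] for x in range(w)] for y in range(h)]
    # Integrate: first column from the y-differentials, then each row
    # from the x-differentials.
    out = [[0] * w for _ in range(h)]
    out[0][0] = img[0][0]
    for y in range(1, h):
        out[y][0] = out[y - 1][0] + dmy[y][0]
    for y in range(h):
        for x in range(1, w):
            out[y][x] = out[y][x - 1] + dmx[y][x]
    return out
```

In this noise-free toy setting the integration exactly inverts the differentiation; the published method's value lies in how the differential maps suppress the core-pattern artifact before re-integration.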
4
Fan Y, Liu S, Gao E, Guo R, Dong G, Li Y, Gao T, Tang X, Liao H. The LMIT: Light-mediated minimally-invasive theranostics in oncology. Theranostics 2024;14:341-362. PMID: 38164160; PMCID: PMC10750201; DOI: 10.7150/thno.87783.
Abstract
Minimally-invasive diagnosis and therapy have gradually become the trend and a research hotspot in current medical applications. Integrating intraoperative diagnosis with treatment, so-called minimally-invasive theranostics (MIT), is an important development direction for real-time detection, diagnosis, and therapy that reduces mortality and improves patients' quality of life. Light is an important theranostic tool for treating cancerous tissues. Light-mediated minimally-invasive theranostics (LMIT) is a novel evolutionary technology that integrates diagnosis and therapeutics for the less invasive treatment of diseased tissues. Intelligent theranostics promises precision surgery based on the optical characterization of cancerous tissues. Furthermore, MIT requires the assistance of smart medical devices or robots, and optical multimodality lays a solid foundation for intelligent MIT. In this review, we summarize the state of the art in optical MIT, or LMIT, in oncology. Multimodal optical image-guided intelligent treatment is another focus, and intraoperative imaging and real-time analysis-guided optical treatment are also systematically discussed. Finally, the potential challenges and future perspectives of intelligent optical MIT are discussed.
Affiliation(s)
- Yingwei Fan
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Shuai Liu
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Enze Gao
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Rui Guo
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Guozhao Dong
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Yangxi Li
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Tianxin Gao
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Xiaoying Tang
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
5
Cui R, Wang L, Lin L, Li J, Lu R, Liu S, Liu B, Gu Y, Zhang H, Shang Q, Chen L, Tian D. Deep Learning in Barrett's Esophagus Diagnosis: Current Status and Future Directions. Bioengineering (Basel) 2023;10:1239. PMID: 38002363; PMCID: PMC10669008; DOI: 10.3390/bioengineering10111239.
Abstract
Barrett's esophagus (BE) represents a pre-malignant condition characterized by abnormal cellular proliferation in the distal esophagus. A timely and accurate diagnosis of BE is imperative to prevent its progression to esophageal adenocarcinoma, a malignancy associated with a significantly reduced survival rate. In this digital age, deep learning (DL) has emerged as a powerful tool for medical image analysis and diagnostic applications, showcasing vast potential across various medical disciplines. In this comprehensive review, we meticulously assess 33 primary studies employing varied DL techniques, predominantly featuring convolutional neural networks (CNNs), for the diagnosis and understanding of BE. Our primary focus revolves around evaluating the current applications of DL in BE diagnosis, encompassing tasks such as image segmentation and classification, as well as their potential impact and implications in real-world clinical settings. While the applications of DL in BE diagnosis exhibit promising results, they are not without challenges, such as dataset issues and the "black box" nature of models. We discuss these challenges in the concluding section. Essentially, while DL holds tremendous potential to revolutionize BE diagnosis, addressing these challenges is paramount to harnessing its full capacity and ensuring its widespread application in clinical practice.
Affiliation(s)
- Ruichen Cui
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Lei Wang
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Lin Lin
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Jie Li
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Runda Lu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Shixiang Liu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Bowei Liu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Yimin Gu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Hanlu Zhang
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Qixin Shang
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Longqi Chen
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Dong Tian
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
6
Zemborain ZZ, Soifer M, Azar NS, Murillo S, Mousa HM, Perez VL, Farsiu S. Open-Source Automated Segmentation of Neuronal Structures in Corneal Confocal Microscopy Images of the Subbasal Nerve Plexus With Accuracy on Par With Human Segmentation. Cornea 2023;42:1309-1319. PMID: 37669422; PMCID: PMC10635613; DOI: 10.1097/ico.0000000000003319.
Abstract
PURPOSE: The aim of this study was to perform automated segmentation of corneal nerves and other structures in corneal confocal microscopy (CCM) images of the subbasal nerve plexus (SNP) in eyes with ocular surface diseases (OSDs).
METHODS: A deep learning-based 2-stage algorithm was designed to segment SNP features. In the first stage, to address applanation artifacts, a generative adversarial network-enabled deep network was constructed to identify 3 neighboring corneal layers on each CCM image: epithelium, SNP, and stroma. This network was trained and validated on 470 images of each layer from 73 individuals. In the second stage, the segmented SNP regions were further classified by another deep network into background, nerve, neuroma, and immune cells. Twenty-one-fold cross-validation was used to assess the performance of the overall algorithm on a separate data set of 207 manually segmented SNP images from 43 patients with OSD.
RESULTS: For the background, nerve, neuroma, and immune cell classes, the Dice similarity coefficients of the proposed automatic method were 0.992, 0.814, 0.748, and 0.736, respectively. The performance metrics for automatic segmentation were statistically better than or equivalent to those of human segmentation. In addition, the resulting clinical metrics had good to excellent intraclass correlation coefficients between automatic and human segmentations.
CONCLUSIONS: The proposed automatic method can reliably segment potential CCM biomarkers of OSD onset and progression with accuracy on par with human grading on real clinical data, which frequently exhibit image acquisition artifacts. To facilitate future studies on OSD, we have made our data set and algorithms freely available online as an open-source software package.
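Because the 207 test images come from 43 patients, a sound cross-validation scheme would assign folds at the patient level so that one patient's images never straddle the train/test split. The sketch below illustrates that general principle; the fold-assignment rule and names are our assumptions, not the authors' exact protocol.

```python
def patient_folds(patient_ids, k):
    """Partition the unique patient IDs (not individual images) into k
    folds, so all images from one patient land in the same fold and
    train/test leakage across folds is avoided."""
    unique = sorted(set(patient_ids))
    # Round-robin assignment of patients to folds.
    return [unique[i::k] for i in range(k)]
```

Each fold then defines a held-out patient set; the images belonging to those patients form the corresponding test split.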
Affiliation(s)
- Matias Soifer
- Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Foster Center for Ocular Immunology, Duke Eye Institute, Durham, NC, USA
- Nadim S. Azar
- Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Foster Center for Ocular Immunology, Duke Eye Institute, Durham, NC, USA
- Sofia Murillo
- Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Foster Center for Ocular Immunology, Duke Eye Institute, Durham, NC, USA
- Hazem M. Mousa
- Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Foster Center for Ocular Immunology, Duke Eye Institute, Durham, NC, USA
- Victor L. Perez
- Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Foster Center for Ocular Immunology, Duke Eye Institute, Durham, NC, USA
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
7
Nawaz M, Uvaliyev A, Bibi K, Wei H, Abaxi SMD, Masood A, Shi P, Ho HP, Yuan W. Unraveling the complexity of Optical Coherence Tomography image segmentation using machine and deep learning techniques: A review. Comput Med Imaging Graph 2023;108:102269. PMID: 37487362; DOI: 10.1016/j.compmedimag.2023.102269.
Abstract
Optical Coherence Tomography (OCT) is an emerging technology that provides three-dimensional images of the microanatomy of biological tissue in vivo at micrometer-scale resolution. OCT imaging has been widely used to diagnose and manage various medical diseases, such as macular degeneration, glaucoma, and coronary artery disease. Despite its wide range of applications, the segmentation of OCT images remains difficult due to the complexity of tissue structures and the presence of artifacts. In recent years, different approaches have been used for OCT image segmentation, including intensity-based, region-based, and deep learning-based methods. This paper reviews the major advances in state-of-the-art OCT image segmentation techniques, providing an overview of the advantages and limitations of each method and presenting the most relevant research works. It also surveys existing datasets, discusses potential clinical applications, gives an in-depth analysis of machine learning and deep learning approaches for OCT image segmentation, and outlines challenges and opportunities for further research in this field.
Affiliation(s)
- Mehmood Nawaz
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Adilet Uvaliyev
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Khadija Bibi
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Hao Wei
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Sai Mu Dalike Abaxi
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Anum Masood
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Peilun Shi
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Ho-Pui Ho
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Wu Yuan
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
8
Yang Z, Farsiu S. Directional Connectivity-based Segmentation of Medical Images. Proc IEEE Conf Comput Vis Pattern Recognit (CVPR) 2023:11525-11535. PMID: 37790907; PMCID: PMC10543919; DOI: 10.1109/cvpr52729.2023.01109.
Abstract
Anatomical consistency in biomarker segmentation is crucial for many medical image analysis tasks. A promising paradigm for achieving anatomically consistent segmentation via deep networks is incorporating pixel connectivity, a basic concept in digital topology, to model inter-pixel relationships. However, previous works on connectivity modeling have ignored the rich channel-wise directional information in the latent space. In this work, we demonstrate that effective disentanglement of directional sub-space from the shared latent space can significantly enhance the feature representation in the connectivity-based network. To this end, we propose a directional connectivity modeling scheme for segmentation that decouples, tracks, and utilizes the directional information across the network. Experiments on various public medical image segmentation benchmarks show the effectiveness of our model as compared to the state-of-the-art methods. Code is available at https://github.com/Zyun-Y/DconnNet.
Affiliation(s)
- Ziyun Yang
- Duke University, Durham, NC, United States
9
Soltanian-Zadeh S, Liu Z, Liu Y, Lassoued A, Cukras CA, Miller DT, Hammer DX, Farsiu S. Deep learning-enabled volumetric cone photoreceptor segmentation in adaptive optics optical coherence tomography images of normal and diseased eyes. Biomed Opt Express 2023;14:815-833. PMID: 36874491; PMCID: PMC9979662; DOI: 10.1364/boe.478693.
Abstract
Objective quantification of photoreceptor cell morphology, such as cell diameter and outer segment length, is crucial for early, accurate, and sensitive diagnosis and prognosis of retinal neurodegenerative diseases. Adaptive optics optical coherence tomography (AO-OCT) provides three-dimensional (3-D) visualization of photoreceptor cells in the living human eye. The current gold standard for extracting cell morphology from AO-OCT images involves the tedious process of 2-D manual marking. To automate this process and extend to 3-D analysis of the volumetric data, we propose a comprehensive deep learning framework to segment individual cone cells in AO-OCT scans. Our automated method achieved human-level performance in assessing cone photoreceptors of healthy and diseased participants captured with three different AO-OCT systems representing two different types of point scanning OCT: spectral domain and swept source.
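Morphology metrics such as cell diameter are commonly derived from a segmented cell's pixel area via the equivalent circular diameter, d = 2·sqrt(A/π). The sketch below uses that common convention; it is our illustration, not necessarily the definition used in the paper.

```python
import math

def equivalent_diameter_um(pixel_count, um_per_pixel):
    """Equivalent circular diameter (micrometers) of a segmented cell,
    derived from its pixel area: d = 2 * sqrt(A / pi)."""
    area_um2 = pixel_count * um_per_pixel ** 2
    return 2.0 * math.sqrt(area_um2 / math.pi)
```

With a volumetric segmentation, the same per-cell bookkeeping extends to outer segment length by counting labeled voxels along the axial direction.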
Affiliation(s)
- Zhuolin Liu
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Yan Liu
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Ayoub Lassoued
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Catherine A. Cukras
- National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Donald T. Miller
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Daniel X. Hammer
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
10
Wang C, Gan M. Wavelet attention network for the segmentation of layer structures on OCT images. Biomed Opt Express 2022;13:6167-6181. PMID: 36589584; PMCID: PMC9774872; DOI: 10.1364/boe.475272.
Abstract
Automatic segmentation of layered tissue is critical for optical coherence tomography (OCT) image analysis. The development of deep learning techniques provides various solutions to this problem, but most existing methods suffer from topological errors such as outlier predictions and label disconnections. The channel attention mechanism is a powerful technique for addressing these problems due to its simplicity and robustness. However, it relies on global average pooling (GAP), which calculates only the lowest-frequency component and leaves other potentially useful information unexplored. In this study, we use the discrete wavelet transform (DWT) to extract multi-spectral information and propose the wavelet attention network (WATNet) for tissue layer segmentation. The DWT-based attention mechanism enables multi-spectral analysis with no complex frequency-selection process and can be easily embedded into existing frameworks. Furthermore, the variety of available wavelet bases makes WATNet adaptable to different tasks. Experiments on a self-collected esophageal dataset and two public retinal OCT datasets demonstrated that WATNet achieved better performance than several widely used deep networks, confirming the advantages of the proposed method.
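The contrast the abstract draws between GAP and the DWT is easy to see in one dimension with the Haar basis: the approximation coefficients carry the mean that GAP keeps, while the detail coefficients retain the higher-frequency structure that GAP discards. A minimal sketch of one Haar level (our illustration, not WATNet's implementation):

```python
import math

def haar_dwt_1d(signal):
    """One-level Haar DWT of an even-length sequence: returns the
    (approximation, detail) coefficient lists."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

# The signal mean (what GAP computes) is recoverable from the
# approximation coefficients alone: mean = sqrt(2) * sum(approx) / n,
# while the detail coefficients hold what GAP throws away.
```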
Affiliation(s)
- Cong Wang
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Jinan Guoke Medical Technology Development Co., Ltd, Jinan 250102, China
- Meng Gan
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Jinan Guoke Medical Technology Development Co., Ltd, Jinan 250102, China
11
Shi Y, Lu J, Le N, Wang RK. Integrating a pressure sensor with an OCT handheld probe to facilitate imaging of microvascular information in skin tissue beds. Biomed Opt Express 2022;13:6153-6166. PMID: 36733756; PMCID: PMC9872897; DOI: 10.1364/boe.473013.
Abstract
Optical coherence tomography (OCT) and OCT angiography (OCTA) have been increasingly applied in skin imaging in dermatology, where imaging is often performed with the OCT probe in contact with the skin surface. However, this contact-mode imaging can introduce uncontrolled mechanical stress on the skin, inevitably complicating the interpretation of OCT/OCTA results. There remains a need for a strategy for assessing the local pressure applied to the skin during image acquisition. This study reports a handheld scanning probe with built-in pressure sensors, allowing the operator to monitor and control the mechanical stress applied to the skin in real time. With this real-time feedback, the operator can easily determine whether the applied pressure would affect image quality, so as to obtain repeatable and reliable OCTA images for a more accurate investigation of skin conditions. Using this probe, imaging of palm skin demonstrated how OCTA imaging is affected by mechanical pressures ranging from 0 to 69 kPa. The results showed that OCTA imaging is relatively stable when the pressure is below 11 kPa; within this range, the change in vascular area density calculated from the OCTA images is below 0.13%. In addition, the probe was used to augment OCT monitoring of blood flow changes during a reactive hyperemia experiment, in which the operator could properly control the amount of pressure applied to the skin surface and achieve full release after compression stimulation.
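Vascular area density, the metric tracked above across pressure levels, is conventionally the foreground fraction of a binarized OCTA vessel map. A minimal sketch under that common definition (our illustration; the paper's exact computation may differ):

```python
def vascular_area_density(mask):
    """Vascular area density of a binary OCTA vessel mask (list of
    lists of 0/1): the fraction of pixels classified as vessel."""
    pixels = [p for row in mask for p in row]
    return sum(pixels) / len(pixels)
```

Comparing this fraction between acquisitions at different probe pressures is one way to quantify how compression collapses the capillary bed.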
Affiliation(s)
- Yaping Shi
- Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- These authors contributed equally to this study
| | - Jie Lu
- Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- These authors contributed equally to this study
| | - Nhan Le
- Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- Ruikang K. Wang
- Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- Department of Ophthalmology, University of Washington, Seattle, WA 98105, USA
|
12
|
Yuan W, Thiboutot J, Park HC, Li A, Loube J, Mitzner W, Yarmus L, Brown RH, Li X. Direct Visualization and Quantitative Imaging of Small Airway Anatomy In Vivo Using Deep Learning Assisted Diffractive OCT. IEEE Trans Biomed Eng 2022; PP:10.1109/TBME.2022.3188173. [PMID: 35786546 PMCID: PMC9842112 DOI: 10.1109/tbme.2022.3188173] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
OBJECTIVE/BACKGROUND In vivo imaging and quantification of the microstructures of small airways in three dimensions (3D) allow a better understanding and management of airway diseases such as asthma and chronic obstructive pulmonary disease (COPD). The resolution and contrast of currently available conventional optical coherence tomography (OCT) imaging technologies operating at 1300 nm remain insufficient to directly visualize the fine microstructures of small airways in vivo. METHODS We developed an ultrahigh-resolution diffractive endoscopic OCT system at 800 nm, affording a resolving power of 1.7 µm (in tissue) with improved contrast, together with a custom deep-residual-learning-based image segmentation framework for accurate, automated 3D quantification of airway anatomy. RESULTS The 800-nm diffractive OCT enabled direct delineation of the structural components of the small airway wall in vivo. Using the automated segmentation method, we further demonstrated, for the first time, 3D anatomic quantification of critical tissue compartments of small airways in sheep. CONCLUSION Deep learning-assisted diffractive OCT provides a unique ability to access the small airways and to directly visualize and quantify, in 3D and in vivo, important tissue compartments of the airway wall such as airway smooth muscle. SIGNIFICANCE These pilot results suggest a potential technology for obtaining volumetric measurements of small airways in patients in vivo.
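The "deep residual learning" underlying the segmentation framework refers to networks built from blocks of the form y = x + F(x), where the identity shortcut lets each block learn only a residual correction. A minimal single-channel numpy sketch of that principle (the weights, sizes, and single-channel setup are illustrative assumptions; the paper's actual network is not specified in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3x3(x, w):
    """'Same'-padded 3x3 convolution on a single-channel 2-D feature map."""
    padded = np.pad(x, 1)
    out = np.zeros_like(x)
    h, wid = x.shape
    for i in range(h):
        for j in range(wid):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * w)
    return out

def residual_block(x, w1, w2):
    """y = x + F(x): the identity shortcut means the block only has to
    learn a residual correction to its input."""
    h = np.maximum(conv3x3(x, w1), 0.0)   # conv + ReLU
    return x + conv3x3(h, w2)             # add identity shortcut

x = rng.standard_normal((8, 8))
w1 = rng.standard_normal((3, 3)) * 0.1
w2 = np.zeros((3, 3))                     # zero weights -> F(x) == 0
y = residual_block(x, w1, w2)
print(np.allclose(y, x))  # True: with F == 0 the block is an identity
```

This identity-by-default behavior is what makes very deep segmentation networks trainable: a block that has learned nothing still passes its input through unchanged.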
Affiliation(s)
- Wu Yuan
- Johns Hopkins University, Baltimore, MD 21205, USA; Department of Biomedical Engineering and Shun Hing Institute of Advanced Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
- Jeffrey Thiboutot
- Division of Pulmonary and Critical Care Medicine, School of Medicine, Johns Hopkins University, Baltimore, MD 21205, USA
- Hyeon-cheol Park
- Department of Biomedical Engineering, School of Medicine, Johns Hopkins University, Baltimore, MD 21205, USA
- Ang Li
- Department of Biomedical Engineering, School of Medicine, Johns Hopkins University, Baltimore, MD 21205, USA
- Jeffrey Loube
- Division of Pulmonary and Critical Care Medicine, School of Medicine, Johns Hopkins University, Baltimore, MD 21205, USA
- Wayne Mitzner
- Division of Pulmonary and Critical Care Medicine, School of Medicine, Johns Hopkins University, Baltimore, MD 21205, USA
- Lonny Yarmus
- Division of Pulmonary and Critical Care Medicine, School of Medicine, Johns Hopkins University, Baltimore, MD 21205, USA
- Robert H. Brown
- Department of Anesthesiology and Critical Care Medicine, School of Medicine, Johns Hopkins University, Baltimore, MD 21205, USA
- Xingde Li
- Department of Biomedical Engineering, School of Medicine, Johns Hopkins University, Baltimore, MD 21205, USA
|
13
|
Gan M, Wang C. Esophageal optical coherence tomography image synthesis using an adversarially learned variational autoencoder. BIOMEDICAL OPTICS EXPRESS 2022; 13:1188-1201. [PMID: 35414971 PMCID: PMC8973180 DOI: 10.1364/boe.449796] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Revised: 01/22/2022] [Accepted: 01/27/2022] [Indexed: 05/12/2023]
Abstract
Endoscopic optical coherence tomography (OCT) imaging offers a non-invasive way to detect esophageal lesions at the microscopic scale, with clinical potential for the early diagnosis and treatment of esophageal cancers. Recent studies applying deep learning-based methods to esophageal OCT image analysis have achieved promising results, but such methods require large datasets. Traditional data augmentation techniques generate samples that are highly correlated and sometimes far from reality, which may not yield a satisfactory trained model. In this paper, we propose an adversarially learned variational autoencoder (AL-VAE) to generate high-quality esophageal OCT samples. The AL-VAE combines the generative adversarial network (GAN) and the variational autoencoder (VAE) in a simple yet effective way that preserves the advantages of VAEs, such as stable training and a well-structured latent manifold, while requiring no extra discriminators. Experimental results verify that the proposed method achieves better image quality in generating esophageal OCT images than state-of-the-art image synthesis networks; its potential to improve deep learning model performance is also evaluated on an esophagus segmentation task.
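For readers unfamiliar with the VAE half of the AL-VAE, a minimal numpy sketch of the standard VAE objective (reconstruction error plus a KL regularizer on the latent posterior). The adversarial term and the discriminator-free coupling that distinguish the AL-VAE are not reproduced here, and the shapes and MSE reconstruction loss are illustrative assumptions:

```python
import numpy as np

def kl_divergence(mu, logvar):
    """KL(q(z|x) || N(0, I)) for a diagonal-Gaussian encoder, per sample."""
    return -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=-1)

def reparameterize(mu, logvar, rng):
    """z = mu + sigma * eps: keeps sampling differentiable in mu/logvar."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def vae_loss(x, x_recon, mu, logvar):
    """Standard VAE objective: reconstruction error + KL regularizer.
    (The AL-VAE adds an adversarial term on top of this; omitted here.)"""
    recon = np.sum((x - x_recon) ** 2, axis=-1)   # per-sample MSE
    return float(np.mean(recon + kl_divergence(mu, logvar)))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))
mu = np.zeros((4, 8))
logvar = np.zeros((4, 8))       # encoder posterior exactly N(0, I)
z = reparameterize(mu, logvar, rng)
print(vae_loss(x, x, mu, logvar))  # 0.0: perfect reconstruction, zero KL
```

The "stable training" the abstract credits to VAEs comes from this closed-form KL term, which regularizes the latent space without the minimax game that makes plain GANs fragile.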
Affiliation(s)
- Meng Gan
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Jinan Guoke Medical Technology Development Co., Ltd, Jinan 250102, China
- Cong Wang
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Jinan Guoke Medical Technology Development Co., Ltd, Jinan 250102, China
|