1
Yang F, He Q, Wang Y, Zeng S, Xu Y, Ye J, He Y, Guan T, Wang Z, Li J. Unsupervised stain augmentation enhanced glomerular instance segmentation on pathology images. Int J Comput Assist Radiol Surg 2025; 20:225-236. [PMID: 38848032 DOI: 10.1007/s11548-024-03154-7] [Received: 01/12/2024] [Accepted: 04/16/2024] [Indexed: 02/11/2025]
Abstract
PURPOSE In pathology images, different stains highlight different glomerular structures, so a supervised deep learning-based glomerular instance segmentation model trained on one stain performs poorly on others. However, obtaining a training set with multiple stains is difficult because labeling pathology images is very time-consuming and tedious. Therefore, in this paper we propose an unsupervised stain augmentation-based method for glomerular instance segmentation. METHODS In this study, we realized conversion between staining methods such as PAS, MT, and PASM by contrastive unpaired translation (CUT), thus improving the staining diversity of the training set. Moreover, we replaced the backbone of Mask R-CNN with a Swin Transformer to further improve the efficiency of feature extraction and thereby achieve better instance segmentation performance. RESULTS To validate the presented method, we constructed a dataset from 216 WSIs of the three stains. In-depth experiments verified that the stain augmentation-based instance segmentation method outperforms existing methods across all metrics for PAS, PASM, and MT stains. Ablation experiments further demonstrate the effectiveness of the proposed module. CONCLUSION This study demonstrates the potential of unsupervised stain augmentation to improve glomerular segmentation in pathology analysis. Future research could extend this approach to other complex segmentation tasks in the pathology image domain to further explore stain augmentation techniques across different areas of pathology image analysis.
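The CUT model the authors use is a trained network, but the underlying idea of diversifying stain appearance in the training set can be illustrated with a classical stain-deconvolution perturbation (Ruifrok-style color deconvolution). This is a hand-crafted stand-in for illustration only, not the paper's method, and the stain matrix below is a common literature default, not one fitted to their data:

```python
import numpy as np

# Reference H&E stain optical-density vectors (rows): a common default from
# the color-deconvolution literature, used here purely for illustration.
STAIN_MATRIX = np.array([[0.65, 0.70, 0.29],   # hematoxylin
                         [0.07, 0.99, 0.11]])  # eosin

def augment_stain(rgb, alpha=1.2, beta=0.05, rng=None):
    """Randomly perturb per-stain concentrations of an RGB patch (H, W, 3) uint8."""
    rng = np.random.default_rng(rng)
    od = -np.log((rgb.astype(np.float64) + 1.0) / 256.0)       # optical density
    conc = od.reshape(-1, 3) @ np.linalg.pinv(STAIN_MATRIX)    # (N, 2) stain concentrations
    # Random per-stain scale and shift simulate staining variability.
    conc = conc * rng.uniform(1 / alpha, alpha, size=2) + rng.uniform(-beta, beta, size=2)
    od_aug = conc @ STAIN_MATRIX                               # recompose optical density
    rgb_aug = 256.0 * np.exp(-od_aug) - 1.0
    return np.clip(rgb_aug, 0, 255).reshape(rgb.shape).astype(np.uint8)

patch = np.full((8, 8, 3), 180, dtype=np.uint8)  # toy gray patch
out = augment_stain(patch, rng=0)
print(out.shape, out.dtype)  # (8, 8, 3) uint8
```

Unlike CUT, this transform cannot change which structures a stain highlights; it only jitters color appearance, which is why learned stain translation is needed for true cross-stain generalization.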
Affiliation(s)
- Fan Yang
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Qiming He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Yanxia Wang
- Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- School of Basic Medicine, Fourth Military Medical University, Xi'an, China
- Siqi Zeng
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Research Institute of Tsinghua, Pearl River Delta, Guangzhou, China
- Yingming Xu
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Jing Ye
- Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- School of Basic Medicine, Fourth Military Medical University, Xi'an, China
- Yonghong He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Tian Guan
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Zhe Wang
- Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- School of Basic Medicine, Fourth Military Medical University, Xi'an, China
- Jing Li
- Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- School of Basic Medicine, Fourth Military Medical University, Xi'an, China
2
Hou X, Guan Z, Zhang X, Hu X, Zou S, Liang C, Shi L, Zhang K, You H. Evaluation of tumor budding with virtual panCK stains generated by novel multi-model CNN framework. Comput Methods Programs Biomed 2024; 257:108352. [PMID: 39241330 DOI: 10.1016/j.cmpb.2024.108352] [Received: 11/15/2023] [Revised: 06/03/2024] [Accepted: 07/22/2024] [Indexed: 09/09/2024]
Abstract
As the global incidence of cancer continues to rise, the need for swift and precise diagnoses has become increasingly pressing. Pathologists commonly rely on H&E-panCK stain pairs for various aspects of cancer diagnosis, including the detection of occult tumor cells and the evaluation of tumor budding. Nevertheless, conventional chemical staining methods suffer from notable drawbacks, such as time-intensive processes and irreversible staining outcomes. The virtual staining technique, leveraging generative adversarial networks (GANs), has emerged as a promising alternative to chemical stains. This approach aims to transform biopsy scans (often H&E) into other stain types. Despite notable progress in recent years, current state-of-the-art virtual staining models confront challenges that hinder their efficacy, particularly in achieving accurate staining outcomes under specific conditions, and these limitations have impeded the practical integration of virtual staining into diagnostic practice. To produce virtual panCK stains capable of replacing chemical panCK, we propose an innovative multi-model framework. Our approach employs a combination of Mask R-CNN (for cell segmentation) and GAN models to extract the cytokeratin distribution from chemical H&E images. Additionally, we introduce a tailored dynamic GAN model that converts H&E images into virtual panCK stains by integrating the derived cytokeratin distribution. Our framework is motivated by the fact that the distinctive pattern of panCK staining derives from the cytokeratin distribution. As a proof of concept, we employ our virtual panCK stains to evaluate tumor budding in 45 H&E whole-slide images of breast cancer-invaded lymph nodes. Through thorough validation by both pathologists and the QuPath software, our virtual panCK stains demonstrate a remarkable level of accuracy; in stark contrast, the accuracy of state-of-the-art single-cycleGAN virtual panCK stains is negligible. To the best of our knowledge, this is the first multi-model virtual panCK framework and the first use of virtual panCK for tumor budding assessment. Our framework excels in generating dependable virtual panCK stains with significantly improved efficiency, considerably reducing diagnostic turnaround times, and its outcomes are easily comprehensible even to pathologists who are not well versed in computer technology. We firmly believe that our framework has the potential to advance the field of virtual staining, thereby making significant strides toward improved cancer diagnosis.
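The single-cycleGAN baseline compared against above is trained with a cycle-consistency objective. A minimal sketch of that loss, with hypothetical linear color maps standing in for trained generators (real generators are convolutional networks):

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 cycle loss ||g_ba(g_ab(x)) - x||_1, averaged over pixels."""
    return np.abs(g_ba(g_ab(x)) - x).mean()

# Toy invertible color map standing in for the H&E -> panCK generator.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 1.0, 0.0],
              [0.1, 0.0, 0.9]])
g_ab = lambda x: x @ A.T                 # "H&E -> panCK" stand-in
g_ba = lambda x: x @ np.linalg.inv(A).T  # "panCK -> H&E" stand-in

x = np.random.default_rng(0).random((16, 3))  # 16 "pixels" in RGB
loss = cycle_consistency_loss(x, g_ab, g_ba)
print(round(loss, 6))  # ~0 because the toy maps are exact inverses
```

Cycle consistency alone constrains only the round trip, not the intermediate panCK image, which is one reason a single cycleGAN can produce plausible-looking but inaccurate stains; the multi-model framework above injects the segmented cytokeratin distribution as an additional constraint.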
Affiliation(s)
- Xingzhong Hou
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 100190, China
- Zhen Guan
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China
- Xianwei Zhang
- Department of Pathology, Henan Provincial People's Hospital; People's Hospital of Zhengzhou University, Zhengzhou, Henan 450003, China
- Xiao Hu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital & Institute, Beijing, China
- Shuangmei Zou
- Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Chunzi Liang
- School of Laboratory Medicine, Hubei University of Chinese Medicine, 16 Huangjia Lake West Road, Wuhan, Hubei 430065, China
- Lulin Shi
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 100190, China
- Kaitai Zhang
- State Key Laboratory of Molecular Oncology, Department of Etiology and Carcinogenesis, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Haihang You
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; Zhongguancun Laboratory, Beijing 102206, China
3
Chen M, Liu YT, Khan FS, Fox MC, Reichenberg JS, Lopes FCPS, Sebastian KR, Markey MK, Tunnell JW. Single color digital H&E staining with In-and-Out Net. Comput Med Imaging Graph 2024; 118:102468. [PMID: 39579455 DOI: 10.1016/j.compmedimag.2024.102468] [Received: 06/03/2024] [Revised: 11/06/2024] [Accepted: 11/07/2024] [Indexed: 11/25/2024]
Abstract
Digital staining streamlines traditional staining procedures by digitally generating stained images from unstained or differently stained images. While conventional staining methods involve time-consuming chemical processes, digital staining offers an efficient and low-infrastructure alternative. Researchers can expedite tissue analysis without physical sectioning by leveraging microscopy-based techniques, such as confocal microscopy. However, interpreting grayscale or pseudo-color microscopic images remains challenging for pathologists and surgeons accustomed to traditional histologically stained images. To fill this gap, various studies explore digitally simulating staining to mimic targeted histological stains. This paper introduces a novel network, In-and-Out Net, designed explicitly for digital staining tasks. Based on Generative Adversarial Networks (GAN), our model efficiently transforms Reflectance Confocal Microscopy (RCM) images into Hematoxylin and Eosin (H&E) stained images. Using aluminum chloride preprocessing for skin tissue, we enhance nuclei contrast in RCM images. We trained the model with digital H&E labels featuring two fluorescence channels, eliminating the need for image registration and providing pixel-level ground truth. Our contributions include proposing an optimal training strategy, conducting a comparative analysis demonstrating state-of-the-art performance, validating the model through an ablation study, and collecting perfectly matched input and ground truth images without registration. In-and-Out Net showcases promising results, offering a valuable tool for digital staining tasks and advancing the field of histological image analysis.
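For background, a classical alternative to the GAN mapping described above is Beer-Lambert false coloring, which converts a nuclear and a cytoplasmic fluorescence channel into an H&E-like RGB image using fixed per-channel absorption coefficients. The sketch below uses illustrative coefficients, not values from this paper, and is a simplified physical model rather than the In-and-Out Net approach:

```python
import numpy as np

# Illustrative per-RGB-channel absorption coefficients for a hematoxylin-like
# nuclear dye and an eosin-like cytoplasmic dye (assumed values, not fitted).
K_HEMATOXYLIN = np.array([0.86, 1.00, 0.30])
K_EOSIN = np.array([0.05, 1.00, 0.54])

def virtual_he(nuclear, cytoplasm, k_n=K_HEMATOXYLIN, k_e=K_EOSIN):
    """Map two normalized fluorescence channels (H, W) in [0, 1] to H&E-like RGB."""
    n = nuclear[..., None]    # (H, W, 1) for broadcasting against (3,) coefficients
    c = cytoplasm[..., None]
    rgb = np.exp(-k_n * n) * np.exp(-k_e * c)  # simulated light transmission
    return (255 * rgb).astype(np.uint8)

nuc = np.zeros((4, 4)); nuc[1, 1] = 1.0   # one bright "nucleus"
cyt = np.full((4, 4), 0.3)                # uniform cytoplasmic signal
img = virtual_he(nuc, cyt)
print(img.shape)  # (4, 4, 3)
```

The fixed-coefficient model cannot capture stain-tissue interactions, which is the gap a learned model such as In-and-Out Net addresses.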
Affiliation(s)
- Mengkun Chen
- University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, 78712, TX, United States
- Yen-Tung Liu
- University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, 78712, TX, United States
- Fadeel Sher Khan
- University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, 78712, TX, United States
- Matthew C Fox
- The University of Texas at Austin, Division of Dermatology, Dell Medical School, 1301 Barbara Jordan Blvd #200, Austin, 78732, TX, United States
- Jason S Reichenberg
- The University of Texas at Austin, Division of Dermatology, Dell Medical School, 1301 Barbara Jordan Blvd #200, Austin, 78732, TX, United States
- Fabiana C P S Lopes
- The University of Texas at Austin, Division of Dermatology, Dell Medical School, 1301 Barbara Jordan Blvd #200, Austin, 78732, TX, United States
- Katherine R Sebastian
- The University of Texas at Austin, Division of Dermatology, Dell Medical School, 1301 Barbara Jordan Blvd #200, Austin, 78732, TX, United States
- Mia K Markey
- University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, 78712, TX, United States; The University of Texas MD Anderson Cancer Center, Department of Imaging Physics, 1400 Pressler Street, Houston, 77030, TX, United States
- James W Tunnell
- University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, 78712, TX, United States
4
Jin L, Tang Y, Coole JB, Tan MT, Zhao X, Badaoui H, Robinson JT, Williams MD, Vigneswaran N, Gillenwater AM, Richards-Kortum RR, Veeraraghavan A. DeepDOF-SE: affordable deep-learning microscopy platform for slide-free histology. Nat Commun 2024; 15:2935. [PMID: 38580633 PMCID: PMC10997797 DOI: 10.1038/s41467-024-47065-2] [Received: 08/25/2023] [Accepted: 03/19/2024] [Indexed: 04/07/2024]
Abstract
Histopathology plays a critical role in the diagnosis and surgical management of cancer. However, access to histopathology services, especially frozen section pathology during surgery, is limited in resource-constrained settings because preparing slides from resected tissue is time-consuming, labor-intensive, and requires expensive infrastructure. Here, we report a deep-learning-enabled microscope, named DeepDOF-SE, to rapidly scan intact tissue at cellular resolution without the need for physical sectioning. Three key features jointly make DeepDOF-SE practical. First, tissue specimens are stained directly with inexpensive vital fluorescent dyes and optically sectioned with ultra-violet excitation that localizes fluorescent emission to a thin surface layer. Second, a deep-learning algorithm extends the depth-of-field, allowing rapid acquisition of in-focus images from large areas of tissue even when the tissue surface is highly irregular. Finally, a semi-supervised generative adversarial network virtually stains DeepDOF-SE fluorescence images with hematoxylin-and-eosin appearance, facilitating image interpretation by pathologists without significant additional training. We developed the DeepDOF-SE platform using a data-driven approach and validated its performance by imaging surgical resections of suspected oral tumors. Our results show that DeepDOF-SE provides histological information of diagnostic importance, offering a rapid and affordable slide-free histology platform for intraoperative tumor margin assessment and in low-resource settings.
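DeepDOF-SE extends the depth of field with a learned network; a classical counterpart that conveys the goal is focus stacking, where each pixel keeps the z-slice with the highest local sharpness (here, the magnitude of a discrete Laplacian). This sketch illustrates the objective only and is not the paper's algorithm, which achieves the extended depth of field in a single acquisition:

```python
import numpy as np

def laplacian_mag(img):
    """4-neighbor Laplacian magnitude of a 2D slice, with edge padding."""
    p = np.pad(img, 1, mode="edge")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * p[1:-1, 1:-1]
    return np.abs(lap)

def focus_stack(stack):
    """stack: (Z, H, W) float array -> (H, W) all-in-focus composite."""
    sharp = np.stack([laplacian_mag(s) for s in stack])  # per-slice sharpness maps
    best = sharp.argmax(axis=0)                          # index of sharpest slice
    return np.take_along_axis(stack, best[None], axis=0)[0]

rng = np.random.default_rng(1)
stack = rng.random((3, 16, 16))  # toy 3-slice focal stack
fused = focus_stack(stack)
print(fused.shape)  # (16, 16)
```

Classical stacking requires capturing multiple focal planes per field; the learned approach in the paper avoids that, which is what makes rapid scanning of irregular tissue surfaces practical.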
Affiliation(s)
- Lingbo Jin
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
- Yubo Tang
- Department of Bioengineering, Rice University, 6100 Main St, Houston, TX, USA
- Jackson B Coole
- Department of Bioengineering, Rice University, 6100 Main St, Houston, TX, USA
- Melody T Tan
- Department of Bioengineering, Rice University, 6100 Main St, Houston, TX, USA
- Xuan Zhao
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
- Hawraa Badaoui
- Department of Head and Neck Surgery, University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, USA
- Jacob T Robinson
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
- Michelle D Williams
- Department of Pathology, University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, USA
- Nadarajah Vigneswaran
- Department of Diagnostic and Biomedical Sciences, University of Texas Health Science Center at Houston School of Dentistry, 7500 Cambridge St, Houston, TX, USA
- Ann M Gillenwater
- Department of Head and Neck Surgery, University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, USA
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
5
Bishop KW, Erion Barner LA, Han Q, Baraznenok E, Lan L, Poudel C, Gao G, Serafin RB, Chow SSL, Glaser AK, Janowczyk A, Brenes D, Huang H, Miyasato D, True LD, Kang S, Vaughan JC, Liu JTC. An end-to-end workflow for nondestructive 3D pathology. Nat Protoc 2024; 19:1122-1148. [PMID: 38263522 DOI: 10.1038/s41596-023-00934-4] [Received: 07/27/2023] [Accepted: 10/23/2023] [Indexed: 01/25/2024]
Abstract
Recent advances in 3D pathology offer the ability to image orders of magnitude more tissue than conventional pathology methods while also providing a volumetric context that is not achievable with 2D tissue sections, and all without requiring destructive tissue sectioning. Generating high-quality 3D pathology datasets on a consistent basis, however, is not trivial and requires careful attention to a series of details during tissue preparation, imaging and initial data processing, as well as iterative optimization of the entire process. Here, we provide an end-to-end procedure covering all aspects of a 3D pathology workflow (using light-sheet microscopy as an illustrative imaging platform) with sufficient detail to perform well-controlled preclinical and clinical studies. Although 3D pathology is compatible with diverse staining protocols and computationally generated color palettes for visual analysis, this protocol focuses on the use of a fluorescent analog of hematoxylin and eosin, which remains the most common stain used for gold-standard pathological reports. We present our guidelines for a broad range of end users (e.g., biologists, clinical researchers and engineers) in a simple format. The end-to-end workflow requires 3-6 d to complete, bearing in mind that data analysis may take longer.
Affiliation(s)
- Kevin W Bishop
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Qinghua Han
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Elena Baraznenok
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Lydia Lan
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Department of Biology, University of Washington, Seattle, WA, USA
- Chetan Poudel
- Department of Chemistry, University of Washington, Seattle, WA, USA
- Gan Gao
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Robert B Serafin
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Sarah S L Chow
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Adam K Glaser
- Allen Institute for Neural Dynamics, Seattle, WA, USA
- Andrew Janowczyk
- Department of Biomedical Engineering, Emory University, Atlanta, GA, USA
- Department of Oncology, Division of Precision Oncology, University Hospital of Geneva, Geneva, Switzerland
- Department of Diagnostics, Division of Clinical Pathology, University Hospital of Geneva, Geneva, Switzerland
- David Brenes
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Hongyi Huang
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Dominie Miyasato
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Lawrence D True
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA
- Soyoung Kang
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Joshua C Vaughan
- Department of Chemistry, University of Washington, Seattle, WA, USA
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Jonathan T C Liu
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA
6
Wong IHM, Chen Z, Shi L, Lo CTK, Kang L, Dai W, Wong TTW. Deep learning-assisted low-cost autofluorescence microscopy for rapid slide-free imaging with virtual histological staining. Biomed Opt Express 2024; 15:2187-2201. [PMID: 38633074 PMCID: PMC11019672 DOI: 10.1364/boe.515018] [Received: 12/04/2023] [Revised: 01/27/2024] [Accepted: 02/20/2024] [Indexed: 04/19/2024]
Abstract
Slide-free imaging techniques have shown great promise in improving the histological workflow. For example, computational high-throughput autofluorescence microscopy by pattern illumination (CHAMP) has achieved high resolution with a long depth of field, but it requires a costly ultraviolet laser. Here, using only a low-cost light-emitting diode (LED), we propose a deep learning-assisted framework of enhanced widefield microscopy, termed EW-LED, to generate results similar to CHAMP (the learning target). Compared with CHAMP, EW-LED reduces the cost by 85× and shortens the image acquisition time and computation time by 36× and 17×, respectively. This framework can be applied to other imaging modalities, enhancing widefield images for better virtual histology.
Affiliation(s)
- Lulin Shi
- Translational and Advanced Bioimaging Laboratory, Department of Chemical and Biological Engineering, Hong Kong University of Science and Technology, Hong Kong, China
- Claudia T. K. Lo
- Translational and Advanced Bioimaging Laboratory, Department of Chemical and Biological Engineering, Hong Kong University of Science and Technology, Hong Kong, China
- Lei Kang
- Translational and Advanced Bioimaging Laboratory, Department of Chemical and Biological Engineering, Hong Kong University of Science and Technology, Hong Kong, China
- Weixing Dai
- Translational and Advanced Bioimaging Laboratory, Department of Chemical and Biological Engineering, Hong Kong University of Science and Technology, Hong Kong, China
- Terence T. W. Wong
- Translational and Advanced Bioimaging Laboratory, Department of Chemical and Biological Engineering, Hong Kong University of Science and Technology, Hong Kong, China
7
Park WY, Yun J, Shin J, Oh BH, Yoon G, Hong SM, Kim KH. Open-top Bessel beam two-photon light sheet microscopy for three-dimensional pathology. eLife 2024; 12:RP92614. [PMID: 38488831 PMCID: PMC10942781 DOI: 10.7554/elife.92614] [Indexed: 03/17/2024]
Abstract
Nondestructive pathology based on three-dimensional (3D) optical microscopy holds promise as a complement to traditional destructive hematoxylin and eosin (H&E) stained slide-based pathology by providing cellular information in a high-throughput manner. However, conventional techniques provide only superficial information due to shallow imaging depths. Herein, we developed open-top two-photon light sheet microscopy (OT-TP-LSM) for intraoperative 3D pathology. An extended depth-of-field two-photon excitation light sheet was generated by scanning a nondiffractive Bessel beam, and selective planar imaging was conducted with cameras at up to 400 frames/s during the lateral translation of tissue specimens. Intrinsic second harmonic generation was collected for additional extracellular matrix (ECM) visualization. OT-TP-LSM was tested in various human cancer specimens including skin, pancreas, and prostate. High imaging depths were achieved owing to long excitation wavelengths and long-wavelength fluorophores. 3D visualization of both cells and the ECM enhanced the ability to detect cancer. Furthermore, an unsupervised deep learning network was employed for style transfer of OT-TP-LSM images to virtual H&E images, which exhibited histological characteristics comparable to real ones. OT-TP-LSM may have potential for histopathological examination in surgical and biopsy applications by rapidly providing 3D information.
Affiliation(s)
- Won Yeong Park
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Republic of Korea
- Jieun Yun
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Republic of Korea
- Jinho Shin
- Department of Medicine, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Byung Ho Oh
- Department of Dermatology, College of Medicine, Yonsei University, Seoul, Republic of Korea
- Gilsuk Yoon
- Department of Pathology, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Seung-Mo Hong
- Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Ki Hean Kim
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Republic of Korea
- Medical Science and Engineering Program, School of Convergence Science and Technology, Pohang University of Science and Technology, Pohang, Republic of Korea
- Institute for Convergence Research and Education in Advanced Technology, Yonsei University, Seoul, Republic of Korea
8
Abraham TM, Casteleiro Costa P, Filan C, Guang Z, Zhang Z, Neill S, Olson JJ, Levenson R, Robles FE. Label- and slide-free tissue histology using 3D epi-mode quantitative phase imaging and virtual hematoxylin and eosin staining. Optica 2023; 10:1605-1618. [PMID: 39640229 PMCID: PMC11620277 DOI: 10.1364/optica.502859] [Received: 08/08/2023] [Accepted: 10/25/2023] [Indexed: 12/07/2024]
Abstract
Histological staining of tissue biopsies, especially hematoxylin and eosin (H&E) staining, serves as the benchmark for disease diagnosis and comprehensive clinical assessment of tissue. However, the typical formalin-fixation, paraffin-embedding (FFPE) process is laborious and time consuming, often limiting its usage in time-sensitive applications such as surgical margin assessment. To address these challenges, we combine an emerging 3D quantitative phase imaging technology, termed quantitative oblique back illumination microscopy (qOBM), with an unsupervised generative adversarial network pipeline to map qOBM phase images of unaltered thick tissues (i.e., label- and slide-free) to virtually stained H&E-like (vH&E) images. We demonstrate that the approach achieves high-fidelity conversions to H&E with subcellular detail using fresh tissue specimens from mouse liver, rat gliosarcoma, and human gliomas. We also show that the framework directly enables additional capabilities such as H&E-like contrast for volumetric imaging. The quality and fidelity of the vH&E images are validated using both a neural network classifier trained on real H&E images and tested on virtual H&E images, and a user study with neuropathologists. Given its simple and low-cost embodiment and ability to provide real-time feedback in vivo, this deep-learning-enabled qOBM approach could enable new workflows for histopathology with the potential to significantly save time, labor, and costs in cancer screening, detection, treatment guidance, and more.
Affiliation(s)
- Tanishq Mathew Abraham
- Department of Biomedical Engineering, University of California, Davis, California 95616, USA
- Paloma Casteleiro Costa
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
- Caroline Filan
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
- Zhe Guang
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
- Zhaobin Zhang
- Winship Cancer Institute, Emory University, Atlanta, Georgia 30332, USA
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, Georgia 30332, USA
- Stewart Neill
- Winship Cancer Institute, Emory University, Atlanta, Georgia 30332, USA
- Department of Pathology & Laboratory Medicine, Emory University School of Medicine, Atlanta, Georgia 30332, USA
- Jeffrey J. Olson
- Winship Cancer Institute, Emory University, Atlanta, Georgia 30332, USA
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, Georgia 30332, USA
- Richard Levenson
- Department of Pathology and Laboratory Medicine, UC Davis Health, Sacramento, California 95817, USA
- Francisco E. Robles
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
9
Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023; 167:107617. [PMID: 37918261 DOI: 10.1016/j.compbiomed.2023.107617] [Received: 08/01/2023] [Revised: 10/06/2023] [Accepted: 10/23/2023] [Indexed: 11/04/2023]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information which can help us understand the working mechanisms of the brain. However, it is a challenging task to process and analyze these data because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in the cells and tissues, and imaging artifacts. Due to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges and they perform superiorly in a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China; Research Institute of Hunan University in Chongqing, Chongqing, 401135, China
- Weixun Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
| |
10
Martell MT, Haven NJM, Cikaluk BD, Restall BS, McAlister EA, Mittal R, Adam BA, Giannakopoulos N, Peiris L, Silverman S, Deschenes J, Li X, Zemp RJ. Deep learning-enabled realistic virtual histology with ultraviolet photoacoustic remote sensing microscopy. Nat Commun 2023; 14:5967. [PMID: 37749108 PMCID: PMC10519961 DOI: 10.1038/s41467-023-41574-2] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Accepted: 09/11/2023] [Indexed: 09/27/2023] Open
Abstract
The goal of oncologic surgeries is complete tumor resection, yet positive margins are frequently found postoperatively using gold-standard H&E-stained histology methods. Frozen section analysis is sometimes performed for rapid intraoperative margin evaluation, albeit with known inaccuracies. Here, we introduce a label-free histological imaging method based on an ultraviolet photoacoustic remote sensing and scattering microscope, combined with unsupervised deep learning using a cycle-consistent generative adversarial network for realistic virtual staining. Unstained tissues are scanned at rates of up to 7 min/cm², at a resolution equivalent to 400x digital histopathology. Quantitative validation suggests strong concordance with conventional histology in benign and malignant prostate and breast tissues. In diagnostic utility studies, we demonstrate a mean sensitivity and specificity of 0.96 and 0.91 in breast specimens, and 0.87 and 0.94, respectively, in prostate specimens. We also find that virtual stain quality is preferred (P = 0.03) over frozen section analysis in a blinded survey of pathologists.
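The virtual staining in this work relies on a cycle-consistent generative adversarial network, whose defining ingredient is a cycle-consistency term: translating an image into the stained domain and back should recover the input. A minimal pure-Python sketch of that objective follows; the function names and the weight lam=10.0 are illustrative assumptions, not values taken from the paper.

```python
def l1_distance(a, b):
    """Mean absolute difference between two flattened images."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(x, y, G, F, lam=10.0):
    """Cycle loss for translators G: X->Y and F: Y->X.

    x, y: flattened images from the source (X) and target (Y) stain domains.
    G, F: the two generators; lam weights the cycle term (illustrative value).
    """
    forward = l1_distance(F(G(x)), x)   # x -> Y -> X should recover x
    backward = l1_distance(G(F(y)), y)  # y -> X -> Y should recover y
    return lam * (forward + backward)

# With identity generators the cycle loss is exactly zero.
identity = lambda img: img
print(cycle_consistency_loss([0.1, 0.5, 0.9], [0.2, 0.4, 0.8], identity, identity))  # -> 0.0
```

In a full CycleGAN, this term is added to the adversarial losses of the two discriminators and minimized over the generator parameters, which is what lets training proceed without paired stained/unstained images.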
Affiliation(s)
- Matthew T Martell, Nathaniel J M Haven, Brendyn D Cikaluk, Brendon S Restall, Ewan A McAlister, Xingyu Li, Roger J Zemp
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, AB, T6G 2R3, Canada
- Rohan Mittal, Benjamin A Adam, Nadia Giannakopoulos, Sveta Silverman, Jean Deschenes
- Department of Laboratory Medicine and Pathology, University of Alberta, 11405 87 Avenue NW, Edmonton, AB, T6G 1C9, Canada
- Lashan Peiris
- Department of Surgery, University of Alberta, 8440 - 112 Street, Edmonton, AB, T6G 2B7, Canada
11
Bishop KW, Barner LAE, Han Q, Baraznenok E, Lan L, Poudel C, Gao G, Serafin RB, Chow SS, Glaser AK, Janowczyk A, Brenes D, Huang H, Miyasato D, True LD, Kang S, Vaughan JC, Liu JT. An end-to-end workflow for non-destructive 3D pathology. bioRxiv 2023:2023.08.03.551845. [PMID: 37577615 PMCID: PMC10418226 DOI: 10.1101/2023.08.03.551845] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/15/2023]
Abstract
Recent advances in 3D pathology offer the ability to image orders-of-magnitude more tissue than conventional pathology while providing a volumetric context that is lacking with 2D tissue sections, all without requiring destructive tissue sectioning. Generating high-quality 3D pathology datasets on a consistent basis is non-trivial, requiring careful attention to many details regarding tissue preparation, imaging, and data/image processing in an iterative process. Here we provide an end-to-end protocol covering all aspects of a 3D pathology workflow (using light-sheet microscopy as an illustrative imaging platform) with sufficient detail to perform well-controlled preclinical and clinical studies. While 3D pathology is compatible with diverse staining protocols and computationally generated color palettes for visual analysis, this protocol will focus on a fluorescent analog of hematoxylin and eosin (H&E), which remains the most common stain for gold-standard diagnostic determinations. We present our guidelines for a broad range of end-users (e.g., biologists, clinical researchers, and engineers) in a simple tutorial format.
Affiliation(s)
- Kevin W. Bishop, Qinghua Han, Elena Baraznenok
- Departments of Mechanical Engineering and Bioengineering, University of Washington, Seattle, Washington, USA
- Lydia Lan
- Departments of Mechanical Engineering and Biology, University of Washington, Seattle, Washington, USA
- Chetan Poudel
- Department of Chemistry, University of Washington, Seattle, Washington, USA
- Gan Gao, Robert B. Serafin, Sarah S.L. Chow, Adam K. Glaser, David Brenes, Hongyi Huang, Dominie Miyasato, Soyoung Kang
- Department of Mechanical Engineering, University of Washington, Seattle, Washington, USA
- Andrew Janowczyk
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA; Department of Oncology, Division of Precision Oncology, and Department of Clinical Pathology, Division of Clinical Pathology, University Hospital of Geneva, Geneva, Switzerland
- Lawrence D. True
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, Washington, USA
- Joshua C. Vaughan
- Departments of Chemistry and Physiology and Biophysics, University of Washington, Seattle, Washington, USA
- Jonathan T.C. Liu
- Departments of Mechanical Engineering, Bioengineering, and Laboratory Medicine and Pathology, University of Washington, Seattle, Washington, USA
12
Bai B, Yang X, Li Y, Zhang Y, Pillar N, Ozcan A. Deep learning-enabled virtual histological staining of biological samples. Light Sci Appl 2023; 12:57. [PMID: 36864032 PMCID: PMC9981740 DOI: 10.1038/s41377-023-01104-7] [Citation(s) in RCA: 80] [Impact Index Per Article: 40.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Revised: 02/10/2023] [Accepted: 02/14/2023] [Indexed: 06/18/2023]
Abstract
Histological staining is the gold standard for tissue examination in clinical pathology and life-science research, which visualizes the tissue and cellular structures using chromatic dyes or fluorescence labels to aid the microscopic assessment of tissue. However, the current histological staining workflow requires tedious sample preparation steps, specialized laboratory infrastructure, and trained histotechnologists, making it expensive, time-consuming, and not accessible in resource-limited settings. Deep learning techniques created new opportunities to revolutionize staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, and accurate alternatives to standard chemical staining methods. These techniques, broadly referred to as virtual staining, were extensively explored by multiple research groups and demonstrated to be successful in generating various types of histological stains from label-free microscopic images of unstained samples; similar approaches were also used for transforming images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this Review, we provide a comprehensive overview of the recent research advances in deep learning-enabled virtual histological staining techniques. The basic concepts and the typical workflow of virtual staining are introduced, followed by a discussion of representative works and their technical innovations. We also share our perspectives on the future of this emerging field, aiming to inspire readers from diverse scientific fields to further expand the scope of deep learning-enabled virtual histological staining techniques and their applications.
Affiliation(s)
- Bijie Bai, Xilin Yang, Yuzhu Li, Yijie Zhang, Nir Pillar, Aydogan Ozcan
- Electrical and Computer Engineering Department, Bioengineering Department, and California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
13
Li Z, Muench G, Goebel S, Uhland K, Wenhart C, Reimann A. Flow chamber staining modality for real-time inspection of dynamic phenotypes in multiple histological stains. PLoS One 2023; 18:e0284444. [PMID: 37141296 PMCID: PMC10159194 DOI: 10.1371/journal.pone.0284444] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 03/30/2023] [Indexed: 05/05/2023] Open
Abstract
Traditional histological stains, such as hematoxylin-eosin (HE), special stains, and immunofluorescence (IF), have defined myriad cellular phenotypes and tissue structures, each in a separately stained section. However, the precise connection between the information conveyed by the various stains in the same section, which may be important for diagnosis, is lost. Here, we present a new staining modality, the flow chamber stain, which fits into the current staining workflow but adds features not available with conventional stains, allowing one to (1) quickly switch between destaining and restaining for multiplex staining of a single section from routine histological preparation, (2) inspect and digitally capture each specific stained phenotype in real time, and (3) efficiently synthesize graphs containing the tissue's multiple stained components at site-specific regions. Comparisons with conventional staining on microscopic images of mouse tissues (lung, heart, liver, kidney, esophagus, and brain), involving HE, periodic acid-Schiff, Sirius red, and IF for human IgG and mouse CD45, hemoglobin, and CD31, showed no major discordance. Repeated experiments on targeted areas of stained sections confirmed that the method is accurate and highly reproducible. Using the technique, IF targets were easily localized and seen structurally in HE- or special-stained sections, and unknown or suspected components or structures in HE-stained sections could be further characterized with special stains or IF. The staining process can be videoed and backed up for off-site pathologists, facilitating tele-consultation and tele-education in current digital pathology, and mistakes that occur during staining can be found and amended immediately. With the technique, a single section provides much more information than its traditionally stained counterpart. The staining mode has great potential to become a common supplementary tool for traditional histopathology.
14
Rapid and label-free histological imaging of unprocessed surgical tissues via dark-field reflectance ultraviolet microscopy. iScience 2022; 26:105849. [PMID: 36647380 PMCID: PMC9839964 DOI: 10.1016/j.isci.2022.105849] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Revised: 12/04/2022] [Accepted: 12/19/2022] [Indexed: 12/29/2022] Open
Abstract
Routine examination for intraoperative histopathologic assessment is lengthy and laborious. Here, we present dark-field reflectance ultraviolet microscopy (DRUM), which enables label-free imaging of unprocessed, thick tissues with subcellular resolution and a high signal-to-background ratio. To the best of our knowledge, DRUM provides image results for pathological assessment with the shortest turnaround time (2-3 min in total from sample preparation to tissue imaging). We also propose a virtual staining process that converts DRUM images into pseudo-colorized images to make them more familiar to pathologists. By imaging various tissues, we found that DRUM can resolve cell nuclei and some extranuclear features comparable to those in standard H&E images. Furthermore, the essential diagnostic features of intraoperatively excised tumor tissues can also be revealed by DRUM, demonstrating its potential as an additional aid for intraoperative histopathology.
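Pseudo-colorizing a label-free grayscale image into an H&E-like rendering is commonly done with a Beer-Lambert-style attenuation model, in which each RGB channel is exponentially attenuated by nucleus- and cytoplasm-associated absorption coefficients. A toy sketch of such a per-pixel mapping follows; the coefficient values and the two-channel input are illustrative assumptions, and the paper's own virtual staining process may differ.

```python
import math

# Illustrative per-channel absorption coefficients approximating the hues of
# hematoxylin (nuclei) and eosin (cytoplasm); these values are assumptions,
# not taken from the DRUM paper.
K_HEMATOXYLIN = (0.65, 0.95, 0.29)
K_EOSIN = (0.07, 0.99, 0.11)

def pseudo_he(nuclear, cytoplasm):
    """Map two grayscale signal intensities in [0, 1] to an H&E-like RGB
    pixel using a Beer-Lambert-style exponential attenuation model."""
    return tuple(
        math.exp(-kh * nuclear) * math.exp(-ke * cytoplasm)
        for kh, ke in zip(K_HEMATOXYLIN, K_EOSIN)
    )

# Background (no signal) stays white; strong nuclear signal darkens the
# pixel toward a purple, hematoxylin-like hue.
print(pseudo_he(0.0, 0.0))  # -> (1.0, 1.0, 1.0)
```

Applying this mapping pixel by pixel over a grayscale reflectance image yields the familiar pink-and-purple palette without any chemical staining.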
15
Bai B, Wang H, Li Y, de Haan K, Colonnese F, Wan Y, Zuo J, Doan NB, Zhang X, Zhang Y, Li J, Yang X, Dong W, Darrow MA, Kamangar E, Lee HS, Rivenson Y, Ozcan A. Label-Free Virtual HER2 Immunohistochemical Staining of Breast Tissue using Deep Learning. BME Frontiers 2022; 2022:9786242. [PMID: 37850170 PMCID: PMC10521710 DOI: 10.34133/2022/9786242] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Accepted: 08/25/2022] [Indexed: 10/19/2023] Open
Abstract
The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by a quantitative analysis in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting virtual IHC images are as accurate as those from their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit staining quality comparable to their immunohistochemically stained counterparts in terms of nuclear detail, membrane clearness, and absence of staining artifacts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory and can be extended to other types of biomarkers to accelerate IHC tissue staining in life-science and biomedical workflows.
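A conditional GAN for virtual staining typically trains its generator with a combined objective: an adversarial term rewarding outputs that fool the discriminator, plus a per-pixel fidelity term tying the output to the chemically stained ground truth (pix2pix-style). A toy sketch of such a combined generator loss follows; the lam=100.0 weight and function names are illustrative assumptions, not details from the paper.

```python
import math

def generator_loss(d_score_fake, generated, target, lam=100.0):
    """Combined cGAN generator objective (pix2pix-style sketch).

    d_score_fake: discriminator's probability that the generated image is real.
    generated/target: flattened virtual-stain and chemically stained images.
    lam: weight of the L1 fidelity term (illustrative value).
    """
    adversarial = -math.log(max(d_score_fake, 1e-12))  # reward fooling D
    fidelity = sum(abs(g - t) for g, t in zip(generated, target)) / len(target)
    return adversarial + lam * fidelity

# A generator whose output matches the target exactly and fully fools the
# discriminator (d_score_fake = 1.0) incurs zero loss.
print(generator_loss(1.0, [0.3, 0.7], [0.3, 0.7]))  # -> 0.0
```

The heavy L1 weight is the usual design choice in paired image-to-image translation: the adversarial term supplies realistic texture while the fidelity term keeps the virtual stain anchored to the ground-truth IHC appearance.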
Affiliation(s)
- Bijie Bai, Hongda Wang, Yuzhu Li, Kevin de Haan, Yijie Zhang, Jingxi Li, Xilin Yang, Yair Rivenson
- Electrical and Computer Engineering Department, Bioengineering Department, and California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yujie Wan
- Physics and Astronomy Department, University of California, Los Angeles, CA 90095, USA
- Jingyi Zuo
- Computer Science Department, University of California, Los Angeles, CA, USA
- Ngan B. Doan
- Translational Pathology Core Laboratory, University of California, Los Angeles, CA 90095, USA
- Xiaoran Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Wenjie Dong
- Statistics Department, University of California, Los Angeles, CA 90095, USA
- Morgan Angus Darrow, Elham Kamangar, Han Sung Lee
- Department of Pathology and Laboratory Medicine, University of California at Davis, Sacramento, CA 95817, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute (CNSI), and Department of Surgery, University of California, Los Angeles, CA 90095, USA