1
Zhang Z, Zhou X, Fang Y, Xiong Z, Zhang T. AI-driven 3D bioprinting for regenerative medicine: From bench to bedside. Bioact Mater 2025; 45:201-230. PMID: 39651398; PMCID: PMC11625302; DOI: 10.1016/j.bioactmat.2024.11.021.
Abstract
In recent decades, 3D bioprinting has attracted significant research attention for its ability to precisely manipulate biomaterials and cells into complex structures. However, owing to technological and cost constraints, the clinical translation of 3D bioprinted products (BPPs) from bench to bedside has been hindered by challenges in personalized design and production scale-up. Recently, emerging applications of artificial intelligence (AI) have significantly improved the performance of 3D bioprinting, yet the existing literature still lacks a methodological exploration of how AI can overcome these challenges and advance 3D bioprinting toward clinical application. This paper presents a systematic methodology for AI-driven 3D bioprinting within the theoretical framework of Quality by Design (QbD). It first introduces QbD theory into 3D bioprinting, then summarizes the technology roadmap of AI integration in 3D bioprinting, including multi-scale and multi-modal sensing, data-driven design, and in-line process control. It further describes specific AI applications in the key elements of 3D bioprinting: bioink formulation, model structure, printing process, and function regulation. Finally, the paper discusses the prospects and challenges of using AI technologies to further advance the clinical translation of 3D bioprinting.
Affiliation(s)
- Zhenrui Zhang
  - Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
  - Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
  - “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Xianhao Zhou
  - Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
  - Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
  - “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Yongcong Fang
  - Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
  - Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
  - “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
  - State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, PR China
- Zhuo Xiong
  - Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
  - Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
  - “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Ting Zhang
  - Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
  - Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
  - “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
  - State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, PR China
2
Koka R, Wake LM, Ku NK, Rice K, LaRocque A, Vidal EG, Alexanian S, Kozikowski R, Rivenson Y, Kallen ME. Assessment of AI-based computational H&E staining versus chemical H&E staining for primary diagnosis in lymphomas: a brief interim report. J Clin Pathol 2025; 78:208-211. PMID: 39304200; DOI: 10.1136/jcp-2024-209643.
Abstract
Microscopic review of tissue sections is foundational in pathology, yet traditional chemistry-based histology laboratory methods are labour intensive, tissue destructive, and poorly scalable to the evolving needs of precision medicine, causing delays in patient diagnosis and treatment. Recent AI-based techniques promise to upend the histology workflow; one such method, developed by PictorLabs, can generate near-instantaneous diagnostic images via a machine learning algorithm. Here, we demonstrate the utility of virtual staining in a blinded, wash-out-controlled study of 16 lymph node excisional biopsies spanning diagnoses from reactive to lymphoma, comparing the diagnostic performance of virtual and chemical H&Es across stain quality, image quality, morphometric assessment, and diagnostic interpretation parameters, as well as proposed follow-up immunostains. Our results show non-inferior performance of virtual H&E stains across all parameters, including an improved stain-quality pass rate (92% vs 79% for virtual vs chemical stains, respectively) and an equivalent rate of binary diagnostic concordance (90% vs 92%). More detailed adjudicated reviews of differential diagnoses and proposed IHC panels showed no major discordances. In this limited pilot study, virtual H&Es appear fit for purpose and non-inferior to chemical H&Es for the diagnostic assessment of clinical lymph node samples.
Affiliation(s)
- Rima Koka
  - Department of Pathology, University of Maryland School of Medicine, Baltimore, Maryland, USA
- Laura M Wake
  - Johns Hopkins Hospital, Baltimore, Maryland, USA
- Nam K Ku
  - Department of Pathology and Laboratory Medicine, University of California Los Angeles, Los Angeles, California, USA
- Kathryn Rice
  - Department of Pathology, University of Maryland School of Medicine, Baltimore, Maryland, USA
- Autumn LaRocque
  - Department of Pathology, University of Maryland School of Medicine, Baltimore, Maryland, USA
- Elba G Vidal
  - University of Maryland Medical Center, Baltimore, Maryland, USA
- Michael Edward Kallen
  - Department of Pathology, University of Maryland School of Medicine, Baltimore, Maryland, USA
3
Lin Y, Wang Y, Fang Z, Li Z, Guan X, Jiang D, Zhang Y. A Multi-Perspective Self-Supervised Generative Adversarial Network for FS to FFPE Stain Transfer. IEEE Trans Med Imaging 2025; 44:774-788. PMID: 39283778; DOI: 10.1109/tmi.2024.3460795.
Abstract
In clinical practice, frozen section (FS) images can provide immediate intraoperative pathological results because of their fast production speed. Compared with formalin-fixed, paraffin-embedded (FFPE) images, however, FS images suffer from poor quality. Transferring an FS image to an FFPE-style one is therefore of great significance, as it enables pathologists to observe high-quality images during surgery. Because paired FS and FFPE images are hard to obtain, accurate results are difficult to achieve with supervised methods. Beyond this, FS-to-FFPE stain transfer faces several challenges. First, the number and position of nuclei scattered throughout the image are hard to maintain during transfer. Second, transforming blurry FS images into clear FFPE-style ones is challenging. Third, the edge regions of each patch are harder to transfer than the center regions. To overcome these problems, a multi-perspective self-supervised GAN incorporating three auxiliary tasks is proposed to improve FS-to-FFPE stain transfer. Concretely, a nucleus consistency constraint enables high-fidelity nuclei, FFPE-guided image deblurring improves clarity, and a multi-field-of-view consistency constraint better generates the edge regions. Objective indicators and pathologists' evaluations on five datasets from different countries demonstrate the effectiveness of our method. Validation on the downstream task of microsatellite instability prediction also confirms the performance improvement gained by transferring FS images to FFPE-style ones. Our code is available at https://github.com/linyiyang98/Self-Supervised-FS2FFPE.git.
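The abstract above describes a generator trained against an adversarial objective plus three auxiliary constraints. As a rough illustration of how such a multi-task objective is typically composed (the function name and weights below are hypothetical placeholders, not values from the paper), the generator loss could be a weighted sum:

```python
# Hypothetical sketch of composing a generator objective from an adversarial
# term plus the three auxiliary terms named above (nucleus consistency,
# FFPE-guided deblurring, multi-field-of-view consistency). The weights are
# illustrative placeholders, not values reported by the paper.

def total_generator_loss(adversarial, nucleus_consistency, deblurring,
                         multi_fov_consistency,
                         weights=(1.0, 10.0, 5.0, 5.0)):
    """Weighted sum of the adversarial loss and three auxiliary losses."""
    w_adv, w_nuc, w_blur, w_fov = weights
    return (w_adv * adversarial
            + w_nuc * nucleus_consistency
            + w_blur * deblurring
            + w_fov * multi_fov_consistency)

# With all auxiliary terms at zero, only the adversarial term remains.
print(total_generator_loss(0.8, 0.0, 0.0, 0.0))  # -> 0.8
```

In practice the relative weights trade off image realism (adversarial term) against structural fidelity (the auxiliary terms) and are tuned per dataset.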
4
Işıl Ç, Koydemir HC, Eryilmaz M, de Haan K, Pillar N, Mentesoglu K, Unal AF, Rivenson Y, Chandrasekaran S, Garner OB, Ozcan A. Virtual Gram staining of label-free bacteria using dark-field microscopy and deep learning. Sci Adv 2025; 11:eads2757. PMID: 39772690; PMCID: PMC11803577; DOI: 10.1126/sciadv.ads2757.
Abstract
Gram staining is a frequently used staining protocol in microbiology, but it is vulnerable to staining artifacts caused by, e.g., operator errors and chemical variations. Here, we introduce virtual Gram staining of label-free bacteria using a trained neural network that digitally transforms dark-field images of unstained bacteria into their Gram-stained equivalents, matching bright-field image contrast. After a one-time training, the virtual Gram staining model processes an axial stack of dark-field microscopy images of label-free bacteria (never seen before) to rapidly generate Gram staining, bypassing several chemical steps of the conventional staining process. We demonstrated the success of virtual Gram staining on label-free samples containing Escherichia coli and Listeria innocua by quantifying the model's staining accuracy and comparing the chromatic and morphological features of virtually stained bacteria against their chemically stained counterparts. This virtual bacterial staining framework bypasses the traditional Gram staining protocol and its challenges, including stain standardization, operator errors, and sensitivity to chemical variations.
Affiliation(s)
- Çağatay Işıl
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Hatice Ceylan Koydemir
  - Department of Biomedical Engineering, Texas A&M University, College Station, TX 77843, USA
  - Center for Remote Health Technologies and Systems, Texas A&M Engineering Experiment Station, College Station, TX 77843, USA
- Merve Eryilmaz
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Kevin de Haan
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Nir Pillar
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Koray Mentesoglu
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Aras Firat Unal
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yair Rivenson
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Sukantha Chandrasekaran
  - Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
- Omai B. Garner
  - Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
- Aydogan Ozcan
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
5
Dai B, You S, Wang K, Long Y, Chen J, Upreti N, Peng J, Zheng L, Chang C, Huang TJ, Guan Y, Zhuang S, Zhang D. Deep learning-enabled filter-free fluorescence microscope. Sci Adv 2025; 11:eadq2494. PMID: 39742468; DOI: 10.1126/sciadv.adq2494.
Abstract
Optical filtering is an indispensable part of fluorescence microscopy for selectively highlighting molecules labeled with a specific fluorophore while suppressing background noise. However, optical filter sets increase the complexity, size, and cost of microscope systems, making them less suitable for multi-channel, high-speed fluorescence imaging. Here, we present filter-free fluorescence microscopic imaging enabled by deep learning-based digital spectral filtering. This approach allows automatic fluorescence channel selection after image acquisition and accurate prediction of fluorescence by computing color changes due to spectral shifts in the presence of excitation scattering. Fluorescence prediction for cells and tissues labeled with various fluorophores was demonstrated under different magnification powers. The technique accurately identifies labeling with robust sensitivity and specificity, achieving results consistent with the reference standard. Beyond fluorescence microscopy, the deep learning-enabled spectral filtering strategy has the potential to drive other biomedical applications, including cytometry and endoscopy.
Affiliation(s)
- Bo Dai
  - Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
- Shaojie You
  - Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
- Kan Wang
  - Department of Neurology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai 200127, China
- Yan Long
  - Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
- Junyi Chen
  - Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
- Neil Upreti
  - Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC 27709, USA
- Jing Peng
  - Department of Neurology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai 200127, China
- Lulu Zheng
  - Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
- Chenliang Chang
  - Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
- Tony Jun Huang
  - Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC 27709, USA
- Yangtai Guan
  - Department of Neurology, Punan Branch of Renji Hospital, School of Medicine, Shanghai Jiaotong University (Punan Hospital in Pudong New District, Shanghai), Shanghai 200125, China
- Songlin Zhuang
  - Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
- Dawei Zhang
  - Engineering Research Center of Optical Instrument and System, the Ministry of Education, Shanghai Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, China
6
Chang S, Wintergerst GA, Carlson C, Yin H, Scarpato KR, Luckenbaugh AN, Chang SS, Kolouri S, Bowden AK. Low-cost and label-free blue light cystoscopy through digital staining of white light cystoscopy videos. Commun Med 2024; 4:269. PMID: 39695331; DOI: 10.1038/s43856-024-00705-6.
Abstract
BACKGROUND: Bladder cancer is the 10th most common malignancy and carries the highest treatment cost among all cancers. The elevated cost stems from its high recurrence rate, which necessitates frequent surveillance. White light cystoscopy (WLC), the standard-of-care surveillance tool for examining the bladder for lesions, has limited sensitivity for early-stage bladder cancer. Blue light cystoscopy (BLC) uses a fluorescent dye to induce contrast in cancerous regions, improving detection sensitivity by 43%. Nevertheless, the added equipment cost and lengthy dwell time of the dye limit the availability of BLC.
METHODS: Here, we report the first demonstration of digital staining as a promising strategy to convert WLC images collected with standard-of-care clinical equipment into accurate BLC-like images, providing enhanced sensitivity for WLC without the associated labor or equipment cost.
RESULTS: By introducing key pre-processing steps to circumvent the color and brightness variations in clinical datasets, the method achieves a staining accuracy of 80.58% and shows excellent qualitative and quantitative agreement of the digitally stained WLC (dsWLC) images with ground-truth BLC images, including color consistency.
CONCLUSIONS: In short, dsWLC can affordably provide the fluorescent contrast needed to improve the detection sensitivity of bladder cancer, thereby increasing the accessibility of BLC contrast for bladder cancer surveillance. More broadly, digital staining is a cost-effective alternative to contrast-based endoscopy in clinical scenarios beyond urology and can democratize access to better healthcare.
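The abstract stresses pre-processing to circumvent color and brightness variations across clinical video frames. As a hedged illustration only (the paper's actual pipeline is not detailed here), one minimal form such a step could take is rescaling each color channel so its mean intensity matches a reference frame:

```python
# Hypothetical brightness-normalization sketch: rescale one color channel so
# its mean intensity matches that of a reference frame. Real cystoscopy
# pre-processing pipelines are typically more involved than this.

def normalize_channel(pixels, reference_mean):
    """Scale a channel's pixel values so their mean matches reference_mean."""
    mean = sum(pixels) / len(pixels)
    scale = reference_mean / mean if mean > 0 else 1.0
    return [min(255.0, p * scale) for p in pixels]

channel = [40, 80, 120]                       # mean 80
adjusted = normalize_channel(channel, 120.0)  # scale by 1.5
print(sum(adjusted) / len(adjusted))          # -> 120.0
```

Applying such a step per channel and per frame would reduce inter-dataset brightness drift before the staining model sees the images.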
Affiliation(s)
- Shuang Chang
  - Vanderbilt University, Department of Biomedical Engineering, Nashville, TN, 37232, USA
- Camella Carlson
  - Vanderbilt University, Department of Biomedical Engineering, Nashville, TN, 37232, USA
- Haoli Yin
  - Vanderbilt University, Department of Computer Science, Nashville, TN, 37232, USA
- Kristen R Scarpato
  - Vanderbilt University Medical Center, Department of Urology, Nashville, TN, 37232, USA
- Amy N Luckenbaugh
  - Vanderbilt University Medical Center, Department of Urology, Nashville, TN, 37232, USA
- Sam S Chang
  - Vanderbilt University Medical Center, Department of Urology, Nashville, TN, 37232, USA
- Soheil Kolouri
  - Vanderbilt University, Department of Computer Science, Nashville, TN, 37232, USA
- Audrey K Bowden
  - Vanderbilt University, Department of Biomedical Engineering, Nashville, TN, 37232, USA
  - Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, TN, 37232, USA
7
Renner JA, Riley PC. Using machine learning for chemical-free histological tissue staining. J Histotechnol 2024; 47:180-183. PMID: 38648120; DOI: 10.1080/01478885.2024.2338585.
Abstract
Hematoxylin and eosin (H&E) staining can be hazardous, expensive, and prone to error and variability. To circumvent these issues, artificial intelligence/machine learning models such as generative adversarial networks (GANs) are being used to 'virtually' stain unstained tissue images so that they are indistinguishable from chemically stained tissue. Frameworks such as deep convolutional GANs (DCGANs) and conditional GANs (CGANs) have successfully generated highly reproducible 'stained' images. However, their utility may be limited by the requirement for registered, paired images, which can be difficult to obtain. To avoid this dataset requirement, we attempted to use an unsupervised CycleGAN pix2pix model (5,6) to turn unpaired, unstained bright-field images into pathologist-approved digitally 'stained' images. Using formalin-fixed, paraffin-embedded liver samples, 5 µm section images (20x) were obtained before and after staining to create 'stained' and 'unstained' datasets. The model was implemented on Ubuntu 20.04.4 LTS with 32 GB RAM, an Intel Core i7-9750 CPU @ 2.6 GHz, an Nvidia GeForce RTX 2070 Mobile GPU, Python 3.7.11, and TensorFlow 2.9.1. The CycleGAN framework utilized the U-Net-based generator and discriminator from pix2pix, a CGAN, with a modified cycle-consistency loss that assumes unpaired images, so the loss was measured twice. To our knowledge, this is the first documented application of this architecture to unpaired bright-field images. Results and suggested improvements are discussed.
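The cycle-consistency idea the abstract describes can be sketched in a few lines: with unpaired data, an image mapped into the other stain domain and back should reconstruct itself, so the reconstruction penalty is measured twice, once per direction. The generators below are hypothetical stand-ins for the paper's U-Net networks, and the weight `lam` is an illustrative default:

```python
# Minimal sketch of a cycle-consistency loss for unpaired stain transfer.
# g_stain maps unstained -> stained; g_unstain maps stained -> unstained.
# Images are flattened pixel sequences here purely for illustration.

def l1_loss(a, b):
    """Mean absolute error between two equal-length pixel sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_loss(unstained, stained, g_stain, g_unstain, lam=10.0):
    """Forward (unstained->stained->unstained) plus backward cycle penalties."""
    forward = l1_loss(g_unstain(g_stain(unstained)), unstained)
    backward = l1_loss(g_stain(g_unstain(stained)), stained)
    return lam * (forward + backward)

# Identity 'generators' reconstruct perfectly, so the cycle loss is zero.
identity = lambda image: image
print(cycle_loss([0.1, 0.5, 0.9], [0.8, 0.2, 0.4], identity, identity))  # -> 0.0
```

In training, this term is added to the two adversarial losses so that each generator both fools its discriminator and preserves enough content to be inverted.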
Affiliation(s)
- Julie A Renner
  - US Army DEVCOM Chemical Biological Center, Aberdeen Proving Ground, MD, USA
- Patrick C Riley
  - US Army DEVCOM Chemical Biological Center, Aberdeen Proving Ground, MD, USA
8
Kawai M, Odate T, Kasai K, Inoue T, Mochizuki K, Oishi N, Kondo T. Virtual multi-staining in a single-section view for renal pathology using generative adversarial networks. Comput Biol Med 2024; 182:109149. PMID: 39298886; DOI: 10.1016/j.compbiomed.2024.109149.
Abstract
Sections stained with periodic acid-Schiff (PAS), periodic acid-methenamine silver (PAM), hematoxylin and eosin (H&E), and Masson's trichrome (MT) with minimal morphological discordance are helpful for pathological diagnosis in renal biopsy. Here, we propose an artificial intelligence-based re-stainer called PPHM-GAN (PAS, PAM, H&E, and MT generative adversarial networks) with multi-stain-to-multi-stain transformation capability. We trained three GAN models on 512 × 512-pixel patches from 26 training cases and, for each pair of stain transformations, selected the model with the best transformation quality by human evaluation. Fréchet inception distance, peak signal-to-noise ratio, structural similarity index measure, contrast structural similarity, and a newly introduced domain-shift inception score were calculated as auxiliary quality metrics. We validated the diagnostic utility using 5120 × 5120-pixel patches from ten validation cases for major glomerular and interstitial abnormalities. Transformed stains were sometimes superior to the original stains for recognizing crescent formation, mesangial hypercellularity, glomerular sclerosis, interstitial lesions, or arteriosclerosis. In 9 additional validation cases, 23 of 24 glomeruli (95.83%) transformed to PAM, PAS, or MT facilitated recognition of crescent formation; transformations to PAM (p = 4.0E-11) and from H&E (p = 4.8E-9) improved crescent recognition the most. PPHM-GAN maximizes the information obtained from a given section by providing several stains in a virtual single-section view and may change staining and diagnostic strategies.
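Among the auxiliary quality metrics listed above, peak signal-to-noise ratio (PSNR) is the simplest to state. A minimal sketch for two equal-length 8-bit pixel sequences (an illustration of the standard definition, not the paper's evaluation code):

```python
# Sketch of peak signal-to-noise ratio (PSNR) between a reference patch and
# a transformed patch, both given as flat sequences of 8-bit pixel values.
import math

def psnr(reference, transformed, max_val=255.0):
    """PSNR in dB; higher means the transformed patch is closer to the reference."""
    n = len(reference)
    mse = sum((a - b) ** 2 for a, b in zip(reference, transformed)) / n
    if mse == 0:
        return float("inf")  # identical patches
    return 10.0 * math.log10(max_val ** 2 / mse)

print(psnr([0, 128, 255], [0, 128, 255]))  # -> inf (identical patches)
```

Note that PSNR alone rewards pixelwise agreement; perceptual metrics such as SSIM and Fréchet inception distance complement it by capturing structure and distribution-level realism.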
Affiliation(s)
- Masataka Kawai
  - Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan
- Toru Odate
  - Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan
- Kazunari Kasai
  - Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan
- Tomohiro Inoue
  - Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan
- Kunio Mochizuki
  - Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan
- Naoki Oishi
  - Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan
- Tetsuo Kondo
  - Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan
9
Remedios LW, Bao S, Remedios SW, Lee HH, Cai LY, Li T, Deng R, Newlin NR, Saunders AM, Cui C, Li J, Liu Q, Lau KS, Roland JT, Washington MK, Coburn LA, Wilson KT, Huo Y, Landman BA. Data-driven nucleus subclassification on colon hematoxylin and eosin using style-transferred digital pathology. J Med Imaging (Bellingham) 2024; 11:067501. PMID: 39507410; PMCID: PMC11537205; DOI: 10.1117/1.jmi.11.6.067501.
Abstract
Purpose: Cells are the building blocks of human physiology; consequently, understanding how cells communicate, co-locate, and interrelate is essential to furthering our understanding of how the body functions in health and disease. Hematoxylin and eosin (H&E) is the standard stain used in histological analysis of tissues in both clinical and research settings. Although H&E is ubiquitous and reveals tissue microanatomy, classifying and mapping cell subtypes often requires specialized stains. The recent CoNIC Challenge focused on artificial intelligence classification of six cell types on colon H&E but was unable to classify epithelial subtypes (progenitor, enteroendocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), or connective subtypes (fibroblasts). We propose to use inter-modality learning to label previously un-labelable cell types on H&E.
Approach: We took advantage of the cell classification information inherent in multiplexed immunofluorescence (MxIF) histology to create cell-level annotations for 14 subclasses, performed style transfer on the MxIF to synthesize realistic virtual H&E, and assessed a supervised learning scheme using the virtual H&E and the 14 subclass labels. We evaluated our model on both virtual and real H&E.
Results: On virtual H&E, we classified helper T cells and epithelial progenitors with positive predictive values of 0.34 ± 0.15 (prevalence 0.03 ± 0.01) and 0.47 ± 0.1 (prevalence 0.07 ± 0.02), respectively, when using ground-truth centroid information. On real H&E, we computed bounded rather than direct metrics because our fine-grained virtual H&E classes had to be matched to the closest available parent classes in the coarser labels of the real H&E dataset; with ground-truth centroid information, the upper-bound positive predictive values were 0.43 ± 0.03 for helper T cells (parent class prevalence 0.21) and 0.94 ± 0.02 for epithelial progenitors (parent class prevalence 0.49).
Conclusions: This is the first work to provide cell type classification for helper T and epithelial progenitor nuclei on H&E.
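The per-class positive predictive values reported above follow the standard definition: of all nuclei predicted as a given class, the fraction whose ground-truth label agrees. A minimal sketch (illustrative labels, not the study's data):

```python
# Sketch of positive predictive value (PPV, i.e. precision) for one nucleus
# class, given parallel sequences of predicted and ground-truth labels.

def positive_predictive_value(predicted, truth, cls):
    """PPV for one class: true positives / all predicted positives."""
    true_pos = sum(1 for p, t in zip(predicted, truth) if p == cls and t == cls)
    pred_pos = sum(1 for p in predicted if p == cls)
    return true_pos / pred_pos if pred_pos else float("nan")

preds = ["helperT", "helperT", "progenitor", "B"]   # hypothetical predictions
labels = ["helperT", "B", "progenitor", "B"]        # hypothetical ground truth
print(positive_predictive_value(preds, labels, "helperT"))  # -> 0.5
```

For rare classes, PPV should always be read against the class prevalence (as the abstract does), since a low-prevalence class makes even modest PPV values informative.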
Affiliation(s)
- Lucas W. Remedios
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Shunxing Bao
  - Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Samuel W. Remedios
  - Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
  - National Institutes of Health, Department of Radiology and Imaging Sciences, Bethesda, Maryland, United States
- Ho Hin Lee
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Leon Y. Cai
  - Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Thomas Li
  - Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Ruining Deng
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Nancy R. Newlin
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Adam M. Saunders
  - Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Can Cui
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Jia Li
  - Vanderbilt University Medical Center, Department of Biostatistics, Nashville, Tennessee, United States
- Qi Liu
  - Vanderbilt University Medical Center, Department of Biostatistics, Nashville, Tennessee, United States
  - Vanderbilt University Medical Center, Center for Quantitative Sciences, Nashville, Tennessee, United States
- Ken S. Lau
  - Vanderbilt University Medical Center, Center for Quantitative Sciences, Nashville, Tennessee, United States
  - Vanderbilt University Medical Center, Epithelial Biology Center, Nashville, Tennessee, United States
  - Vanderbilt University School of Medicine, Department of Cell and Developmental Biology, Nashville, Tennessee, United States
- Joseph T. Roland
  - Vanderbilt University Medical Center, Epithelial Biology Center, Nashville, Tennessee, United States
- Mary K. Washington
  - Vanderbilt University Medical Center, Department of Pathology, Microbiology, and Immunology, Nashville, Tennessee, United States
- Lori A. Coburn
  - Vanderbilt University Medical Center, Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Nashville, Tennessee, United States
  - Vanderbilt University Medical Center, Vanderbilt Center for Mucosal Inflammation and Cancer, Nashville, Tennessee, United States
  - Vanderbilt University School of Medicine, Program in Cancer Biology, Nashville, Tennessee, United States
  - Veterans Affairs Tennessee Valley Healthcare System, Nashville, Tennessee, United States
- Keith T. Wilson
  - Vanderbilt University Medical Center, Department of Pathology, Microbiology, and Immunology, Nashville, Tennessee, United States
  - Vanderbilt University Medical Center, Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Nashville, Tennessee, United States
  - Vanderbilt University Medical Center, Vanderbilt Center for Mucosal Inflammation and Cancer, Nashville, Tennessee, United States
  - Vanderbilt University School of Medicine, Program in Cancer Biology, Nashville, Tennessee, United States
  - Veterans Affairs Tennessee Valley Healthcare System, Nashville, Tennessee, United States
- Yuankai Huo
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
  - Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Bennett A. Landman
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
  - Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
  - Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
10. Yang X, Bai B, Zhang Y, Aydin M, Li Y, Selcuk SY, Casteleiro Costa P, Guo Z, Fishbein GA, Atlan K, Wallace WD, Pillar N, Ozcan A. Virtual birefringence imaging and histological staining of amyloid deposits in label-free tissue using autofluorescence microscopy and deep learning. Nat Commun 2024; 15:7978. [PMID: 39266547] [PMCID: PMC11393327] [DOI: 10.1038/s41467-024-52263-z] [Received: 03/02/2024] [Accepted: 09/02/2024]
Abstract
Systemic amyloidosis involves the deposition of misfolded proteins in organs/tissues, leading to progressive organ dysfunction and failure. Congo red is the gold-standard chemical stain for visualizing amyloid deposits in tissue, showing birefringence under polarization microscopy. However, Congo red staining is tedious and costly to perform, and prone to false diagnoses due to variations in amyloid amount, staining quality and manual examination of tissue under a polarization microscope. We report virtual birefringence imaging and virtual Congo red staining of label-free human tissue to show that a single neural network can transform autofluorescence images of label-free tissue into brightfield and polarized microscopy images, matching their histochemically stained versions. Blind testing with quantitative metrics and pathologist evaluations on cardiac tissue showed that our virtually stained polarization and brightfield images highlight amyloid patterns in a consistent manner, mitigating challenges due to variations in chemical staining quality and manual imaging processes in the clinical workflow.
Affiliation(s)
- Xilin Yang
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Bijie Bai
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Yijie Zhang
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Musa Aydin
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
  - Department of Computer Engineering, Fatih Sultan Mehmet Vakif University, Istanbul, 34038, Turkey
- Yuzhu Li
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Sahan Yoruc Selcuk
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Paloma Casteleiro Costa
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Zhen Guo
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Gregory A Fishbein
  - Department of Pathology and Laboratory Medicine, David Geffen School of Medicine at the University of California, Los Angeles, CA, 90095, USA
- Karine Atlan
  - Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem, 91120, Israel
- William Dean Wallace
  - Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA
- Nir Pillar
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
  - Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
  - Department of Surgery, University of California, Los Angeles, CA, 90095, USA
11. Pati P, Karkampouna S, Bonollo F, Compérat E, Radić M, Spahn M, Martinelli A, Wartenberg M, Kruithof-de Julio M, Rapsomaniki M. Accelerating histopathology workflows with generative AI-based virtually multiplexed tumour profiling. Nat Mach Intell 2024; 6:1077-1093. [PMID: 39309216] [PMCID: PMC11415301] [DOI: 10.1038/s42256-024-00889-5] [Received: 12/04/2023] [Accepted: 07/29/2024]
Abstract
Understanding the spatial heterogeneity of tumours and its links to disease initiation and progression is a cornerstone of cancer biology. Presently, histopathology workflows heavily rely on hematoxylin and eosin and serial immunohistochemistry staining, a cumbersome, tissue-exhaustive process that results in non-aligned tissue images. We propose the VirtualMultiplexer, a generative artificial intelligence toolkit that effectively synthesizes multiplexed immunohistochemistry images for several antibody markers (namely AR, NKX3.1, CD44, CD146, p53 and ERG) from only an input hematoxylin and eosin image. The VirtualMultiplexer captures biologically relevant staining patterns across tissue scales without requiring consecutive tissue sections, image registration or extensive expert annotations. Thorough qualitative and quantitative assessment indicates that the VirtualMultiplexer achieves rapid, robust and precise generation of virtually multiplexed imaging datasets of high staining quality that are indistinguishable from the real ones. The VirtualMultiplexer is successfully transferred across tissue scales and patient cohorts with no need for model fine-tuning. Crucially, the virtually multiplexed images enabled training a graph transformer that simultaneously learns from the joint spatial distribution of several proteins to predict clinically relevant endpoints. We observe that this multiplexed learning scheme was able to greatly improve clinical prediction, as corroborated across several downstream tasks, independent patient cohorts and cancer types. Our results showcase the clinical relevance of artificial intelligence-assisted multiplexed tumour imaging, accelerating histopathology workflows and cancer biology.
Affiliation(s)
- Sofia Karkampouna
  - Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
  - Department of Urology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Francesco Bonollo
  - Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Eva Compérat
  - Department of Pathology, Medical University of Vienna, Vienna, Austria
- Martina Radić
  - Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Martin Spahn
  - Department of Urology, Lindenhofspital Bern, Bern, Switzerland
  - Department of Urology, University Duisburg-Essen, Essen, Germany
- Adriano Martinelli
  - IBM Research Europe, Rüschlikon, Switzerland
  - ETH Zürich, Zürich, Switzerland
  - Biomedical Data Science Center, Lausanne University Hospital, Lausanne, Switzerland
- Martin Wartenberg
  - Institute of Tissue Medicine and Pathology, University of Bern, Bern, Switzerland
- Marianna Kruithof-de Julio
  - Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
  - Department of Urology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
  - Translational Organoid Resource, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Marianna Rapsomaniki
  - IBM Research Europe, Rüschlikon, Switzerland
  - Biomedical Data Science Center, Lausanne University Hospital, Lausanne, Switzerland
  - Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
12. Yoon C, Park E, Misra S, Kim JY, Baik JW, Kim KG, Jung CK, Kim C. Deep learning-based virtual staining, segmentation, and classification in label-free photoacoustic histology of human specimens. Light Sci Appl 2024; 13:226. [PMID: 39223152] [PMCID: PMC11369251] [DOI: 10.1038/s41377-024-01554-7] [Received: 01/31/2024] [Revised: 07/08/2024] [Accepted: 07/24/2024]
Abstract
In pathological diagnostics, histological images highlight the oncological features of excised specimens, but they require laborious and costly staining procedures. Despite recent innovations in label-free microscopy that simplify complex staining procedures, technical limitations and inadequate histological visualization remain problems in clinical settings. Here, we demonstrate an interconnected deep learning (DL)-based framework for automated virtual staining, segmentation, and classification in label-free photoacoustic histology (PAH) of human specimens. The framework comprises three components: (1) an explainable contrastive unpaired translation (E-CUT) method for virtual H&E (VHE) staining, (2) a U-Net architecture for feature segmentation, and (3) a DL-based stepwise feature fusion method (StepFF) for classification. The framework demonstrates promising performance at each step of its application to human liver cancers. In virtual staining, E-CUT preserves the morphological aspects of the cell nucleus and cytoplasm, making VHE images highly similar to real H&E ones. In segmentation, various features (e.g., cell area, number of cells, and distance between cell nuclei) were successfully segmented in VHE images. Finally, by using deep feature vectors from PAH, VHE, and segmented images, StepFF achieved 98.00% classification accuracy, compared to the 94.80% accuracy of conventional PAH classification. In particular, StepFF's classification reached a sensitivity of 100% based on the evaluation of three pathologists, demonstrating its applicability in real clinical settings. This series of DL methods for label-free PAH has great potential as a practical clinical strategy for digital pathology.
Affiliation(s)
- Chiho Yoon
  - Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Eunwoo Park
  - Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Sampa Misra
  - Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Jin Young Kim
  - Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
  - Opticho Inc., Pohang, Republic of Korea
- Jin Woo Baik
  - Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Kwang Gi Kim
  - Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon, Republic of Korea
- Chan Kwon Jung
  - Cancer Research Institute, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
  - Department of Hospital Pathology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Chulhong Kim
  - Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
  - Opticho Inc., Pohang, Republic of Korea
13. Zhu R, He H, Chen Y, Yi M, Ran S, Wang C, Wang Y. Deep learning for rapid virtual H&E staining of label-free glioma tissue from hyperspectral images. Comput Biol Med 2024; 180:108958. [PMID: 39094325] [DOI: 10.1016/j.compbiomed.2024.108958] [Received: 03/22/2024] [Revised: 07/02/2024] [Accepted: 07/26/2024]
Abstract
Hematoxylin and eosin (H&E) staining is a crucial technique for diagnosing glioma, allowing direct observation of tissue structures. However, the H&E staining workflow requires intricate processing, specialized laboratory infrastructure, and specialist pathologists, rendering it expensive, labor-intensive, and time-consuming. In view of these considerations, we combine deep learning with hyperspectral imaging, aiming to accurately and rapidly convert hyperspectral images into virtual H&E staining images. The method overcomes the limitations of H&E staining by capturing tissue information at different wavelengths, providing tissue composition information as comprehensive and detailed as realistic H&E staining. Among the various generator structures compared, the U-Net exhibits substantial overall advantages, as evidenced by a mean structural similarity index measure (SSIM) of 0.7731 and a peak signal-to-noise ratio (PSNR) of 23.3120, as well as the shortest training and inference times. A comprehensive software system for virtual H&E staining, which integrates CCD control, microscope control, and virtual H&E staining technology, was developed to facilitate fast intraoperative imaging, promote disease diagnosis, and accelerate the development of medical automation. The platform reconstructs large-scale virtual H&E staining images of gliomas at a high speed of 3.81 mm²/s. This innovative approach will pave the way for a novel, expedited route in histological staining.
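PSNR, one of the two image-fidelity metrics this entry reports alongside SSIM, reduces to a mean-squared-error computation. A minimal sketch (illustrative only, not the paper's implementation; the toy pixel lists are made up) of how PSNR is computed between a virtually stained image and its chemically stained reference:

```python
# Illustrative PSNR sketch; real evaluations would run on full image arrays.
import math

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images,
    given here as flat lists of pixel intensities."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no error, unbounded PSNR
    return 10.0 * math.log10(data_range ** 2 / mse)

# Toy 2x2 "images": the virtual stain differs from the reference by small errors.
ref = [120.0, 130.0, 125.0, 128.0]
virt = [121.0, 129.0, 126.0, 128.0]
print(f"PSNR: {psnr(ref, virt):.2f} dB")
```

Higher PSNR means the virtual stain is closer to the reference; values in the low-20s dB, as reported above, indicate moderate pixel-level agreement.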
Affiliation(s)
- Ruohua Zhu
  - National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Haiyang He
  - National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Yuzhe Chen
  - National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Ming Yi
  - National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Shengdong Ran
  - National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Chengde Wang
  - Department of Neurosurgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou 325000, China
- Yi Wang
  - National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
  - Wenzhou Institute, University of Chinese Academy of Sciences, Jinlian Road 1, Wenzhou, 325001, China
14. Zhou Z, Jiang Y, Sun Z, Zhang T, Feng W, Li G, Li R, Xing L. Virtual multiplexed immunofluorescence staining from non-antibody-stained fluorescence imaging for gastric cancer prognosis. EBioMedicine 2024; 107:105287. [PMID: 39154539] [PMCID: PMC11378090] [DOI: 10.1016/j.ebiom.2024.105287] [Received: 01/29/2024] [Revised: 07/11/2024] [Accepted: 08/01/2024]
Abstract
BACKGROUND Multiplexed immunofluorescence (mIF) staining, such as CODEX and MIBI, holds significant clinical value for fields such as disease diagnosis, biological research, and drug development. However, these techniques are often hindered by high time and cost requirements. METHODS Here we present a Multimodal-Attention-based virtual mIF Staining (MAS) system that utilises a deep learning model to extract potential antibody-related features from dual-modal non-antibody-stained fluorescence imaging, specifically autofluorescence (AF) and DAPI imaging. The MAS system simultaneously generates predictions of mIF with multiple survival-associated biomarkers in gastric cancer using self- and multi-attention learning mechanisms. FINDINGS Experimental results with 180 pathological slides from 94 patients with gastric cancer demonstrate the efficiency and consistent performance of the MAS system in both cancerous and noncancerous gastric tissues. Furthermore, we showcase the prognostic accuracy of the virtual mIF images of seven gastric cancer-related biomarkers, including CD3, CD20, FOXP3, PD1, CD8, CD163, and PD-L1, which is comparable to that obtained from standard mIF staining. INTERPRETATION The MAS system rapidly generates reliable multiplexed staining, greatly reducing the cost of mIF and improving clinical workflow. FUNDING Stanford 2022 HAI Seed Grant; National Institutes of Health 1R01CA256890.
Affiliation(s)
- Zixia Zhou
  - Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Yuming Jiang
  - Department of Radiation Oncology, Wake Forest University School of Medicine, Winston Salem, NC, 27109, USA
- Zepang Sun
  - Department of General Surgery & Guangdong Provincial Key Laboratory of Precision Medicine for Gastrointestinal Tumor, Nanfang Hospital, Southern Medical University, 510515, Guangzhou, China
- Taojun Zhang
  - Department of General Surgery & Guangdong Provincial Key Laboratory of Precision Medicine for Gastrointestinal Tumor, Nanfang Hospital, Southern Medical University, 510515, Guangzhou, China
- Wanying Feng
  - Department of Pathology, School of Basic Medical Sciences, Southern Medical University, 510515, Guangzhou, China
- Guoxin Li
  - Department of General Surgery & Guangdong Provincial Key Laboratory of Precision Medicine for Gastrointestinal Tumor, Nanfang Hospital, Southern Medical University, 510515, Guangzhou, China
- Ruijiang Li
  - Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Lei Xing
  - Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, 94305, USA
15. Latonen L, Koivukoski S, Khan U, Ruusuvuori P. Virtual staining for histology by deep learning. Trends Biotechnol 2024; 42:1177-1191. [PMID: 38480025] [DOI: 10.1016/j.tibtech.2024.02.009] [Received: 12/31/2023] [Revised: 02/14/2024] [Accepted: 02/15/2024]
Abstract
In pathology and biomedical research, histology is the cornerstone method for tissue analysis. Currently, the histological workflow consumes substantial amounts of chemicals, water, and time in staining procedures. Deep learning is now enabling digital replacement of parts of the histological staining procedure. In virtual staining, histological stains are created by training neural networks to produce stained images from an unstained tissue image, or by transferring information from one stain to another. These technical innovations provide more sustainable, rapid, and cost-effective alternatives to traditional histological pipelines, but their development is at an early phase and requires rigorous validation. In this review we cover the basic concepts of virtual staining for histology and provide future insights into the utilization of artificial intelligence (AI)-enabled virtual histology.
Affiliation(s)
- Leena Latonen
  - Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Sonja Koivukoski
  - Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Umair Khan
  - Institute of Biomedicine, University of Turku, Turku, Finland
16. Biswas S, Barma S. Feature Fusion GAN Based Virtual Staining on Plant Microscopy Images. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:1264-1273. [PMID: 38517710] [DOI: 10.1109/tcbb.2024.3380634]
Abstract
Virtual staining of microscopy specimens using GAN-based methods could resolve critical concerns of the manual staining process, as shown in recent studies on histopathology images. However, most of these works use a basic GAN framework that ignores microscopy image characteristics, and their performance was evaluated only with structural and error statistics (SSIM and PSNR) between synthetic and ground-truth images, without considering any color space, although virtual staining deals with color transformation. Major aspects of staining, such as color, contrast, focus, and image realness, were also ignored. Modifying the GAN architecture to incorporate microscopy image features may therefore be better suited to virtual staining, but its implementation needs to be examined against the various aspects of the staining process. We therefore designed a new feature-fusion GAN for virtual staining and assessed its performance with a multi-evaluation framework comprising numerous metrics: qualitative (histogram correlation of color and brightness), quantitative (SSIM and PSNR), focus aptitude (Brenner metrics and spectral moments), and influence on perception (semantic perceptual influence score). For experimental validation, cell boundaries were highlighted by two staining reagents, Safranin-O and Toluidine-Blue-O, on plant microscopy images of potato tuber. We evaluated virtually stained image quality against ground truth in RGB and YCbCr color spaces based on the defined metrics, and the results were found to be very consistent; the impact of feature fusion was also demonstrated. Collectively, this study could serve as a baseline for guiding architectural upgrades of deep pipelines for virtual staining of diverse microscopy modalities, followed by future benchmark methodologies or protocols.
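Among the focus-aptitude metrics this entry names, the Brenner measure is compact enough to state directly in code. A hedged sketch (illustrative only, with made-up toy images, not the authors' implementation) of the Brenner gradient, which sums squared intensity differences between pixels two columns apart so that sharper images score higher:

```python
# Illustrative Brenner focus-measure sketch on nested-list "images".
def brenner(image):
    """Brenner focus measure: sum of squared intensity differences between
    pixels two columns apart; larger values indicate sharper focus."""
    height = len(image)
    width = len(image[0])
    return sum(
        (image[y][x + 2] - image[y][x]) ** 2
        for y in range(height)
        for x in range(width - 2)
    )

sharp = [[0, 0, 9, 9], [0, 0, 9, 9]]    # toy image with a hard edge
blurry = [[0, 3, 6, 9], [0, 3, 6, 9]]   # the same edge, smoothed out
print(brenner(sharp), brenner(blurry))  # the sharp image scores higher
```

In a virtual-staining evaluation, comparing such focus scores between the synthetic and ground-truth images indicates whether the generator preserved fine detail rather than producing a blurred reconstruction.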
17. Wang Q, Akram AR, Dorward DA, Talas S, Monks B, Thum C, Hopgood JR, Javidi M, Vallejo M. Deep learning-based virtual H&E staining from label-free autofluorescence lifetime images. NPJ Imaging 2024; 2:17. [PMID: 38948152] [PMCID: PMC11213708] [DOI: 10.1038/s44303-024-00021-7] [Received: 01/10/2024] [Accepted: 06/11/2024]
Abstract
Label-free autofluorescence lifetime is a unique feature of the inherent fluorescence signals emitted by natural fluorophores in biological samples. Fluorescence lifetime imaging microscopy (FLIM) can capture these signals enabling comprehensive analyses of biological samples. Despite the fundamental importance and wide application of FLIM in biomedical and clinical sciences, existing methods for analysing FLIM images often struggle to provide rapid and precise interpretations without reliable references, such as histology images, which are usually unavailable alongside FLIM images. To address this issue, we propose a deep learning (DL)-based approach for generating virtual Hematoxylin and Eosin (H&E) staining. By combining an advanced DL model with a contemporary image quality metric, we can generate clinical-grade virtual H&E-stained images from label-free FLIM images acquired on unstained tissue samples. Our experiments also show that the inclusion of lifetime information, an extra dimension beyond intensity, results in more accurate reconstructions of virtual staining when compared to using intensity-only images. This advancement allows for the instant and accurate interpretation of FLIM images at the cellular level without the complexities associated with co-registering FLIM and histology images. Consequently, we are able to identify distinct lifetime signatures of seven different cell types commonly found in the tumour microenvironment, opening up new opportunities towards biomarker-free tissue histology using FLIM across multiple cancer types.
Affiliation(s)
- Qiang Wang
  - Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
  - Translational Healthcare Technologies Group, Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Ahsan R. Akram
  - Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
  - Translational Healthcare Technologies Group, Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- David A. Dorward
  - Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
  - Department of Pathology, Royal Infirmary of Edinburgh, Edinburgh, UK
- Sophie Talas
  - Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
  - Department of Pathology, Royal Infirmary of Edinburgh, Edinburgh, UK
- Basil Monks
  - Department of Pathology, Royal Infirmary of Edinburgh, Edinburgh, UK
- Chee Thum
  - Department of Pathology, Royal Infirmary of Edinburgh, Edinburgh, UK
- James R. Hopgood
  - School of Engineering, The University of Edinburgh, Edinburgh, UK
- Malihe Javidi
  - School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK
  - Department of Computer Engineering, Quchan University of Technology, Quchan, Iran
- Marta Vallejo
  - School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK
18. Shi Y, Chen C, Deng L, Zeng N, Li H, Liu Z, He H, He C, Ma H. Polarization enhancement mechanism from tissue staining in multispectral Mueller matrix microscopy. Opt Lett 2024; 49:3356-3359. [PMID: 38875619] [DOI: 10.1364/ol.523570] [Received: 03/11/2024] [Accepted: 05/14/2024]
Abstract
Mueller matrix microscopy can provide comprehensive polarization-related optical and structural information of biomedical samples without labels. It is therefore regarded as an emerging and powerful tool for pathological diagnosis. However, staining dyes have different optical properties and staining mechanisms, which can influence Mueller matrix microscopic measurements. In this Letter, we quantitatively analyze the polarization enhancement mechanism from hematoxylin and eosin (H&E) staining in multispectral Mueller matrix microscopy. We examine the influence of hematoxylin and eosin dyes on the Mueller matrix-derived polarization characteristics of fibrous tissue structures. Combined with Monte Carlo simulations, we explain how the dyes enhance diattenuation and linear retardance as the illumination wavelength changes. In addition, we demonstrate that by choosing an appropriate incident wavelength, more Mueller matrix polarimetric information can be visualized from the H&E-stained tissue sample. These findings lay the foundation for future Mueller matrix-assisted digital pathology.
19. Chen Z, Wong IHM, Dai W, Lo CTK, Wong TTW. Lung Cancer Diagnosis on Virtual Histologically Stained Tissue Using Weakly Supervised Learning. Mod Pathol 2024; 37:100487. [PMID: 38588884] [DOI: 10.1016/j.modpat.2024.100487] [Received: 12/16/2023] [Revised: 03/05/2024] [Accepted: 03/30/2024]
Abstract
Lung adenocarcinoma (LUAD) is the most common primary lung cancer and accounts for 40% of all lung cancer cases. The current gold standard for lung cancer analysis is based on the pathologists' interpretation of hematoxylin and eosin (H&E)-stained tissue slices viewed under a brightfield microscope or a digital slide scanner. Computational pathology using deep learning has been proposed to detect lung cancer on histology images. However, the histological staining workflow to acquire the H&E-stained images and the subsequent cancer diagnosis procedures are labor-intensive and time-consuming, with tedious sample preparation steps and repetitive manual interpretation, respectively. In this work, we propose a weakly supervised learning method for LUAD classification on label-free tissue slices with virtual histological staining. The autofluorescence images of label-free tissue with histopathological information can be converted into virtual H&E-stained images by a weakly supervised deep generative model. For the downstream LUAD classification task, we trained the attention-based multiple-instance learning model with different settings on the open-source LUAD H&E-stained whole-slide images (WSIs) dataset from The Cancer Genome Atlas (TCGA). The model was validated on 150 H&E-stained WSIs collected from patients in Queen Mary Hospital and Prince of Wales Hospital with an average area under the curve (AUC) of 0.961. The model also achieved an average AUC of 0.973 on 58 virtual H&E-stained WSIs, comparable to the results on 58 standard H&E-stained WSIs with an average AUC of 0.977. The attention heatmaps of virtual H&E-stained WSIs and ground-truth H&E-stained WSIs can indicate tumor regions of LUAD tissue slices. In conclusion, the proposed diagnostic workflow on virtual H&E-stained WSIs of label-free tissue is a rapid, cost-effective, and interpretable approach to assist clinicians in postoperative pathological examinations. The method could serve as a blueprint for other label-free imaging modalities and disease contexts.
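The attention-based multiple-instance learning classifier described above pools patch-level embeddings from one WSI into a slide-level representation using learned attention weights. A minimal numpy sketch of the pooling step, with toy random embeddings and untrained weights standing in for the authors' trained model:

```python
import numpy as np

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling over a bag of patch embeddings.

    instances: (N, D) array of patch embeddings from one slide.
    V: (D, H) projection, w: (H,) scoring vector (toy weights here).
    Returns the bag embedding and per-instance attention weights.
    """
    scores = np.tanh(instances @ V) @ w      # (N,) unnormalized attention
    a = np.exp(scores - scores.max())
    a = a / a.sum()                          # softmax over instances
    bag = a @ instances                      # (D,) attention-weighted average
    return bag, a

rng = np.random.default_rng(0)
patches = rng.normal(size=(100, 32))   # 100 patch embeddings, 32-dim
V = rng.normal(size=(32, 16))
w = rng.normal(size=16)
bag, attn = attention_mil_pool(patches, V, w)
```

The attention weights are what such models render as heatmaps over the slide: high-weight patches mark candidate tumor regions.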
Affiliation(s)
- Zhenghui Chen, Ivy H M Wong, Weixing Dai, Claudia T K Lo, Terence T W Wong: Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
20
Tweel JED, Ecclestone BR, Boktor M, Dinakaran D, Mackey JR, Reza PH. Automated Whole Slide Imaging for Label-Free Histology Using Photon Absorption Remote Sensing Microscopy. IEEE Trans Biomed Eng 2024; 71:1901-1912. [PMID: 38231822 DOI: 10.1109/tbme.2024.3355296]
Abstract
OBJECTIVE Pathologists rely on histochemical stains to impart contrast in thin translucent tissue samples, revealing tissue features necessary for identifying pathological conditions. However, the chemical labeling process is destructive and often irreversible or challenging to undo, imposing practical limits on the number of stains that can be applied to the same tissue section. Here we present an automated label-free whole slide scanner using a photon absorption remote sensing (PARS) microscope designed for imaging thin, transmissible samples. METHODS Peak SNR and in-focus acquisitions are achieved across entire tissue sections by using the scattering signal from the PARS detection beam to measure the optimal focal plane. Whole slide images (WSIs) are seamlessly stitched together using a custom contrast leveling algorithm. Identical tissue sections are subsequently H&E stained and brightfield imaged. The one-to-one WSIs from both modalities are visually and quantitatively compared. RESULTS PARS WSIs are presented at standard 40x magnification in malignant human breast and skin samples. We show correspondence of subcellular diagnostic details in both PARS and H&E WSIs and demonstrate virtual H&E staining of an entire PARS WSI. The one-to-one WSIs from both modalities show quantitative similarity in nuclear features and structural information. CONCLUSION PARS WSIs are compatible with existing digital pathology tools, and samples remain suitable for histochemical, immunohistochemical, and other staining techniques. SIGNIFICANCE This work is a critical advance for integrating label-free optical methods into standard histopathology workflows.
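The autofocus step in METHODS picks, per field, the focal plane that maximizes signal quality. A hypothetical sketch of that selection logic, using image variance as a stand-in sharpness metric for the paper's scattering-signal SNR measure (`best_focal_plane` and the synthetic stack are illustrative only):

```python
import numpy as np

def best_focal_plane(z_stack):
    """Pick the in-focus plane from a z-stack by maximizing a simple
    sharpness proxy (image variance), standing in for the scattering-signal
    SNR metric described in the paper.

    z_stack: (Z, H, W) array of candidate focal-plane images.
    Returns the index of the sharpest plane.
    """
    sharpness = [img.var() for img in z_stack]
    return int(np.argmax(sharpness))

rng = np.random.default_rng(1)
# Synthetic stack: plane 2 has the highest contrast (most "in focus").
stack = np.stack([rng.normal(0, s, size=(64, 64)) for s in (0.2, 0.5, 1.0, 0.4)])
idx = best_focal_plane(stack)  # → 2
```

In a scanner, this selection would run per field of view before acquisition, so every stitched tile is captured at its own optimal depth.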
21
Ma J, Chen H. Efficient Supervised Pretraining of Swin-Transformer for Virtual Staining of Microscopy Images. IEEE Trans Med Imaging 2024; 43:1388-1399. [PMID: 38010933 DOI: 10.1109/tmi.2023.3337253]
Abstract
Fluorescence staining is an important technique in the life sciences for labeling cellular constituents. However, it is time-consuming and makes simultaneous labeling of multiple constituents difficult. Virtual staining, which does not rely on chemical labeling, has therefore been introduced. Recently, deep learning models such as transformers have been applied to virtual staining tasks. However, their performance relies on large-scale pretraining, hindering their development in the field. To reduce the reliance on large amounts of computation and data, we construct a Swin-transformer model and propose an efficient supervised pretraining method based on the masked autoencoder (MAE). Specifically, we adopt downsampling and grid sampling to mask 75% of pixels and reduce the number of tokens. The pretraining time of our method is only 1/16 of that of the original MAE. We also design a supervised proxy task to predict stained images with multiple styles instead of masked pixels. Additionally, most virtual staining approaches are based on private datasets and evaluated by different metrics, making a fair comparison difficult. Therefore, we develop a standard benchmark based on three public datasets and build a baseline for the convenience of future researchers. We conduct extensive experiments on the three benchmark datasets, and the results show that the proposed method achieves the best performance both quantitatively and qualitatively. In addition, ablation studies illustrate the effectiveness of the proposed pretraining method. The benchmark and code are available at https://github.com/birkhoffkiki/CAS-Transformer.
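The pretraining trick above masks 75% of pixels via downsampling and grid sampling, so only one token per 2x2 cell survives. A small numpy sketch of the grid-sampling mask (the Swin backbone and the supervised proxy task are omitted):

```python
import numpy as np

def grid_mask(image, stride=2):
    """Grid-sample an image, keeping one pixel per stride x stride cell.

    This masks 1 - 1/stride**2 of the pixels (75% for stride=2) and shrinks
    the token count by the same factor -- the masking idea described in the
    paper, sketched without the transformer.
    """
    kept = image[::stride, ::stride]        # the visible (unmasked) pixels
    mask = np.ones(image.shape, dtype=bool)
    mask[::stride, ::stride] = False        # True = masked out
    return kept, mask

img = np.arange(64.0).reshape(8, 8)
kept, mask = grid_mask(img)
# kept is 4x4 (25% of pixels); exactly 75% of the mask is True
```

Because the kept pixels form a regular grid, they are themselves a valid downsampled image, which is what lets the encoder operate on far fewer tokens.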
22
Shen B, Li Z, Pan Y, Guo Y, Yin Z, Hu R, Qu J, Liu L. Noninvasive Nonlinear Optical Computational Histology. Adv Sci (Weinh) 2024; 11:e2308630. [PMID: 38095543 PMCID: PMC10916666 DOI: 10.1002/advs.202308630]
Abstract
Cancer remains a global health challenge, demanding early detection and accurate diagnosis for improved patient outcomes. An intelligent paradigm is introduced that elevates label-free nonlinear optical imaging with contrastive patch-wise learning, yielding stain-free nonlinear optical computational histology (NOCH). NOCH enables swift, precise diagnostic analysis of fresh tissues, reducing patient anxiety and healthcare costs. Nonlinear modalities are evaluated, including stimulated Raman scattering and multiphoton imaging, for their ability to enhance tumor microenvironment sensitivity, pathological analysis, and cancer examination. Quantitative analysis confirmed that NOCH images accurately reproduce nuclear morphometric features across different cancer stages. Key diagnostic features, such as nuclear morphology, size, and nuclear-cytoplasmic contrast, are well preserved. NOCH models also demonstrate promising generalization when applied to other pathological tissues. The study unites label-free nonlinear optical imaging with histopathology using contrastive learning to establish stain-free computational histology. NOCH provides a rapid, non-invasive, and precise approach to surgical pathology, holding immense potential for revolutionizing cancer diagnosis and surgical interventions.
Affiliation(s)
- Binglin Shen, Zhenglin Li, Rui Hu, Junle Qu, Liwei Liu: Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Ying Pan: China–Japan Union Hospital of Jilin University, Changchun 130033, China
- Yuan Guo: Shaanxi Provincial Cancer Hospital, Xi'an 710065, China
- Zongyi Yin: Shenzhen University General Hospital, Shenzhen 518055, China
23
Li Y, Pillar N, Li J, Liu T, Wu D, Sun S, Ma G, de Haan K, Huang L, Zhang Y, Hamidi S, Urisman A, Keidar Haran T, Wallace WD, Zuckerman JE, Ozcan A. Virtual histological staining of unlabeled autopsy tissue. Nat Commun 2024; 15:1684. [PMID: 38396004 PMCID: PMC10891155 DOI: 10.1038/s41467-024-46077-2]
Abstract
Traditional histochemical staining of post-mortem samples often suffers from inferior staining quality due to autolysis caused by delayed fixation of cadaver tissue, and such chemical staining procedures covering large tissue areas demand substantial labor, cost, and time. Here, we demonstrate virtual staining of autopsy tissue using a trained neural network to rapidly transform autofluorescence images of label-free autopsy tissue sections into brightfield equivalent images, matching hematoxylin and eosin (H&E)-stained versions of the same samples. The trained model can effectively accentuate nuclear, cytoplasmic, and extracellular features in new autopsy tissue samples that experienced severe autolysis, including previously unseen COVID-19 samples, where traditional histochemical staining fails to provide consistent staining quality. This virtual autopsy staining technique provides a rapid and resource-efficient solution to generate artifact-free H&E stains despite severe autolysis and cell death, also reducing the labor, cost, and infrastructure requirements associated with standard histochemical staining.
Affiliation(s)
- Yuzhu Li, Nir Pillar, Jingxi Li, Tairan Liu, Kevin de Haan, Luzhe Huang, Yijie Zhang: Electrical and Computer Engineering Department; Bioengineering Department; and California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Di Wu, Songyu Sun: Computer Science Department, University of California, Los Angeles, CA 90095, USA
- Guangdong Ma: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; School of Physics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Sepehr Hamidi, Jonathan E Zuckerman: Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
- Anatoly Urisman: Department of Pathology, University of California, San Francisco, CA 94143, USA
- Tal Keidar Haran: Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem 91120, Israel
- William Dean Wallace: Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Aydogan Ozcan: Electrical and Computer Engineering Department; Bioengineering Department; California NanoSystems Institute (CNSI); and Department of Surgery, University of California, Los Angeles, CA 90095, USA
24
Sun H, Li J, Murphy RF. Expanding the coverage of spatial proteomics: a machine learning approach. Bioinformatics 2024; 40:btae062. [PMID: 38310340 PMCID: PMC10873576 DOI: 10.1093/bioinformatics/btae062]
Abstract
MOTIVATION Multiplexed protein imaging methods use a chosen set of markers and provide valuable information about complex tissue structure and cellular heterogeneity. However, the number of markers that can be measured in the same tissue sample is inherently limited. RESULTS In this paper, we present an efficient method to choose a minimal predictive subset of markers that for the first time allows the prediction of full images for a much larger set of markers. We demonstrate that our approach also outperforms previous methods for predicting cell-level protein composition. Most importantly, we demonstrate that our approach can be used to select a marker set that enables prediction of a much larger set than could be measured concurrently. AVAILABILITY AND IMPLEMENTATION All code and intermediate results are available in a Reproducible Research Archive at https://github.com/murphygroup/CODEXPanelOptimization.
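The panel-selection problem above can be illustrated with a greedy forward selection: repeatedly add the marker whose inclusion best lets a simple linear model predict the remaining markers. This is a simplified sketch under an assumed linear-predictability criterion, not the authors' actual model or scoring:

```python
import numpy as np

def greedy_marker_panel(X, k):
    """Greedily pick k marker columns of X (cells x markers) that best
    linearly predict the remaining markers (least-squares residual error).

    A toy stand-in for predictive marker-panel selection; the real method
    predicts full images, not just per-cell intensities.
    """
    n, m = X.shape
    chosen = []
    for _ in range(k):
        best, best_err = None, np.inf
        for j in range(m):
            if j in chosen:
                continue
            cols = chosen + [j]
            A = np.column_stack([X[:, cols], np.ones(n)])   # add intercept
            rest = [i for i in range(m) if i not in cols]
            coef, *_ = np.linalg.lstsq(A, X[:, rest], rcond=None)
            err = ((A @ coef - X[:, rest]) ** 2).sum()
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
    return chosen

rng = np.random.default_rng(2)
base = rng.normal(size=(200, 2))
# Markers 0 and 1 are independent drivers; 2 and 3 are linear mixtures.
X = np.column_stack([base[:, 0], base[:, 1],
                     base @ [1.0, 0.5], base @ [0.3, -1.0]])
panel = greedy_marker_panel(X, 2)
```

On this synthetic data any 2-marker panel spanning the two drivers reconstructs the other markers almost exactly, which is the behavior the selection exploits.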
Affiliation(s)
- Huangqingbo Sun, Jiayi Li, Robert F Murphy: Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA 15213, United States
25
Levy JJ, Davis MJ, Chacko RS, Davis MJ, Fu LJ, Goel T, Pamal A, Nafi I, Angirekula A, Suvarna A, Vempati R, Christensen BC, Hayden MS, Vaickus LJ, LeBoeuf MR. Intraoperative margin assessment for basal cell carcinoma with deep learning and histologic tumor mapping to surgical site. NPJ Precis Oncol 2024; 8:2. [PMID: 38172524 PMCID: PMC10764333 DOI: 10.1038/s41698-023-00477-7]
Abstract
Successful treatment of solid cancers relies on complete surgical excision of the tumor, either for definitive treatment or before adjuvant therapy. Intraoperative and postoperative radial sectioning, the most common form of margin assessment, can lead to incomplete excision and increase the risk of recurrence and repeat procedures. Mohs micrographic surgery is associated with complete removal of basal cell and squamous cell carcinoma through real-time margin assessment of 100% of the peripheral and deep margins. Real-time assessment in many tumor types is constrained by tissue size, complexity, and specimen processing/assessment time during general anesthesia. We developed an artificial intelligence platform to reduce tissue preprocessing and histological assessment time through automated grossing recommendations and mapping and orientation of the tumor to the surgical specimen. Using basal cell carcinoma as a model system, our results demonstrate that this approach can address surgical laboratory efficiency bottlenecks for rapid and complete intraoperative margin assessment.
Affiliation(s)
- Joshua J Levy: Department of Pathology and Laboratory Medicine and Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA 90048, USA; Department of Dermatology, Department of Epidemiology, and Program in Quantitative Biomedical Sciences, Geisel School of Medicine at Dartmouth, Hanover, NH 03756, USA; Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH 03756, USA
- Matthew J Davis, Michael J Davis, Matthew S Hayden, Matthew R LeBoeuf: Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH 03756, USA
- Lucy J Fu: Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
- Tarushii Goel: Thomas Jefferson High School for Science and Technology, Alexandria, VA 22312, USA; Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Akash Pamal: Thomas Jefferson High School for Science and Technology, Alexandria, VA 22312, USA; University of Virginia, Charlottesville, VA 22903, USA
- Irfan Nafi: Thomas Jefferson High School for Science and Technology, Alexandria, VA 22312, USA; Stanford University, Palo Alto, CA 94305, USA
- Abhinav Angirekula: Thomas Jefferson High School for Science and Technology, Alexandria, VA 22312, USA; University of Illinois Urbana-Champaign, Champaign, IL 61820, USA
- Anish Suvarna, Ram Vempati: Thomas Jefferson High School for Science and Technology, Alexandria, VA 22312, USA
- Brock C Christensen: Department of Dermatology, Department of Molecular and Systems Biology, and Department of Community and Family Medicine, Geisel School of Medicine at Dartmouth, Hanover, NH 03756, USA
- Louis J Vaickus: Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH 03756, USA
26
Cazzaniga G, Rossi M, Eccher A, Girolami I, L'Imperio V, Van Nguyen H, Becker JU, Bueno García MG, Sbaraglia M, Dei Tos AP, Gambaro G, Pagni F. Time for a full digital approach in nephropathology: a systematic review of current artificial intelligence applications and future directions. J Nephrol 2024; 37:65-76. [PMID: 37768550 PMCID: PMC10920416 DOI: 10.1007/s40620-023-01775-w]
Abstract
INTRODUCTION Artificial intelligence (AI) integration in nephropathology has been growing rapidly in recent years, facing several challenges including the wide range of histological techniques used, the low occurrence of certain diseases, and the need for data sharing. This narrative review retraces the history of AI in nephropathology and provides insights into potential future developments. METHODS Electronic searches in PubMed-MEDLINE and Embase were performed to retrieve pertinent articles from the literature. Works about automated image analysis or the application of an AI algorithm on non-neoplastic kidney histological samples were included and analyzed to extract information such as publication year, AI task, and learning type. Preprint servers and reviews were not included. RESULTS Seventy-six original research articles were selected. Most of the studies were conducted in the United States in the last 7 years. To date, research has mainly addressed relatively easy tasks, like single-stain glomerular segmentation. However, there is a trend towards developing more complex tasks, such as glomerular multi-stain classification. CONCLUSION Deep learning has been used to identify patterns in complex histopathology data and looks promising for the comprehensive assessment of renal biopsy, through the use of multiple stains and virtual staining techniques. Hybrid and collaborative learning approaches have also been explored to utilize large amounts of unlabeled data. A diverse team of experts, including nephropathologists, computer scientists, and clinicians, is crucial for the development of AI systems for nephropathology. Collaborative efforts among multidisciplinary experts result in clinically relevant and effective AI tools.
Affiliation(s)
- Giorgio Cazzaniga, Vincenzo L'Imperio, Fabio Pagni: Department of Medicine and Surgery, Pathology, Fondazione IRCCS San Gerardo dei Tintori, Università di Milano-Bicocca, Monza, Italy
- Mattia Rossi, Giovanni Gambaro: Division of Nephrology, Department of Medicine, University of Verona, Piazzale Aristide Stefani 1, 37126 Verona, Italy
- Albino Eccher: Department of Pathology and Diagnostics, University and Hospital Trust of Verona, P.le Stefani 1, 37126 Verona, Italy; Department of Medical and Surgical Sciences for Children and Adults, University of Modena and Reggio Emilia, University Hospital of Modena, Modena, Italy
- Ilaria Girolami: Department of Pathology and Diagnostics, University and Hospital Trust of Verona, P.le Stefani 1, 37126 Verona, Italy
- Hien Van Nguyen: Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77004, USA
- Jan Ulrich Becker: Institute of Pathology, University Hospital of Cologne, Cologne, Germany
- María Gloria Bueno García: VISILAB Research Group, E.T.S. Ingenieros Industriales, University of Castilla-La Mancha, Ciudad Real, Spain
- Marta Sbaraglia, Angelo Paolo Dei Tos: Department of Pathology, Azienda Ospedale-Università Padova, Padua, Italy; Department of Medicine, University of Padua School of Medicine, Padua, Italy
27
Bao S, Lee HH, Yang Q, Remedios LW, Deng R, Cui C, Cai LY, Xu K, Yu X, Chiron S, Li Y, Patterson NH, Wang Y, Li J, Liu Q, Lau KS, Roland JT, Coburn LA, Wilson KT, Landman BA, Huo Y. Alleviating tiling effect by random walk sliding window in high-resolution histological whole slide image synthesis. Proc Mach Learn Res 2024; 227:1406-1422. [PMID: 38993526 PMCID: PMC11238901]
Abstract
Multiplex immunofluorescence (MxIF) is an advanced molecular imaging technique that can simultaneously provide biologists with multiple (i.e., more than 20) molecular markers on a single histological tissue section. Unfortunately, due to imaging restrictions, the more routinely used hematoxylin and eosin (H&E) stain is typically unavailable with MxIF on the same tissue section. As biological H&E staining is not feasible, previous efforts have been made to obtain H&E whole slide images (WSIs) from MxIF via deep learning empowered virtual staining. However, the tiling effect is a long-standing problem in high-resolution WSI-wise synthesis, and MxIF-to-H&E synthesis is no exception. Limited by computational resources, cross-stain image synthesis is typically performed at the patch level; thus, discontinuous intensities may be visible along patch boundaries when the individual patches are assembled back into a WSI. In this work, we propose a deep learning based unpaired high-resolution image synthesis method to obtain virtual H&E WSIs from MxIF WSIs (each with 27 markers/stains) with reduced tiling effects. Briefly, we first extend the CycleGAN framework by adding simultaneous nuclei and mucin segmentation supervision as spatial constraints. Then, we introduce a random walk sliding window shifting strategy during the optimized inference stage to alleviate the tiling effects. The validation results show that our spatially constrained synthesis method achieves a 56% performance gain on the downstream cell segmentation task. The proposed inference method reduces the tiling effects while using 50% fewer computation resources and without compromising performance. The proposed random sliding window inference method is a plug-and-play module that can be generalized to other high-resolution WSI image synthesis applications. The source code and our proposed model are available at https://github.com/MASILab/RandomWalkSlidingWindow.git.
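The shifted-window idea can be illustrated by running patch-wise inference several times with a randomly offset tiling grid and averaging the overlapping predictions, so no single patch boundary persists in the output. A simplified numpy sketch (the paper's strategy shifts windows via a random walk inside an optimized CycleGAN inference stage, not the generic `predict` callable assumed here):

```python
import numpy as np

def random_shift_inference(image, predict, patch=32, passes=4, rng=None):
    """Average patch-wise predictions over several passes, each with a
    random global offset of the tiling grid, so patch boundaries fall in
    different places on every pass and seams average out.
    """
    if rng is None:
        rng = np.random.default_rng()
    H, W = image.shape
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for _ in range(passes):
        dy, dx = rng.integers(0, patch, size=2)   # random grid offset
        for y in range(-dy, H, patch):
            for x in range(-dx, W, patch):
                y0, x0 = max(y, 0), max(x, 0)
                y1, x1 = min(y + patch, H), min(x + patch, W)
                if y0 >= y1 or x0 >= x1:
                    continue
                acc[y0:y1, x0:x1] += predict(image[y0:y1, x0:x1])
                cnt[y0:y1, x0:x1] += 1
    return acc / cnt

out = random_shift_inference(np.ones((64, 64)), lambda p: p * 2.0,
                             rng=np.random.default_rng(0))
```

Because every pixel is covered in every pass, the running count never hits zero and the averaging is well defined even at image borders.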
Affiliation(s)
- Shunxing Bao: Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Ho Hin Lee, Qi Yang, Lucas W Remedios, Ruining Deng, Can Cui, Kaiwen Xu, Xin Yu: Department of Computer Science, Vanderbilt University, Nashville, TN, USA
- Leon Y Cai: Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Sophie Chiron: Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Yike Li: Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Yaohong Wang: Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA
- Jia Li: Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
- Qi Liu: Department of Biostatistics and Center for Quantitative Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Ken S Lau: Center for Quantitative Sciences and Epithelial Biology Center, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Cell and Developmental Biology, Vanderbilt University School of Medicine, Nashville, TN, USA
- Joseph T Roland: Epithelial Biology Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Lori A Coburn: Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, and Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Center for Mucosal Inflammation and Cancer, Nashville, TN, USA; Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN, USA
- Keith T Wilson: Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, and Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Center for Mucosal Inflammation and Cancer, Nashville, TN, USA; Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN, USA; Program in Cancer Biology, Vanderbilt University School of Medicine, Nashville, TN, USA
- Bennett A Landman: Electrical and Computer Engineering, Department of Computer Science, and Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Yuankai Huo: Electrical and Computer Engineering and Department of Computer Science, Vanderbilt University, Nashville, TN, USA
28
Samueli B, Aizenberg N, Shaco-Levy R, Katzav A, Kezerle Y, Krausz J, Mazareb S, Niv-Drori H, Peled HB, Sabo E, Tobar A, Asa SL. Complete digital pathology transition: A large multi-center experience. Pathol Res Pract 2024; 253:155028. [PMID: 38142526 DOI: 10.1016/j.prp.2023.155028]
Abstract
INTRODUCTION Transitioning from glass slide pathology to digital pathology for primary diagnostics requires an appropriate laboratory information system, an image management system, and slide scanners; it also reinforces the need for sophisticated pathology informatics including synoptic reporting. Previous reports have discussed the transition itself and relevant considerations for it, but not the selection criteria and considerations for the infrastructure. OBJECTIVE To describe the process used to evaluate slide scanners, image management systems, and synoptic reporting systems for a large multisite institution. METHODS Six network hospitals evaluated six slide scanners, three image management systems, and three synoptic reporting systems. Scanners were evaluated based on the quality of image, speed, ease of operation, and special capabilities (including z-stacking, fluorescence and others). Image management and synoptic reporting systems were evaluated for their ease of use and capacity. RESULTS Among the scanners evaluated, the Leica GT450 produced the highest quality images, while the 3DHistech Pannoramic provided fluorescence and superior z-stacking. The newest generation of scanners, released relatively recently, performed better than slightly older scanners from major manufacturers Although the Olympus VS200 was not fully vetted due to not meeting all inclusion criteria, it is discussed herein due to its exceptional versatility. For Image Management Software, the authors believe that Sectra is, at the time of writing the best developed option, but this could change in the very near future as other systems improve their capabilities. All synoptic reporting systems performed impressively. 
CONCLUSIONS Specifics regarding the quality and capabilities of different components will change rapidly with time, but large pathology practices considering such a transition should be aware of the issues discussed and evaluate the most current generation to arrive at appropriate conclusions.
Affiliation(s)
- Benzion Samueli
- Department of Pathology, Soroka University Medical Center, P.O. Box 151, Be'er Sheva 8410101, Israel; Faculty of Health Sciences, Ben Gurion University of the Negev, P.O. Box 653, Be'er Sheva 8410501, Israel.
| | - Natalie Aizenberg
- Department of Pathology, Soroka University Medical Center, P.O. Box 151, Be'er Sheva 8410101, Israel; Faculty of Health Sciences, Ben Gurion University of the Negev, P.O. Box 653, Be'er Sheva 8410501, Israel
| | - Ruthy Shaco-Levy
- Department of Pathology, Soroka University Medical Center, P.O. Box 151, Be'er Sheva 8410101, Israel; Faculty of Health Sciences, Ben Gurion University of the Negev, P.O. Box 653, Be'er Sheva 8410501, Israel; Department of Pathology, Barzilai Medical Center, 2 Ha-Histadrut St, Ashkelon 7830604, Israel
| | - Aviva Katzav
- Pathology Institute, Meir Medical Center, Kfar Saba 4428164, Israel
| | - Yarden Kezerle
- Department of Pathology, Soroka University Medical Center, P.O. Box 151, Be'er Sheva 8410101, Israel; Faculty of Health Sciences, Ben Gurion University of the Negev, P.O. Box 653, Be'er Sheva 8410501, Israel
| | - Judit Krausz
- Department of Pathology, HaEmek Medical Center, 21 Yitzhak Rabin Ave, Afula 183411, Israel
| | - Salam Mazareb
- Department of Pathology, Carmel Medical Center, 7 Michal Street, Haifa 3436212, Israel
| | - Hagit Niv-Drori
- Department of Pathology, Rabin Medical Center, 39 Jabotinsky St, Petah Tikva 4941492, Israel; Faculty of Medicine, Tel Aviv University, P.O. Box 39040, Tel Aviv 6139001, Israel
| | - Hila Belhanes Peled
- Department of Pathology, HaEmek Medical Center, 21 Yitzhak Rabin Ave, Afula 183411, Israel
| | - Edmond Sabo
- Department of Pathology, Carmel Medical Center, 7 Michal Street, Haifa 3436212, Israel; Rappaport Faculty of Medicine, Technion Israel Institute of Technology, Haifa 3525433, Israel
| | - Ana Tobar
- Department of Pathology, Rabin Medical Center, 39 Jabotinsky St, Petah Tikva 4941492, Israel; Faculty of Medicine, Tel Aviv University, P.O. Box 39040, Tel Aviv 6139001, Israel
| | - Sylvia L Asa
- Institute of Pathology, University Hospitals Cleveland Medical Center, Case Western Reserve University, 11100 Euclid Avenue, Room 204, Cleveland, OH 44106, USA
| |
|
29
|
Zhao J, Wang X, Zhu J, Chukwudi C, Finebaum A, Zhang J, Yang S, He S, Saeidi N. PhaseFIT: live-organoid phase-fluorescent image transformation via generative AI. LIGHT, SCIENCE & APPLICATIONS 2023; 12:297. [PMID: 38097545 PMCID: PMC10721831 DOI: 10.1038/s41377-023-01296-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Revised: 09/02/2023] [Accepted: 09/24/2023] [Indexed: 12/17/2023]
Abstract
Organoid models have provided a powerful platform for mechanistic investigations into fundamental biological processes involved in the development and function of organs. Despite the potential for image-based phenotypic quantification of organoids, their complex 3D structure, and the time-consuming and labor-intensive nature of immunofluorescent staining present significant challenges. In this work, we developed a virtual painting system, PhaseFIT (phase-fluorescent image transformation) utilizing customized and morphologically rich 2.5D intestinal organoids, which generate virtual fluorescent images for phenotypic quantification via accessible and low-cost organoid phase images. This system is driven by a novel segmentation-informed deep generative model that specializes in segmenting overlapping and closely spaced objects. The model enables an annotation-free digital transformation from phase-contrast to multi-channel fluorescent images. The virtual painting results of nuclei, secretory cell markers, and stem cells demonstrate that PhaseFIT outperforms the existing deep learning-based stain transformation models by generating fine-grained visual content. We further validated the efficiency and accuracy of PhaseFIT to quantify the impacts of three compounds on crypt formation, cell population, and cell stemness. PhaseFIT is the first deep learning-enabled virtual painting system focused on live organoids, enabling large-scale, informative, and efficient organoid phenotypic quantification. PhaseFIT would enable the use of organoids in high-throughput drug screening applications.
Affiliation(s)
- Junhan Zhao
- Division of Gastrointestinal and Oncologic Surgery, Department of Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA
- Department of Surgery, Center for Engineering in Medicine and Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, 02115, USA
| | - Xiyue Wang
- College of Biomedical Engineering, Sichuan University, Chengdu, Sichuan, 610065, China
| | - Junyou Zhu
- Division of Gastrointestinal and Oncologic Surgery, Department of Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA
- Department of Surgery, Center for Engineering in Medicine and Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA
- Shriners Hospital for Children-Boston, Boston, MA, 02114, USA
| | - Chijioke Chukwudi
- Division of Gastrointestinal and Oncologic Surgery, Department of Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA
- Department of Surgery, Center for Engineering in Medicine and Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA
- Shriners Hospital for Children-Boston, Boston, MA, 02114, USA
| | - Andrew Finebaum
- Division of Gastrointestinal and Oncologic Surgery, Department of Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA
| | - Jun Zhang
- Tencent AI Lab, Shenzhen, Guangdong, 518057, China
| | - Sen Yang
- Tencent AI Lab, Shenzhen, Guangdong, 518057, China
| | - Shijie He
- Division of Gastrointestinal and Oncologic Surgery, Department of Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA.
- Department of Surgery, Center for Engineering in Medicine and Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA.
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA.
- Shriners Hospital for Children-Boston, Boston, MA, 02114, USA.
| | - Nima Saeidi
- Division of Gastrointestinal and Oncologic Surgery, Department of Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA.
- Department of Surgery, Center for Engineering in Medicine and Surgery, Massachusetts General Hospital, Boston, MA, 02114, USA.
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA.
- Shriners Hospital for Children-Boston, Boston, MA, 02114, USA.
- Harvard Stem Cell Institute, Cambridge, MA, 02138, USA.
| |
|
30
|
Astratov VN, Sahel YB, Eldar YC, Huang L, Ozcan A, Zheludev N, Zhao J, Burns Z, Liu Z, Narimanov E, Goswami N, Popescu G, Pfitzner E, Kukura P, Hsiao YT, Hsieh CL, Abbey B, Diaspro A, LeGratiet A, Bianchini P, Shaked NT, Simon B, Verrier N, Debailleul M, Haeberlé O, Wang S, Liu M, Bai Y, Cheng JX, Kariman BS, Fujita K, Sinvani M, Zalevsky Z, Li X, Huang GJ, Chu SW, Tzang O, Hershkovitz D, Cheshnovsky O, Huttunen MJ, Stanciu SG, Smolyaninova VN, Smolyaninov II, Leonhardt U, Sahebdivan S, Wang Z, Luk’yanchuk B, Wu L, Maslov AV, Jin B, Simovski CR, Perrin S, Montgomery P, Lecler S. Roadmap on Label-Free Super-Resolution Imaging. LASER & PHOTONICS REVIEWS 2023; 17:2200029. [PMID: 38883699 PMCID: PMC11178318 DOI: 10.1002/lpor.202200029] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Indexed: 06/18/2024]
Abstract
Label-free super-resolution (LFSR) imaging relies on light-scattering processes in nanoscale objects without a need for the fluorescent (FL) staining required in super-resolved FL microscopy. The objectives of this Roadmap are to present a comprehensive vision of the developments and the state of the art in this field, and to discuss the resolution boundaries and hurdles that need to be overcome to break the classical diffraction limit of LFSR imaging. The scope of this Roadmap spans from the advanced interference detection techniques, where the diffraction-limited lateral resolution is combined with unsurpassed axial and temporal resolution, to techniques with true lateral super-resolution capability which are based on understanding resolution as an information science problem, on using novel structured illumination, near-field scanning, and nonlinear optics approaches, and on designing superlenses based on nanoplasmonics, metamaterials, transformation optics, and microsphere-assisted approaches. To this end, this Roadmap brings under the same umbrella researchers from the physics and biomedical optics communities in which such studies have often been developing separately. The ultimate intent of this paper is to create a vision for the current and future developments of LFSR imaging based on its physical mechanisms and to provide an opening for the series of articles in this field.
Affiliation(s)
- Vasily N. Astratov
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Yair Ben Sahel
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Yonina C. Eldar
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
- David Geffen School of Medicine, University of California, Los Angeles, California 90095, USA
| | - Nikolay Zheludev
- Optoelectronics Research Centre, University of Southampton, Southampton, SO17 1BJ, UK
- Centre for Disruptive Photonic Technologies, The Photonics Institute, School of Physical and Mathematical Sciences, Nanyang Technological University, 637371, Singapore
| | - Junxiang Zhao
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zachary Burns
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zhaowei Liu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Material Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Evgenii Narimanov
- School of Electrical Engineering, and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
| | - Neha Goswami
- Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Gabriel Popescu
- Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Emanuel Pfitzner
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Philipp Kukura
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Yi-Teng Hsiao
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica, 1 Roosevelt Rd. Sec. 4, Taipei 10617, Taiwan
| | - Chia-Lung Hsieh
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica, 1 Roosevelt Rd. Sec. 4, Taipei 10617, Taiwan
| | - Brian Abbey
- Australian Research Council Centre of Excellence for Advanced Molecular Imaging, La Trobe University, Melbourne, Victoria, Australia
- Department of Chemistry and Physics, La Trobe Institute for Molecular Science (LIMS), La Trobe University, Melbourne, Victoria, Australia
| | - Alberto Diaspro
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Aymeric LeGratiet
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- Université de Rennes, CNRS, Institut FOTON - UMR 6082, F-22305 Lannion, France
| | - Paolo Bianchini
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Natan T. Shaked
- Tel Aviv University, Faculty of Engineering, Department of Biomedical Engineering, Tel Aviv 6997801, Israel
| | - Bertrand Simon
- LP2N, Institut d’Optique Graduate School, CNRS UMR 5298, Université de Bordeaux, Talence France
| | - Nicolas Verrier
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | | | - Olivier Haeberlé
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | - Sheng Wang
- School of Physics and Technology, Wuhan University, China
- Wuhan Institute of Quantum Technology, China
| | - Mengkun Liu
- Department of Physics and Astronomy, Stony Brook University, USA
- National Synchrotron Light Source II, Brookhaven National Laboratory, USA
| | - Yeran Bai
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Ji-Xin Cheng
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Behjat S. Kariman
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Katsumasa Fujita
- Department of Applied Physics and the Advanced Photonics and Biosensing Open Innovation Laboratory (AIST); and the Transdimensional Life Imaging Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
| | - Moshe Sinvani
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Zeev Zalevsky
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Xiangping Li
- Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Institute of Photonics Technology, Jinan University, Guangzhou 510632, China
| | - Guan-Jie Huang
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Shi-Wei Chu
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Omer Tzang
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Dror Hershkovitz
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Ori Cheshnovsky
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Mikko J. Huttunen
- Laboratory of Photonics, Physics Unit, Tampere University, FI-33014, Tampere, Finland
| | - Stefan G. Stanciu
- Center for Microscopy – Microanalysis and Information Processing, Politehnica University of Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
| | - Vera N. Smolyaninova
- Department of Physics Astronomy and Geosciences, Towson University, 8000 York Rd., Towson, MD 21252, USA
| | - Igor I. Smolyaninov
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
| | - Ulf Leonhardt
- Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Sahar Sahebdivan
- EMTensor GmbH, TechGate, Donau-City-Strasse 1, 1220 Wien, Austria
| | - Zengbo Wang
- School of Computer Science and Electronic Engineering, Bangor University, Bangor, LL57 1UT, United Kingdom
| | - Boris Luk’yanchuk
- Faculty of Physics, Lomonosov Moscow State University, Moscow 119991, Russia
| | - Limin Wu
- Department of Materials Science and State Key Laboratory of Molecular Engineering of Polymers, Fudan University, Shanghai 200433, China
| | - Alexey V. Maslov
- Department of Radiophysics, University of Nizhny Novgorod, Nizhny Novgorod, 603022, Russia
| | - Boya Jin
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Constantin R. Simovski
- Department of Electronics and Nano-Engineering, Aalto University, FI-00076, Espoo, Finland
- Faculty of Physics and Engineering, ITMO University, 199034, St-Petersburg, Russia
| | - Stephane Perrin
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Paul Montgomery
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Sylvain Lecler
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| |
|
31
|
Aleksandrovych M, Strassberg M, Melamed J, Xu M. Polarization differential interference contrast microscopy with physics-inspired plug-and-play denoiser for single-shot high-performance quantitative phase imaging. BIOMEDICAL OPTICS EXPRESS 2023; 14:5833-5850. [PMID: 38021115 PMCID: PMC10659786 DOI: 10.1364/boe.499316] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Revised: 08/31/2023] [Accepted: 09/15/2023] [Indexed: 12/01/2023]
Abstract
We present single-shot high-performance quantitative phase imaging with a physics-inspired plug-and-play denoiser for polarization differential interference contrast (PDIC) microscopy. The quantitative phase is recovered by the alternating direction method of multipliers (ADMM), balancing total variation regularization and a pre-trained dense residual U-net (DRUNet) denoiser. The custom DRUNet uses the Tanh activation function to guarantee the symmetry requirement for phase retrieval. In addition, we introduce an adaptive strategy accelerating convergence and explicitly incorporating measurement noise. After validating this deep denoiser-enhanced PDIC microscopy on simulated data and phantom experiments, we demonstrated high-performance phase imaging of histological tissue sections. The phase retrieval by the denoiser-enhanced PDIC microscopy achieves significantly higher quality and accuracy than the solution based on Fourier transforms or the iterative solution with total variation regularization alone.
Affiliation(s)
- Mariia Aleksandrovych
- Dept. of Physics and Astronomy, Hunter College and the Graduate Center, The City University of New York, 695 Park Ave, New York, NY 10065, USA
| | - Mark Strassberg
- Dept. of Physics and Astronomy, Hunter College and the Graduate Center, The City University of New York, 695 Park Ave, New York, NY 10065, USA
| | - Jonathan Melamed
- Department of Pathology, New York University Langone School of Medicine, New York, NY 10016, USA
| | - Min Xu
- Dept. of Physics and Astronomy, Hunter College and the Graduate Center, The City University of New York, 695 Park Ave, New York, NY 10065, USA
| |
|
32
|
Wei S, Si L, Huang T, Du S, Yao Y, Dong Y, Ma H. Deep-learning-based cross-modality translation from Stokes image to bright-field contrast. JOURNAL OF BIOMEDICAL OPTICS 2023; 28:102911. [PMID: 37867633 PMCID: PMC10587695 DOI: 10.1117/1.jbo.28.10.102911] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Revised: 08/25/2023] [Accepted: 09/25/2023] [Indexed: 10/24/2023]
Abstract
Significance Mueller matrix (MM) microscopy has proven to be a powerful tool for probing microstructural characteristics of biological samples down to subwavelength scale. However, in clinical practice, doctors usually rely on bright-field microscopy images of stained tissue slides to identify characteristic features of specific diseases and make accurate diagnoses. Cross-modality translation based on polarization imaging helps pathologists analyze sample properties across different modalities more efficiently and consistently. Aim In this work, we propose a computational image translation technique based on deep learning to enable bright-field microscopy contrast using snapshot Stokes images of stained pathological tissue slides. Taking Stokes images as input instead of MM images allows the translated bright-field images to be unaffected by variations of light source and samples. Approach We adopted CycleGAN as the translation model to avoid requirements on co-registered image pairs in the training. This method can generate images that are equivalent to the bright-field images with different staining styles on the same region. Results Pathological slices of liver and breast tissues with hematoxylin and eosin staining and lung tissues with two types of immunohistochemistry staining, i.e., thyroid transcription factor-1 and Ki-67, were used to demonstrate the effectiveness of our method. The output results were evaluated by four image quality assessment methods. Conclusions By comparing the cross-modality translation performance with MM images, we found that the Stokes images, with the advantages of faster acquisition and independence from light intensity and image registration, can be well translated to bright-field images.
Affiliation(s)
- Shilong Wei
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
| | - Lu Si
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
| | - Tongyu Huang
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tsinghua University, Department of Biomedical Engineering, Beijing, China
| | - Shan Du
- University of Chinese Academy of Sciences, Shenzhen Hospital, Department of Pathology, Shenzhen, China
| | - Yue Yao
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
| | - Yang Dong
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
| | - Hui Ma
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tsinghua University, Department of Biomedical Engineering, Beijing, China
- Tsinghua University, Department of Physics, Beijing, China
| |
|
33
|
Liu X, Li B, Liu C, Ta D. Virtual Fluorescence Translation for Biological Tissue by Conditional Generative Adversarial Network. PHENOMICS (CHAM, SWITZERLAND) 2023; 3:408-420. [PMID: 37589024 PMCID: PMC10425324 DOI: 10.1007/s43657-023-00094-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 01/03/2023] [Accepted: 01/05/2023] [Indexed: 08/18/2023]
Abstract
Fluorescence labeling and imaging provide an opportunity to observe the structure of biological tissues, playing a crucial role in the field of histopathology. However, when labeling and imaging biological tissues, there are still some challenges, e.g., time-consuming tissue preparation steps, expensive reagents, and signal bias due to photobleaching. To overcome these limitations, we present a deep-learning-based method for fluorescence translation of tissue sections, which is achieved by a conditional generative adversarial network (cGAN). Experimental results from mouse kidney tissues demonstrate that the proposed method can predict the other types of fluorescence images from one raw fluorescence image, and can implement virtual multi-label fluorescent staining by merging the different generated fluorescence images as well. Moreover, the proposed method effectively reduces the time-consuming and laborious preparation involved in imaging, further saving cost and time. Supplementary Information The online version contains supplementary material available at 10.1007/s43657-023-00094-1.
Affiliation(s)
- Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433 China
- State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai, 200433 China
| | - Boyi Li
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433 China
| | - Chengcheng Liu
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433 China
| | - Dean Ta
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433 China
- Center for Biomedical Engineering, Fudan University, Shanghai, 200433 China
| |
|
34
|
Fanous MJ, Pillar N, Ozcan A. Digital staining facilitates biomedical microscopy. FRONTIERS IN BIOINFORMATICS 2023; 3:1243663. [PMID: 37564725 PMCID: PMC10411189 DOI: 10.3389/fbinf.2023.1243663] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Accepted: 07/17/2023] [Indexed: 08/12/2023] Open
Abstract
Traditional staining of biological specimens for microscopic imaging entails time-consuming, laborious, and costly procedures, in addition to producing inconsistent labeling and causing irreversible sample damage. In recent years, computational "virtual" staining using deep learning techniques has evolved into a robust and comprehensive application for streamlining the staining process without typical histochemical staining-related drawbacks. Such virtual staining techniques can also be combined with neural networks designed to correct various microscopy aberrations, such as out-of-focus or motion blur artifacts, and improve upon diffraction-limited resolution. Here, we highlight how such methods lead to a host of new opportunities that can significantly improve both sample preparation and imaging in biomedical microscopy.
Affiliation(s)
- Michael John Fanous
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States
| | - Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States
- Bioengineering Department, University of California, Los Angeles, CA, United States
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States
- Bioengineering Department, University of California, Los Angeles, CA, United States
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, United States
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, United States
| |
|
35
|
Deng F, Qin G, Chen Y, Zhang X, Zhu M, Hou M, Yao Q, Gu W, Wang C, Yang H, Jia X, Wu C, Peng H, Du H, Tang S. Multi-omics reveals 2-bromo-4,6-dinitroaniline (BDNA)-induced hepatotoxicity and the role of the gut-liver axis in rats. JOURNAL OF HAZARDOUS MATERIALS 2023; 457:131760. [PMID: 37285786 DOI: 10.1016/j.jhazmat.2023.131760] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Revised: 05/26/2023] [Accepted: 06/01/2023] [Indexed: 06/09/2023]
Abstract
2-Bromo-4,6-dinitroaniline (BDNA) is a widespread azo-dye-related hazardous pollutant. However, its reported adverse effects are limited to mutagenicity, genotoxicity, endocrine disruption, and reproductive toxicity. We systematically assessed the hepatotoxicity of BDNA exposure via pathological and biochemical examinations and explored the underlying mechanisms via integrative multi-omics analyses of the transcriptome, metabolome, and microbiome in rats. After 28 days of oral administration, compared with the control group, 100 mg/kg BDNA significantly triggered hepatotoxicity, upregulated toxicity indicators (e.g., HSI, ALT, and ARG1), and induced systemic inflammation (e.g., G-CSF, MIP-2, RANTES, and VEGF), dyslipidemia (e.g., TC and TG), and bile acid (BA) synthesis (e.g., CA, GCA, and GDCA). Transcriptomic and metabolomic analyses revealed broad perturbations in gene transcripts and metabolites involved in the representative pathways of liver inflammation (e.g., Hmox1, Spi1, L-methionine, valproic acid, and choline), steatosis (e.g., Nr0b2, Cyp1a1, Cyp1a2, Dusp1, Plin3, arachidonic acid, linoleic acid, and palmitic acid), and cholestasis (e.g., FXR/Nr1h4, Cdkn1a, Cyp7a1, and bilirubin). Microbiome analysis revealed reduced relative abundances of beneficial gut microbial taxa (e.g., Ruminococcaceae and Akkermansia muciniphila), which further contributed to the inflammatory response, lipid accumulation, and BA synthesis in the enterohepatic circulation. The observed effect concentrations were comparable to those in highly contaminated wastewaters, showcasing BDNA's hepatotoxic effects at environmentally relevant concentrations. These results shed light on the biomolecular mechanism and the important role of the gut-liver axis underpinning BDNA-induced cholestatic liver disorders in vivo.
Affiliation(s)
- Fuchang Deng, Yuanyuan Chen, Xu Zhang, Mu Zhu, Min Hou, Qiao Yao, Wen Gu, Chao Wang: China CDC Key Laboratory of Environment and Population Health, National Institute of Environmental Health, Chinese Center for Disease Control and Prevention, Beijing 100021, China
- Guangqiu Qin: Department of Preventive Medicine, Guangxi University of Chinese Medicine, Nanning 530200, China
- Hui Yang, Xudong Jia: NHC Key Laboratory of Food Safety Risk Assessment, China National Center for Food Safety Risk Assessment, Beijing 100021, China
- Chongming Wu: School of Chinese Materia Medica, Tianjin University of Traditional Chinese Medicine, Tianjin 301617, China
- Hui Peng: Department of Chemistry, University of Toronto, Toronto, Ontario M5S 3H6, Canada
- Huamao Du: College of Sericulture, Textile and Biomass Sciences, Southwest University, Chongqing 400715, China
- Song Tang: China CDC Key Laboratory of Environment and Population Health, National Institute of Environmental Health, Chinese Center for Disease Control and Prevention, Beijing 100021, China; Center for Global Health, School of Public Health, Nanjing Medical University, Nanjing, Jiangsu 211166, China
36
Salido J, Vallez N, González-López L, Deniz O, Bueno G. Comparison of deep learning models for digital H&E staining from unpaired label-free multispectral microscopy images. Comput Methods Programs Biomed 2023; 235:107528. [PMID: 37040684 DOI: 10.1016/j.cmpb.2023.107528]
Abstract
BACKGROUND AND OBJECTIVE This paper presents a quantitative comparison of three generative models for digital staining (also known as virtual staining) in the H&E (hematoxylin and eosin) modality, applied to five types of breast tissue, together with a qualitative evaluation of the results achieved with the best model. The process is based on images of unstained samples captured by a multispectral microscope, with prior dimensional reduction to three channels in the RGB range. METHODS The models compared are a conditional GAN (pix2pix), which requires aligned image pairs with/without staining, and two models that do not require image alignment: Cycle GAN (cycleGAN) and a contrastive learning-based model (CUT). The models are compared on the structural similarity and chromatic discrepancy between chemically stained samples and their digitally stained counterparts. The correspondence between images is obtained by digitally unstaining the chemically stained images with a model trained to guarantee the cyclic consistency of the generative models. RESULTS The comparison of the three models corroborates the visual evaluation of the results, showing the superiority of cycleGAN in both structural similarity with respect to chemical staining (mean SSIM ~ 0.95) and chromatic discrepancy (10%); the latter is measured by color quantization and the Earth Mover's Distance (EMD) between clusters. In addition, quality was evaluated through subjective psychophysical tests with three experts on the results of the best model (cycleGAN). CONCLUSIONS The results can be satisfactorily evaluated by metrics that use as reference a chemically stained sample and the digitally stained image of the same sample after digital unstaining. These metrics show that generative staining models guaranteeing cyclic consistency produce results closest to chemical H&E staining, consistent with the experts' qualitative evaluation.
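The two quantitative criteria named in this abstract, structural similarity and a cluster-based Earth Mover's Distance, can be sketched as follows. This is an illustrative re-implementation, not the authors' evaluation code: it uses a single-window (global) SSIM rather than the usual sliding-window variant, and a 1-D EMD between normalized histograms rather than EMD between color clusters.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over whole images (illustrative, not windowed)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def emd_1d(p, q):
    """EMD between two 1-D histograms equals the L1 distance of their CDFs."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

# Identical images give SSIM = 1; identical histograms give EMD = 0.
img = np.random.rand(64, 64)
print(round(global_ssim(img, img), 4))   # 1.0
print(emd_1d([1, 2, 3], [1, 2, 3]))      # 0.0
```

Higher SSIM and lower EMD both indicate a virtual stain closer to the chemical reference, which is the direction of the comparison reported above.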
Affiliation(s)
- Jesus Salido: IEEAC Dept. (ESI-UCLM), P de la Universidad 4, Ciudad Real, 13071, Spain
- Noelia Vallez, Oscar Deniz, Gloria Bueno: IEEAC Dept. (ETSII-UCLM), Avda. Camilo José Cela s/n, Ciudad Real, 13071, Spain
- Lucía González-López: Hospital Gral. Universitario de C.Real (HGUCR), C. Obispo Rafael Torija s/n, Ciudad Real, 13005, Spain
37
Khan U, Koivukoski S, Valkonen M, Latonen L, Ruusuvuori P. The effect of neural network architecture on virtual H&E staining: Systematic assessment of histological feasibility. Patterns (N Y) 2023; 4:100725. [PMID: 37223268 PMCID: PMC10201298 DOI: 10.1016/j.patter.2023.100725]
Abstract
Conventional histopathology has relied on chemical staining for over a century. Staining makes tissue sections visible to the human eye, but it is a tedious and labor-intensive procedure that alters the tissue irreversibly, preventing repeated use of the sample. Deep learning-based virtual staining can potentially alleviate these shortcomings. Here, we used standard brightfield microscopy on unstained tissue sections and studied the impact of increased network capacity on the resulting virtually stained H&E images. Using the generative adversarial network model pix2pix as a baseline, we observed that replacing simple convolutions with dense convolution units increased the structural similarity score, peak signal-to-noise ratio, and nuclei reproduction accuracy. We further showed highly accurate reproduction of histology, especially with increased network capacity, and demonstrated applicability to several tissues. Network architecture optimization can thus improve the image translation accuracy of virtual H&E staining, highlighting the potential of virtual staining to streamline histopathological analysis.
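Peak signal-to-noise ratio, one of the image-level metrics used in this assessment, has a short closed form; a minimal numpy sketch (my notation, not the paper's code):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """PSNR in dB: 10*log10(peak^2 / MSE) between a reference and a test image."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range**2 / mse)

a = np.zeros((8, 8))
b = a + 0.1                   # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(a, b), 6))   # 20.0
```

For virtual staining, `reference` would be the chemically stained image and `test` the network output; higher PSNR means smaller pixel-wise error.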
Affiliation(s)
- Umair Khan: University of Turku, Institute of Biomedicine, Turku 20014, Finland
- Sonja Koivukoski: University of Eastern Finland, Institute of Biomedicine, Kuopio 70211, Finland
- Mira Valkonen: Tampere University, Faculty of Medicine and Health Technology, Tampere 33100, Finland
- Leena Latonen: University of Eastern Finland, Institute of Biomedicine, Kuopio 70211, Finland; Foundation for the Finnish Cancer Institute, Helsinki 00290, Finland
- Pekka Ruusuvuori: University of Turku, Institute of Biomedicine, Turku 20014, Finland; Tampere University, Faculty of Medicine and Health Technology, Tampere 33100, Finland; FICAN West Cancer Centre, Cancer Research Unit, Turku University Hospital, Turku 20500, Finland
38
Min E, Aimakov N, Lee S, Ban S, Yang H, Ahn Y, You JS, Jung W. Multi-contrast digital histopathology of mouse organs using quantitative phase imaging and virtual staining. Biomed Opt Express 2023; 14:2068-2079. [PMID: 37206137 PMCID: PMC10191651 DOI: 10.1364/boe.484516]
Abstract
Quantitative phase imaging (QPI) has emerged as a new digital histopathology tool, as it provides structural information from conventional slides without a staining process. It can also image biological tissue sections with sub-nanometer sensitivity and classify them by their light scattering properties. Here we extend this capability by using optical scattering properties as imaging contrast in wide-field QPI. As a first step toward validation, QPI images of 10 major organs of a wild-type mouse were obtained, followed by H&E-stained images of the corresponding tissue sections. We then used a deep learning model based on a generative adversarial network (GAN) architecture to virtually stain the phase delay images into H&E-equivalent brightfield (BF) image analogues. Using the structural similarity index, we demonstrate the similarity between virtually stained and H&E histology images. Whereas the scattering-based maps look rather similar to the QPI phase maps in the kidney, the brain images show significant improvement over QPI, with clear demarcation of features across all regions. Since our technology provides not only structural information but also unique optical property maps, it could become a fast and contrast-enriched histopathology technique.
Affiliation(s)
- Eunjung Min: Systems Neuroscience and Neuroengineering, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Nurbolat Aimakov, Sangjin Lee, Sungbea Ban, Hyunmo Yang, Yujin Ahn, Woonggyu Jung: Department of Biomedical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, Republic of Korea
- Joon S. You: Department of Biomedical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, Republic of Korea; Incipian LLC, Laguna Niguel, California, USA
39
Yan R, He Q, Liu Y, Ye P, Zhu L, Shi S, Gou J, He Y, Guan T, Zhou G. Unpaired virtual histological staining using prior-guided generative adversarial networks. Comput Med Imaging Graph 2023; 105:102185. [PMID: 36764189 DOI: 10.1016/j.compmedimag.2023.102185]
Abstract
Fibrosis is an inevitable stage in the development of chronic liver disease and plays an irreplaceable role in characterizing its degree of progression. Histopathological diagnosis is the gold standard for interpreting fibrosis parameters. Conventional hematoxylin-eosin (H&E) staining reflects only the gross tissue structure and the distribution of hepatocytes, whereas Masson trichrome highlights specific types of collagen fiber structure and thus provides the structural information needed for fibrosis scoring. However, Masson trichrome staining is expensive in time, money, and patient specimens, and its preparation and staining process is not standardized, which makes converting existing H&E staining into virtual Masson trichrome staining an attractive solution for fibrosis evaluation. Existing translation approaches fail to extract fiber features accurately enough, and their staining decoders fail to converge because the color of physical staining is inconsistent. In this work, we propose a prior-guided generative adversarial network, based on unpaired data, for generating Masson trichrome stained images from the corresponding H&E stained images. Trained on a small training set, our method takes full advantage of prior knowledge to impose better constraints on both the encoder and the decoder. Experiments indicate that our method surpasses previous approaches. For various liver diseases, our results demonstrate a high correlation between the staging of real and virtual stains (ρ = 0.82; 95% CI: 0.73-0.89). In addition, our fine-tuning strategy standardizes the staining color and reduces the memory and computational burden, enabling use in clinical assessment.
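The reported agreement between real and virtual fibrosis staging (ρ = 0.82) is a rank correlation of the Spearman type. A self-contained sketch of how such a ρ is computed follows; the stage values are hypothetical, not the study's data:

```python
import numpy as np

def rankdata(x):
    """Assign 1-based ranks; tied values share the mean of their rank positions."""
    x = np.asarray(x, float)
    order = np.argsort(x, kind="mergesort")
    ranks = np.empty(len(x), float)
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):          # average ranks over ties
        mask = x == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman_rho(a, b):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    ra, rb = rankdata(a), rankdata(b)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra**2).sum() * (rb**2).sum()))

real    = [0, 1, 1, 2, 3, 4, 4, 2]   # hypothetical stages from chemical stain
virtual = [0, 1, 2, 2, 3, 4, 3, 2]   # hypothetical stages from virtual stain
print(round(spearman_rho(real, virtual), 3))
```

Because it is rank-based, ρ rewards a virtual stain that preserves the ordering of fibrosis severity even if individual stage calls differ slightly.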
Affiliation(s)
- Renao Yan, Qiming He, Yiqing Liu, Peng Ye, Lianghui Zhu, Shanshan Shi, Yonghong He, Tian Guan: Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Jizhou Gou, Guangde Zhou: The Third People's Hospital of Shenzhen, Buji Buran Road 29, Shenzhen, 518112, Guangdong, China
40
Bai B, Yang X, Li Y, Zhang Y, Pillar N, Ozcan A. Deep learning-enabled virtual histological staining of biological samples. Light Sci Appl 2023; 12:57. [PMID: 36864032 PMCID: PMC9981740 DOI: 10.1038/s41377-023-01104-7]
Abstract
Histological staining is the gold standard for tissue examination in clinical pathology and life-science research, which visualizes the tissue and cellular structures using chromatic dyes or fluorescence labels to aid the microscopic assessment of tissue. However, the current histological staining workflow requires tedious sample preparation steps, specialized laboratory infrastructure, and trained histotechnologists, making it expensive, time-consuming, and not accessible in resource-limited settings. Deep learning techniques created new opportunities to revolutionize staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, and accurate alternatives to standard chemical staining methods. These techniques, broadly referred to as virtual staining, were extensively explored by multiple research groups and demonstrated to be successful in generating various types of histological stains from label-free microscopic images of unstained samples; similar approaches were also used for transforming images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this Review, we provide a comprehensive overview of the recent research advances in deep learning-enabled virtual histological staining techniques. The basic concepts and the typical workflow of virtual staining are introduced, followed by a discussion of representative works and their technical innovations. We also share our perspectives on the future of this emerging field, aiming to inspire readers from diverse scientific fields to further expand the scope of deep learning-enabled virtual histological staining techniques and their applications.
Affiliation(s)
- Bijie Bai, Xilin Yang, Yuzhu Li, Yijie Zhang, Nir Pillar, Aydogan Ozcan: Electrical and Computer Engineering Department, Bioengineering Department, and California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
41
Li Z, Muench G, Goebel S, Uhland K, Wenhart C, Reimann A. Flow chamber staining modality for real-time inspection of dynamic phenotypes in multiple histological stains. PLoS One 2023; 18:e0284444. [PMID: 37141296 PMCID: PMC10159194 DOI: 10.1371/journal.pone.0284444]
Abstract
Traditional histological stains, such as hematoxylin-eosin (HE), special stains, and immunofluorescence (IF), have defined myriad cellular phenotypes and tissue structures, each in a separately stained section. However, the precise connection between the information conveyed by the various stains in the same section, which may be important for diagnosis, is absent. Here, we present a new staining modality, the flow chamber stain, which complies with the current staining workflow but possesses additional features not seen in conventional stains, allowing for (1) quickly switching staining modes between destaining and restaining for multiplex staining in a single section from routine histological preparation, (2) real-time inspection and digital capture of each specific stained phenotype, and (3) efficient synthesis of graphs containing the tissue's multiple stained components at site-specific regions. Comparisons of its stains with those produced by conventional staining, using microscopic images of mouse tissues (lung, heart, liver, kidney, esophagus, and brain) and involving HE, periodic acid-Schiff, Sirius red, and IF for human IgG and mouse CD45, hemoglobin, and CD31, showed no major discordance. Repeated experiments on targeted areas of stained sections confirmed that the method is accurate and highly reproducible. Using the technique, IF targets were easily localized and seen structurally in HE- or special-stained sections, and unknown or suspected components or structures in HE-stained sections were further identified by histological special stains or IF. The staining process can be recorded on video and archived for off-site pathologists, facilitating tele-consultation and tele-education in digital pathology. Mistakes occurring during the staining process can be found and amended immediately. With the technique, a single section provides much more information than its traditionally stained counterpart. The staining mode has great potential to become a common supplementary tool for traditional histopathology.
42
He Q, He L, Duan H, Sun Q, Zheng R, Guan J, He Y, Huang W, Guan T. Expression site agnostic histopathology image segmentation framework by self supervised domain adaption. Comput Biol Med 2023; 152:106412. [PMID: 36516576 DOI: 10.1016/j.compbiomed.2022.106412]
Abstract
MOTIVATION Because the sites of antigen expression differ, segmentation of immunohistochemical (IHC) histopathology images is challenging due to visual variance. Since H&E images highlight tissue structure and cell distribution more broadly, transferring salient features from H&E images can achieve considerable performance on expression-site-agnostic IHC image segmentation. METHODS To the best of our knowledge, this is the first work that focuses on domain-adaptive segmentation across different expression sites. We propose an expression site agnostic domain adaptive histopathology image semantic segmentation framework (ESASeg). In ESASeg, multi-level feature alignment encodes expression site invariance by learning generic representations of global and multi-scale local features. Moreover, self-supervision enhances domain adaptation to perceive high-level semantics by predicting pseudo-labels. RESULTS We constructed a dataset with three IHC markers (Her2, membrane-stained; Ki67, nucleus-stained; GPC3, cytoplasm-stained) with different expression sites from two diseases (breast and liver cancer). Intensive experiments on tumor region segmentation show that ESASeg performs best across all metrics, and each module yields clear improvements. CONCLUSION The performance of ESASeg on tumor region segmentation demonstrates the efficiency of the proposed framework, which provides a novel solution for expression site agnostic IHC tasks. Moreover, the proposed domain adaptation and self-supervision modules improve feature adaptation and extraction without labels. In addition, ESASeg lays the foundation for joint analysis and information interaction across IHC markers with different expression sites.
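The self-supervision step described in this abstract, predicting pseudo-labels on the unlabeled target domain, is commonly implemented as confidence-thresholded argmax over softmax outputs. A generic numpy sketch follows; the threshold value, shapes, and ignore label are assumptions for illustration, not taken from ESASeg:

```python
import numpy as np

IGNORE = 255  # label value excluded from the segmentation loss

def pseudo_labels(probs, threshold=0.9):
    """probs: (C, H, W) softmax output. Keep argmax where confident, else IGNORE."""
    conf = probs.max(axis=0)            # (H, W) maximum class probability
    labels = probs.argmax(axis=0)       # (H, W) predicted class per pixel
    labels[conf < threshold] = IGNORE   # mask out low-confidence pixels
    return labels

probs = np.array([[[0.95, 0.40]],
                  [[0.05, 0.60]]])      # 2 classes, 1x2 "image"
print(pseudo_labels(probs))             # confident pixel kept as class 0, other ignored
```

Training then treats the surviving pseudo-labels as ground truth on the target domain, so only high-confidence predictions steer the adaptation.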
Affiliation(s)
- Qiming He, Ling He, Hufei Duan, Qiehe Sun, Runliang Zheng, Yonghong He, Tian Guan: Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Jian Guan, Wenting Huang: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, China
43
Vasiljević J, Nisar Z, Feuerhake F, Wemmert C, Lampert T. CycleGAN for virtual stain transfer: Is seeing really believing? Artif Intell Med 2022; 133:102420. [PMID: 36328671 DOI: 10.1016/j.artmed.2022.102420]
Abstract
Digital pathology is an area prone to high variation due to multiple factors that can strongly affect diagnostic quality and the visual appearance of whole-slide images (WSIs). State-of-the-art methods tend to address this variation through style-transfer-inspired approaches, usually by directly applying successful approaches from the literature, potentially with some task-related modifications. The majority of the obtained results are visually convincing; however, this paper shows that this is no guarantee that such images can be directly used for either medical diagnosis or reducing domain shift. We show that a slight modification in a stain transfer architecture, such as the choice of normalisation layer, while producing a variety of visually appealing results, surprisingly greatly affects the ability of the stain transfer model to reduce domain shift. Through extensive qualitative and quantitative evaluations, we confirm that translations resulting from different stain transfer architectures are distinct from each other and from the real samples. Conclusions drawn from visual inspection or pretrained-model evaluation may therefore be misleading.
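One reason the normalisation-layer choice highlighted above matters: instance normalisation removes per-image colour and contrast statistics, while batch normalisation preserves them. A numpy sketch of the two operations (illustrative, not the paper's architectures):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalise each (image, channel) plane independently; x is (N, C, H, W)."""
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def batch_norm(x, eps=1e-5):
    """Normalise each channel over the whole batch; per-image offsets survive."""
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

# Four images with different brightness offsets, mimicking stain variation.
x = np.random.default_rng(0).random((4, 3, 8, 8)) + np.arange(4).reshape(4, 1, 1, 1)

# After instance norm, every image/channel mean is ~0 (offsets erased);
# after batch norm, the between-image brightness differences remain.
print(np.abs(instance_norm(x).mean(axis=(2, 3))).max())
print(np.abs(batch_norm(x).mean(axis=(2, 3))).max())
```

Whether erasing such per-image statistics helps or hurts domain-shift reduction is exactly the kind of architecture-dependent behaviour the paper evaluates.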
Affiliation(s)
- Jelica Vasiljević: ICube, University of Strasbourg, CNRS (UMR 7357), France; University of Belgrade, Belgrade, Serbia; Faculty of Science, University of Kragujevac, Kragujevac, Serbia
- Zeeshan Nisar, Cédric Wemmert, Thomas Lampert: ICube, University of Strasbourg, CNRS (UMR 7357), France
- Friedrich Feuerhake: Institute of Pathology, Hannover Medical School, Germany; University Clinic, Freiburg, Germany
44
Pillar N, Ozcan A. Virtual tissue staining in pathology using machine learning. Expert Rev Mol Diagn 2022; 22:987-989. [PMID: 36440487 DOI: 10.1080/14737159.2022.2153040]
Affiliation(s)
- Nir Pillar: Electrical and Computer Engineering Department and Bioengineering Department, University of California, Los Angeles, CA, USA
- Aydogan Ozcan: Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute (CNSI), and Department of Surgery, University of California, Los Angeles, CA, USA
45
Bai B, Wang H, Li Y, de Haan K, Colonnese F, Wan Y, Zuo J, Doan NB, Zhang X, Zhang Y, Li J, Yang X, Dong W, Darrow MA, Kamangar E, Lee HS, Rivenson Y, Ozcan A. Label-Free Virtual HER2 Immunohistochemical Staining of Breast Tissue using Deep Learning. BME Front 2022; 2022:9786242. [PMID: 37850170 PMCID: PMC10521710 DOI: 10.34133/2022/9786242]
Abstract
The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs) to reveal that the HER2 scores determined by inspecting virtual IHC images are as accurate as their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit a comparable staining quality in the level of nuclear detail, membrane clearness, and absence of staining artifacts with respect to their immunohistochemically stained counterparts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in laboratory and can be extended to other types of biomarkers to accelerate the IHC tissue staining used in life sciences and biomedical workflow.
Collapse
Affiliation(s)
- Bijie Bai: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Hongda Wang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Yuzhu Li: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Kevin de Haan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Yujie Wan: Physics and Astronomy Department, University of California, Los Angeles, CA 90095, USA
- Jingyi Zuo: Computer Science Department, University of California, Los Angeles, CA, USA
- Ngan B. Doan: Translational Pathology Core Laboratory, University of California, Los Angeles, CA 90095, USA
- Xiaoran Zhang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Yijie Zhang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Jingxi Li: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Xilin Yang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Wenjie Dong: Statistics Department, University of California, Los Angeles, CA 90095, USA
- Morgan Angus Darrow: Department of Pathology and Laboratory Medicine, University of California at Davis, Sacramento, CA 95817, USA
- Elham Kamangar: Department of Pathology and Laboratory Medicine, University of California at Davis, Sacramento, CA 95817, USA
- Han Sung Lee: Department of Pathology and Laboratory Medicine, University of California at Davis, Sacramento, CA 95817, USA
- Yair Rivenson: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Aydogan Ozcan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA; Department of Surgery, University of California, Los Angeles, CA 90095, USA
46
Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:6702380. [PMID: 36124675] [PMCID: PMC9677480] [DOI: 10.1093/bib/bbac367]
Abstract
In common medical procedures, the time-consuming and expensive nature of obtaining test results burdens doctors and patients. Digital pathology research applies computational technologies to data management, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more timely and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review investigates the use of the most popular image data, hematoxylin-eosin-stained tissue slide images, to find a strategic solution for the imbalance of healthcare resources. The article focuses on the role of deep learning technology in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao: Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lianhe Zhao (corresponding author): Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences
- Chunlong Luo: Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yufan Luo: Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Wu: Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Shengtong Li: Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Dechao Bu: Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zhao (corresponding author): Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences
47
Conditional GANs based system for fibrosis detection and quantification in Hematoxylin and Eosin whole slide images. Med Image Anal 2022; 81:102537. [DOI: 10.1016/j.media.2022.102537]
48
Prostate cancer histopathology using label-free multispectral deep-UV microscopy quantifies phenotypes of tumor aggressiveness and enables multiple diagnostic virtual stains. Sci Rep 2022; 12:9329. [PMID: 35665770] [PMCID: PMC9167293] [DOI: 10.1038/s41598-022-13332-9]
Abstract
Identifying prostate cancer patients that are harboring aggressive forms of prostate cancer remains a significant clinical challenge. Here we develop an approach based on multispectral deep-ultraviolet (UV) microscopy that provides novel quantitative insight into the aggressiveness and grade of this disease, thus providing a new tool to help address this important challenge. We find that UV spectral signatures from endogenous molecules give rise to a phenotypical continuum that provides unique structural insight (i.e., molecular maps or "optical stains") of thin tissue sections with subcellular (nanoscale) resolution. We show that this phenotypical continuum can also be applied as a surrogate biomarker of prostate cancer malignancy, where patients with the most aggressive tumors show a ubiquitous glandular phenotypical shift. In addition to providing several novel "optical stains" with contrast for disease, we also adapt a two-part Cycle-consistent Generative Adversarial Network to translate the label-free deep-UV images into virtual hematoxylin and eosin (H&E) stained images, thus providing multiple stains (including the gold-standard H&E) from the same unlabeled specimen. Agreement between the virtual H&E images and the H&E-stained tissue sections is evaluated by a panel of pathologists who find that the two modalities are in excellent agreement. This work has significant implications toward improving our ability to objectively quantify prostate cancer grade and aggressiveness, thus improving the management and clinical outcomes of prostate cancer patients. This same approach can also be applied broadly in other tumor types to achieve low-cost, stain-free, quantitative histopathological analysis.
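The two-part Cycle-consistent GAN mentioned above is trained so that translating a deep-UV image to H&E (call it G) and back (call it F) reconstructs the input; the term enforcing this is an L1 cycle-consistency penalty. A toy sketch of that term, assuming NumPy and using stand-in functions in place of the actual networks:

```python
import numpy as np

def cycle_consistency_l1(x, G, F):
    """Mean absolute error between x and F(G(x)); minimizing it pushes
    the forward and backward translators toward being inverses."""
    return float(np.mean(np.abs(F(G(x)) - x)))

# Toy stand-ins: G doubles intensities, F halves them (a perfect inverse).
G = lambda img: img * 2.0
F = lambda img: img / 2.0
x = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # stand-in deep-UV patch
print(cycle_consistency_l1(x, G, F))  # 0.0 for an exact inverse pair
```

In the full CycleGAN objective this term is added (with a weight) to the adversarial losses of both translation directions; the sketch isolates only the cycle term.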
49
Li J, Garfinkel J, Zhang X, Wu D, Zhang Y, de Haan K, Wang H, Liu T, Bai B, Rivenson Y, Rubinstein G, Scumpia PO, Ozcan A. Biopsy-free in vivo virtual histology of skin using deep learning. Light Sci Appl 2021; 10:233. [PMID: 34795202] [PMCID: PMC8602311] [DOI: 10.1038/s41377-021-00674-8]
Abstract
An invasive biopsy followed by histological staining is the benchmark for pathological diagnosis of skin tumors. The process is cumbersome and time-consuming, often leading to unnecessary biopsies and scars. Emerging noninvasive optical technologies such as reflectance confocal microscopy (RCM) can provide label-free, cellular-level resolution, in vivo images of skin without performing a biopsy. Although RCM is a useful diagnostic tool, it requires specialized training because the acquired images are grayscale, lack nuclear features, and are difficult to correlate with tissue pathology. Here, we present a deep learning-based framework that uses a convolutional neural network to rapidly transform in vivo RCM images of unstained skin into virtually-stained hematoxylin and eosin-like images with microscopic resolution, enabling visualization of the epidermis, dermal-epidermal junction, and superficial dermis layers. The network was trained under an adversarial learning scheme, which takes ex vivo RCM images of excised unstained/label-free tissue as inputs and uses the microscopic images of the same tissue labeled with acetic acid nuclear contrast staining as the ground truth. We show that this trained neural network can be used to rapidly perform virtual histology of in vivo, label-free RCM images of normal skin structure, basal cell carcinoma, and melanocytic nevi with pigmented melanocytes, demonstrating similar histological features to traditional histology from the same excised tissue. This application of deep learning-based virtual staining to noninvasive imaging technologies may permit more rapid diagnoses of malignant skin neoplasms and reduce invasive skin biopsies.
Affiliation(s)
- Jingxi Li: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Xiaoran Zhang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Di Wu: Computer Science Department, University of California, Los Angeles, CA 90095, USA
- Yijie Zhang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Kevin de Haan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Hongda Wang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Tairan Liu: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Bijie Bai: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yair Rivenson: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Philip O Scumpia: Division of Dermatology, University of California, Los Angeles, CA 90095, USA; Department of Dermatology, Veterans Affairs Greater Los Angeles Healthcare System, Los Angeles, CA 90073, USA
- Aydogan Ozcan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Bioengineering Department, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA; Department of Surgery, University of California, Los Angeles, CA 90095, USA
50
Zhang G, Ning B, Hui H, Yu T, Yang X, Zhang H, Tian J, He W. Image-to-Images Translation for Multiple Virtual Histological Staining of Unlabeled Human Carotid Atherosclerotic Tissue. Mol Imaging Biol 2021; 24:31-41. [PMID: 34622424] [DOI: 10.1007/s11307-021-01641-w]
Abstract
PURPOSE: Histological analysis of human carotid atherosclerotic plaques is critical to understanding atherosclerosis biology and developing effective plaque prevention and treatment for ischemic stroke. However, the histological staining process is laborious, tedious, variable, and destructive to the highly valuable atheroma tissue obtained from patients.
PROCEDURES: We proposed a deep learning-based method to transform bright-field microscopic images of unlabeled tissue sections into multiple virtually stained equivalents of the same samples simultaneously. Using a pix2pix model, we trained a generative adversarial network to achieve image-to-images translation of multiple stains, including hematoxylin and eosin (H&E), picrosirius red (PSR), and Verhoeff-Van Gieson (EVG) stains.
RESULTS: Quantitative evaluation metrics indicated that the proposed approach achieved the best performance in comparison with other state-of-the-art methods. Further blind evaluation by board-certified pathologists demonstrated that the multiple virtual stains are highly consistent with standard histological stains. The proposed approach also showed that the generated histopathological features of atherosclerotic plaques, such as the necrotic core, neovascularization, cholesterol crystals, collagen, and elastic fibers, optimally match those of standard histological stains.
CONCLUSIONS: The proposed approach allows for the virtual staining of unlabeled human carotid plaque tissue images with multiple types of stains. In addition, it identifies the histopathological features of atherosclerotic plaques in the same tissue sample, which could facilitate the development of personalized prevention and other interventional treatments for carotid atherosclerosis.
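The pix2pix model cited above trains its generator on a combination of an adversarial term and an L1 reconstruction term against the chemically stained target, weighted by a lambda (100 in the original pix2pix paper). A numerically naive NumPy sketch of that combined objective, with illustrative inputs:

```python
import numpy as np

def pix2pix_generator_loss(fake_logits, fake_img, real_img, lam=100.0):
    """Generator objective: BCE-with-logits against the 'real' label on
    the discriminator's outputs, plus lam * L1 to the target stain."""
    adv = float(np.mean(np.log1p(np.exp(-fake_logits))))  # -log(sigmoid(z))
    l1 = float(np.mean(np.abs(fake_img - real_img)))
    return adv + lam * l1

fake_logits = np.zeros((2, 2))   # D maximally unsure: logit 0 -> adv = log 2
fake = np.full((8, 8), 0.5)      # generated virtual-stain patch
real = np.full((8, 8), 0.5)      # chemically stained target patch
print(pix2pix_generator_loss(fake_logits, fake, real))
```

With a perfect reconstruction the L1 term vanishes and the loss reduces to the adversarial term (log 2 ≈ 0.693 here); in practice the large lambda makes the L1 term dominate early training.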
Affiliation(s)
- Guanghao Zhang: School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100190, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Bin Ning: Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Hui Hui: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing, China
- Tengfei Yu: Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Xin Yang: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing, China
- Hongxia Zhang: Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Jie Tian: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine, Beihang University, Beijing 100083, China; Zhuhai Precision Medical Center, Zhuhai People's Hospital, Affiliated with Jinan University, Zhuhai 519000, China
- Wen He: Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China