1. Zhang Z, Zhou X, Fang Y, Xiong Z, Zhang T. AI-driven 3D bioprinting for regenerative medicine: From bench to bedside. Bioact Mater 2025;45:201-230. PMID: 39651398; PMCID: PMC11625302; DOI: 10.1016/j.bioactmat.2024.11.021.
Abstract
In recent decades, 3D bioprinting has garnered significant research attention due to its ability to manipulate biomaterials and cells to create complex structures precisely. However, due to technological and cost constraints, the clinical translation of 3D bioprinted products (BPPs) from bench to bedside has been hindered by challenges in terms of personalization of design and scaling up of production. Recently, the emerging applications of artificial intelligence (AI) technologies have significantly improved the performance of 3D bioprinting. However, the existing literature remains deficient in a methodological exploration of AI technologies' potential to overcome these challenges in advancing 3D bioprinting toward clinical application. This paper aims to present a systematic methodology for AI-driven 3D bioprinting, structured within the theoretical framework of Quality by Design (QbD). This paper commences by introducing the QbD theory into 3D bioprinting, followed by summarizing the technology roadmap of AI integration in 3D bioprinting, including multi-scale and multi-modal sensing, data-driven design, and in-line process control. This paper further describes specific AI applications in 3D bioprinting's key elements, including bioink formulation, model structure, printing process, and function regulation. Finally, the paper discusses current prospects and challenges associated with AI technologies to further advance the clinical translation of 3D bioprinting.
Affiliation(s)
- Zhenrui Zhang
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Xianhao Zhou
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Yongcong Fang
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, PR China
- Zhuo Xiong
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Ting Zhang
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, PR China

2. Guang Z, Jacobs A, Costa PC, Li Z, Robles FE. Acetic acid enabled nuclear contrast enhancement in epi-mode quantitative phase imaging. Journal of Biomedical Optics 2025;30:026501. PMID: 39906483; PMCID: PMC11792252; DOI: 10.1117/1.jbo.30.2.026501.
Abstract
Significance: The acetowhitening effect of acetic acid (AA) enhances light scattering of cell nuclei, an effect that has been widely leveraged to facilitate tissue inspection for (pre)cancerous lesions. Here, we show that a concomitant effect of acetowhitening (changes in refractive index composition) yields nuclear contrast enhancement in quantitative phase imaging (QPI) of thick tissue samples. Aim: We aim to explore how changes in refractive index composition during acetowhitening can be captured through a novel epi-mode 3D QPI technique called quantitative oblique back-illumination microscopy (qOBM). We also aim to demonstrate the potential of using a machine learning-based approach to convert qOBM images of fresh tissues into virtually AA-stained images. Approach: We implemented qOBM, an imaging technique that allows for epi-mode 3D QPI, to observe phase changes induced by AA in thick tissue samples. We focus on detecting nuclear contrast changes caused by AA in mouse brain samples. As a proof of concept, we also applied a Cycle-GAN algorithm to convert the acquired qOBM images into virtually AA-stained images, simulating the effect of AA staining. Results: Our findings demonstrate that AA-induced acetowhitening leads to significant nuclear contrast enhancement in qOBM images of thick tissue samples. In addition, the Cycle-GAN algorithm successfully converted qOBM images into virtually AA-stained images, further facilitating the nuclear enhancement process without any physical stains. Conclusions: We show that the acetowhitening effect of acetic acid induces changes in refractive index composition that significantly enhance nuclear contrast in QPI. The application of qOBM with AA, along with the use of a Cycle-GAN algorithm to virtually stain tissues, highlights the potential of this approach for advancing label-free and slide-free, ex vivo, and in vivo histology.
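For readers unfamiliar with the unpaired image-to-image translation step mentioned above, the sketch below shows the core of a Cycle-GAN training objective (adversarial plus cycle-consistency losses) in PyTorch. The network sizes, loss weights, and random tensors standing in for qOBM patches are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Cycle-GAN objective for unpaired image translation
# (label-free qOBM patch -> virtually AA-stained patch). Toy networks and
# placeholder data; not the authors' trained model.
import torch
import torch.nn as nn

def tiny_generator():
    # Toy encoder-decoder standing in for a ResNet/U-Net generator.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
    )

def tiny_discriminator():
    # PatchGAN-style critic reduced to a few convolutional layers.
    return nn.Sequential(
        nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(16, 1, 4, stride=2, padding=1),
    )

G_ab, G_ba = tiny_generator(), tiny_generator()   # unstained -> stained, stained -> unstained
D_a, D_b = tiny_discriminator(), tiny_discriminator()
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()
opt_g = torch.optim.Adam(list(G_ab.parameters()) + list(G_ba.parameters()), lr=2e-4)

real_a = torch.rand(4, 1, 64, 64)   # batch of label-free qOBM patches (placeholder data)
real_b = torch.rand(4, 1, 64, 64)   # batch of AA-stained qOBM patches (placeholder data)

# One generator update: adversarial losses in both directions plus cycle consistency.
fake_b, fake_a = G_ab(real_a), G_ba(real_b)
loss_adv = adv_loss(D_b(fake_b), torch.ones_like(D_b(fake_b))) \
         + adv_loss(D_a(fake_a), torch.ones_like(D_a(fake_a)))
loss_cyc = cyc_loss(G_ba(fake_b), real_a) + cyc_loss(G_ab(fake_a), real_b)
loss_g = loss_adv + 10.0 * loss_cyc   # cycle weight of 10 is a common, assumed choice
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```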
Affiliation(s)
- Zhe Guang
- Emory University, Georgia Institute of Technology, Wallace H. Coulter Department of Biomedical Engineering, Atlanta, Georgia, United States
- Amunet Jacobs
- Emory University, Georgia Institute of Technology, Wallace H. Coulter Department of Biomedical Engineering, Atlanta, Georgia, United States
- University of Kentucky, College of Medicine, Lexington, Kentucky, United States
- Paloma Casteleiro Costa
- Georgia Institute of Technology, School of Electrical and Computer Engineering, Atlanta, Georgia, United States
- Zhenmin Li
- Georgia Institute of Technology, School of Electrical and Computer Engineering, Atlanta, Georgia, United States
- Francisco E. Robles
- Emory University, Georgia Institute of Technology, Wallace H. Coulter Department of Biomedical Engineering, Atlanta, Georgia, United States
- Georgia Institute of Technology, School of Electrical and Computer Engineering, Atlanta, Georgia, United States

3. Işıl Ç, Koydemir HC, Eryilmaz M, de Haan K, Pillar N, Mentesoglu K, Unal AF, Rivenson Y, Chandrasekaran S, Garner OB, Ozcan A. Virtual Gram staining of label-free bacteria using dark-field microscopy and deep learning. Science Advances 2025;11:eads2757. PMID: 39772690; PMCID: PMC11803577; DOI: 10.1126/sciadv.ads2757.
Abstract
Gram staining has been a frequently used staining protocol in microbiology. It is vulnerable to staining artifacts due to, e.g., operator errors and chemical variations. Here, we introduce virtual Gram staining of label-free bacteria using a trained neural network that digitally transforms dark-field images of unstained bacteria into their Gram-stained equivalents matching bright-field image contrast. After a one-time training, the virtual Gram staining model processes an axial stack of dark-field microscopy images of label-free bacteria (never seen before) to rapidly generate Gram staining, bypassing several chemical steps involved in the conventional staining process. We demonstrated the success of virtual Gram staining on label-free bacteria samples containing Escherichia coli and Listeria innocua by quantifying the staining accuracy of the model and comparing the chromatic and morphological features of the virtually stained bacteria against their chemically stained counterparts. This virtual bacterial staining framework bypasses the traditional Gram staining protocol and its challenges, including stain standardization, operator errors, and sensitivity to chemical variations.
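The inference step described above amounts to feeding an axial stack of dark-field images as a multi-channel input to a trained generator that outputs an RGB, Gram-stain-like bright-field image. A minimal PyTorch sketch follows; the number of axial planes, the network, and the data are placeholders rather than the authors' model.

```python
# Sketch of virtual Gram staining inference: an axial (z) stack of dark-field
# images of unstained bacteria is treated as a multi-channel input to a trained
# image-to-image network that outputs an RGB image. Untrained placeholder network.
import torch
import torch.nn as nn

N_Z = 5  # number of axial dark-field planes per field of view (assumed)

virtual_gram_net = nn.Sequential(                  # stand-in for the trained generator
    nn.Conv2d(N_Z, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # 3 output channels = RGB
)

darkfield_stack = torch.rand(1, N_Z, 256, 256)     # placeholder z-stack of one field of view
virtual_gram_rgb = virtual_gram_net(darkfield_stack)
print(virtual_gram_rgb.shape)                      # torch.Size([1, 3, 256, 256])
```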
Affiliation(s)
- Çağatay Işıl
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Hatice Ceylan Koydemir
- Department of Biomedical Engineering, Texas A&M University, College Station, TX 77843, USA
- Center for Remote Health Technologies and Systems, Texas A&M Engineering Experiment Station, College Station, TX 77843, USA
- Merve Eryilmaz
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Kevin de Haan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Koray Mentesoglu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Aras Firat Unal
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Sukantha Chandrasekaran
- Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
- Omai B. Garner
- Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA

4. Alotaibi A, AlSaeed D. Skin Cancer Detection Using Transfer Learning and Deep Attention Mechanisms. Diagnostics (Basel) 2025;15:99. PMID: 39795627; PMCID: PMC11720014; DOI: 10.3390/diagnostics15010099.
Abstract
Background/Objectives: Early and accurate diagnosis of skin cancer improves survival rates; however, dermatologists often struggle with lesion detection due to similar pigmentation. Deep learning and transfer learning models have shown promise in diagnosing skin cancers through image processing. Integrating attention mechanisms (AMs) with deep learning has further enhanced the accuracy of medical image classification. While significant progress has been made, further research is needed to improve the detection accuracy. Previous studies have not explored the integration of attention mechanisms with the pre-trained Xception transfer learning model for binary classification of skin cancer. This study aims to investigate the impact of various attention mechanisms on the Xception model's performance in detecting benign and malignant skin lesions. Methods: We conducted four experiments on the HAM10000 dataset. Three models integrated self-attention (SL), hard attention (HD), and soft attention (SF) mechanisms, while the fourth model used the standard Xception without attention mechanisms. Each mechanism analyzed features from the Xception model uniquely: self-attention examined the input relationships, hard-attention selected elements sparsely, and soft-attention distributed the focus probabilistically. Results: Integrating AMs into the Xception architecture effectively enhanced its performance. The accuracy of the Xception alone was 91.05%. With AMs, the accuracy increased to 94.11% using self-attention, 93.29% with soft attention, and 92.97% with hard attention. Moreover, the proposed models outperformed previous studies in terms of the recall metrics, which are crucial for medical investigations. Conclusions: These findings suggest that AMs can enhance performance in relation to complex medical imaging tasks, potentially supporting earlier diagnosis and improving treatment outcomes.
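A minimal TensorFlow/Keras sketch of one such configuration is shown below: a pre-trained Xception backbone whose spatial features pass through a self-attention block before a binary benign-versus-malignant head. The framework, attention placement, and hyperparameters are assumptions for illustration; the abstract does not specify them.

```python
# Sketch: ImageNet-pretrained Xception backbone + self-attention over its spatial
# feature map + binary classification head. Illustrative configuration only.
import tensorflow as tf
from tensorflow.keras import layers

backbone = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))

inputs = layers.Input((299, 299, 3))
feat = backbone(inputs)                                  # (10, 10, 2048) feature map
seq = layers.Reshape((10 * 10, 2048))(feat)              # flatten the spatial grid to a sequence
attn = layers.MultiHeadAttention(num_heads=4, key_dim=64)(seq, seq)  # self-attention over patches
pooled = layers.GlobalAveragePooling1D()(attn)
output = layers.Dense(1, activation="sigmoid")(pooled)   # benign vs. malignant

model = tf.keras.Model(inputs, output)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Recall()])  # recall reported as a key metric
```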
Affiliation(s)
- Areej Alotaibi
- College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia;

5. Sagiv C, Hadar O, Najjar A, Pahnke J. Artificial intelligence in surgical pathology - Where do we stand, where do we go? European Journal of Surgical Oncology 2024:109541. PMID: 39694737; DOI: 10.1016/j.ejso.2024.109541.
Abstract
Surgical and neuropathologists continuously search for new and disease-specific features, such as independent predictors of tumor prognosis or determinants of tumor entities and sub-entities. This is a task where artificial intelligence (AI)/machine learning (ML) systems could significantly contribute by helping with tumor outcome prediction and the search for new diagnostic or treatment stratification biomarkers. AI systems are increasingly integrated into routine pathology workflows to improve accuracy, reproducibility, and productivity, and to reveal difficult-to-see features in complicated histological slides, including the quantification of important markers for tumor grading and staging. In this article, we review the infrastructure needed to facilitate digital and computational pathology. We address the barriers to its full deployment in the clinical setting and describe the use of AI in intraoperative or postoperative settings where frozen or formalin-fixed, paraffin-embedded materials are used. We also summarize quality assessment issues of slide digitization, new spatial biology approaches, and the determination of specific gene expression from whole slide images. Finally, we highlight new innovative and future technologies, such as large language models, optical biopsies, and mass spectrometry imaging.
Affiliation(s)
- Chen Sagiv
- DeePathology Ltd., HaTidhar 5, P. O. Box 2622, Ra'anana, IL-4365104, Israel.
- Ofir Hadar
- DeePathology Ltd., HaTidhar 5, P. O. Box 2622, Ra'anana, IL-4365104, Israel
- Abderrahman Najjar
- Department of Pathology, Rabin Medical Center (RMC), Ze'ev Jabotinsky 39, Petah Tikva, IL-4941492, Israel
- Jens Pahnke
- Translational Neurodegeneration Research and Neuropathology Lab, Department of Clinical Medicine (KlinMed), Medical Faculty, University of Oslo (UiO) and Section of Neuropathology Research, Department of Pathology, Clinics for Laboratory Medicine (KLM), Oslo University Hospital (OUS), Sognsvannsveien 20, NO-0372, Oslo, Norway; Institute of Nutritional Medicine (INUM) and Lübeck Institute of Dermatology (LIED), University of Lübeck (UzL) and University Medical Center Schleswig-Holstein (UKSH), Ratzeburger Allee 160, D-23538, Lübeck, Germany; Department of Pharmacology, Faculty of Medicine and Life Sciences, University of Latvia, Jelgavas iela 3, LV-1004, Rīga, Latvia; Department of Neurobiology, School of Neurobiology, Biochemistry and Biophysics, The Georg S. Wise Faculty of Life Sciences, Tel Aviv University, Ramat Aviv, IL-6997801, Israel.

6. Hou X, Guan Z, Zhang X, Hu X, Zou S, Liang C, Shi L, Zhang K, You H. Evaluation of tumor budding with virtual panCK stains generated by novel multi-model CNN framework. Computer Methods and Programs in Biomedicine 2024;257:108352. PMID: 39241330; DOI: 10.1016/j.cmpb.2024.108352.
Abstract
As the global incidence of cancer continues to rise rapidly, the need for swift and precise diagnoses has become increasingly pressing. Pathologists commonly rely on H&E-panCK stain pairs for various aspects of cancer diagnosis, including the detection of occult tumor cells and the evaluation of tumor budding. Nevertheless, conventional chemical staining methods suffer from notable drawbacks, such as time-intensive processes and irreversible staining outcomes. The virtual stain technique, leveraging generative adversarial networks (GANs), has emerged as a promising alternative to chemical stains. This approach aims to transform biopsy scans (often H&E) into other stain types. Despite notable progress in recent years, current state-of-the-art virtual staining models confront challenges that hinder their efficacy, particularly in achieving accurate staining outcomes under specific conditions. These limitations have impeded the practical integration of virtual staining into diagnostic practices. To address the goal of producing virtual panCK stains capable of replacing chemical panCK, we propose an innovative multi-model framework. Our approach involves employing a combination of Mask-RCNN (for cell segmentation) and GAN models to extract cytokeratin distribution from chemical H&E images. Additionally, we introduce a tailored dynamic GAN model to convert H&E images into virtual panCK stains, integrating the derived cytokeratin distribution. Our framework is motivated by the fact that the unique pattern of the panCK stain is derived from cytokeratin distribution. As a proof of concept, we employ our virtual panCK stains to evaluate tumor budding in 45 H&E whole-slide images taken from breast cancer-invaded lymph nodes. Through thorough validation by both pathologists and the QuPath software, our virtual panCK stains demonstrate a remarkable level of accuracy. In stark contrast, the accuracy of state-of-the-art single cycleGAN virtual panCK stains is negligible. To the best of our knowledge, this is the first instance of a multi-model virtual panCK framework and the utilization of virtual panCK for tumor budding assessment. Our framework excels in generating dependable virtual panCK stains with significantly improved efficiency, thereby considerably reducing turnaround times in diagnosis. Furthermore, its outcomes are easily comprehensible even to pathologists who may not be well-versed in computer technology. We firmly believe that our framework has the potential to advance the field of virtual staining, thereby making significant strides towards improved cancer diagnosis.
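The multi-model idea can be sketched as a two-stage pipeline: an instance-segmentation network produces a cell (cytokeratin-distribution) map from the H&E patch, which is concatenated with the patch and fed to a generator that renders the virtual panCK stain. The sketch below uses a COCO-pretrained Mask R-CNN and an untrained toy generator purely as stand-ins for the authors' trained models.

```python
# Two-stage sketch: segmentation-derived cell map + H&E patch -> virtual panCK.
# Generic placeholder networks; not the authors' trained models.
import torch
import torch.nn as nn
from torchvision.models.detection import maskrcnn_resnet50_fpn

segmenter = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()   # placeholder cell segmenter
generator = nn.Sequential(                                    # toy stand-in for the panCK generator
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)

he_patch = torch.rand(3, 256, 256)                            # placeholder H&E patch, RGB in [0, 1]
with torch.no_grad():
    detections = segmenter([he_patch])[0]                     # per-instance soft masks
    if len(detections["masks"]) > 0:
        cell_map = detections["masks"].amax(dim=0)            # (1, 256, 256) fused instance map
    else:
        cell_map = torch.zeros(1, 256, 256)
    panck_input = torch.cat([he_patch, cell_map], dim=0).unsqueeze(0)  # (1, 4, H, W)
    virtual_panck = generator(panck_input)                    # (1, 3, 256, 256) virtual stain
```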
Affiliation(s)
- Xingzhong Hou
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 100190, China
- Zhen Guan
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China
- Xianwei Zhang
- Department of Pathology, Henan Provincial People's Hospital; People's Hospital of Zhengzhou University, Zhengzhou, Henan 450003, China
- Xiao Hu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital & Institute, Beijing, China
- Shuangmei Zou
- Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Chunzi Liang
- School of Laboratory Medicine, Hubei University of Chinese Medicine, 16 Huangjia Lake West Road, Wuhan, Hubei 430065, China.
- Lulin Shi
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 100190, China
- Kaitai Zhang
- State Key Laboratory of Molecular Oncology, Department of Etiology and Carcinogenesis, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China.
- Haihang You
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; Zhongguancun Laboratory, Beijing 102206, China

7. Chen M, Liu YT, Khan FS, Fox MC, Reichenberg JS, Lopes FCPS, Sebastian KR, Markey MK, Tunnell JW. Single color digital H&E staining with In-and-Out Net. Comput Med Imaging Graph 2024;118:102468. PMID: 39579455; DOI: 10.1016/j.compmedimag.2024.102468.
Abstract
Digital staining streamlines traditional staining procedures by digitally generating stained images from unstained or differently stained images. While conventional staining methods involve time-consuming chemical processes, digital staining offers an efficient and low-infrastructure alternative. Researchers can expedite tissue analysis without physical sectioning by leveraging microscopy-based techniques, such as confocal microscopy. However, interpreting grayscale or pseudo-color microscopic images remains challenging for pathologists and surgeons accustomed to traditional histologically stained images. To fill this gap, various studies explore digitally simulating staining to mimic targeted histological stains. This paper introduces a novel network, In-and-Out Net, designed explicitly for digital staining tasks. Based on Generative Adversarial Networks (GAN), our model efficiently transforms Reflectance Confocal Microscopy (RCM) images into Hematoxylin and Eosin (H&E) stained images. Using aluminum chloride preprocessing for skin tissue, we enhance nuclei contrast in RCM images. We trained the model with digital H&E labels featuring two fluorescence channels, eliminating the need for image registration and providing pixel-level ground truth. Our contributions include proposing an optimal training strategy, conducting a comparative analysis demonstrating state-of-the-art performance, validating the model through an ablation study, and collecting perfectly matched input and ground truth images without registration. In-and-Out Net showcases promising results, offering a valuable tool for digital staining tasks and advancing the field of histological image analysis.
Affiliation(s)
- Mengkun Chen
- University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, 78712, TX, United States
- Yen-Tung Liu
- University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, 78712, TX, United States
- Fadeel Sher Khan
- University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, 78712, TX, United States
- Matthew C Fox
- The University of Texas at Austin, Division of Dermatology, Dell Medical School, 1301 Barbara Jordan Blvd #200, Austin, 78732, TX, United States
- Jason S Reichenberg
- The University of Texas at Austin, Division of Dermatology, Dell Medical School, 1301 Barbara Jordan Blvd #200, Austin, 78732, TX, United States
- Fabiana C P S Lopes
- The University of Texas at Austin, Division of Dermatology, Dell Medical School, 1301 Barbara Jordan Blvd #200, Austin, 78732, TX, United States
- Katherine R Sebastian
- The University of Texas at Austin, Division of Dermatology, Dell Medical School, 1301 Barbara Jordan Blvd #200, Austin, 78732, TX, United States
- Mia K Markey
- University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, 78712, TX, United States; The University of Texas MD Anderson Cancer Center, Department of Imaging Physics, 1400 Pressler Street, Houston, 77030, TX, United States
- James W Tunnell
- University of Texas at Austin, Department of Biomedical Engineering, 107 W Dean Keeton St, Austin, 78712, TX, United States.

8. Restall BS, Haven NJM, Martell MT, Cikaluk BD, Wang J, Kedarisetti P, Tejay S, Adam BA, Sutendra G, Li X, Zemp RJ. Metabolic light absorption, scattering, and emission (MetaLASE) microscopy. Science Advances 2024;10:eadl5729. PMID: 39423271; PMCID: PMC11488571; DOI: 10.1126/sciadv.adl5729.
Abstract
Optical imaging of metabolism can provide key information about health and disease progression in cells and tissues; however, current methods have lacked gold-standard information about histological structure. Conversely, histology and virtual histology methods have lacked metabolic contrast. Here, we present metabolic light absorption, scattering, and emission (MetaLASE) microscopy, which rapidly provides a virtual histology and optical metabolic readout simultaneously. Hematoxylin-like nucleic contrast and eosin-like cytoplasmic contrast are obtained using photoacoustic remote sensing and ultraviolet reflectance microscopy, respectively. The same ultraviolet source excites endogenous Nicotinamide adenine dinucleotide (phosphate), flavin adenine dinucleotide, and collagen autofluorescence, providing a map of optical redox ratios to visualize metabolic variations including in areas of invasive carcinoma. Benign chronic inflammation and glands also are seen to exhibit hypermetabolism. MetaLASE microscopy offers promise for future applications in intraoperative margin analysis and in research applications where greater insights into metabolic activity could be correlated with cell and tissue types.
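As a rough illustration of the metabolic readout described above, the sketch below computes a per-pixel optical redox ratio from NAD(P)H and FAD autofluorescence channels using one common definition, ORR = FAD / (FAD + NAD(P)H); the exact definition, normalization, and calibration used by the authors may differ.

```python
# Per-pixel optical redox ratio from two autofluorescence channels.
# Definition ORR = FAD / (FAD + NAD(P)H) is a common convention, assumed here.
import numpy as np

nadph = np.random.rand(512, 512).astype(np.float32)   # NAD(P)H autofluorescence channel (placeholder)
fad = np.random.rand(512, 512).astype(np.float32)     # FAD autofluorescence channel (placeholder)

eps = 1e-6                                             # avoid division by zero in empty pixels
redox_ratio = fad / (fad + nadph + eps)                # map in [0, 1]; higher values ~ more oxidized
print(float(redox_ratio.mean()))
```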
Affiliation(s)
- Brendon S. Restall
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, Alberta T6G 2R3, Canada
- Nathaniel J. M. Haven
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, Alberta T6G 2R3, Canada
- Matthew T. Martell
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, Alberta T6G 2R3, Canada
- Brendyn D. Cikaluk
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, Alberta T6G 2R3, Canada
- Joy Wang
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, Alberta T6G 2R3, Canada
- Pradyumna Kedarisetti
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, Alberta T6G 2R3, Canada
- Saymon Tejay
- Department of Medicine, Faculty of Medicine & Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Benjamin A. Adam
- Department of Laboratory Medicine and Pathology, University of Alberta, 8440-112 Street, Edmonton, Alberta T6G 2B7, Canada
- Gopinath Sutendra
- Department of Medicine, Faculty of Medicine & Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Xingyu Li
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, Alberta T6G 2R3, Canada
- Roger J. Zemp
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, Alberta T6G 2R3, Canada

9. Khattab SY, Hijaz BA, Semenov YR. Cutaneous Imaging Techniques. Hematol Oncol Clin North Am 2024;38:907-919. PMID: 39079790; DOI: 10.1016/j.hoc.2024.05.011.
Abstract
Cutaneous imaging is a central tenet of the practice of dermatology. In this article, the authors explore various noninvasive and invasive skin imaging techniques, as well as the latest deployment of these technologies in conjunction with the use of artificial intelligence and machine learning. The authors also provide insight into the benefits, limitations, and challenges around integrating these technologies into dermatologic practice.
Affiliation(s)
- Sara Yasmin Khattab
- Department of Dermatology, Massachusetts General Hospital, 40 Blossom Street, Bartlett Hall 6R, Room 626, Boston, MA 02114, USA
- Baraa Ashraf Hijaz
- Department of Dermatology, Massachusetts General Hospital, 40 Blossom Street, Bartlett Hall 6R, Room 626, Boston, MA 02114, USA; Harvard Medical School, Boston, MA 02115, USA
- Yevgeniy Romanovich Semenov
- Department of Dermatology, Massachusetts General Hospital, 40 Blossom Street, Bartlett Hall 6R, Room 626, Boston, MA 02114, USA; Harvard Medical School, Boston, MA 02115, USA.

10. Pati P, Karkampouna S, Bonollo F, Compérat E, Radić M, Spahn M, Martinelli A, Wartenberg M, Kruithof-de Julio M, Rapsomaniki M. Accelerating histopathology workflows with generative AI-based virtually multiplexed tumour profiling. Nat Mach Intell 2024;6:1077-1093. PMID: 39309216; PMCID: PMC11415301; DOI: 10.1038/s42256-024-00889-5.
Abstract
Understanding the spatial heterogeneity of tumours and its links to disease initiation and progression is a cornerstone of cancer biology. Presently, histopathology workflows heavily rely on hematoxylin and eosin and serial immunohistochemistry staining, a cumbersome, tissue-exhaustive process that results in non-aligned tissue images. We propose the VirtualMultiplexer, a generative artificial intelligence toolkit that effectively synthesizes multiplexed immunohistochemistry images for several antibody markers (namely AR, NKX3.1, CD44, CD146, p53 and ERG) from only an input hematoxylin and eosin image. The VirtualMultiplexer captures biologically relevant staining patterns across tissue scales without requiring consecutive tissue sections, image registration or extensive expert annotations. Thorough qualitative and quantitative assessment indicates that the VirtualMultiplexer achieves rapid, robust and precise generation of virtually multiplexed imaging datasets of high staining quality that are indistinguishable from the real ones. The VirtualMultiplexer is successfully transferred across tissue scales and patient cohorts with no need for model fine-tuning. Crucially, the virtually multiplexed images enabled training a graph transformer that simultaneously learns from the joint spatial distribution of several proteins to predict clinically relevant endpoints. We observe that this multiplexed learning scheme was able to greatly improve clinical prediction, as corroborated across several downstream tasks, independent patient cohorts and cancer types. Our results showcase the clinical relevance of artificial intelligence-assisted multiplexed tumour imaging, accelerating histopathology workflows and cancer biology.
Affiliation(s)
- Sofia Karkampouna
- Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Department of Urology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Francesco Bonollo
- Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Eva Compérat
- Department of Pathology, Medical University of Vienna, Vienna, Austria
- Martina Radić
- Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Martin Spahn
- Department of Urology, Lindenhofspital Bern, Bern, Switzerland
- Department of Urology, University Duisburg-Essen, Essen, Germany
- Adriano Martinelli
- IBM Research Europe, Rüschlikon, Switzerland
- ETH Zürich, Zürich, Switzerland
- Biomedical Data Science Center, Lausanne University Hospital, Lausanne, Switzerland
- Martin Wartenberg
- Institute of Tissue Medicine and Pathology, University of Bern, Bern, Switzerland
- Marianna Kruithof-de Julio
- Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Department of Urology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Translational Organoid Resource, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Marianna Rapsomaniki
- IBM Research Europe, Rüschlikon, Switzerland
- Biomedical Data Science Center, Lausanne University Hospital, Lausanne, Switzerland
- Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland

11. Zhu R, He H, Chen Y, Yi M, Ran S, Wang C, Wang Y. Deep learning for rapid virtual H&E staining of label-free glioma tissue from hyperspectral images. Comput Biol Med 2024;180:108958. PMID: 39094325; DOI: 10.1016/j.compbiomed.2024.108958.
Abstract
Hematoxylin and eosin (H&E) staining is a crucial technique for diagnosing glioma, allowing direct observation of tissue structures. However, the H&E staining workflow necessitates intricate processing, specialized laboratory infrastructure, and specialist pathologists, rendering it expensive, labor-intensive, and time-consuming. In view of these considerations, we combine deep learning with hyperspectral imaging, aiming to accurately and rapidly convert hyperspectral images into virtual H&E-stained images. The method overcomes the limitations of H&E staining by capturing tissue information at different wavelengths, providing tissue composition information as comprehensive and detailed as that of actual H&E staining. Among the generator structures compared, the U-Net exhibits substantial overall advantages, as evidenced by a mean structural similarity index measure (SSIM) of 0.7731 and a peak signal-to-noise ratio (PSNR) of 23.3120, as well as the shortest training and inference times. A comprehensive software system for virtual H&E staining, which integrates CCD control, microscope control, and virtual H&E staining technology, is developed to facilitate fast intraoperative imaging, promote disease diagnosis, and accelerate the development of medical automation. The platform reconstructs large-scale virtual H&E-stained images of gliomas at a high speed of 3.81 mm2/s. This innovative approach will pave the way for a novel, expedited route in histological staining.
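The reported image-quality metrics can be computed on any virtual/real H&E pair with standard tooling; a sketch using scikit-image is shown below. The data range and per-channel handling are assumptions, and the arrays here are random placeholders rather than the study's images.

```python
# Compute SSIM and PSNR between a virtual H&E image and its chemically stained
# reference. Placeholder data; data_range and channel handling are assumptions.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

virtual_he = np.random.rand(256, 256, 3)   # placeholder virtual H&E image in [0, 1]
real_he = np.random.rand(256, 256, 3)      # placeholder chemically stained reference

ssim = structural_similarity(real_he, virtual_he, channel_axis=-1, data_range=1.0)
psnr = peak_signal_noise_ratio(real_he, virtual_he, data_range=1.0)
print(f"SSIM = {ssim:.4f}, PSNR = {psnr:.2f} dB")
```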
Affiliation(s)
- Ruohua Zhu
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Haiyang He
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Yuzhe Chen
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Ming Yi
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Shengdong Ran
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Chengde Wang
- Department of Neurosurgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou 325000, China.
- Yi Wang
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China; Wenzhou Institute, University of Chinese Academy of Sciences, Jinlian Road 1, Wenzhou, 325001, China.

12. Razi S, Kuo YH, Pathak G, Agarwal P, Horgan A, Parikh P, Deshmukh F, Rao BK. Line-Field Confocal Optical Coherence Tomography for the Diagnosis of Skin Tumors: A Systematic Review and Meta-Analysis. Diagnostics (Basel) 2024;14:1522. PMID: 39061659; PMCID: PMC11276068; DOI: 10.3390/diagnostics14141522.
Abstract
Line-field confocal optical coherence tomography (LC-OCT) combines confocal microscopy and optical coherence tomography into a single, rapid, easy-to-use device. This meta-analysis was performed to determine the reliability of LC-OCT for diagnosing malignant skin tumors. PubMed, EMBASE, the Web of Science databases, and the Cochrane Library were searched for research studies in the English language from inception to December 2023. To assess quality and the risk of bias, the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was used. The sensitivity and specificity of each study were calculated. The bivariate summary sensitivity and specificity were calculated using a linear mixed model. Five studies comprising 904 lesions reported per-lesion analyses; the specificity and sensitivity ranged from 67% to 97% and 72% to 92%, respectively. The pooled specificity and sensitivity were 91% (95% CI: 76-97%) and 86.9% (95% CI: 81.8-90.8%), respectively. The summary sensitivity and specificity from the bivariate approach are 86.9% (95% CI: 81.8-90.8%) and 91.1% (95% CI: 76.7-97.0%), respectively. The area under the curve is 0.914. LC-OCT shows great sensitivity and specificity in diagnosing malignant skin tumors. However, due to the limited number of studies included in our meta-analysis, it is premature to elucidate the true potential of LC-OCT.
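The per-study accuracy figures underlying such a meta-analysis come directly from each study's 2x2 counts; the sketch below computes sensitivity and specificity per study and a naive count-pooled summary. Note that the paper's pooled estimates use a bivariate linear mixed model, which this simple aggregation does not reproduce, and the counts shown are placeholders, not the included studies' data.

```python
# Per-study sensitivity/specificity from 2x2 counts, plus a naive count-pooled
# summary. Placeholder counts; not the studies included in the meta-analysis.
studies = [
    {"tp": 40, "fn": 8, "tn": 90, "fp": 10},
    {"tp": 25, "fn": 5, "tn": 60, "fp": 20},
    {"tp": 70, "fn": 12, "tn": 150, "fp": 9},
]

for i, s in enumerate(studies, 1):
    sens = s["tp"] / (s["tp"] + s["fn"])   # true positive rate
    spec = s["tn"] / (s["tn"] + s["fp"])   # true negative rate
    print(f"study {i}: sensitivity={sens:.3f}, specificity={spec:.3f}")

tp = sum(s["tp"] for s in studies); fn = sum(s["fn"] for s in studies)
tn = sum(s["tn"] for s in studies); fp = sum(s["fp"] for s in studies)
print(f"count-pooled: sensitivity={tp/(tp+fn):.3f}, specificity={tn/(tn+fp):.3f}")
```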
Affiliation(s)
- Shazli Razi
- Department of Internal Medicine, Hackensack Meridian Ocean University Medical Center, Brick, NJ 08724, USA
- Department of Internal Medicine, Jersey Shore University Medical Center, Neptune, NJ 07753, USA
- Yen-Hong Kuo
- Office of Research Administration, Hackensack Meridian Health Research Institute, Nutley, NJ 07110, USA
- Department of Medical Sciences, Hackensack Meridian School of Medicine, Nutley, NJ 07110, USA
- Gaurav Pathak
- Center for Dermatology, Robert Wood Johnson Medical School, Rutgers University, New Brunswick, NJ 08901, USA
- Priya Agarwal
- Center for Dermatology, Robert Wood Johnson Medical School, Rutgers University, New Brunswick, NJ 08901, USA
- Arianna Horgan
- Center for Dermatology, Robert Wood Johnson Medical School, Rutgers University, New Brunswick, NJ 08901, USA
- Prachi Parikh
- Center for Dermatology, Robert Wood Johnson Medical School, Rutgers University, New Brunswick, NJ 08901, USA
- Farah Deshmukh
- Department of Internal Medicine, Jersey Shore University Medical Center, Neptune, NJ 07753, USA
- Babar K. Rao
- Center for Dermatology, Robert Wood Johnson Medical School, Rutgers University, New Brunswick, NJ 08901, USA
- Department of Dermatology, Rao Dermatology, Atlantic Highlands, NJ 07716, USA
- Department of Dermatology, Weill Cornell Medicine, New York, NY 10021, USA

13. Sarri B, Chevrier V, Poizat F, Heuke S, Franchi F, De Franqueville L, Traversari E, Ratone JP, Caillol F, Dahel Y, Hoibian S, Giovannini M, de Chaisemartin C, Appay R, Guasch G, Rigneault H. In vivo organoid growth monitoring by stimulated Raman histology. npj Imaging 2024;2:18. PMID: 38948153; PMCID: PMC11213706; DOI: 10.1038/s44303-024-00019-1.
Abstract
Patient-derived tumor organoids have emerged as a crucial tool for assessing the efficacy of chemotherapy and conducting preclinical drug screenings. However, the conventional histological investigation of these organoids necessitates their devitalization through fixation and slicing, limiting their utility to a single-time analysis. Here, we use stimulated Raman histology (SRH) to demonstrate non-destructive, label-free virtual staining of 3D organoids, while preserving their viability and growth. This novel approach provides contrast similar to conventional staining methods, allowing for the continuous monitoring of organoids over time. Our results demonstrate that SRH transforms organoids from one-time use products into repeatable models, facilitating the efficient selection of effective drug combinations. This advancement holds promise for personalized cancer treatment, allowing for the dynamic assessment and optimization of chemotherapy treatments in patient-specific contexts.
Affiliation(s)
- Barbara Sarri
- Aix Marseille Univ, CNRS, Centrale Med, Institut Fresnel, Marseille, France
- Ligthcore Technologies, Marseille, France
- Véronique Chevrier
- CRCM, Inserm, CNRS, Institut Paoli-Calmettes, Aix-Marseille Univ, Epithelial Stem Cells and Cancer Lab, Marseille, France
- Flora Poizat
- Department of Biopathology, Institut Paoli-Calmettes, Marseille, France
- Sandro Heuke
- Aix Marseille Univ, CNRS, Centrale Med, Institut Fresnel, Marseille, France
- Florence Franchi
- Department of Biopathology, Institut Paoli-Calmettes, Marseille, France
- Eddy Traversari
- Department of Surgical Oncology, Institut Paoli-Calmette, Marseille, France
- Fabrice Caillol
- Department of Gastro-enterology, Institut Paoli-Calmettes, Marseille, France
- Yanis Dahel
- Department of Gastro-enterology, Institut Paoli-Calmettes, Marseille, France
- Solène Hoibian
- Department of Gastro-enterology, Institut Paoli-Calmettes, Marseille, France
- Marc Giovannini
- Department of Gastro-enterology, Institut Paoli-Calmettes, Marseille, France
- Romain Appay
- Aix- Marseille Univ, CNRS, Neurophysiopathology Institute, Marseille, France
- Géraldine Guasch
- CRCM, Inserm, CNRS, Institut Paoli-Calmettes, Aix-Marseille Univ, Epithelial Stem Cells and Cancer Lab, Marseille, France
- Hervé Rigneault
- Aix Marseille Univ, CNRS, Centrale Med, Institut Fresnel, Marseille, France

14. Wang Q, Akram AR, Dorward DA, Talas S, Monks B, Thum C, Hopgood JR, Javidi M, Vallejo M. Deep learning-based virtual H&E staining from label-free autofluorescence lifetime images. npj Imaging 2024;2:17. PMID: 38948152; PMCID: PMC11213708; DOI: 10.1038/s44303-024-00021-7.
Abstract
Label-free autofluorescence lifetime is a unique feature of the inherent fluorescence signals emitted by natural fluorophores in biological samples. Fluorescence lifetime imaging microscopy (FLIM) can capture these signals enabling comprehensive analyses of biological samples. Despite the fundamental importance and wide application of FLIM in biomedical and clinical sciences, existing methods for analysing FLIM images often struggle to provide rapid and precise interpretations without reliable references, such as histology images, which are usually unavailable alongside FLIM images. To address this issue, we propose a deep learning (DL)-based approach for generating virtual Hematoxylin and Eosin (H&E) staining. By combining an advanced DL model with a contemporary image quality metric, we can generate clinical-grade virtual H&E-stained images from label-free FLIM images acquired on unstained tissue samples. Our experiments also show that the inclusion of lifetime information, an extra dimension beyond intensity, results in more accurate reconstructions of virtual staining when compared to using intensity-only images. This advancement allows for the instant and accurate interpretation of FLIM images at the cellular level without the complexities associated with co-registering FLIM and histology images. Consequently, we are able to identify distinct lifetime signatures of seven different cell types commonly found in the tumour microenvironment, opening up new opportunities towards biomarker-free tissue histology using FLIM across multiple cancer types.
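The input construction argued for above, using lifetime as an extra channel beyond intensity, can be sketched in a few lines: the per-pixel lifetime map is stacked with the intensity image and passed to the virtual-staining generator. The generator below is an untrained placeholder and the channel counts are assumptions.

```python
# Stack intensity and lifetime as a two-channel FLIM input to a virtual H&E
# generator. Placeholder network and data; channel counts are assumptions.
import torch
import torch.nn as nn

intensity = torch.rand(1, 1, 256, 256)          # autofluorescence intensity image (placeholder)
lifetime = torch.rand(1, 1, 256, 256)           # per-pixel lifetime map, normalized (placeholder)

flim_input = torch.cat([intensity, lifetime], dim=1)   # 2-channel FLIM input

generator = nn.Sequential(                       # stand-in for the virtual H&E generator
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB virtual H&E output
)
virtual_he = generator(flim_input)
print(virtual_he.shape)                          # torch.Size([1, 3, 256, 256])
```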
Affiliation(s)
- Qiang Wang
- Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Translational Healthcare Technologies Group, Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Ahsan R. Akram
- Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Translational Healthcare Technologies Group, Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- David A. Dorward
- Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Department of Pathology, Royal Infirmary of Edinburgh, Edinburgh, UK
- Sophie Talas
- Centre for Inflammation Research, Institute of Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Department of Pathology, Royal Infirmary of Edinburgh, Edinburgh, UK
- Basil Monks
- Department of Pathology, Royal Infirmary of Edinburgh, Edinburgh, UK
- Chee Thum
- Department of Pathology, Royal Infirmary of Edinburgh, Edinburgh, UK
- James R. Hopgood
- School of Engineering, The University of Edinburgh, Edinburgh, UK
- Malihe Javidi
- School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK
- Department of Computer Engineering, Quchan University of Technology, Quchan, Iran
- Marta Vallejo
- School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK

15. Liu Y, Uttam S. Perspective on quantitative phase imaging to improve precision cancer medicine. Journal of Biomedical Optics 2024;29:S22705. PMID: 38584967; PMCID: PMC10996848; DOI: 10.1117/1.jbo.29.s2.s22705.
Abstract
Significance: Quantitative phase imaging (QPI) offers a label-free approach to non-invasively characterize cellular processes by exploiting their refractive index-based intrinsic contrast. QPI captures this contrast by translating refractive index-associated phase shifts into intensity-based quantifiable data with nanoscale sensitivity. It holds significant potential for advancing precision cancer medicine by providing quantitative characterization of the biophysical properties of cells and tissue in their natural states. Aim: This perspective aims to discuss the potential of QPI to increase our understanding of cancer development and its response to therapeutics. It also explores new developments in QPI methods towards advancing personalized cancer therapy and early detection. Approach: We begin by detailing the technical advancements of QPI, examining its implementations across transmission and reflection geometries and phase retrieval methods, both interferometric and non-interferometric. The focus then shifts to QPI's applications in cancer research, including dynamic cell mass imaging for drug response assessment, cancer risk stratification, and in-vivo tissue imaging. Results: QPI has emerged as a crucial tool in precision cancer medicine, offering insights into tumor biology and treatment efficacy. Its sensitivity to detecting nanoscale changes holds promise for enhancing cancer diagnostics, risk assessment, and prognostication. The future of QPI is envisioned in its integration with artificial intelligence, morpho-dynamics, and spatial biology, broadening its impact in cancer research. Conclusions: QPI presents significant potential in advancing precision cancer medicine and redefining our approach to cancer diagnosis, monitoring, and treatment. Future directions include harnessing high-throughput dynamic imaging, 3D QPI for realistic tumor models, and combining artificial intelligence with multi-omics data to extend QPI's capabilities. As a result, QPI stands at the forefront of cancer research and clinical application in cancer care.
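As an illustration of the phase-to-structure relation referred to in the Significance statement, a commonly used expression (not taken from this paper) for the phase delay measured in transmission QPI is:

```latex
% Phase shift at pixel (x, y) for a sample of local refractive index n(x, y, z)
% immersed in a medium of index n_m, at illumination wavelength \lambda.
\Delta\varphi(x, y) = \frac{2\pi}{\lambda} \int \left[\, n(x, y, z) - n_m \,\right] dz
```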
Affiliation(s)
- Yang Liu
- University of Illinois Urbana-Champaign, Beckman Institute for Advanced Science and Technology, Cancer Center at Illinois, Department of Bioengineering, Department of Electrical and Computer Engineering, Urbana, Illinois, United States
- University of Pittsburgh, Departments of Medicine and Bioengineering, Pittsburgh, Pennsylvania, United States
- Shikhar Uttam
- University of Pittsburgh, Department of Computational and Systems Biology, Pittsburgh, Pennsylvania, United States

16. Pillar N, Li Y, Zhang Y, Ozcan A. Virtual Staining of Nonfixed Tissue Histology. Mod Pathol 2024;37:100444. PMID: 38325706; PMCID: PMC11918264; DOI: 10.1016/j.modpat.2024.100444.
Abstract
Surgical pathology workflow involves multiple labor-intensive steps, such as tissue removal, fixation, embedding, sectioning, staining, and microscopic examination. This process is time-consuming and costly and requires skilled technicians. In certain clinical scenarios, such as intraoperative consultations, there is a need for faster histologic evaluation to provide real-time surgical guidance. Currently, frozen section techniques involving hematoxylin and eosin (H&E) staining are used for intraoperative pathology consultations. However, these techniques have limitations, including a turnaround time of 20 to 30 minutes, staining artifacts, and potential tissue loss, negatively impacting accurate diagnosis. To address these challenges, researchers are exploring alternative optical imaging modalities for rapid microscopic tissue imaging. These modalities differ in optical characteristics, tissue preparation requirements, imaging equipment, and output image quality and format. Some of these imaging methods have been combined with computational algorithms to generate H&E-like images, which could greatly facilitate their adoption by pathologists. Here, we provide a comprehensive, organ-specific review of the latest advancements in emerging imaging modalities applied to nonfixed human tissue. We focused on studies that generated H&E-like images evaluated by pathologists. By presenting up-to-date research progress and clinical utility, this review serves as a valuable resource for scholars and clinicians, covering some of the major technical developments in this rapidly evolving field. It also offers insights into the potential benefits and drawbacks of alternative imaging modalities and their implications for improving patient care.
Affiliation(s)
- Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, California; Bioengineering Department, University of California, Los Angeles, California; California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, California; Bioengineering Department, University of California, Los Angeles, California; California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, California; Bioengineering Department, University of California, Los Angeles, California; California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California; Bioengineering Department, University of California, Los Angeles, California; California NanoSystems Institute (CNSI), University of California, Los Angeles, California.

17. Winetraub Y, Van Vleck A, Yuan E, Terem I, Zhao J, Yu C, Chan W, Do H, Shevidi S, Mao M, Yu J, Hong M, Blankenberg E, Rieger KE, Chu S, Aasi S, Sarin KY, de la Zerda A. Noninvasive virtual biopsy using micro-registered optical coherence tomography (OCT) in human subjects. Science Advances 2024;10:eadi5794. PMID: 38598626; PMCID: PMC11006228; DOI: 10.1126/sciadv.adi5794.
Abstract
Histological hematoxylin and eosin-stained (H&E) tissue sections are used as the gold standard for pathologic detection of cancer, tumor margin detection, and disease diagnosis. Producing H&E sections, however, is invasive and time-consuming. While deep learning has shown promise in virtual staining of unstained tissue slides, true virtual biopsy requires staining of images taken from intact tissue. In this work, we developed a micron-accuracy coregistration method [micro-registered optical coherence tomography (OCT)] that can take a two-dimensional (2D) H&E slide and find the exact corresponding section in a 3D OCT image taken from the original fresh tissue. We trained a conditional generative adversarial network using the paired dataset and showed high-fidelity conversion of noninvasive OCT images to virtually stained H&E slices in both 2D and 3D. Applying these trained neural networks to in vivo OCT images should enable physicians to readily incorporate OCT imaging into their clinical practice, reducing the number of unnecessary biopsy procedures.
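Once micro-registration yields pixel-aligned OCT/H&E pairs, training reduces to a paired conditional-GAN objective: an adversarial term on the (input, output) pair plus an L1 term against the registered H&E ground truth. The PyTorch sketch below illustrates that objective with placeholder networks, weights, and data; it is not the authors' model.

```python
# Paired conditional-GAN (pix2pix-style) objective for OCT -> virtual H&E.
# Toy networks and placeholder data; loss weights are assumed values.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())      # OCT -> virtual H&E
D = nn.Sequential(nn.Conv2d(4, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(32, 1, 4, stride=2, padding=1))       # judges (OCT, H&E) pairs

oct_img = torch.rand(4, 1, 128, 128)       # registered OCT sections (placeholder)
he_img = torch.rand(4, 3, 128, 128)        # matching H&E sections (placeholder)

adv, l1 = nn.MSELoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

# One generator update: fool the pair discriminator and match the registered H&E.
fake_he = G(oct_img)
pred = D(torch.cat([oct_img, fake_he], dim=1))
loss_g = adv(pred, torch.ones_like(pred)) + 100.0 * l1(fake_he, he_img)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```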
Collapse
Affiliation(s)
- Yonatan Winetraub
- Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
- Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
- The Bio-X Program, Stanford, CA 94305, USA
- Biophysics Program at Stanford, Stanford, CA 94305, USA
| | - Aidan Van Vleck
- Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
| | - Edwin Yuan
- Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
- Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
| | - Itamar Terem
- Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
- Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
| | - Jinjing Zhao
- Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
| | - Caroline Yu
- Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
- Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
| | - Warren Chan
- Department of Dermatology, Stanford University School of Medicine, Stanford, CA 94305, USA
| | - Hanh Do
- Department of Dermatology, Stanford University School of Medicine, Stanford, CA 94305, USA
| | - Saba Shevidi
- Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
- Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
| | - Maiya Mao
- Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
- Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
| | - Jacqueline Yu
- Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
- Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
| | - Megan Hong
- Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
- Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
| | - Erick Blankenberg
- Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
- Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
| | - Kerri E. Rieger
- Department of Pathology, Stanford University School of Medicine and Stanford Cancer Institute, Stanford, CA 94305, USA
| | - Steven Chu
- The Bio-X Program, Stanford, CA 94305, USA
- Biophysics Program at Stanford, Stanford, CA 94305, USA
- Departments of Physics and Molecular and Cellular Physiology, Energy, Science and Engineering Stanford University, Stanford, CA 94305, USA
| | - Sumaira Aasi
- Department of Dermatology, Stanford University School of Medicine, Stanford, CA 94305, USA
| | - Kavita Y. Sarin
- Department of Dermatology, Stanford University School of Medicine, Stanford, CA 94305, USA
| | - Adam de la Zerda
- Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
- Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
- The Bio-X Program, Stanford, CA 94305, USA
- Biophysics Program at Stanford, Stanford, CA 94305, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- The Chan Zuckerberg Biohub, San Francisco, CA 94158, USA
| |
Collapse
|
18
|
Dai W, Wong IHM, Wong TTW. Exceeding the limit for microscopic image translation with a deep learning-based unified framework. PNAS NEXUS 2024; 3:pgae133. [PMID: 38601859 PMCID: PMC11004937 DOI: 10.1093/pnasnexus/pgae133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/11/2023] [Accepted: 03/19/2024] [Indexed: 04/12/2024]
Abstract
Deep learning algorithms have been widely used in microscopic image translation. The corresponding data-driven models can be trained by supervised or unsupervised learning depending on the availability of paired data. In the general case, however, the data are only roughly paired, so supervised learning can fail because of data misalignment, while unsupervised learning is less than ideal because the rough pairing information goes unused. In this work, we propose a unified framework (U-Frame) that unifies supervised and unsupervised learning by introducing a tolerance size that can be adjusted automatically according to the degree of data misalignment. Together with the implementation of a global sampling rule, we demonstrate that U-Frame consistently outperforms both supervised and unsupervised learning at all levels of data misalignment (even for perfectly aligned image pairs) in a myriad of image translation applications, including pseudo-optical sectioning, virtual histological staining (with clinical evaluations for cancer diagnosis), improvement of signal-to-noise ratio or resolution, and prediction of fluorescent labels, potentially serving as a new standard for image translation.
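One way to picture a misalignment "tolerance" is a reconstruction loss that accepts the best-matching target within a small spatial window instead of demanding exact pixel-wise correspondence. The sketch below implements such a shift-tolerant L1 loss; it is only an illustration of the general idea and is not the U-Frame algorithm, its global sampling rule, or its automatic tolerance selection.

```python
# Illustrative shift-tolerant L1 loss: for roughly paired images, compare the prediction
# against every shift of the target within +/- `tol` pixels and keep the smallest error.
# NOT the U-Frame method, just a sketch of the tolerance concept.
import torch
import torch.nn.functional as F

def shift_tolerant_l1(pred, target, tol=2):
    """pred, target: (N, C, H, W) tensors; tol: misalignment tolerance in pixels."""
    n, c, h, w = target.shape
    padded = F.pad(target, (tol, tol, tol, tol), mode="replicate")
    best = None
    for dy in range(2 * tol + 1):
        for dx in range(2 * tol + 1):
            shifted = padded[:, :, dy:dy + h, dx:dx + w]       # one candidate alignment
            err = (pred - shifted).abs().mean(dim=(1, 2, 3))   # per-sample L1 error
            best = err if best is None else torch.minimum(best, err)
    return best.mean()

pred = torch.rand(2, 3, 32, 32)
target = torch.roll(pred, shifts=(1, -1), dims=(2, 3))         # misaligned copy
print(shift_tolerant_l1(pred, target, tol=0).item())           # fully penalized
print(shift_tolerant_l1(pred, target, tol=2).item())           # much smaller once the shift is covered
```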
Collapse
Affiliation(s)
- Weixing Dai
- Department of Chemical and Biological Engineering, Translational and Advanced Bioimaging Laboratory, Hong Kong University of Science and Technology, Hong Kong 999077, China
| | - Ivy H M Wong
- Department of Chemical and Biological Engineering, Translational and Advanced Bioimaging Laboratory, Hong Kong University of Science and Technology, Hong Kong 999077, China
| | - Terence T W Wong
- Department of Chemical and Biological Engineering, Translational and Advanced Bioimaging Laboratory, Hong Kong University of Science and Technology, Hong Kong 999077, China
| |
Collapse
|
19
|
Li Y, Pillar N, Li J, Liu T, Wu D, Sun S, Ma G, de Haan K, Huang L, Zhang Y, Hamidi S, Urisman A, Keidar Haran T, Wallace WD, Zuckerman JE, Ozcan A. Virtual histological staining of unlabeled autopsy tissue. Nat Commun 2024; 15:1684. [PMID: 38396004 PMCID: PMC10891155 DOI: 10.1038/s41467-024-46077-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2023] [Accepted: 02/09/2024] [Indexed: 02/25/2024] Open
Abstract
Traditional histochemical staining of post-mortem samples often suffers from inferior staining quality due to autolysis caused by delayed fixation of cadaver tissue, and such chemical staining procedures covering large tissue areas demand substantial labor, cost and time. Here, we demonstrate virtual staining of autopsy tissue using a trained neural network to rapidly transform autofluorescence images of label-free autopsy tissue sections into brightfield equivalent images, matching hematoxylin and eosin (H&E) stained versions of the same samples. The trained model can effectively accentuate nuclear, cytoplasmic and extracellular features in new autopsy tissue samples that experienced severe autolysis, such as COVID-19 samples never seen before, where traditional histochemical staining fails to provide consistent staining quality. This virtual autopsy staining technique provides a rapid and resource-efficient solution to generate artifact-free H&E stains despite severe autolysis and cell death, while also reducing the labor, cost and infrastructure requirements associated with standard histochemical staining.
Collapse
Affiliation(s)
- Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Di Wu
- Computer Science Department, University of California, Los Angeles, CA, 90095, USA
| | - Songyu Sun
- Computer Science Department, University of California, Los Angeles, CA, 90095, USA
| | - Guangdong Ma
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- School of Physics, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
| | - Kevin de Haan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Sepehr Hamidi
- Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
| | - Anatoly Urisman
- Department of Pathology, University of California, San Francisco, CA, 94143, USA
| | - Tal Keidar Haran
- Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem, 91120, Israel
| | - William Dean Wallace
- Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA
| | - Jonathan E Zuckerman
- Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA.
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA.
- Department of Surgery, University of California, Los Angeles, CA, 90095, USA.
| |
Collapse
|
20
|
Asaf MZ, Rao B, Akram MU, Khawaja SG, Khan S, Truong TM, Sekhon P, Khan IJ, Abbasi MS. Dual contrastive learning based image-to-image translation of unstained skin tissue into virtually stained H&E images. Sci Rep 2024; 14:2335. [PMID: 38282056 PMCID: PMC11269663 DOI: 10.1038/s41598-024-52833-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2023] [Accepted: 01/24/2024] [Indexed: 01/30/2024] Open
Abstract
Staining is a crucial step in histopathology that prepares tissue sections for microscopic examination. Hematoxylin and eosin (H&E) staining, also known as basic or routine staining, is used in 80% of histopathology slides worldwide. To enhance the histopathology workflow, recent research has focused on integrating generative artificial intelligence and deep learning models. These models have the potential to improve staining accuracy, reduce staining time, and minimize the use of hazardous chemicals, making histopathology a safer and more efficient field. In this study, we introduce a novel three-stage, dual contrastive learning-based, image-to-image generative (DCLGAN) model for virtually applying an "H&E stain" to unstained skin tissue images. The proposed model utilizes a unique learning setting comprising two pairs of generators and discriminators. By employing contrastive learning, our model maximizes the mutual information between traditional H&E-stained and virtually stained H&E patches. Our dataset consists of pairs of unstained and H&E-stained images, scanned with a brightfield microscope at 20 × magnification, providing a comprehensive set of training and testing images for evaluating the efficacy of our proposed model. Two metrics, Fréchet Inception Distance (FID) and Kernel Inception Distance (KID), were used to quantitatively evaluate virtually stained slides. Our analysis revealed that the average FID score between virtually stained and H&E-stained images (80.47) was considerably lower than that between unstained and virtually stained slides (342.01) and that between unstained and H&E-stained images (320.4), indicating a similarity between virtual and H&E stains. Similarly, the mean KID score between H&E-stained and virtually stained images (0.022) was significantly lower than the mean KID score between unstained and H&E-stained (0.28) or unstained and virtually stained (0.31) images. In addition, a group of experienced dermatopathologists evaluated traditional and virtually stained images and demonstrated an average agreement of 78.8% and 90.2% for paired and single virtually stained image evaluations, respectively. Our study demonstrates that the proposed three-stage dual contrastive learning-based image-to-image generative model is effective in generating virtually stained images, as indicated by quantified parameters and grader evaluations. In addition, our findings suggest that GAN models have the potential to replace traditional H&E staining, which can reduce both time and environmental impact. This study highlights the promise of virtual staining as a viable alternative to traditional staining techniques in histopathology.
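FID compares the mean and covariance of deep image embeddings (conventionally taken from a pretrained Inception network) between two image sets. The sketch below applies the FID formula to precomputed feature vectors; the random features are placeholders so the snippet runs stand-alone, and the Inception feature extraction itself is assumed to happen elsewhere.

```python
# Fréchet Inception Distance between two sets of precomputed feature vectors.
# In practice the features come from a pretrained Inception network; random vectors
# are used here only so the snippet runs stand-alone.
import numpy as np
from scipy import linalg

def fid(feats_a, feats_b):
    """feats_*: (n_samples, n_features) arrays of image embeddings."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)   # matrix square root
    covmean = covmean.real                                 # drop tiny imaginary parts
    diff = mu_a - mu_b
    return diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean)

rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 64))            # e.g. embeddings of H&E-stained patches
virtual_feats = rng.normal(size=(500, 64)) + 0.1   # e.g. embeddings of virtually stained patches
print("FID:", fid(real_feats, virtual_feats))      # lower = more similar distributions
```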
Collapse
Affiliation(s)
- Muhammad Zeeshan Asaf
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
| | - Babar Rao
- Center for Dermatology, Rutgers Robert Wood Johnson Medical School, Somerset, NJ, 08873, USA
- Department of Dermatology, Weill Cornell Medicine, New York, NY, 10021, USA
| | - Muhammad Usman Akram
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan.
| | - Sajid Gul Khawaja
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
| | - Samavia Khan
- Center for Dermatology, Rutgers Robert Wood Johnson Medical School, Somerset, NJ, 08873, USA
| | - Thu Minh Truong
- Center for Dermatology, Rutgers Robert Wood Johnson Medical School, Somerset, NJ, 08873, USA
- Department of Pathology, Immunology and Laboratory Medicine, New Jersey Medical School, 185 South Orange Ave, Newark, NJ, 07103, USA
| | - Palveen Sekhon
- EIV Diagnostics, Fresno, CA, USA
- University of California, San Francisco School of Medicine, San Francisco, USA
| | - Irfan J Khan
- Department of Pathology, St. Luke's University Health Network, Bethlehem, PA, 18015, USA
| | | |
Collapse
|
21
|
Tozawa A, Mori H, Ao M, Miyauchi R, Tsuji Y, Matsumoto M, Murakami M, Nakaoka H, Fujisawa Y. Factors for Improving Diagnosis of Skin Tumors. JOURNAL OF PLASTIC AND RECONSTRUCTIVE SURGERY 2024; 3:29-33. [PMID: 40104416 PMCID: PMC11913009 DOI: 10.53045/jprs.2021-0037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 05/19/2023] [Indexed: 03/20/2025]
Abstract
Background Although several studies have investigated the accuracy of clinical diagnoses of skin tumors, specific ways to improve diagnostic accuracy have not been identified. This study investigated factors that influence the accuracy of clinical skin tumor diagnostic methods and discusses strategies to improve accuracy. Methods Study 1 retrospectively analyzed 657 skin tumors excised at our hospital between March 2001 and March 2011. Data were extracted from surgical records to establish a diagnostic template for further research. Study 2 prospectively applied this template to aid clinical diagnosis at four facilities between April 2011 and March 2013. The clinical diagnoses were compared with the histological ones and the concordance was determined. Results A total of 448 and 209 benign and malignant tumors, respectively, were included in Study 1. The overall diagnostic accuracy was 79.0%. In Study 2, 310 patients were clinically diagnosed using a standardized template, which did not affect the diagnostic accuracy. Age, sex, duration of disease, tumor size and location, skin tone, mobility, stiffness, and years of diagnostic experience did not significantly affect diagnostic accuracy. A high proportion of pathologically malignant tumors were clinically misdiagnosed as benign (16/22; 72%). Other clinical examinations were performed in only 35 cases. Conclusions Auxiliary diagnostic tools such as dermoscopy and biopsies should be used to accurately diagnose malignant tumors.
Collapse
Affiliation(s)
- Asami Tozawa
- Division of Plastic and Reconstructive Surgery, Ehime University Graduate School of Medicine, Ehime, Japan
| | - Hideki Mori
- Division of Plastic and Reconstructive Surgery, Ehime University Graduate School of Medicine, Ehime, Japan
| | - Masakazu Ao
- National Hospital Organization Iwakuni Clinical Center, Yamaguchi, Japan
| | | | | | - Mayu Matsumoto
- Division of Plastic and Reconstructive Surgery, Ehime University Graduate School of Medicine, Ehime, Japan
| | - Masamoto Murakami
- Department of Dermatology, Ehime University Graduate School of Medicine, Ehime, Japan
| | - Hiroki Nakaoka
- Division of Plastic and Reconstructive Surgery, Ehime University Graduate School of Medicine, Ehime, Japan
| | - Yasuhiro Fujisawa
- Department of Dermatology, Ehime University Graduate School of Medicine, Ehime, Japan
| |
Collapse
|
22
|
Boktor M, Tweel JED, Ecclestone BR, Ye JA, Fieguth P, Haji Reza P. Multi-channel feature extraction for virtual histological staining of photon absorption remote sensing images. Sci Rep 2024; 14:2009. [PMID: 38263394 PMCID: PMC10805725 DOI: 10.1038/s41598-024-52588-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Accepted: 01/20/2024] [Indexed: 01/25/2024] Open
Abstract
Accurate and fast histological staining is crucial in histopathology, impacting diagnostic precision and reliability. Traditional staining methods are time-consuming and subjective, causing delays in diagnosis. Digital pathology plays a vital role in advancing and optimizing histology processes to improve efficiency and reduce turnaround times. This study introduces a novel deep learning-based framework for virtual histological staining using photon absorption remote sensing (PARS) images. By extracting features from PARS time-resolved signals using a variant of the K-means method, valuable multi-modal information is captured. The proposed multi-channel cycleGAN model expands on the traditional cycleGAN framework, allowing the inclusion of additional features. Experimental results reveal that specific combinations of features outperform the conventional channels by improving the labeling of tissue structures prior to model training. When applied to human skin and mouse brain tissue, the results underscore the significance of choosing the optimal combination of features: the best combination yields substantial visual and quantitative concurrence between the virtually stained images and the gold-standard chemically stained hematoxylin and eosin images, surpassing the performance of other feature combinations. Accurate virtual staining is valuable for reliable diagnostic information, aiding pathologists in disease classification, grading, and treatment planning. This study aims to advance label-free histological imaging and opens doors for intraoperative microscopy applications.
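The feature-extraction step can be pictured as clustering the per-pixel time-resolved signals and feeding cluster-derived channels, alongside conventional channels, into the translation network. The toy scikit-learn sketch below illustrates that idea; the array shapes, the synthetic "signals", and the cluster count are placeholders, and the paper uses its own K-means variant rather than the stock algorithm.

```python
# Toy illustration: cluster per-pixel time-resolved signals with K-means and turn the
# result into extra image channels for a multi-channel translation model. Shapes, data,
# and the cluster count are placeholders; the paper uses its own K-means variant.
import numpy as np
from sklearn.cluster import KMeans

h, w, t = 64, 64, 32                        # image size and time samples per pixel
signals = np.random.rand(h * w, t)          # stand-in for PARS time-resolved traces

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(signals)
labels = kmeans.labels_.reshape(h, w)                  # per-pixel cluster index
dists = kmeans.transform(signals).reshape(h, w, 4)     # distance to each centroid

# Stack a conventional amplitude channel with the cluster-derived channels, giving a
# multi-channel input for e.g. a multi-channel cycleGAN.
amplitude = signals.max(axis=1).reshape(h, w)
features = np.dstack([amplitude, labels.astype(float), dists])
print(features.shape)                       # (64, 64, 6)
```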
Collapse
Affiliation(s)
- Marian Boktor
- PhotoMedicine Labs, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
- Vision and Image Processing Lab, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
| | - James E D Tweel
- PhotoMedicine Labs, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
- illumiSonics Inc., 22 King Street South, Suite 300, Waterloo, ON, N2J 1N8, Canada
| | - Benjamin R Ecclestone
- PhotoMedicine Labs, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
- illumiSonics Inc., 22 King Street South, Suite 300, Waterloo, ON, N2J 1N8, Canada
| | - Jennifer Ai Ye
- Vision and Image Processing Lab, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
| | - Paul Fieguth
- Vision and Image Processing Lab, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada
| | - Parsin Haji Reza
- PhotoMedicine Labs, University of Waterloo, 200 University Ave W, Waterloo, ON, N2L 3G1, Canada.
| |
Collapse
|
23
|
Liu L, Du K. A perspective on computer vision in biosensing. BIOMICROFLUIDICS 2024; 18:011301. [PMID: 38223547 PMCID: PMC10787640 DOI: 10.1063/5.0185732] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Accepted: 12/26/2023] [Indexed: 01/16/2024]
Abstract
Computer vision has become a powerful tool in the field of biosensing, aiding in the development of innovative and precise systems for the analysis and interpretation of biological data. This interdisciplinary approach harnesses the capabilities of computer vision algorithms and techniques to extract valuable information from various biosensing applications, including medical diagnostics, environmental monitoring, and food health. Despite years of development, there is still significant room for improvement in this area. In this perspective, we outline how computer vision is applied to raw sensor data in biosensors and its advantages to biosensing applications. We then discuss ongoing research and developments in the field and subsequently explore the challenges and opportunities that computer vision faces in biosensor applications. We also suggest directions for future work, ultimately underscoring the significant impact of computer vision on advancing biosensing technologies and their applications.
Collapse
Affiliation(s)
- Li Liu
- Department of Chemical and Environmental Engineering, University of California, Riverside, California 92521, USA
| | - Ke Du
- Department of Chemical and Environmental Engineering, University of California, Riverside, California 92521, USA
| |
Collapse
|
24
|
Kiryushchenkova NP. [Non-invasive automated methods for the diagnosis of periorbital skin tumors]. Vestn Oftalmol 2024; 140:137-145. [PMID: 39569787 DOI: 10.17116/oftalma2024140051137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2024]
Abstract
Malignant skin tumors are the most common type of cancer in both Russia and globally. Malignant skin tumors located in the periorbital region, particularly basal cell carcinoma, pose a significant threat to the visual organ due to the high risk of local invasion, highlighting the need for early diagnosis and timely treatment. This review discusses the main methods of non-invasive instrumental diagnosis of skin tumors in the periorbital region. Key stages in the development of these methods are briefly outlined, and their most significant advantages and disadvantages are noted. The article also considers the automation of diagnostic studies, and potential challenges with its practical implementation.
Collapse
|
25
|
Samueli B, Aizenberg N, Shaco-Levy R, Katzav A, Kezerle Y, Krausz J, Mazareb S, Niv-Drori H, Peled HB, Sabo E, Tobar A, Asa SL. Complete digital pathology transition: A large multi-center experience. Pathol Res Pract 2024; 253:155028. [PMID: 38142526 DOI: 10.1016/j.prp.2023.155028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Accepted: 12/08/2023] [Indexed: 12/26/2023]
Abstract
INTRODUCTION Transitioning from glass slide pathology to digital pathology for primary diagnostics requires an appropriate laboratory information system, an image management system, and slide scanners; it also reinforces the need for sophisticated pathology informatics including synoptic reporting. Previous reports have discussed the transition itself and relevant considerations for it, but not the selection criteria and considerations for the infrastructure. OBJECTIVE To describe the process used to evaluate slide scanners, image management systems, and synoptic reporting systems for a large multisite institution. METHODS Six network hospitals evaluated six slide scanners, three image management systems, and three synoptic reporting systems. Scanners were evaluated based on image quality, speed, ease of operation, and special capabilities (including z-stacking, fluorescence and others). Image management and synoptic reporting systems were evaluated for their ease of use and capacity. RESULTS Among the scanners evaluated, the Leica GT450 produced the highest quality images, while the 3DHistech Pannoramic provided fluorescence and superior z-stacking. The newest generation of scanners, released relatively recently, performed better than slightly older scanners from major manufacturers. Although the Olympus VS200 was not fully vetted due to not meeting all inclusion criteria, it is discussed herein due to its exceptional versatility. For Image Management Software, the authors believe that Sectra is, at the time of writing, the best-developed option, but this could change in the very near future as other systems improve their capabilities. All synoptic reporting systems performed impressively. CONCLUSIONS Specifics regarding quality and abilities of different components will change rapidly with time, but large pathology practices considering such a transition should be aware of the issues discussed and evaluate the most current generation to arrive at appropriate conclusions.
Collapse
Affiliation(s)
- Benzion Samueli
- Department of Pathology, Soroka University Medical Center, P.O. Box 151, Be'er Sheva 8410101, Israel; Faculty of Health Sciences, Ben Gurion University of the Negev, P.O. Box 653, Be'er Sheva 8410501, Israel.
| | - Natalie Aizenberg
- Department of Pathology, Soroka University Medical Center, P.O. Box 151, Be'er Sheva 8410101, Israel; Faculty of Health Sciences, Ben Gurion University of the Negev, P.O. Box 653, Be'er Sheva 8410501, Israel
| | - Ruthy Shaco-Levy
- Department of Pathology, Soroka University Medical Center, P.O. Box 151, Be'er Sheva 8410101, Israel; Faculty of Health Sciences, Ben Gurion University of the Negev, P.O. Box 653, Be'er Sheva 8410501, Israel; Department of Pathology, Barzilai Medical Center, 2 Ha-Histadrut St, Ashkelon 7830604, Israel
| | - Aviva Katzav
- Pathology Institute, Meir Medical Center, Kfar Saba 4428164, Israel
| | - Yarden Kezerle
- Department of Pathology, Soroka University Medical Center, P.O. Box 151, Be'er Sheva 8410101, Israel; Faculty of Health Sciences, Ben Gurion University of the Negev, P.O. Box 653, Be'er Sheva 8410501, Israel
| | - Judit Krausz
- Department of Pathology, HaEmek Medical Center, 21 Yitzhak Rabin Ave, Afula 183411, Israel
| | - Salam Mazareb
- Department of Pathology, Carmel Medical Center, 7 Michal Street, Haifa 3436212, Israel
| | - Hagit Niv-Drori
- Department of Pathology, Rabin Medical Center, 39 Jabotinsky St, Petah Tikva 4941492, Israel; Faculty of Medicine, Tel Aviv University, P.O. Box 39040, Tel Aviv 6139001, Israel
| | - Hila Belhanes Peled
- Department of Pathology, HaEmek Medical Center, 21 Yitzhak Rabin Ave, Afula 183411, Israel
| | - Edmond Sabo
- Department of Pathology, Carmel Medical Center, 7 Michal Street, Haifa 3436212, Israel; Rappaport Faculty of Medicine, Technion Israel Institute of Technology, Haifa 3525433, Israel
| | - Ana Tobar
- Department of Pathology, Rabin Medical Center, 39 Jabotinsky St, Petah Tikva 4941492, Israel; Faculty of Medicine, Tel Aviv University, P.O. Box 39040, Tel Aviv 6139001, Israel
| | - Sylvia L Asa
- Institute of Pathology, University Hospitals Cleveland Medical Center, Case Western Reserve University, 11100 Euclid Avenue, Room 204, Cleveland, OH 44106, USA
| |
Collapse
|
26
|
Abraham TM, Casteleiro Costa P, Filan C, Guang Z, Zhang Z, Neill S, Olson JJ, Levenson R, Robles FE. Label- and slide-free tissue histology using 3D epi-mode quantitative phase imaging and virtual hematoxylin and eosin staining. OPTICA 2023; 10:1605-1618. [PMID: 39640229 PMCID: PMC11620277 DOI: 10.1364/optica.502859] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/08/2023] [Accepted: 10/25/2023] [Indexed: 12/07/2024]
Abstract
Histological staining of tissue biopsies, especially hematoxylin and eosin (H&E) staining, serves as the benchmark for disease diagnosis and comprehensive clinical assessment of tissue. However, the typical formalin-fixation, paraffin-embedding (FFPE) process is laborious and time consuming, often limiting its usage in time-sensitive applications such as surgical margin assessment. To address these challenges, we combine an emerging 3D quantitative phase imaging technology, termed quantitative oblique back illumination microscopy (qOBM), with an unsupervised generative adversarial network pipeline to map qOBM phase images of unaltered thick tissues (i.e., label- and slide-free) to virtually stained H&E-like (vH&E) images. We demonstrate that the approach achieves high-fidelity conversions to H&E with subcellular detail using fresh tissue specimens from mouse liver, rat gliosarcoma, and human gliomas. We also show that the framework directly enables additional capabilities such as H&E-like contrast for volumetric imaging. The quality and fidelity of the vH&E images are validated using both a neural network classifier trained on real H&E images and tested on virtual H&E images, and a user study with neuropathologists. Given its simple and low-cost embodiment and ability to provide real-time feedback in vivo, this deep-learning-enabled qOBM approach could enable new workflows for histopathology with the potential to significantly save time, labor, and costs in cancer screening, detection, treatment guidance, and more.
Collapse
Affiliation(s)
- Tanishq Mathew Abraham
- Department of Biomedical Engineering, University of California, Davis, California 95616, USA
| | - Paloma Casteleiro Costa
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
| | - Caroline Filan
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
| | - Zhe Guang
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
| | - Zhaobin Zhang
- Winship Cancer Institute, Emory University, Atlanta, Georgia 30332, USA
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, Georgia 30332, USA
| | - Stewart Neill
- Winship Cancer Institute, Emory University, Atlanta, Georgia 30332, USA
- Department of Pathology & Laboratory Medicine, Emory University School of Medicine, Atlanta, Georgia 30332, USA
| | - Jeffrey J. Olson
- Winship Cancer Institute, Emory University, Atlanta, Georgia 30332, USA
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, Georgia 30332, USA
| | - Richard Levenson
- Department of Pathology and Laboratory Medicine, UC Davis Health, Sacramento, California 95817, USA
| | - Francisco E. Robles
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
| |
Collapse
|
27
|
Liu CH, Fu LW, Chen HH, Huang SL. Toward cell nuclei precision between OCT and H&E images translation using signal-to-noise ratio cycle-consistency. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 242:107824. [PMID: 37832427 DOI: 10.1016/j.cmpb.2023.107824] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Revised: 08/31/2023] [Accepted: 09/19/2023] [Indexed: 10/15/2023]
Abstract
Medical image-to-image translation is often difficult and of limited effectiveness due to the differences in image acquisition mechanisms and the diverse structure of biological tissues. This work presents an unpaired image translation model between in-vivo optical coherence tomography (OCT) and ex-vivo Hematoxylin and eosin (H&E) stained images without the need for image stacking, registration, post-processing, and annotation. The model can generate high-quality and highly accurate virtual medical images, and is robust and bidirectional. Our framework introduces random noise to (1) blur redundant features, (2) defend against self-adversarial attacks, (3) stabilize inverse conversion, and (4) mitigate the impact of OCT speckles. We also demonstrate that our model can be pre-trained and then fine-tuned using images from different OCT systems in just a few epochs. Qualitative and quantitative comparisons with traditional image-to-image translation models show the robustness of our proposed signal-to-noise ratio (SNR) cycle-consistency method.
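A central ingredient described above is injecting random noise so the generators cannot hide information in imperceptible patterns (self-adversarial attacks). The sketch below shows a bare-bones noise-perturbed cycle-consistency term with placeholder networks; the SNR-based weighting of the actual method is not reproduced.

```python
# Bare-bones noise-perturbed cycle-consistency: corrupt the intermediate translation with
# Gaussian noise before mapping it back, so the generators cannot rely on hidden
# high-frequency codes. Placeholder networks only; the SNR-based weighting of the actual
# method is not reproduced here.
import torch
import torch.nn as nn

g_oct2he = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))    # OCT -> virtual H&E
g_he2oct = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))    # H&E -> virtual OCT
l1 = nn.L1Loss()

oct_img = torch.rand(2, 1, 64, 64)
fake_he = g_oct2he(oct_img)
noisy_he = fake_he + 0.05 * torch.randn_like(fake_he)       # noise breaks hidden codes
cycle_oct = g_he2oct(noisy_he)
loss_cycle = l1(cycle_oct, oct_img)                         # cycle-consistency term
loss_cycle.backward()                                       # (adversarial terms omitted)
print(loss_cycle.item())
```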
Collapse
Affiliation(s)
- Chih-Hao Liu
- Graduate Institute of Photonics and Optoelectronics, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan.
| | - Li-Wei Fu
- Graduate Institute of Communication Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan.
| | - Homer H Chen
- Graduate Institute of Communication Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; Department of Electrical Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; Graduate Institute of Networking and Multimedia, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan.
| | - Sheng-Lung Huang
- Graduate Institute of Photonics and Optoelectronics, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; Department of Electrical Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; All Vista Healthcare Center, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan.
| |
Collapse
|
28
|
Astratov VN, Sahel YB, Eldar YC, Huang L, Ozcan A, Zheludev N, Zhao J, Burns Z, Liu Z, Narimanov E, Goswami N, Popescu G, Pfitzner E, Kukura P, Hsiao YT, Hsieh CL, Abbey B, Diaspro A, LeGratiet A, Bianchini P, Shaked NT, Simon B, Verrier N, Debailleul M, Haeberlé O, Wang S, Liu M, Bai Y, Cheng JX, Kariman BS, Fujita K, Sinvani M, Zalevsky Z, Li X, Huang GJ, Chu SW, Tzang O, Hershkovitz D, Cheshnovsky O, Huttunen MJ, Stanciu SG, Smolyaninova VN, Smolyaninov II, Leonhardt U, Sahebdivan S, Wang Z, Luk’yanchuk B, Wu L, Maslov AV, Jin B, Simovski CR, Perrin S, Montgomery P, Lecler S. Roadmap on Label-Free Super-Resolution Imaging. LASER & PHOTONICS REVIEWS 2023; 17:2200029. [PMID: 38883699 PMCID: PMC11178318 DOI: 10.1002/lpor.202200029] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Indexed: 06/18/2024]
Abstract
Label-free super-resolution (LFSR) imaging relies on light-scattering processes in nanoscale objects without a need for fluorescent (FL) staining required in super-resolved FL microscopy. The objectives of this Roadmap are to present a comprehensive vision of the developments, the state-of-the-art in this field, and to discuss the resolution boundaries and hurdles which need to be overcome to break the classical diffraction limit of the LFSR imaging. The scope of this Roadmap spans from the advanced interference detection techniques, where the diffraction-limited lateral resolution is combined with unsurpassed axial and temporal resolution, to techniques with true lateral super-resolution capability which are based on understanding resolution as an information science problem, on using novel structured illumination, near-field scanning, and nonlinear optics approaches, and on designing superlenses based on nanoplasmonics, metamaterials, transformation optics, and microsphere-assisted approaches. To this end, this Roadmap brings under the same umbrella researchers from the physics and biomedical optics communities in which such studies have often been developing separately. The ultimate intent of this paper is to create a vision for the current and future developments of LFSR imaging based on its physical mechanisms and to create a great opening for the series of articles in this field.
Collapse
Affiliation(s)
- Vasily N. Astratov
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Yair Ben Sahel
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Yonina C. Eldar
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
- David Geffen School of Medicine, University of California, Los Angeles, California 90095, USA
| | - Nikolay Zheludev
- Optoelectronics Research Centre, University of Southampton, Southampton, SO17 1BJ, UK
- Centre for Disruptive Photonic Technologies, The Photonics Institute, School of Physical and Mathematical Sciences, Nanyang Technological University, 637371, Singapore
| | - Junxiang Zhao
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zachary Burns
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zhaowei Liu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Material Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Evgenii Narimanov
- School of Electrical Engineering, and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
| | - Neha Goswami
- Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Gabriel Popescu
- Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Emanuel Pfitzner
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Philipp Kukura
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Yi-Teng Hsiao
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica 1, Roosevelt Rd. Sec. 4, Taipei 10617 Taiwan
| | - Chia-Lung Hsieh
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica 1, Roosevelt Rd. Sec. 4, Taipei 10617 Taiwan
| | - Brian Abbey
- Australian Research Council Centre of Excellence for Advanced Molecular Imaging, La Trobe University, Melbourne, Victoria, Australia
- Department of Chemistry and Physics, La Trobe Institute for Molecular Science (LIMS), La Trobe University, Melbourne, Victoria, Australia
| | - Alberto Diaspro
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Aymeric LeGratiet
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- Université de Rennes, CNRS, Institut FOTON - UMR 6082, F-22305 Lannion, France
| | - Paolo Bianchini
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Natan T. Shaked
- Tel Aviv University, Faculty of Engineering, Department of Biomedical Engineering, Tel Aviv 6997801, Israel
| | - Bertrand Simon
- LP2N, Institut d’Optique Graduate School, CNRS UMR 5298, Université de Bordeaux, Talence France
| | - Nicolas Verrier
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | | | - Olivier Haeberlé
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | - Sheng Wang
- School of Physics and Technology, Wuhan University, China
- Wuhan Institute of Quantum Technology, China
| | - Mengkun Liu
- Department of Physics and Astronomy, Stony Brook University, USA
- National Synchrotron Light Source II, Brookhaven National Laboratory, USA
| | - Yeran Bai
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Ji-Xin Cheng
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Behjat S. Kariman
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Katsumasa Fujita
- Department of Applied Physics and the Advanced Photonics and Biosensing Open Innovation Laboratory (AIST); and the Transdimensional Life Imaging Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
| | - Moshe Sinvani
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Zeev Zalevsky
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Xiangping Li
- Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Institute of Photonics Technology, Jinan University, Guangzhou 510632, China
| | - Guan-Jie Huang
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Shi-Wei Chu
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Omer Tzang
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Dror Hershkovitz
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Ori Cheshnovsky
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Mikko J. Huttunen
- Laboratory of Photonics, Physics Unit, Tampere University, FI-33014, Tampere, Finland
| | - Stefan G. Stanciu
- Center for Microscopy – Microanalysis and Information Processing, Politehnica University of Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
| | - Vera N. Smolyaninova
- Department of Physics Astronomy and Geosciences, Towson University, 8000 York Rd., Towson, MD 21252, USA
| | - Igor I. Smolyaninov
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
| | - Ulf Leonhardt
- Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Sahar Sahebdivan
- EMTensor GmbH, TechGate, Donau-City-Strasse 1, 1220 Wien, Austria
| | - Zengbo Wang
- School of Computer Science and Electronic Engineering, Bangor University, Bangor, LL57 1UT, United Kingdom
| | - Boris Luk’yanchuk
- Faculty of Physics, Lomonosov Moscow State University, Moscow 119991, Russia
| | - Limin Wu
- Department of Materials Science and State Key Laboratory of Molecular Engineering of Polymers, Fudan University, Shanghai 200433, China
| | - Alexey V. Maslov
- Department of Radiophysics, University of Nizhny Novgorod, Nizhny Novgorod, 603022, Russia
| | - Boya Jin
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Constantin R. Simovski
- Department of Electronics and Nano-Engineering, Aalto University, FI-00076, Espoo, Finland
- Faculty of Physics and Engineering, ITMO University, 199034, St-Petersburg, Russia
| | - Stephane Perrin
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Paul Montgomery
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Sylvain Lecler
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| |
Collapse
|
29
|
Martell MT, Haven NJM, Cikaluk BD, Restall BS, McAlister EA, Mittal R, Adam BA, Giannakopoulos N, Peiris L, Silverman S, Deschenes J, Li X, Zemp RJ. Deep learning-enabled realistic virtual histology with ultraviolet photoacoustic remote sensing microscopy. Nat Commun 2023; 14:5967. [PMID: 37749108 PMCID: PMC10519961 DOI: 10.1038/s41467-023-41574-2] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Accepted: 09/11/2023] [Indexed: 09/27/2023] Open
Abstract
The goal of oncologic surgeries is complete tumor resection, yet positive margins are frequently found postoperatively using gold standard H&E-stained histology methods. Frozen section analysis is sometimes performed for rapid intraoperative margin evaluation, albeit with known inaccuracies. Here, we introduce a label-free histological imaging method based on an ultraviolet photoacoustic remote sensing and scattering microscope, combined with unsupervised deep learning using a cycle-consistent generative adversarial network for realistic virtual staining. Unstained tissues are scanned at rates of up to 7 mins/cm2, at resolution equivalent to 400x digital histopathology. Quantitative validation suggests strong concordance with conventional histology in benign and malignant prostate and breast tissues. In diagnostic utility studies we demonstrate a mean sensitivity and specificity of 0.96 and 0.91 in breast specimens, and respectively 0.87 and 0.94 in prostate specimens. We also find virtual stain quality is preferred (P = 0.03) compared to frozen section analysis in a blinded survey of pathologists.
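The diagnostic-utility figures quoted above reduce to standard confusion-matrix arithmetic. The short sketch below shows how sensitivity and specificity follow from counts of pathologist calls; the counts used are invented for illustration and are not the study's data.

```python
# Sensitivity/specificity from a 2x2 confusion matrix of pathologist calls on virtually
# stained slides. The counts below are invented for illustration; they are NOT the
# study's data.
def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # malignant cases correctly called malignant
    specificity = tn / (tn + fp)   # benign cases correctly called benign
    return sensitivity, specificity

sens, spec = diagnostic_metrics(tp=48, fn=2, tn=45, fp=5)   # hypothetical counts
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```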
Collapse
Affiliation(s)
- Matthew T Martell
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, AB, T6G 2R3, Canada
| | - Nathaniel J M Haven
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, AB, T6G 2R3, Canada
| | - Brendyn D Cikaluk
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, AB, T6G 2R3, Canada
| | - Brendon S Restall
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, AB, T6G 2R3, Canada
| | - Ewan A McAlister
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, AB, T6G 2R3, Canada
| | - Rohan Mittal
- Department of Laboratory Medicine and Pathology, University of Alberta, 11405 87 Avenue NW, Edmonton, AB, T6G 1C9, Canada
| | - Benjamin A Adam
- Department of Laboratory Medicine and Pathology, University of Alberta, 11405 87 Avenue NW, Edmonton, AB, T6G 1C9, Canada
| | - Nadia Giannakopoulos
- Department of Laboratory Medicine and Pathology, University of Alberta, 11405 87 Avenue NW, Edmonton, AB, T6G 1C9, Canada
| | - Lashan Peiris
- Department of Surgery, University of Alberta, 8440 - 112 Street, Edmonton, AB, T6G 2B7, Canada
| | - Sveta Silverman
- Department of Laboratory Medicine and Pathology, University of Alberta, 11405 87 Avenue NW, Edmonton, AB, T6G 1C9, Canada
| | - Jean Deschenes
- Department of Laboratory Medicine and Pathology, University of Alberta, 11405 87 Avenue NW, Edmonton, AB, T6G 1C9, Canada
| | - Xingyu Li
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, AB, T6G 2R3, Canada
| | - Roger J Zemp
- Department of Electrical and Computer Engineering, University of Alberta, 116 Street & 85 Avenue, Edmonton, AB, T6G 2R3, Canada.
| |
Collapse
|
30
|
Fanous MJ, Pillar N, Ozcan A. Digital staining facilitates biomedical microscopy. FRONTIERS IN BIOINFORMATICS 2023; 3:1243663. [PMID: 37564725 PMCID: PMC10411189 DOI: 10.3389/fbinf.2023.1243663] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Accepted: 07/17/2023] [Indexed: 08/12/2023] Open
Abstract
Traditional staining of biological specimens for microscopic imaging entails time-consuming, laborious, and costly procedures, in addition to producing inconsistent labeling and causing irreversible sample damage. In recent years, computational "virtual" staining using deep learning techniques has evolved into a robust and comprehensive application for streamlining the staining process without typical histochemical staining-related drawbacks. Such virtual staining techniques can also be combined with neural networks designed to correct various microscopy aberrations, such as out-of-focus or motion blur artifacts, and improve upon diffraction-limited resolution. Here, we highlight how such methods lead to a host of new opportunities that can significantly improve both sample preparation and imaging in biomedical microscopy.
Collapse
Affiliation(s)
- Michael John Fanous
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States
| | - Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States
- Bioengineering Department, University of California, Los Angeles, CA, United States
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States
- Bioengineering Department, University of California, Los Angeles, CA, United States
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, United States
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, United States
| |
Collapse
|
31
|
Gao G, Miyasato D, Barner LA, Serafin R, Bishop KW, Xie W, Glaser AK, Rosenthal EL, True LD, Liu JT. Comprehensive Surface Histology of Fresh Resection Margins With Rapid Open-Top Light-Sheet (OTLS) Microscopy. IEEE Trans Biomed Eng 2023; 70:2160-2171. [PMID: 37021859 PMCID: PMC10324671 DOI: 10.1109/tbme.2023.3237267] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
OBJECTIVE For tumor resections, margin status typically correlates with patient survival but positive margin rates are generally high (up to 45% for head and neck cancer). Frozen section analysis (FSA) is often used to intraoperatively assess the margins of excised tissue, but suffers from severe under-sampling of the actual margin surface, inferior image quality, slow turnaround, and tissue destructiveness. METHODS Here, we have developed an imaging workflow to generate en face histologic images of freshly excised surgical margin surfaces based on open-top light-sheet (OTLS) microscopy. Key innovations include (1) the ability to generate false-colored H&E-mimicking images of tissue surfaces stained for < 1 min with a single fluorophore, (2) rapid OTLS surface imaging at a rate of 15 min/cm2 followed by real-time post-processing of datasets within RAM at a rate of 5 min/cm2, and (3) rapid digital surface extraction to account for topological irregularities at the tissue surface. RESULTS In addition to the performance metrics listed above, we show that the image quality generated by our rapid surface-histology method approaches that of gold-standard archival histology. CONCLUSION OTLS microscopy can feasibly provide intraoperative guidance of surgical oncology procedures. SIGNIFICANCE The reported methods can potentially improve tumor-resection procedures, thereby improving patient outcomes and quality of life.
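Related label-free histology pipelines commonly convert fluorescence channels into an H&E-like RGB image with a Beer-Lambert false-coloring model. The sketch below illustrates that general step using two hypothetical channels (nuclear and cytoplasmic) and illustrative absorption coefficients; it is not the paper's single-fluorophore procedure.

```python
# Beer-Lambert style false coloring: map two fluorescence channels (nuclear and
# cytoplasmic) to an H&E-like RGB image. The absorption coefficients below are
# illustrative values, not calibrated constants from the paper.
import numpy as np

def false_color_he(nuclear, cyto, k_nuc=1.5, k_cyto=1.0):
    """nuclear, cyto: 2D float arrays normalized to [0, 1]."""
    beta_hematoxylin = np.array([0.65, 0.90, 0.30])   # per-RGB attenuation (illustrative)
    beta_eosin = np.array([0.10, 0.95, 0.70])
    optical_density = (k_nuc * nuclear[..., None] * beta_hematoxylin
                       + k_cyto * cyto[..., None] * beta_eosin)
    rgb = np.exp(-optical_density)                    # white background where signal is low
    return (255 * rgb).astype(np.uint8)

nuclear = np.random.rand(128, 128)                    # stand-in fluorescence channels
cyto = np.random.rand(128, 128)
he_like = false_color_he(nuclear, cyto)
print(he_like.shape, he_like.dtype)                   # (128, 128, 3) uint8
```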
Collapse
Affiliation(s)
- Gan Gao
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
| | - Dominie Miyasato
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
| | - Lindsey A. Barner
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
| | - Robert Serafin
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
| | - Kevin W. Bishop
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Department of Bioengineering, University of Washington, Seattle, WA, USA
| | - Weisi Xie
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
| | - Adam K. Glaser
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Allen Institute for Neural Dynamics, Seattle, WA, USA
| | - Eben L. Rosenthal
- Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Lawrence D. True
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA
- Department of Urology, University of Washington, Seattle, WA, USA
| | - Jonathan T.C. Liu
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA
| |
Collapse
|
32
|
Bai B, Yang X, Li Y, Zhang Y, Pillar N, Ozcan A. Deep learning-enabled virtual histological staining of biological samples. LIGHT, SCIENCE & APPLICATIONS 2023; 12:57. [PMID: 36864032 PMCID: PMC9981740 DOI: 10.1038/s41377-023-01104-7] [Citation(s) in RCA: 74] [Impact Index Per Article: 37.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Revised: 02/10/2023] [Accepted: 02/14/2023] [Indexed: 06/18/2023]
Abstract
Histological staining is the gold standard for tissue examination in clinical pathology and life-science research, which visualizes the tissue and cellular structures using chromatic dyes or fluorescence labels to aid the microscopic assessment of tissue. However, the current histological staining workflow requires tedious sample preparation steps, specialized laboratory infrastructure, and trained histotechnologists, making it expensive, time-consuming, and not accessible in resource-limited settings. Deep learning techniques created new opportunities to revolutionize staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, and accurate alternatives to standard chemical staining methods. These techniques, broadly referred to as virtual staining, were extensively explored by multiple research groups and demonstrated to be successful in generating various types of histological stains from label-free microscopic images of unstained samples; similar approaches were also used for transforming images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this Review, we provide a comprehensive overview of the recent research advances in deep learning-enabled virtual histological staining techniques. The basic concepts and the typical workflow of virtual staining are introduced, followed by a discussion of representative works and their technical innovations. We also share our perspectives on the future of this emerging field, aiming to inspire readers from diverse scientific fields to further expand the scope of deep learning-enabled virtual histological staining techniques and their applications.
Collapse
Affiliation(s)
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
| | - Xilin Yang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
| | - Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
| | - Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
| | - Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.
- Bioengineering Department, University of California, Los Angeles, 90095, USA.
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA.
| |
Collapse
|
33
|
Mengu D, Zhao Y, Tabassum A, Jarrahi M, Ozcan A. Diffractive interconnects: all-optical permutation operation using diffractive networks. NANOPHOTONICS (BERLIN, GERMANY) 2023; 12:905-923. [PMID: 39634345 PMCID: PMC11501510 DOI: 10.1515/nanoph-2022-0358] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Accepted: 08/23/2022] [Indexed: 12/07/2024]
Abstract
Permutation matrices form an important computational building block frequently used in fields such as communications, information security, and data processing. Optical implementation of permutation operators with a relatively large number of input-output interconnections on power-efficient, fast, and compact platforms is highly desirable. Here, we present diffractive optical networks engineered through deep learning to all-optically perform permutation operations that can scale to hundreds of thousands of interconnections between an input and an output field-of-view, using passive transmissive layers that are individually structured at the wavelength scale. Our findings indicate that the capacity of the diffractive optical network to approximate a given permutation operation increases in proportion to the number of diffractive layers and trainable transmission elements in the system. Such deeper diffractive network designs can pose practical challenges in terms of physical alignment and output diffraction efficiency. We addressed these challenges by designing misalignment-tolerant diffractive networks that can all-optically perform arbitrarily selected permutation operations, and experimentally demonstrated, for the first time, a diffractive permutation network operating in the THz part of the spectrum. Diffractive permutation networks might find applications in security, image encryption, data processing, and telecommunications; especially with carrier frequencies in wireless communications approaching THz bands, the presented diffractive permutation networks can potentially serve as channel-routing and interconnection panels in wireless networks.
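The toy model below, a sketch written for this summary rather than the authors' code, optimizes a small stack of phase-only layers with angular-spectrum propagation between them so that the output intensity approximates a target permutation of the input pixels; the grid size, wavelength, layer spacing, and training schedule are arbitrary assumptions.

# Toy trainable diffractive stack: phase masks + free-space propagation,
# optimized so the output intensity approximates a permuted input intensity.
import torch

N, wl, dx, z, n_layers = 64, 0.75e-3, 0.4e-3, 30e-3, 4  # illustrative THz-scale numbers

fx = torch.fft.fftfreq(N, d=dx)
FX, FY = torch.meshgrid(fx, fx, indexing="ij")
arg = torch.clamp(1.0 / wl**2 - FX**2 - FY**2, min=0.0)
H = torch.exp(2j * torch.pi * z * torch.sqrt(arg))  # angular-spectrum transfer function

def propagate(u):
    return torch.fft.ifft2(torch.fft.fft2(u) * H)

phases = [torch.zeros(N, N, requires_grad=True) for _ in range(n_layers)]
opt = torch.optim.Adam(phases, lr=0.05)
perm = torch.randperm(N * N)  # target permutation of the N*N pixels

for step in range(200):
    amp = torch.rand(N, N)                              # random amplitude test pattern
    target = (amp.flatten()[perm].reshape(N, N)) ** 2   # permuted intensity
    u = amp.to(torch.complex64)
    for p in phases:                                    # phase mask, then free-space hop
        u = propagate(u * torch.exp(1j * p))
    loss = torch.mean((u.abs() ** 2 - target) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()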
Collapse
Affiliation(s)
- Deniz Mengu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA
| | - Yifan Zhao
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA
| | - Anika Tabassum
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA
| | - Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
| |
Collapse
|
34
|
Atak MF, Farabi B, Navarrete-Dechent C, Rubinstein G, Rajadhyaksha M, Jain M. Confocal Microscopy for Diagnosis and Management of Cutaneous Malignancies: Clinical Impacts and Innovation. Diagnostics (Basel) 2023; 13:diagnostics13050854. [PMID: 36899999 PMCID: PMC10001140 DOI: 10.3390/diagnostics13050854] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Revised: 02/10/2023] [Accepted: 02/20/2023] [Indexed: 02/25/2023] Open
Abstract
Cutaneous malignancies are common worldwide, with rising incidence. Most skin cancers, including melanoma, can be cured if diagnosed correctly at an early stage. Thus, millions of biopsies are performed annually, posing a major economic burden. Non-invasive skin imaging techniques can aid early diagnosis and spare patients unnecessary biopsies of benign lesions. In this review article, we discuss in vivo and ex vivo confocal microscopy (CM) techniques currently utilized in dermatology clinics for skin cancer diagnosis, along with their applications and clinical impact. Additionally, we provide a comprehensive review of advances in the field of CM, including multi-modal approaches, the integration of fluorescent targeted dyes, and the role of artificial intelligence for improved diagnosis and management.
Collapse
Affiliation(s)
- Mehmet Fatih Atak
- Department of Dermatology, New York Medical College, Metropolitan Hospital, New York, NY 10029, USA
| | - Banu Farabi
- Department of Dermatology, New York Medical College, Metropolitan Hospital, New York, NY 10029, USA
| | - Cristian Navarrete-Dechent
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Catolica de Chile, Santiago 8331150, Chile
| | | | - Milind Rajadhyaksha
- Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
| | - Manu Jain
- Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Dermatology Service, Department of Medicine, Weill Cornell Medicine, New York, NY 10021, USA
- Correspondence: ; Tel.: +1-(646)-608-3562
| |
Collapse
|
35
|
Dobre EG, Surcel M, Constantin C, Ilie MA, Caruntu A, Caruntu C, Neagu M. Skin Cancer Pathobiology at a Glance: A Focus on Imaging Techniques and Their Potential for Improved Diagnosis and Surveillance in Clinical Cohorts. Int J Mol Sci 2023; 24:1079. [PMID: 36674595 PMCID: PMC9866322 DOI: 10.3390/ijms24021079] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2022] [Revised: 01/02/2023] [Accepted: 01/03/2023] [Indexed: 01/08/2023] Open
Abstract
Early diagnosis is essential for completely eradicating skin cancer and maximizing patients' clinical benefits. Emerging optical imaging modalities such as reflectance confocal microscopy (RCM), optical coherence tomography (OCT), magnetic resonance imaging (MRI), near-infrared (NIR) bioimaging, positron emission tomography (PET), and their combinations provide non-invasive imaging data that may help in the early detection of cutaneous tumors and surgical planning. Hence, they seem appropriate for observing dynamic processes such as blood flow, immune cell activation, and tumor energy metabolism, which may be relevant for disease evolution. This review discusses the latest technological and methodological advances in imaging techniques that may be applied for skin cancer detection and monitoring. In the first instance, we will describe the principle and prospective clinical applications of the most commonly used imaging techniques, highlighting the challenges and opportunities of their implementation in the clinical setting. We will also highlight how imaging techniques may complement the molecular and histological approaches in sharpening the non-invasive skin characterization, laying the ground for more personalized approaches in skin cancer patients.
Collapse
Affiliation(s)
- Elena-Georgiana Dobre
- Faculty of Biology, University of Bucharest, Splaiul Independentei 91-95, 050095 Bucharest, Romania
| | - Mihaela Surcel
- Immunology Department, “Victor Babes” National Institute of Pathology, 050096 Bucharest, Romania
| | - Carolina Constantin
- Immunology Department, “Victor Babes” National Institute of Pathology, 050096 Bucharest, Romania
- Department of Pathology, Colentina University Hospital, 020125 Bucharest, Romania
| | | | - Ana Caruntu
- Department of Oral and Maxillofacial Surgery, “Carol Davila” Central Military Emergency Hospital, 010825 Bucharest, Romania
- Department of Oral and Maxillofacial Surgery, Faculty of Dental Medicine, “Titu Maiorescu” University, 031593 Bucharest, Romania
| | - Constantin Caruntu
- Department of Physiology, “Carol Davila” University of Medicine and Pharmacy, 050474 Bucharest, Romania
- Department of Dermatology, “Prof. N.C. Paulescu” National Institute of Diabetes, Nutrition and Metabolic Diseases, 011233 Bucharest, Romania
| | - Monica Neagu
- Faculty of Biology, University of Bucharest, Splaiul Independentei 91-95, 050095 Bucharest, Romania
- Immunology Department, “Victor Babes” National Institute of Pathology, 050096 Bucharest, Romania
- Department of Pathology, Colentina University Hospital, 020125 Bucharest, Romania
| |
Collapse
|
36
|
Pillar N, Ozcan A. Virtual tissue staining in pathology using machine learning. Expert Rev Mol Diagn 2022; 22:987-989. [PMID: 36440487 DOI: 10.1080/14737159.2022.2153040] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Accepted: 11/25/2022] [Indexed: 11/29/2022]
Affiliation(s)
- Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Department of Surgery, University of California, Los Angeles, CA, USA
| |
Collapse
|
37
|
Bai B, Wang H, Li Y, de Haan K, Colonnese F, Wan Y, Zuo J, Doan NB, Zhang X, Zhang Y, Li J, Yang X, Dong W, Darrow MA, Kamangar E, Lee HS, Rivenson Y, Ozcan A. Label-Free Virtual HER2 Immunohistochemical Staining of Breast Tissue using Deep Learning. BME FRONTIERS 2022; 2022:9786242. [PMID: 37850170 PMCID: PMC10521710 DOI: 10.34133/2022/9786242] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Accepted: 08/25/2022] [Indexed: 10/19/2023] Open
Abstract
The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting the virtual IHC images are as accurate as those of their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit comparable staining quality, in terms of nuclear detail, membrane clearness, and absence of staining artifacts, to their immunohistochemically stained counterparts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory and can be extended to other types of biomarkers to accelerate IHC tissue staining in life-science and biomedical workflows.
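One way to quantify the blinded-scoring concordance described above is a quadratically weighted Cohen's kappa over the ordinal 0/1+/2+/3+ HER2 scale; the short sketch below uses made-up scores, not the study's data, purely to illustrate the computation.

# Illustrative concordance computation between HER2 scores assigned on
# virtually stained vs. immunohistochemically stained slides.
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-slide HER2 scores from one pathologist
# (0, 1, 2, 3 stand for 0, 1+, 2+, 3+).
scores_virtual  = [0, 1, 2, 3, 2, 1, 3, 0, 2, 3]
scores_chemical = [0, 1, 2, 3, 3, 1, 3, 0, 2, 2]

kappa = cohen_kappa_score(scores_virtual, scores_chemical, weights="quadratic")
print(f"Quadratic-weighted kappa: {kappa:.2f}")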
Collapse
Affiliation(s)
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
| | - Hongda Wang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
| | - Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
| | - Kevin de Haan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
| | | | - Yujie Wan
- Physics and Astronomy Department, University of California, Los Angeles, CA 90095, USA
| | - Jingyi Zuo
- Computer Science Department, University of California, Los Angeles, CA, USA
| | - Ngan B. Doan
- Translational Pathology Core Laboratory, University of California, Los Angeles, CA 90095, USA
| | - Xiaoran Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
| | - Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
| | - Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
| | - Xilin Yang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
| | - Wenjie Dong
- Statistics Department, University of California, Los Angeles, CA 90095, USA
| | - Morgan Angus Darrow
- Department of Pathology and Laboratory Medicine, University of California at Davis, Sacramento, CA 95817, USA
| | - Elham Kamangar
- Department of Pathology and Laboratory Medicine, University of California at Davis, Sacramento, CA 95817, USA
| | - Han Sung Lee
- Department of Pathology and Laboratory Medicine, University of California at Davis, Sacramento, CA 95817, USA
| | - Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Department of Surgery, University of California, Los Angeles, CA 90095, USA
| |
Collapse
|
38
|
Zhao J, Winetraub Y, Du L, Van Vleck A, Ichimura K, Huang C, Aasi SZ, Sarin KY, de la Zerda A. Flexible method for generating needle-shaped beams and its application in optical coherence tomography. OPTICA 2022; 9:859-867. [PMID: 37283722 PMCID: PMC10243785 DOI: 10.1364/optica.456894] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 06/24/2022] [Indexed: 06/08/2023]
Abstract
Needle-shaped beams (NBs) featuring a long depth-of-focus (DOF) can drastically improve the resolution of microscopy systems. However, thus far, the implementation of a specific NB has been onerous due to the lack of a common, flexible generation method. Here we develop a spatially multiplexed phase pattern that creates many axially closely spaced foci as a universal platform for customizing various NBs, allowing flexible manipulation of beam length and diameter, uniform axial intensity, and sub-diffraction-limit beams. NBs designed via this method successfully extended the DOF of our optical coherence tomography (OCT) system, revealing clear individual epidermal cells across the entire human epidermis, fine structures of the human dermal-epidermal junction over a large depth range, and the dynamic heartbeat of live Drosophila larvae at high resolution.
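A rough numerical sketch of the core idea, as interpreted for this summary rather than the authors' implementation: superposing many thin-lens phase functions with closely spaced focal lengths and keeping only the argument yields a single phase-only mask whose foci tile an extended depth of focus. The wavelength, aperture, and focal range below are arbitrary assumptions.

# Build a spatially multiplexed phase mask by superposing lens phases with
# closely spaced focal lengths; the argument of the sum is a phase-only pattern.
import numpy as np

wl = 1.3e-6                      # assumed wavelength (m), OCT-like
k = 2 * np.pi / wl
N, pitch = 512, 4e-6             # mask samples and pixel pitch (m), assumed
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2

# Axially multiplexed foci: e.g. 21 foci spread over a 0.4 mm depth of focus
# around a 5 mm working distance (all illustrative numbers).
focal_lengths = np.linspace(5.0e-3 - 0.2e-3, 5.0e-3 + 0.2e-3, 21)

field = np.zeros((N, N), dtype=complex)
for f in focal_lengths:
    field += np.exp(-1j * k * r2 / (2 * f))   # ideal thin-lens phase for focus f

mask_phase = np.angle(field)                   # phase-only pattern for the modulator
print(mask_phase.shape, float(mask_phase.min()), float(mask_phase.max()))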
Collapse
Affiliation(s)
- Jingjing Zhao
- Department of Structural Biology, Stanford University School of Medicine, Stanford, California 94305, USA
| | - Yonatan Winetraub
- Department of Structural Biology, Stanford University School of Medicine, Stanford, California 94305, USA
- Biophysics Program at Stanford, Stanford, California 94305, USA
- Molecular Imaging Program at Stanford, Stanford, California 94305, USA
- The Bio-X Program, Stanford, California 94305, USA
| | - Lin Du
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
| | - Aidan Van Vleck
- Department of Structural Biology, Stanford University School of Medicine, Stanford, California 94305, USA
| | - Kenzo Ichimura
- Division of Pulmonary, Allergy and Critical Care, Stanford University School of Medicine, Stanford, California 94305, USA
- Vera Moulton Wall Center of Pulmonary Vascular Disease, Stanford University School of Medicine, Stanford, California 94304, USA
- Cardiovascular Institute, Stanford University School of Medicine, Stanford, California 94304, USA
| | - Cheng Huang
- Department of Biology, Stanford University, Stanford, California 94305, USA
| | - Sumaira Z. Aasi
- Department of Dermatology, Stanford University School of Medicine, Stanford, California 94305, USA
| | - Kavita Y. Sarin
- Department of Dermatology, Stanford University School of Medicine, Stanford, California 94305, USA
| | - Adam de la Zerda
- Department of Structural Biology, Stanford University School of Medicine, Stanford, California 94305, USA
- Biophysics Program at Stanford, Stanford, California 94305, USA
- Molecular Imaging Program at Stanford, Stanford, California 94305, USA
- The Bio-X Program, Stanford, California 94305, USA
- The Chan Zuckerberg Biohub, San Francisco, California 94158, USA
| |
Collapse
|
39
|
Sun J, Wu J, Wu S, Goswami R, Girardo S, Cao L, Guck J, Koukourakis N, Czarske JW. Quantitative phase imaging through an ultra-thin lensless fiber endoscope. LIGHT, SCIENCE & APPLICATIONS 2022; 11:204. [PMID: 35790748 PMCID: PMC9255502 DOI: 10.1038/s41377-022-00898-2] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Revised: 06/10/2022] [Accepted: 06/16/2022] [Indexed: 05/29/2023]
Abstract
Quantitative phase imaging (QPI) is a label-free technique providing both morphology and quantitative biophysical information in biomedicine. However, applying such a powerful technique to in vivo pathological diagnosis remains challenging. Multi-core fiber bundles (MCFs) enable ultra-thin probes for in vivo imaging, but current MCF imaging techniques are limited to amplitude imaging modalities. We demonstrate a computational lensless microendoscope that uses an ultra-thin bare MCF to perform quantitative phase imaging with microscale lateral resolution and nanoscale axial sensitivity of the optical path length. The incident complex light field at the measurement side is precisely reconstructed from the far-field speckle pattern at the detection side, enabling digital refocusing in a multi-layer sample without any mechanical movement. The accuracy of the quantitative phase reconstruction is validated by imaging the phase target and hydrogel beads through the MCF. With the proposed imaging modality, three-dimensional imaging of human cancer cells is achieved through the ultra-thin fiber endoscope, promising widespread clinical applications.
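The digital-refocusing step mentioned above can be illustrated with a standard angular-spectrum propagation routine; the sketch below assumes the complex field has already been reconstructed from the far-field speckle, and the wavelength, pixel size, and refocus distance are placeholder values chosen for this summary.

# Angular-spectrum propagation of a reconstructed complex field: shifting the
# numerical focus to another sample layer without mechanical movement.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_size, dz):
    """Propagate a square 2-D complex field by dz (all lengths in metres)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_size)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)
    H = np.exp(2j * np.pi * dz * np.sqrt(arg))       # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy usage: refocus a made-up field to a plane 50 micrometres away; the phase
# of the result carries the quantitative optical-path-length information.
field = np.exp(1j * np.random.rand(256, 256))
refocused = angular_spectrum_propagate(field, 532e-9, 1e-6, 50e-6)
phase_map = np.angle(refocused)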
Collapse
Affiliation(s)
- Jiawei Sun
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany.
- Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Dresden, Germany.
| | - Jiachen Wu
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, 100084, Beijing, China
| | - Song Wu
- Institute for Integrative Nanosciences, IFW Dresden, Helmholtzstraße 20, 01069, Dresden, Germany
| | - Ruchi Goswami
- Max Planck Institute for the Science of Light & Max-Planck-Zentrum für Physik und Medizin, 91058, Erlangen, Germany
| | - Salvatore Girardo
- Max Planck Institute for the Science of Light & Max-Planck-Zentrum für Physik und Medizin, 91058, Erlangen, Germany
| | - Liangcai Cao
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, 100084, Beijing, China
| | - Jochen Guck
- Max Planck Institute for the Science of Light & Max-Planck-Zentrum für Physik und Medizin, 91058, Erlangen, Germany
- Cluster of Excellence Physics of Life, TU Dresden, Dresden, Germany
| | - Nektarios Koukourakis
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany.
- Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Dresden, Germany.
| | - Juergen W Czarske
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany.
- Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Dresden, Germany.
- Cluster of Excellence Physics of Life, TU Dresden, Dresden, Germany.
- Institute of Applied Physics, TU Dresden, Dresden, Germany.
| |
Collapse
|
40
|
Virtual histological staining of label-free total absorption photoacoustic remote sensing (TA-PARS). Sci Rep 2022; 12:10296. [PMID: 35717539 PMCID: PMC9206643 DOI: 10.1038/s41598-022-14042-y] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 05/31/2022] [Indexed: 01/21/2023] Open
Abstract
Histopathological visualizations are a pillar of modern medicine and biological research. Surgical oncology relies exclusively on post-operative histology to determine definitive surgical success and guide adjuvant treatments. The current histology workflow is based on bright-field microscopic assessment of histochemically stained tissues and has some major limitations. For example, the preparation of stained specimens for bright-field assessment requires lengthy sample processing, delaying interventions for days or even weeks. Therefore, there is a pressing need for improved histopathology methods. In this paper, we present a deep-learning-based approach for virtual label-free histochemical staining of total-absorption photoacoustic remote sensing (TA-PARS) images of unstained tissue. TA-PARS provides an array of directly measured label-free contrasts such as scattering and total absorption (radiative and non-radiative), ideal for developing H&E colorizations without the need to infer arbitrary tissue structures. We use a Pix2Pix generative adversarial network to develop visualizations analogous to H&E staining from label-free TA-PARS images. Thin sections of human skin tissue were first virtually stained with TA-PARS and then chemically stained with H&E, producing a one-to-one comparison between the virtual and chemical staining. The matched virtually and chemically stained images exhibit high concordance, validating the digital colorization of the TA-PARS images against the gold-standard H&E. TA-PARS images were reviewed by four dermatologic pathologists who confirmed they are of diagnostic quality, and that resolution, contrast, and color permitted interpretation as if they were H&E. The presented approach paves the way for the development of TA-PARS slide-free histological imaging, which promises to dramatically reduce the time from specimen resection to histological imaging.
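Concordance between matched virtually and chemically stained images is often summarized with image-similarity metrics such as SSIM; the sketch below, using synthetic stand-in images rather than the study's data, shows one such computation with scikit-image.

# Illustrative SSIM between a virtually stained image and its chemically
# stained counterpart (synthetic arrays stand in for registered image pairs).
import numpy as np
from skimage.metrics import structural_similarity

virtual = np.random.rand(512, 512, 3).astype(np.float32)
chemical = np.clip(virtual + 0.05 * np.random.randn(512, 512, 3), 0, 1).astype(np.float32)

score = structural_similarity(virtual, chemical, channel_axis=-1, data_range=1.0)
print(f"SSIM between virtual and chemical staining: {score:.3f}")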
Collapse
|
41
|
Yang D, Zhang S, Zheng C, Zhou G, Cao L, Hu Y, Hao Q. Fourier ptychography multi-parameter neural network with composite physical priori optimization. BIOMEDICAL OPTICS EXPRESS 2022; 13:2739-2753. [PMID: 35774326 PMCID: PMC9203101 DOI: 10.1364/boe.456380] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/17/2022] [Revised: 03/22/2022] [Accepted: 03/28/2022] [Indexed: 05/31/2023]
Abstract
Fourier ptychography microscopy (FPM) is a recently developed computational imaging approach for microscopic super-resolution imaging. By sequentially turning on each light-emitting diode (LED) at a different position on the LED array and acquiring the corresponding images, which contain different spatial-frequency components, high-resolution quantitative phase imaging can be achieved over a large field of view. Nevertheless, FPM places high demands on system construction and data acquisition, such as precise LED positions, accurate focusing, and appropriate exposure times, which limits its practical applications. In this paper, inspired by artificial neural networks, we propose a Fourier ptychography multi-parameter neural network (FPMN) with composite physical prior optimization. A hybrid parameter-determination strategy combining a physical imaging model and data-driven network training is proposed to recover the multiple layers of the network corresponding to different physical parameters, including the sample complex function, system pupil function, defocus distance, LED array position deviation, and illumination intensity fluctuation. Among these parameters, the LED array position deviation is recovered from the features of low-resolution images at the bright-field-to-dark-field transition, while the others are recovered during training of the neural network. The feasibility and effectiveness of FPMN are verified through simulations and experiments. FPMN can therefore markedly reduce the requirements for practical application of FPM.
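For orientation, the sketch below encodes the Fourier ptychography forward model that such physics-informed networks embed: each LED illumination shifts the sample spectrum, the objective pupil low-pass filters it, and only the intensity of the low-resolution image is recorded. It is a toy model written for this summary, with illustrative sizes and pupil radius, not the FPMN code.

# Toy Fourier ptychography forward model: one low-resolution capture per LED.
import numpy as np

def fpm_lowres_intensity(obj, pupil, row_shift, col_shift):
    """Simulate one low-resolution FPM capture for a given LED.

    obj                  : high-resolution complex sample transmission (N x N)
    pupil                : pupil mask on the low-resolution grid (M x M)
    row_shift, col_shift : spectrum shift (in low-res pixels) set by LED angle
    """
    N, M = obj.shape[0], pupil.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    cr, cc = N // 2 + row_shift, N // 2 + col_shift      # shifted sub-aperture centre
    sub = spectrum[cr - M // 2: cr + M // 2, cc - M // 2: cc + M // 2]
    lowres_field = np.fft.ifft2(np.fft.ifftshift(sub * pupil))
    return np.abs(lowres_field) ** 2

# Toy usage: random phase object, circular pupil, one oblique LED.
N, M = 256, 64
obj = np.exp(1j * np.random.rand(N, N))
rr, cc = np.mgrid[-M // 2: M // 2, -M // 2: M // 2]
pupil = (rr**2 + cc**2 <= (0.4 * M) ** 2).astype(complex)
img = fpm_lowres_intensity(obj, pupil, row_shift=10, col_shift=-5)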
Collapse
Affiliation(s)
- Delong Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
| | - Shaohui Zhang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yangtze Delta Region Academy of Beijing Institute of Technology, China
| | - Chuanjian Zheng
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
| | - Guocheng Zhou
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
| | - Lei Cao
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
| | - Yao Hu
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
| | - Qun Hao
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yangtze Delta Region Academy of Beijing Institute of Technology, China
| |
Collapse
|
42
|
Horizontal Histopathology Correlation with In Vivo Reflectance Confocal Microscopy in Inflammatory Skin Diseases: A Review. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12041930] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Horizontal histopathological sections (HHSs) have been reported to show a strong correlation with images obtained via in vivo reflectance confocal microscopy (RCM), as both reflect the same horizontal plane of the skin. Although vertical histopathology remains the diagnostic gold standard for most neoplastic and inflammatory skin diseases, HHSs represent a useful tool to validate the RCM features of some inflammatory disorders, including psoriasis, discoid lupus erythematosus, and eczema. The aim of the present review is to summarize the state of the art on the existing correlations between HHS and RCM in this field and to emphasize that RCM may represent a useful diagnostic tool to discriminate between diseases with similar clinical presentations.
Collapse
|