51. Sun T. Light People: Professor Aydogan Ozcan. Light: Science & Applications 2021; 10:208. [PMID: 34611128; PMCID: PMC8491441; DOI: 10.1038/s41377-021-00643-1]
Abstract
In 2016, the news that Google's artificial intelligence (AI) program AlphaGo, based on the principles of deep learning, defeated Lee Sedol, the former world Go champion and renowned Korean 9-dan player, caused a sensation in both the AI and Go communities and marked an epoch in the development of deep learning. Deep learning is a complex machine learning approach that uses multiple layers of artificial neural networks to automatically analyze signals or data. It has already permeated daily life through applications such as face recognition and speech recognition, and scientists have achieved many remarkable results with it. Professor Aydogan Ozcan of the University of California, Los Angeles (UCLA) has led his team's research on deep learning algorithms, providing new ideas for the exploration of optical computational imaging and sensing technology and introducing image generation and reconstruction methods that have brought major technological innovations to related fields. Optical designs and devices are moving from being physically driven to being data-driven. We are honored to have Aydogan Ozcan, Fellow of the National Academy of Inventors and Chancellor's Professor at UCLA, interpret his latest scientific research results and his outlook for the future development of related fields, and share his journey in optics, his long-standing relationship with Light: Science & Applications (LSA), and his experience in talent cultivation.
Affiliation(s)
- Tingting Sun: Light Publishing Group, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, 3888 Dong Nan Hu Road, Changchun, 130033, China
52. Bao S, Tang Y, Lee HH, Gao R, Chiron S, Lyu I, Coburn LA, Wilson KT, Roland JT, Landman BA, Huo Y. Random Multi-Channel Image Synthesis for Multiplexed Immunofluorescence Imaging. Proceedings of Machine Learning Research 2021; 156:36-46. [PMID: 34993490; PMCID: PMC8730359]
Abstract
Multiplex immunofluorescence (MxIF) is an emerging imaging technique that provides high-sensitivity, high-specificity single-cell mapping. With a tenet of "seeing is believing", MxIF enables iterative staining and imaging with extensive antibody panels, providing comprehensive biomarkers to segment and group different cells on a single tissue section. However, considerable depletion of the scarce tissue is inevitable after extensive rounds of staining and bleaching ("missing tissue"). Moreover, immunofluorescence (IF) imaging can fail globally for particular rounds ("missing stain"). In this work, we focus on the "missing stain" issue. It would be appealing to develop digital image synthesis approaches that restore missing stain images without physically consuming more tissue. Herein, we develop image synthesis approaches for eleven MxIF structural molecular markers (i.e., epithelial and stromal) on real samples. We propose a novel multi-channel high-resolution image synthesis approach, called pixN2N-HD, to tackle possible missing-stain scenarios via a high-resolution generative adversarial network (GAN). Our contribution is three-fold: (1) a single deep network framework is proposed to handle missing stains in MxIF; (2) the proposed "N-to-N" strategy reduces a theoretical four years of computational time to 20 hours when covering all possible missing-stain scenarios with up to five missing stains (e.g., "(N-1)-to-1", "(N-2)-to-2"); and (3) this work is the first comprehensive experimental study of cross-stain synthesis in MxIF. Our results point to a promising direction for advancing MxIF imaging with deep image synthesis.
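The core of the "N-to-N" strategy above is that one network is trained on randomly masked channel subsets instead of training a separate model for every missing-stain combination. The sketch below illustrates that masking idea in PyTorch under stated assumptions (an 11-channel stack, a toy convolutional generator, and an L1 reconstruction loss only); it is not the authors' pixN2N-HD code, which uses a high-resolution GAN.

```python
# Minimal sketch (not the authors' code): "N-to-N" training with random
# channel masking, assuming an 11-channel MxIF stack and an L1 loss only.
import torch
import torch.nn as nn

N_CHANNELS = 11  # structural MxIF markers (assumption for illustration)

def mask_random_channels(x, max_missing=5):
    """Zero out 1..max_missing randomly chosen channels per sample."""
    x = x.clone()
    mask = torch.ones_like(x)
    for i in range(x.shape[0]):
        k = torch.randint(1, max_missing + 1, (1,)).item()
        idx = torch.randperm(N_CHANNELS)[:k]
        mask[i, idx] = 0.0
    return x * mask, mask

# A toy fully convolutional generator standing in for the high-resolution GAN.
generator = nn.Sequential(
    nn.Conv2d(N_CHANNELS, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, N_CHANNELS, 3, padding=1),
)
optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)

full_stack = torch.rand(4, N_CHANNELS, 128, 128)   # placeholder MxIF patches
masked, mask = mask_random_channels(full_stack)
pred = generator(masked)
# Penalize reconstruction only on the channels that were dropped.
loss = ((pred - full_stack).abs() * (1 - mask)).mean()
loss.backward()
optimizer.step()
```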
Affiliation(s)
- Shunxing Bao: Dept. of Computer Science, Vanderbilt University, USA
- Yucheng Tang: Dept. of Electrical and Computer Engineering, Vanderbilt University, USA
- Ho Hin Lee: Dept. of Computer Science, Vanderbilt University, USA
- Riqiang Gao: Dept. of Computer Science, Vanderbilt University, USA
- Sophie Chiron: Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, USA
- Ilwoo Lyu: Computer Science & Engineering, Ulsan National Institute of Science and Technology, South Korea
- Lori A Coburn: Division of Gastroenterology, Hepatology, and Nutrition, Dept. of Medicine, Vanderbilt University Medical Center, USA
- Keith T Wilson: Division of Gastroenterology, Hepatology, and Nutrition, Dept. of Medicine, Vanderbilt University Medical Center, USA
- Joseph T Roland: Epithelial Biology Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Bennett A Landman: Dept. of Electrical and Computer Engineering, Vanderbilt University, USA
- Yuankai Huo: Dept. of Computer Science, Vanderbilt University, USA
53. Chen Z, Yu W, Wong IHM, Wong TTW. Deep-learning-assisted microscopy with ultraviolet surface excitation for rapid slide-free histological imaging. Biomedical Optics Express 2021; 12:5920-5938. [PMID: 34692225; PMCID: PMC8515972; DOI: 10.1364/boe.433597]
Abstract
Histopathological examination of tissue sections is the gold standard for disease diagnosis. However, the conventional histopathology workflow requires lengthy and laborious sample preparation to obtain thin tissue slices, causing a delay of about one week before an accurate diagnostic report can be generated. Recently, microscopy with ultraviolet surface excitation (MUSE), a rapid and slide-free imaging technique, has been developed to image fresh and thick tissues with specific molecular contrast. Here, we propose to apply an unsupervised generative adversarial network framework to translate colorful MUSE images into Deep-MUSE images that closely resemble hematoxylin and eosin staining, allowing easy adaptation by pathologists. By eliminating the need for all sample processing steps (except staining), a MUSE image with subcellular resolution for a typical brain biopsy (5 mm × 5 mm) can be acquired in 5 minutes and further translated into a Deep-MUSE image in 40 seconds, simplifying the standard histopathology workflow dramatically and providing histological images intraoperatively.
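The unsupervised MUSE-to-H&E translation described above is typically learned from unpaired images with a cycle-consistency constraint. The following is a minimal sketch of such an objective, assuming a CycleGAN-style setup with placeholder networks (`G_muse2he`, `G_he2muse`, `D_he`); the published Deep-MUSE framework may differ in architecture and loss terms.

```python
# Minimal sketch of an unpaired cycle-consistency objective (assumption: a
# CycleGAN-style setup; the published Deep-MUSE network details may differ).
import torch
import torch.nn as nn

def conv_net(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )

G_muse2he = conv_net(3, 3)   # MUSE image -> virtual H&E
G_he2muse = conv_net(3, 3)   # virtual H&E -> MUSE (for the cycle)
D_he = nn.Sequential(conv_net(3, 1), nn.AdaptiveAvgPool2d(1))  # patch critic

muse = torch.rand(2, 3, 256, 256)   # unpaired MUSE patches (placeholders)
he   = torch.rand(2, 3, 256, 256)   # unpaired real H&E patches

fake_he = G_muse2he(muse)
cycled  = G_he2muse(fake_he)

adv_loss   = ((D_he(fake_he) - 1) ** 2).mean()    # least-squares GAN term
cycle_loss = (cycled - muse).abs().mean()         # cycle consistency (L1)
gen_loss   = adv_loss + 10.0 * cycle_loss         # lambda = 10 is a common default
gen_loss.backward()
```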
Affiliation(s)
- Zhenghui Chen: Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Wentao Yu: Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Ivy H. M. Wong: Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Terence T. W. Wong: Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
54. Helgadottir S, Midtvedt B, Pineda J, Sabirsh A, Adiels CB, Romeo S, Midtvedt D, Volpe G. Extracting quantitative biological information from bright-field cell images using deep learning. Biophysics Reviews 2021; 2:031401. [PMID: 38505631; PMCID: PMC10903417; DOI: 10.1063/5.0044782]
Abstract
Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time consuming, labor intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of bright-field images by a conditional generative adversarial neural network (cGAN). We show that this is a robust and fast-converging approach to generate virtually stained images from the bright-field images and, in subsequent downstream analyses, to quantify the properties of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using bright-field images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually stained images to extract quantitative measures about these cell structures. Generating virtually stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell. To make this deep-learning-powered approach readily available for other users, we provide a Python software package, which can be easily personalized and optimized for specific virtual-staining and cell-profiling applications.
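A central point above is that the virtually stained channels feed directly into downstream quantification. The snippet below is a simple, hypothetical example of such a step, thresholding a virtually stained lipid-droplet channel and measuring droplet count and areas with SciPy; it is not taken from the authors' released Python package.

```python
# Hypothetical downstream quantification of a virtually stained channel.
import numpy as np
from scipy import ndimage

def quantify_droplets(virtual_stain, threshold=0.5, pixel_size_um=0.1):
    """Count lipid droplets and measure their areas in a virtually stained image.

    virtual_stain: 2-D array with intensities scaled to [0, 1].
    """
    binary = virtual_stain > threshold
    labels, n_droplets = ndimage.label(binary)
    # Area of each droplet in square micrometres.
    pixel_counts = ndimage.sum(binary, labels, index=range(1, n_droplets + 1))
    areas_um2 = np.asarray(pixel_counts) * pixel_size_um ** 2
    return n_droplets, areas_um2

# Example with a synthetic "virtually stained" image.
rng = np.random.default_rng(0)
img = ndimage.gaussian_filter(rng.random((512, 512)), sigma=4)
img = (img - img.min()) / (img.max() - img.min())
count, areas = quantify_droplets(img, threshold=0.75)
print(count, areas.mean() if count else 0.0)
```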
Affiliation(s)
- Saga Helgadottir: Department of Physics, University of Gothenburg, Gothenburg, Sweden
- Jesús Pineda: Department of Physics, University of Gothenburg, Gothenburg, Sweden
- Alan Sabirsh: Advanced Drug Delivery, Pharmaceutical Sciences, R&D, AstraZeneca, Gothenburg, Sweden
- Daniel Midtvedt: Department of Physics, University of Gothenburg, Gothenburg, Sweden
- Giovanni Volpe: Department of Physics, University of Gothenburg, Gothenburg, Sweden
55. Deep learning-based transformation of H&E stained tissues into special stains. Nat Commun 2021; 12:4884. [PMID: 34385460; PMCID: PMC8361203; DOI: 10.1038/s41467-021-25221-2]
Abstract
Pathology is practiced by visual inspection of histochemically stained tissue slides. While the hematoxylin and eosin (H&E) stain is most commonly used, special stains can provide additional contrast to different tissue components. Here, we demonstrate the utility of supervised learning-based computational stain transformation from H&E to special stains (Masson's Trichrome, periodic acid-Schiff and Jones silver stain) using kidney needle core biopsy tissue sections. Based on the evaluation by three renal pathologists, followed by adjudication by a fourth pathologist, we show that the generation of virtual special stains from existing H&E images improves the diagnosis of several non-neoplastic kidney diseases, sampled from 58 unique subjects (P = 0.0095). A second study found that the quality of the computationally generated special stains was statistically equivalent to those which were histochemically stained. This stain-to-stain transformation framework can improve preliminary diagnoses when additional special stains are needed, also providing significant savings in time and cost.
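The H&E-to-special-stain transformation above is learned in a supervised fashion from co-registered image pairs. Below is a minimal sketch of one paired training step, assuming a pix2pix-style conditional GAN with an adversarial term plus a heavily weighted L1 term; the networks shown are toy stand-ins, not the architecture used in the paper.

```python
# Sketch of one supervised stain-transformation training step (assumption:
# a pix2pix-style conditional GAN; not the paper's exact architecture).
import torch
import torch.nn as nn

G = nn.Sequential(                      # H&E (3 ch) -> virtual special stain (3 ch)
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
D = nn.Sequential(                      # conditional critic sees H&E + stain together
    nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1), nn.AdaptiveAvgPool2d(1),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

he, masson = torch.rand(2, 3, 256, 256), torch.rand(2, 3, 256, 256)  # paired patches
fake = G(he)
adv = ((D(torch.cat([he, fake], dim=1)) - 1) ** 2).mean()   # fool the critic
l1 = (fake - masson).abs().mean()                           # stay close to the target stain
(adv + 100.0 * l1).backward()                               # heavy L1 weight, as in pix2pix
opt_g.step()
```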
56. Tian L, Hunt B, Bell MAL, Yi J, Smith JT, Ochoa M, Intes X, Durr NJ. Deep Learning in Biomedical Optics. Lasers Surg Med 2021; 53:748-775. [PMID: 34015146; PMCID: PMC8273152; DOI: 10.1002/lsm.23414]
Abstract
This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized.
Affiliation(s)
- L. Tian: Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- B. Hunt: Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- M. A. L. Bell: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- J. Yi: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Ophthalmology, Johns Hopkins University, Baltimore, MD, USA
- J. T. Smith: Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- M. Ochoa: Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- X. Intes: Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- N. J. Durr: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
57. Guo S, Ma Y, Pan Y, Smith ZJ, Chu K. Organelle-specific phase contrast microscopy enables gentle monitoring and analysis of mitochondrial network dynamics. Biomedical Optics Express 2021; 12:4363-4379. [PMID: 34457419; PMCID: PMC8367278; DOI: 10.1364/boe.425848]
Abstract
Mitochondria are delicate organelles that play a key role in cell fate. Current research methods rely on fluorescence labeling, which introduces stress due to photobleaching and phototoxicity. Here we propose a new, gentle method to study mitochondrial dynamics, in which organelle-specific three-dimensional information is obtained in a label-free manner with high resolution and specificity and without the detrimental effects associated with staining. A mitochondrial cleavage experiment demonstrates that the label-free mitochondria-specific images not only have the required resolution and precision but also include all cells and mitochondria fairly in downstream morphological analysis, whereas fluorescence images omit dim cells and mitochondria. The robustness of the method was tested on samples of different cell lines and on data collected from multiple systems. Thus, we have demonstrated that our method is an attractive alternative for studying mitochondrial dynamics, connecting behavior and function in a simpler and more robust way than traditional fluorescence imaging.
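For the downstream morphological analysis mentioned above, one common route is to binarize the organelle-specific image and compute simple network metrics. The snippet below is an illustrative example, not the authors' pipeline, using scikit-image to count mitochondrial fragments and estimate total network length from a skeletonized mask.

```python
# Illustrative mitochondrial-network metrics from a label-free organelle image.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label
from skimage.morphology import skeletonize

def network_metrics(mito_image, pixel_size_um=0.1):
    """Return (number of fragments, total skeleton length in micrometres)."""
    binary = mito_image > threshold_otsu(mito_image)
    n_fragments = label(binary).max()          # connected components
    skeleton = skeletonize(binary)
    total_length_um = skeleton.sum() * pixel_size_um
    return n_fragments, total_length_um

rng = np.random.default_rng(1)
demo = rng.random((256, 256))                  # placeholder organelle-specific image
print(network_metrics(demo))
```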
Affiliation(s)
- Siyue Guo: Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230027, China
- Ying Ma: Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230027, China; School of Physics and Optoelectronic Engineering, Xidian University, Xi'an, Shanxi 710071, China
- Yang Pan: Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230027, China
- Zachary J Smith: Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230027, China; Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230027, China
- Kaiqin Chu: Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230027, China; Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei, Anhui 230027, China; Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230027, China
58. Kruk SS, Gao W, Choi DY, Zentgraf T, Zhang S, Kivshar Y. Nonlinear Imaging of Nanoscale Topological Corner States. Nano Letters 2021; 21:4592-4597. [PMID: 34008406; DOI: 10.1021/acs.nanolett.1c00449]
Abstract
Topological states of light represent counterintuitive optical modes localized at boundaries of finite-size optical structures that originate from the properties of the bulk. Being defined by bulk properties, such boundary states are insensitive to certain types of perturbations, thus naturally enhancing robustness of photonic circuitries. Conventionally, the N-dimensional bulk modes correspond to (N - 1)-dimensional boundary states. The higher-order bulk-boundary correspondence relates N-dimensional bulk to boundary states with dimensionality reduced by more than 1. A special interest lies in miniaturization of such higher-order topological states to the nanoscale. Here, we realize nanoscale topological corner states in metasurfaces with C6-symmetric honeycomb lattices. We directly observe nanoscale topology-empowered edge and corner localizations of light and enhancement of light-matter interactions via a nonlinear imaging technique. Control of light at the nanoscale empowered by topology may facilitate miniaturization and on-chip integration of classical and quantum photonic devices.
Affiliation(s)
- Sergey S Kruk: Nonlinear Physics Center, Research School of Physics, Australian National University, Canberra, Australian Capital Territory 2601, Australia; Department of Physics, Paderborn University, 33098 Paderborn, Germany
- Wenlong Gao: Nonlinear Physics Center, Research School of Physics, Australian National University, Canberra, Australian Capital Territory 2601, Australia; Department of Physics, Paderborn University, 33098 Paderborn, Germany
- Duk-Yong Choi: Laser Physics Center, Research School of Physics, Australian National University, Canberra, Australian Capital Territory 2601, Australia
- Thomas Zentgraf: Department of Physics, Paderborn University, 33098 Paderborn, Germany
- Shuang Zhang: School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, United Kingdom; Department of Physics and Department of Electrical & Electronic Engineering, University of Hong Kong, Hong Kong, China
- Yuri Kivshar: Nonlinear Physics Center, Research School of Physics, Australian National University, Canberra, Australian Capital Territory 2601, Australia
59. Huo Y, Deng R, Liu Q, Fogo AB, Yang H. AI applications in renal pathology. Kidney Int 2021; 99:1309-1320. [PMID: 33581198; PMCID: PMC8154730; DOI: 10.1016/j.kint.2021.01.015]
Abstract
The explosive growth of artificial intelligence (AI) technologies, especially deep learning methods, has been translated at revolutionary speed to efforts in AI-assisted healthcare. New applications of AI to renal pathology have recently become available, driven by the successful AI deployments in digital pathology. However, synergetic developments of renal pathology and AI require close interdisciplinary collaborations between computer scientists and renal pathologists. Computer scientists should understand that not every AI innovation is translatable to renal pathology, while renal pathologists should capture high-level principles of the relevant AI technologies. Herein, we provide an integrated review on current and possible future applications in AI-assisted renal pathology, by including perspectives from computer scientists and renal pathologists. First, the standard stages, from data collection to analysis, in full-stack AI-assisted renal pathology studies are reviewed. Second, representative renal pathology-optimized AI techniques are introduced. Last, we review current clinical AI applications, as well as promising future applications with the recent advances in AI.
Affiliation(s)
- Yuankai Huo: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Ruining Deng: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Quan Liu: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Agnes B Fogo: Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Haichun Yang: Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
60. Alix-Panabieres C, Magliocco A, Cortes-Hernandez LE, Eslami-S Z, Franklin D, Messina JL. Detection of cancer metastasis: past, present and future. Clin Exp Metastasis 2021; 39:21-28. [PMID: 33961169; DOI: 10.1007/s10585-021-10088-w]
Abstract
The clinical importance of metastatic spread of cancer has been recognized for centuries, and melanoma has loomed large in historical descriptions of metastases, as well as the numerous mechanistic theories espoused. The "fatal black tumor" described by Hippocrates in 5000 BC that was later termed "melanose" by Rene Laennec in 1804 was recognized to have the propensity to metastasize by William Norris in 1820. And while the prognosis of melanoma was uniformly acknowledged to be dire, Samuel Cooper described surgical removal as having the potential to improve prognosis. Subsequent to this, in 1898 Herbert Snow was the first to recognize the potential clinical benefit of removing clinically normal lymph nodes at the time of initial cancer surgery. In describing "anticipatory gland excision," he noted that "it is essential to remove, whenever possible, those lymph glands which first receive the infective protoplasm, and bar its entrance into the blood, before they have undergone increase in bulk". This revolutionary concept marked the beginning of a debate that rages today: are regional lymph nodes the first stop for metastases ("incubator" hypothesis) or does their involvement serve as an indicator of aggressive disease with inherent metastatic potential ("marker" hypothesis)? Is there a better way to improve prediction of disease outcome? This article attempts to address some of the resultant questions that were the subject of the session "Novel Frontiers in the Diagnosis of Cancer" at the 8th International Congress on Cancer Metastases, held in San Francisco, CA in October 2019. Some of these questions addressed include the significance of sentinel node metastasis in melanoma, and the optimal method for their pathologic analysis. The finding of circulating tumor cells in the blood may potentially supplant surgical techniques for detection of metastatic disease, and we are beginning to perfect techniques for their detection, understand how to apply the findings clinically, and develop clinical follow-up and treatment algorithms based on these results. Finally, we will discuss the revolutionary field of machine learning and its applications in cancer diagnosis. Computer-based learning algorithms have the potential to improve the efficiency and diagnostic accuracy of pathology, and can be used to develop novel predictors of prognosis, but significant challenges remain. This review will thus encompass the latest concepts in the detection of cancer metastasis via the lymphatic system, the circulatory system, and the role of computers in enhancing our knowledge in this field.
Affiliation(s)
- Catherine Alix-Panabieres: Laboratory of Rare Human Circulating Cells (LCCRH), University Medical Centre of Montpellier, Montpellier, France
- Zahra Eslami-S: Laboratory of Rare Human Circulating Cells (LCCRH), University Medical Centre of Montpellier, Montpellier, France
- Jane L Messina: Moffitt Cancer Center, Department of Pathology, 12902 Magnolia Drive, Tampa, FL 33612, USA
61. He H, Yan S, Lyu D, Xu M, Ye R, Zheng P, Lu X, Wang L, Ren B. Deep Learning for Biospectroscopy and Biospectral Imaging: State-of-the-Art and Perspectives. Anal Chem 2021; 93:3653-3665. [PMID: 33599125; DOI: 10.1021/acs.analchem.0c04671]
Abstract
With advances in instrumentation and sampling techniques, there has been an explosive growth of data from molecular and cellular samples. The demand to extract more information from these large data sets has greatly challenged conventional chemometric methods. Deep learning, which utilizes very large data sets to find hidden features and make accurate predictions for a wide range of applications, has been applied at a remarkable pace in biospectroscopy and biospectral imaging over the past three years. In this Feature, we first introduce the background and basic knowledge of deep learning. We then focus on its emerging applications in data preprocessing, feature detection, and modeling of biological samples for spectral analysis and spectroscopic imaging. Finally, we highlight the challenges and limitations of deep learning and offer an outlook on future directions.
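As a concrete illustration of the modeling stage discussed above, the sketch below defines a small one-dimensional convolutional network that classifies spectra (e.g., Raman or infrared) in PyTorch. It is a generic, hypothetical example rather than a model from the Feature article.

```python
# Generic 1-D CNN for spectral classification (illustrative only).
import torch
import torch.nn as nn

class SpectralCNN(nn.Module):
    def __init__(self, n_points=1000, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (n_points // 4), n_classes)

    def forward(self, x):            # x: (batch, 1, n_points)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = SpectralCNN()
spectra = torch.rand(8, 1, 1000)     # e.g., baseline-corrected, normalized spectra
labels = torch.randint(0, 3, (8,))
loss = nn.CrossEntropyLoss()(model(spectra), labels)
loss.backward()
```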
Affiliation(s)
- Hao He: School of Aerospace Engineering, Xiamen University, Xiamen 361000, China
- Sen Yan: State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, China
- Danya Lyu: State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, China
- Mengxi Xu: State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, China
- Ruiqian Ye: School of Aerospace Engineering, Xiamen University, Xiamen 361000, China
- Peng Zheng: School of Aerospace Engineering, Xiamen University, Xiamen 361000, China
- Xinyu Lu: State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, China
- Lei Wang: School of Aerospace Engineering, Xiamen University, Xiamen 361000, China
- Bin Ren: State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, China
62. An J, Chua CK, Mironov V. Application of Machine Learning in 3D Bioprinting: Focus on Development of Big Data and Digital Twin. Int J Bioprint 2021; 7:342. [PMID: 33585718; PMCID: PMC7875058; DOI: 10.18063/ijb.v7i1.342]
Abstract
The application of machine learning (ML) in bioprinting has attracted considerable attention recently. Many have focused on the benefits and potential of ML, but a clear overview of how ML shapes the future of three-dimensional (3D) bioprinting is still lacking. Here, it is proposed that two missing links, Big Data and Digital Twin, are the key to articulating the vision of future 3D bioprinting. Creating training databases from Big Data curation and building digital twins of human organs with cellular resolution and properties are the most important and urgent challenges. With these missing links in place, it is envisioned that future 3D bioprinting will become more digital and in silico, and eventually strike a balance between virtual and physical experiments toward the most efficient utilization of bioprinting resources. Furthermore, the virtual component of bioprinting and biofabrication, namely digital bioprinting, will become a new growth point for the digital industry and information technology in the future.
Affiliation(s)
- Jia An: Singapore Centre for 3D Printing, School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798
- Chee Kai Chua: Engineering Product Development, Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372
- Vladimir Mironov: 3D Bioprinting Solutions, 68/2 Kashirskoe Highway, Moscow, Russian Federation 115409
63. Kong Y, Genchev GZ, Wang X, Zhao H, Lu H. Nuclear Segmentation in Histopathological Images Using Two-Stage Stacked U-Nets With Attention Mechanism. Front Bioeng Biotechnol 2020; 8:573866. [PMID: 33195135; PMCID: PMC7649338; DOI: 10.3389/fbioe.2020.573866]
Abstract
Nuclei segmentation is a fundamental but challenging task in histopathological image analysis. One of the main problems is the existence of overlapping regions, which increases the difficulty of separating individual nuclei. In this study, to address the segmentation of nuclei and overlapping regions, we introduce a nuclei segmentation method based on a two-stage learning framework consisting of two connected Stacked U-Nets (SUNets). The proposed SUNets consist of four parallel backbone nets, which are merged by an attention generation model. In the first stage, a Stacked U-Net is utilized to predict pixel-wise segmentation of nuclei. The output binary map, together with the RGB values of the original images, is concatenated to form the input of the second stage of SUNets. Due to the sizable imbalance between overlapping and background regions, the first network is trained with a cross-entropy loss, while the second network is trained with a focal loss. We applied the method to two publicly available datasets and achieved state-of-the-art performance for nuclei segmentation: mean Aggregated Jaccard Index (AJI) results were 0.5965 and 0.6210, and F1 scores were 0.8247 and 0.8060, respectively; our method also segmented the overlapping regions between nuclei, with an average AJI of 0.3254. The proposed two-stage learning framework outperforms many current segmentation methods, and its consistently good segmentation performance on images from different organs indicates the generalized adaptability of our approach.
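The second-stage network above is trained with a focal loss to counter the heavy imbalance between overlapping-nucleus pixels and background. A compact, generic implementation of the binary focal loss is sketched below; this is the standard formulation, not necessarily the authors' exact variant.

```python
# Generic binary focal loss (Lin et al.-style formulation) for imbalanced masks.
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """logits, targets: tensors of the same shape; targets in {0, 1}."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

logits = torch.randn(2, 1, 128, 128, requires_grad=True)     # overlap-region predictions
targets = (torch.rand(2, 1, 128, 128) > 0.9).float()         # sparse positive pixels
binary_focal_loss(logits, targets).backward()
```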
Affiliation(s)
- Yan Kong: SJTU-Yale Joint Center for Biostatistics and Data Science, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- Georgi Z. Genchev: SJTU-Yale Joint Center for Biostatistics and Data Science, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; Center for Biomedical Informatics, Shanghai Engineering Research Center for Big Data in Pediatric Precision Medicine, Shanghai Children’s Hospital, Shanghai, China; Bulgarian Institute for Genomics and Precision Medicine, Sofia, Bulgaria
- Xiaolei Wang: SJTU-Yale Joint Center for Biostatistics and Data Science, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- Hongyu Zhao: Department of Biostatistics, Yale University, New Haven, CT, United States
- Hui Lu: SJTU-Yale Joint Center for Biostatistics and Data Science, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; Center for Biomedical Informatics, Shanghai Engineering Research Center for Big Data in Pediatric Precision Medicine, Shanghai Children’s Hospital, Shanghai, China
64. Rivenson Y, de Haan K, Wallace WD, Ozcan A. Emerging Advances to Transform Histopathology Using Virtual Staining. BME Frontiers 2020; 2020:9647163. [PMID: 37849966; PMCID: PMC10521663; DOI: 10.34133/2020/9647163]
Abstract
In an age where digitization is widespread in clinical and preclinical workflows, pathology is still predominantly practiced by microscopic evaluation of stained tissue specimens affixed on glass slides. Over the last decade, new high-throughput digital scanning microscopes have ushered in the era of digital pathology, which, along with recent advances in machine vision, has opened up new possibilities for computer-aided diagnosis. Despite these advances, the high infrastructural costs related to digital pathology and the perception that the digitization process is an additional, not directly reimbursable step have challenged its widespread adoption. Here, we discuss how emerging virtual staining technologies and machine learning can help to disrupt the standard histopathology workflow and create new avenues for the diagnostic paradigm that will benefit patients and healthcare systems alike via digital pathology.
Affiliation(s)
- Yair Rivenson: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA; Bioengineering Department, University of California, Los Angeles, CA, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Kevin de Haan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA; Bioengineering Department, University of California, Los Angeles, CA, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- W. Dean Wallace: Department of Pathology and Laboratory Medicine, Keck School of Medicine of USC, Los Angeles, CA, USA
- Aydogan Ozcan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA; Bioengineering Department, University of California, Los Angeles, CA, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA; Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, USA