1. Roberts EJ, Chavez T, Hexemer A, Zwart PH. DLSIA: Deep Learning for Scientific Image Analysis. J Appl Crystallogr 2024;57:392-402. PMID: 38596727; PMCID: PMC11001410; DOI: 10.1107/s1600576724001390.
Abstract
DLSIA (Deep Learning for Scientific Image Analysis) is a Python-based machine learning library that provides scientists and researchers across diverse scientific domains with a range of customizable convolutional neural network (CNN) architectures for a wide variety of image analysis tasks used in downstream data processing. DLSIA features easy-to-use architectures, such as autoencoders, tunable U-Nets and parameter-lean mixed-scale dense networks (MSDNets). Additionally, this article introduces sparse mixed-scale networks (SMSNets), generated using random graphs, sparse connections and dilated convolutions connecting different length scales. For verification, several DLSIA-instantiated networks and training scripts are employed in multiple applications, including inpainting for X-ray scattering data using U-Nets and MSDNets, segmenting 3D fibers in X-ray tomographic reconstructions of concrete using an ensemble of SMSNets, and leveraging autoencoder latent spaces for data compression and clustering. As experimental data continue to grow in scale and complexity, DLSIA provides accessible CNN construction and abstracts CNN complexities, allowing scientists to tailor their machine learning approaches, accelerate discoveries, foster interdisciplinary collaboration and advance research in scientific image analysis.
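The mixed-scale idea behind MSDNets and SMSNets rests on dilated convolutions, which enlarge a filter's receptive field without adding parameters. As a rough, library-free illustration (this is not DLSIA's API; `dilated_conv1d` is a hypothetical helper), a 1-D dilated cross-correlation can be sketched as:

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D cross-correlation with a dilated kernel.

    Spacing the kernel taps `dilation` samples apart enlarges the
    receptive field without adding parameters -- the mechanism mixed-scale
    dense networks use to connect different length scales.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1              # effective receptive field
    out_len = len(signal) - span + 1
    out = np.zeros(out_len)
    for i in range(out_len):
        taps = signal[i : i + span : dilation]  # every `dilation`-th sample
        out[i] = np.dot(taps, kernel)
    return out

x = np.arange(10, dtype=float)
k = np.array([1.0, -1.0])                      # finite-difference kernel
print(dilated_conv1d(x, k, dilation=1))        # -> nine -1.0 values
print(dilated_conv1d(x, k, dilation=3))        # -> seven -3.0 values
```

In an MSDNet, layers with different dilation rates are densely connected, so every layer sees features at several length scales at once.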
Affiliation(s)
- Eric J. Roberts: Center for Advanced Mathematics for Energy Research Applications, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; Molecular Biophysics and Integrated Bioimaging Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Tanny Chavez: Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Alexander Hexemer: Center for Advanced Mathematics for Energy Research Applications, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Petrus H. Zwart: Center for Advanced Mathematics for Energy Research Applications, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; Molecular Biophysics and Integrated Bioimaging Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; Berkeley Synchrotron Infrared Structural Biology Program, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
2. Mahbod A, Polak C, Feldmann K, Khan R, Gelles K, Dorffner G, Woitek R, Hatamikia S, Ellinger I. NuInsSeg: A fully annotated dataset for nuclei instance segmentation in H&E-stained histological images. Sci Data 2024;11:295. PMID: 38486039; PMCID: PMC10940572; DOI: 10.1038/s41597-024-03117-2.
Abstract
In computational pathology, automatic nuclei instance segmentation plays an essential role in whole slide image analysis. While many computerized approaches have been proposed for this task, supervised deep learning (DL) methods have shown superior segmentation performance compared to classical machine learning and image processing techniques. However, these models require fully annotated training datasets, which are challenging to acquire, especially in the medical domain. In this work, we release one of the largest fully manually annotated datasets of nuclei in Hematoxylin and Eosin (H&E)-stained histological images, called NuInsSeg. This dataset contains 665 image patches with more than 30,000 manually segmented nuclei from 31 human and mouse organs. Moreover, for the first time, we provide additional ambiguous area masks for the entire dataset. These vague areas represent the parts of the images where precise and deterministic manual annotations are impossible, even for human experts. The dataset and detailed step-by-step instructions to generate related segmentation masks are publicly available on the respective repositories.
Affiliation(s)
- Amirreza Mahbod: Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria; Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Christine Polak: Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Katharina Feldmann: Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Rumsha Khan: Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Katharina Gelles: Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Georg Dorffner: Institute of Artificial Intelligence, Medical University of Vienna, Vienna, 1090, Austria
- Ramona Woitek: Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria
- Sepideh Hatamikia: Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria; Austrian Center for Medical Innovation and Technology, Wiener Neustadt, 2700, Austria
- Isabella Ellinger: Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
3. Bonatti AF, Vozzi G, De Maria C. Enhancing quality control in bioprinting through machine learning. Biofabrication 2024;16:022001. PMID: 38262061; DOI: 10.1088/1758-5090/ad2189.
Abstract
Bioprinting technologies have been extensively studied in the literature to fabricate three-dimensional constructs for tissue engineering applications. However, very few examples are currently available on clinical trials using bioprinted products, due to a combination of technological challenges (i.e. difficulties in replicating the native tissue complexity, long printing times, limited choice of printable biomaterials) and regulatory barriers (i.e. no clear indication on the product classification in the current regulatory framework). In particular, quality control (QC) solutions are needed at different stages of the bioprinting workflow (including pre-process optimization, in-process monitoring, and post-process assessment) to guarantee a repeatable product which is functional and safe for the patient. In this context, machine learning (ML) algorithms can be envisioned as a promising solution for the automation of quality assessment, reducing the inter-batch variability and thus potentially accelerating the product clinical translation and commercialization. In this review, we comprehensively analyse the main ML-enabled QC solutions being developed in the bioprinting literature, evaluating different models from a technical perspective, including the amount and type of data used, the algorithms, and performance measures. Finally, we give a perspective view on current challenges and future research directions on using these technologies to enhance the quality assessment in bioprinting.
Affiliation(s)
- Amedeo Franco Bonatti: Department of Information Engineering and Research Center 'E. Piaggio', University of Pisa, Pisa, Italy
- Giovanni Vozzi: Department of Information Engineering and Research Center 'E. Piaggio', University of Pisa, Pisa, Italy
- Carmelo De Maria: Department of Information Engineering and Research Center 'E. Piaggio', University of Pisa, Pisa, Italy
4. Shao Z, Buchanan LB, Zuanazzi D, Khan YN, Khan AR, Prodger JL. Comparison between a deep-learning and a pixel-based approach for the automated quantification of HIV target cells in foreskin tissue. Sci Rep 2024;14:1985. PMID: 38263439; PMCID: PMC10806185; DOI: 10.1038/s41598-024-52613-3.
Abstract
The availability of target cells expressing the HIV receptors CD4 and CCR5 in genital tissue is a critical determinant of HIV susceptibility during sexual transmission. Quantification of immune cells in genital tissue is therefore an important outcome for studies on HIV susceptibility and prevention. Immunofluorescence microscopy allows for precise visualization of immune cells in mucosal tissues; however, this technique is limited in clinical studies by the lack of an accurate, unbiased, high-throughput image analysis method. Current pixel-based thresholding methods for cell counting struggle in tissue regions with high cell density and autofluorescence, both of which are common features in genital tissue. We describe a deep-learning approach using the publicly available StarDist method to count cells in immunofluorescence microscopy images of foreskin stained for nuclei, CD3, CD4, and CCR5. The accuracy of the model was comparable to manual counting (gold standard) and surpassed the capability of a previously described pixel-based cell counting method. We show that the performance of our deep-learning model is robust in tissue regions with high cell density and high autofluorescence. Moreover, we show that this deep-learning analysis method is both easy to implement and to adapt for the identification of other cell types in genital mucosal tissue.
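For intuition, the pixel-based baseline that the study compares against can be reduced to intensity thresholding followed by connected-component counting. The sketch below is a hypothetical minimal version of that idea, not the authors' pipeline and not StarDist; it also exposes the failure mode the abstract describes, since touching nuclei merge into a single count:

```python
import numpy as np

def count_cells_threshold(image, threshold):
    """Count connected bright regions above an intensity threshold.

    Minimal pixel-based counting baseline: binarize, then label
    4-connected components with an explicit flood fill.
    """
    mask = image > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                count += 1                     # found a new component
                stack = [(r, c)]               # flood-fill to mark it
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and not seen[y, x]:
                        seen[y, x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0                            # two separated bright "nuclei"
img[5:7, 5:7] = 1.0
print(count_cells_threshold(img, 0.5))         # -> 2
```

If the two blobs were bridged by bright pixels, as happens in dense or autofluorescent regions, this baseline would report one cell; instance-aware models such as StarDist are designed to keep such objects apart.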
Affiliation(s)
- Zhongtian Shao: Department of Microbiology and Immunology, The University of Western Ontario, 1151 Richmond St, London, ON, N6A 3K7, Canada
- Lane B Buchanan: Department of Microbiology and Immunology, The University of Western Ontario, 1151 Richmond St, London, ON, N6A 3K7, Canada
- David Zuanazzi: Department of Microbiology and Immunology, The University of Western Ontario, 1151 Richmond St, London, ON, N6A 3K7, Canada
- Yazan N Khan: Department of Microbiology and Immunology, The University of Western Ontario, 1151 Richmond St, London, ON, N6A 3K7, Canada
- Ali R Khan: Department of Medical Biophysics, The University of Western Ontario, 1151 Richmond St, London, ON, N6A 3K7, Canada
- Jessica L Prodger: Department of Microbiology and Immunology, The University of Western Ontario, 1151 Richmond St, London, ON, N6A 3K7, Canada; Department of Epidemiology and Biostatistics, The University of Western Ontario, 1151 Richmond St, London, ON, N6A 3K7, Canada
5. Das N, Saha S, Nasipuri M, Basu S, Chakraborti T. Deep-Fuzz: A synergistic integration of deep learning and fuzzy water flows for fine-grained nuclei segmentation in digital pathology. PLoS One 2023;18:e0286862. PMID: 37352172; PMCID: PMC10289330; DOI: 10.1371/journal.pone.0286862.
Abstract
Robust semantic segmentation of tumour micro-environment is one of the major open challenges in machine learning enabled computational pathology. Though deep learning based systems have made significant progress, their task agnostic data driven approach often lacks the contextual grounding necessary in biomedical applications. We present a novel fuzzy water flow scheme that takes the coarse segmentation output of a base deep learning framework to then provide a more fine-grained and instance level robust segmentation output. Our two stage synergistic segmentation method, Deep-Fuzz, works especially well for overlapping objects, and achieves state-of-the-art performance in four public cell nuclei segmentation datasets. We also show through visual examples how our final output is better aligned with pathological insights, and thus more clinically interpretable.
Affiliation(s)
- Nirmal Das: Department of Computer Science and Engineering (AIML), Institute of Engineering and Management, Kolkata, West Bengal, India; Department of Computer Science and Engineering, Jadavpur University, Kolkata, West Bengal, India
- Satadal Saha: Department of Electrical and Computer Engineering, MCKV Institute of Engineering, Howrah, West Bengal, India
- Mita Nasipuri: Department of Computer Science and Engineering, Jadavpur University, Kolkata, West Bengal, India
- Subhadip Basu: Department of Computer Science and Engineering, Jadavpur University, Kolkata, West Bengal, India
- Tapabrata Chakraborti: University College London and The Alan Turing Institute, London, United Kingdom; Linacre College, University of Oxford, Oxford, United Kingdom
6. Giacopelli G, Migliore M, Tegolo D. NeuronAlg: An Innovative Neuronal Computational Model for Immunofluorescence Image Segmentation. Sensors (Basel) 2023;23:4598. PMID: 37430509; DOI: 10.3390/s23104598.
Abstract
Background: Image analysis applications in digital pathology include various methods for segmenting regions of interest. Their identification is one of the most complex steps and therefore of great interest for the study of robust methods that do not necessarily rely on a machine learning (ML) approach. Method: A fully automatic and optimized segmentation process for different datasets is a prerequisite for classifying and diagnosing indirect immunofluorescence (IIF) raw data. This study describes a deterministic computational neuroscience approach for identifying cells and nuclei. It is very different from conventional neural network approaches but has equivalent quantitative and qualitative performance, and it is also robust against adversarial noise. The method is based on formally correct functions and does not need to be tuned on specific data sets. Results: This work demonstrates the robustness of the method against variability of parameters, such as image size, mode, and signal-to-noise ratio. We validated the method on three datasets (Neuroblastoma, NucleusSegData, and ISBI 2009 Dataset) using images annotated by independent medical doctors. Conclusions: The definition of deterministic and formally correct methods, from a functional and structural point of view, guarantees the achievement of optimized and functionally correct results. The excellent performance of our deterministic method (NeuronalAlg) in segmenting cells and nuclei from fluorescence images was measured with quantitative indicators and compared with those achieved by three published ML approaches.
Affiliation(s)
- Michele Migliore: National Research Council, Institute of Biophysics, 90153 Palermo, Italy
- Domenico Tegolo: National Research Council, Institute of Biophysics, 90153 Palermo, Italy; Dipartimento Matematica e Informatica, Università degli Studi di Palermo, 90123 Palermo, Italy
7. Ke J, Lu Y, Shen Y, Zhu J, Zhou Y, Huang J, Yao J, Liang X, Guo Y, Wei Z, Liu S, Huang Q, Jiang F, Shen D. ClusterSeg: A crowd cluster pinpointed nucleus segmentation framework with cross-modality datasets. Med Image Anal 2023;85:102758. PMID: 36731275; DOI: 10.1016/j.media.2023.102758.
Abstract
The detection and segmentation of individual cells or nuclei is often involved in image analysis across a variety of biology and biomedical applications as an indispensable prerequisite. However, the ubiquitous presence of crowded clusters with morphological variations often hinders successful instance segmentation. In this paper, nuclei cluster focused annotation strategies and frameworks are proposed to overcome this challenging practical problem. Specifically, we design a nucleus segmentation framework, namely ClusterSeg, to tackle nuclei clusters, which consists of a convolutional-transformer hybrid encoder and a 2.5-path decoder for precise predictions of nuclei instance masks, contours, and clustered edges. Additionally, an annotation-efficient clustered-edge pointed strategy pinpoints the salient and error-prone boundaries, where a partially-supervised PS-ClusterSeg is presented using ClusterSeg as the segmentation backbone. The framework is evaluated with four privately curated image sets and two public sets with characteristically severely clustered nuclei across a range of image modalities, e.g., microscope, cytopathology, and histopathology images. The proposed ClusterSeg and PS-ClusterSeg are modality-independent and generalizable, and empirically superior to current state-of-the-art approaches on multiple metrics. Our collected data, the detailed annotations for both the public and private sets, as well as the source code, are released publicly at https://github.com/lu-yizhou/ClusterSeg.
Affiliation(s)
- Jing Ke: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China; School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Yizhou Lu: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiqing Shen: Department of Computer Science, Johns Hopkins University, MD, USA
- Junchao Zhu: School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- Yijin Zhou: School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
- Jinghan Huang: Department of Biomedical Engineering, National University of Singapore, Singapore
- Jieteng Yao: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaoyao Liang: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yi Guo: School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, Australia
- Zhonghua Wei: Department of Pathology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Sheng Liu: Department of Thyroid, Breast and Vascular Surgery, Shanghai Fourth People's Hospital, School of Medicine, Tongji University, Shanghai, China
- Qin Huang: Department of Pathology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fusong Jiang: Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
8. Vu QD, Rajpoot K, Raza SEA, Rajpoot N. Handcrafted Histological Transformer (H2T): Unsupervised representation of whole slide images. Med Image Anal 2023;85:102743. PMID: 36702037; DOI: 10.1016/j.media.2023.102743.
Abstract
Diagnostic, prognostic and therapeutic decision-making of cancer in pathology clinics can now be carried out based on analysis of multi-gigapixel tissue images, also known as whole-slide images (WSIs). Recently, deep convolutional neural networks (CNNs) have been proposed to derive unsupervised WSI representations; these are attractive as they rely less on expert annotation which is cumbersome. However, a major trade-off is that higher predictive power generally comes at the cost of interpretability, posing a challenge to their clinical use where transparency in decision-making is generally expected. To address this challenge, we present a handcrafted framework based on deep CNN for constructing holistic WSI-level representations. Building on recent findings about the internal working of the Transformer in the domain of natural language processing, we break down its processes and handcraft them into a more transparent framework that we term as the Handcrafted Histological Transformer or H2T. Based on our experiments involving various datasets consisting of a total of 10,042 WSIs, the results demonstrate that H2T based holistic WSI-level representations offer competitive performance compared to recent state-of-the-art methods and can be readily utilized for various downstream analysis tasks. Finally, our results demonstrate that the H2T framework can be up to 14 times faster than the Transformer models.
Affiliation(s)
- Quoc Dang Vu: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Kashif Rajpoot: School of Computer Science, University of Birmingham, UK
- Shan E Ahmed Raza: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Nasir Rajpoot: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, London, UK; Department of Pathology, University Hospitals Coventry & Warwickshire, UK
9. Basu A, Senapati P, Deb M, Rai R, Dhal KG. A survey on recent trends in deep learning for nucleus segmentation from histopathology images. Evolving Systems 2023;15:1-46. PMID: 38625364; PMCID: PMC9987406; DOI: 10.1007/s12530-023-09491-3.
Abstract
Nucleus segmentation is an imperative step in the qualitative study of imaging datasets, considered an intricate task in histopathology image analysis. Segmenting a nucleus is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it hard to separate independent nuclei. Deep learning is swiftly paving its way in the arena of nucleus segmentation, attracting quite a few researchers with its numerous published research articles indicating its efficacy in the field. This paper presents a systematic survey on nucleus segmentation using deep learning in the last five years (2017-2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and unfolding research areas.
Affiliation(s)
- Anusua Basu: Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Pradip Senapati: Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Mainak Deb: Wipro Technologies, Pune, Maharashtra, India
- Rebika Rai: Department of Computer Applications, Sikkim University, Sikkim, India
- Krishna Gopal Dhal: Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
10. Wang S, Gramm V, Laport E, Holland-Letz T, Alonso A, Schenkel J. Transgenic HPV11-E2 protein modulates URR activity in vivo. Transgenic Res 2023;32:67-76. PMID: 36826606; PMCID: PMC10102070; DOI: 10.1007/s11248-023-00336-y.
Abstract
In vitro experiments have shown that the E2 protein of human papillomaviruses (HPV) binds to the upstream regulatory region (URR) of the viral genome and modulates transcription. Additionally, it seems to be a necessary component for viral DNA replication together with E1. We have developed a transgenic mouse model containing the URR region of the low-risk virus HPV11 that regulates the expression of the lacZ reporter gene. Most interestingly, in these mice, the transgene was exclusively expressed in the bulge region of the hair follicle but not in any other tissues. Further experimental data indicate that in double transgenic mice that also express the HPV11-E2 protein under the control of the Ubiquitin C-promoter, the transcription of the reporter gene is modulated. When E2 is present, the expression of the reporter gene still occurs exclusively in the bulge region of the hair follicles, as in the single transgenic mice, but the expression of the URR-driven lacZ is increased and the statistical spread is greater. Although the reporter gene is expressed uniformly in the hair follicles of an animal's dorsal skin, E2 evidently has the capacity both to induce and to repress URR activity in vivo.
Affiliation(s)
- Shubei Wang: Cryopreservation W430, German Cancer Research Center, Heidelberg, Germany; Institute for Physiology and Pathophysiology, University of Heidelberg, Heidelberg, Germany
- Vera Gramm: Cryopreservation W430, German Cancer Research Center, Heidelberg, Germany; Institute for Physiology and Pathophysiology, University of Heidelberg, Heidelberg, Germany
- Elke Laport: Cryopreservation W430, German Cancer Research Center, Heidelberg, Germany
- Tim Holland-Letz: Biostatistics C060, German Cancer Research Center, Heidelberg, Germany
- Angel Alonso: Tumor Virology F050, German Cancer Research Center, Heidelberg, Germany
- Johannes Schenkel: Cryopreservation W430, German Cancer Research Center, Heidelberg, Germany; Institute for Physiology and Pathophysiology, University of Heidelberg, Heidelberg, Germany; Deutsches Krebsforschungszentrum (DKFZ) W430, Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
11. Tseng JJ, Lu CH, Li JZ, Lai HY, Chen MH, Cheng FY, Kuo CE. An Open Dataset of Annotated Metaphase Cell Images for Chromosome Identification. Sci Data 2023;10:104. PMID: 36823215; PMCID: PMC9950090; DOI: 10.1038/s41597-023-02003-7.
Abstract
Chromosomes are a principal target of clinical cytogenetic studies. While chromosomal analysis is an integral part of prenatal care, the conventional manual identification of chromosomes in images is time-consuming and costly. This study developed a chromosome detector that uses deep learning and that achieved an accuracy of 98.88% in chromosomal identification. Specifically, we compiled and made available a large and publicly accessible database containing chromosome images and annotations for training chromosome detectors. The database contains five thousand annotations of the 24 chromosome classes and 2,000 single-chromosome annotations. This database also contains examples of chromosome variations. Our database provides a reference for researchers in this field and may help expedite the development of clinical applications.
Affiliation(s)
- Jenn-Jhy Tseng: Department of Obstetrics, Gynecology and Women's Health, Taichung Veterans General Hospital, No. 1650 Sec. 4 Taiwan Blvd., Xitun Dist., Taichung, 407, Taiwan
- Chien-Hsing Lu: Department of Obstetrics, Gynecology and Women's Health, Taichung Veterans General Hospital, No. 1650 Sec. 4 Taiwan Blvd., Xitun Dist., Taichung, 407, Taiwan
- Jun-Zhou Li: Department of Automatic Control Engineering, Feng Chia University, No. 100 Wenhua Rd., Xitun Dist., Taichung, 407, Taiwan
- Hui-Yu Lai: Department of Obstetrics, Gynecology and Women's Health, Taichung Veterans General Hospital, No. 1650 Sec. 4 Taiwan Blvd., Xitun Dist., Taichung, 407, Taiwan
- Min-Hu Chen: Department of Obstetrics, Gynecology and Women's Health, Taichung Veterans General Hospital, No. 1650 Sec. 4 Taiwan Blvd., Xitun Dist., Taichung, 407, Taiwan
- Fu-Yuan Cheng: Department of Obstetrics, Gynecology and Women's Health, Taichung Veterans General Hospital, No. 1650 Sec. 4 Taiwan Blvd., Xitun Dist., Taichung, 407, Taiwan
- Chih-En Kuo: Department of Applied Mathematics, National Chung Hsing University, No. 145, Xingda Rd., South Dist., Taichung, 402, Taiwan; Smart Sustainable New Agriculture Research Center (SMARTer), Taichung, 402, Taiwan
12. Deep-learning based breast cancer detection for cross-staining histopathology images. Heliyon 2023;9:e13171. PMID: 36755605; PMCID: PMC9900267; DOI: 10.1016/j.heliyon.2023.e13171.
Abstract
Hematoxylin and eosin (H&E) staining is the gold standard for tissue characterization in routine pathological diagnoses. However, these visible light dyes do not exclusively label the nuclei and cytoplasm, making clear-cut segmentation of staining signals challenging. Currently, fluorescent staining technology is much more common in clinical research for analyzing tissue morphology and protein distribution owing to its advantages of channel independence, multiplex labeling, and the possibility of enabling 3D tissue labeling. Although both H&E and fluorescent dyes can stain the nucleus and cytoplasm for representative tissue morphology, color variation between these two staining technologies makes cross-analysis difficult, especially with computer-assisted artificial intelligence (AI) algorithms. In this study, we applied color normalization and nucleus extraction methods to overcome the variation between staining technologies. We also developed an available workflow for using an H&E-stained segmentation AI model in the analysis of fluorescent nucleic acid staining images in breast cancer tumor recognition, resulting in 89.6% and 80.5% accuracy in recognizing specific tumor features in H&E- and fluorescent-stained pathological images, respectively. The results show that the cross-staining inference maintained the same precision level as the proposed workflow, providing an opportunity for an expansion of the application of current pathology AI models.
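The color-variation problem the authors address is commonly handled by matching image statistics to a reference image. The sketch below is a simplified Reinhard-style per-channel mean/variance match done directly in RGB; `match_channel_stats` is a hypothetical helper, and the paper's actual normalization and nucleus-extraction steps may differ:

```python
import numpy as np

def match_channel_stats(source, reference):
    """Shift each channel of `source` to the per-channel mean/std of `reference`.

    Simplified statistics-matching color normalization: after the transform,
    every channel of the output has the reference channel's mean and std.
    """
    src = source.astype(float)
    ref = reference.astype(float)
    out = np.empty_like(src)
    for ch in range(src.shape[-1]):
        s_mu, s_sd = src[..., ch].mean(), src[..., ch].std()
        r_mu, r_sd = ref[..., ch].mean(), ref[..., ch].std()
        scale = r_sd / s_sd if s_sd > 0 else 1.0   # guard flat channels
        out[..., ch] = (src[..., ch] - s_mu) * scale + r_mu
    return out
```

Applying such a transform to fluorescent-stained images before inference is one way to present an H&E-trained model with inputs whose statistics it has seen during training; production pipelines usually work in a perceptual color space (e.g. LAB) rather than raw RGB.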
13
Sinitca AM, Kayumov AR, Zelenikhin PV, Porfiriev AG, Kaplun DI, Bogachev MI. Segmentation of patchy areas in biomedical images based on local edge density estimation. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104189
14
UnMICST: Deep learning with real augmentation for robust segmentation of highly multiplexed images of human tissues. Commun Biol 2022; 5:1263. DOI: 10.1038/s42003-022-04076-3
Abstract
Upcoming technologies enable routine collection of highly multiplexed (20–60 channel), subcellular resolution images of mammalian tissues for research and diagnosis. Extracting single cell data from such images requires accurate image segmentation, a challenging problem commonly tackled with deep learning. In this paper, we report two findings that substantially improve image segmentation of tissues using a range of machine learning architectures. First, we unexpectedly find that the inclusion of intentionally defocused and saturated images in training data substantially improves subsequent image segmentation. Such real augmentation outperforms computational augmentation (Gaussian blurring). In addition, we find that it is practical to image the nuclear envelope in multiple tissues using an antibody cocktail thereby better identifying nuclear outlines and improving segmentation. The two approaches cumulatively and substantially improve segmentation on a wide range of tissue types. We speculate that the use of real augmentations will have applications in image processing outside of microscopy.
15
Abstract
Fluorescence microscopy has been a crucial technique for exploring cellular and molecular mechanisms in biomedicine. However, conventional one-photon microscopy exhibits many limitations when living samples are imaged. Newer technologies, including two-photon microscopy (2PM), have considerably improved the in vivo study of pathophysiological processes, allowing investigators to overcome the limits of previous techniques. 2PM enables real-time intravital imaging of biological functions in different organs at cellular and subcellular resolution thanks to its improved laser penetration and lower phototoxicity. The development of more sensitive detectors and long-wavelength fluorescent dyes, as well as the implementation of semi-automatic software for data analysis, has allowed researchers to gain insights into essential physiological functions, expanding the frontiers of cellular and molecular imaging. Future applications of 2PM promise to push intravital microscopy beyond its existing limits. In this review, we provide an overview of current state-of-the-art methods of intravital microscopy, focusing on the most recent applications of 2PM in kidney physiology.
16
Bilodeau A, Delmas CVL, Parent M, De Koninck P, Durand A, Lavoie-Cardinal F. Microscopy analysis neural network to solve detection, enumeration and segmentation from image-level annotations. Nat Mach Intell 2022. DOI: 10.1038/s42256-022-00472-w
17
Ritchie A, Laitinen S, Katajisto P, Englund JI. “Tonga”: A Novel Toolbox for Straightforward Bioimage Analysis. Frontiers in Computer Science 2022. DOI: 10.3389/fcomp.2022.777458
Abstract
Techniques to acquire and analyze biological images are central to the life sciences. However, the workflow downstream of imaging can be complex and involve several tools, leading to the creation of highly specialized scripts and pipelines that are difficult for other users to reproduce. Although many commercial and open-source software packages are available, non-expert users are often challenged by a knowledge gap in setting up analysis pipelines and selecting the correct tools for extracting data from images. Moreover, a significant share of everyday image analysis requires simple tools, such as precise segmentation, cell counting, and recording of fluorescence intensities. Hence, there is a need for user-friendly platforms for everyday image analysis that do not require extensive prior knowledge of bioimage analysis or coding. We set out to create bioimage analysis software that has a straightforward interface and covers common analysis tasks, such as object segmentation and analysis, in a practical, reproducible, and modular fashion. We envision our software being useful for the analysis of cultured cells, histological sections, and high-content data.
18
Lin YY, Wang LC, Hsieh YH, Hung YL, Chen YA, Lin YC, Lin YY, Chou TY. Computer-assisted three-dimensional quantitation of programmed death-ligand 1 in non-small cell lung cancer using tissue clearing technology. J Transl Med 2022; 20:131. PMID: 35296339; PMCID: PMC8925228; DOI: 10.1186/s12967-022-03335-5
Abstract
Immune checkpoint blockade therapy has revolutionized non-small cell lung cancer treatment. However, not all patients respond to this therapy. Assessing the tumor expression of immune checkpoint molecules, including programmed death-ligand 1 (PD-L1), is the current standard in predicting treatment response. However, the correlation between PD-L1 expression and anti-PD-1/PD-L1 treatment response is not perfect. This is partly caused by tumor heterogeneity and the common practice of assessing PD-L1 expression based on limited biopsy material. To overcome this problem, we developed a novel method that can make formalin-fixed, paraffin-embedded tissue translucent, allowing three-dimensional (3D) imaging. Our protocol can process tissues up to 150 μm in thickness, allowing anti-PD-L1 staining of the entire tissue and producing high resolution 3D images. Compared to a traditional 4 μm section, our 3D image provides 30 times more coverage of the specimen, assessing PD-L1 expression of approximately 10 times more cells. We further developed a computer-assisted PD-L1 quantitation method to analyze these images, and we found marked variation of PD-L1 expression in 3D. In 5 of 33 needle-biopsy-sized specimens (15.2%), the PD-L1 tumor proportion score (TPS) varied by greater than 10% at different depth levels. In 14 cases (42.4%), the TPS at different depth levels fell into different categories (< 1%, 1–49%, or ≥ 50%), which can potentially influence treatment decisions. Importantly, our technology permits recovery of the processed tissue for subsequent analysis, including histology examination, immunohistochemistry, and mutation analysis. In conclusion, our novel method has the potential to increase the accuracy of tumor PD-L1 expression assessment and enable precise deployment of cancer immunotherapy.
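The tumor proportion score (TPS) and the cutoff categories quoted above (<1%, 1–49%, ≥50%) reduce to a simple ratio and binning. A minimal sketch with function names of our choosing, not the authors' quantitation pipeline:

```python
def tumor_proportion_score(pdl1_positive, total_viable):
    """TPS: percentage of viable tumor cells showing PD-L1 staining."""
    if total_viable == 0:
        raise ValueError("no viable tumor cells counted")
    return 100.0 * pdl1_positive / total_viable

def tps_category(tps):
    """Clinically used TPS bins referenced in the abstract."""
    if tps < 1:
        return "<1%"
    if tps < 50:
        return "1-49%"
    return ">=50%"
```

The abstract's point is that the same specimen can yield counts at different depths that land in different bins, which is exactly a boundary effect of this categorization.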
Collapse
Affiliation(s)
- Yen-Yu Lin
- Department of Pathology and Laboratory Medicine, Taipei Veterans General Hospital, No. 201, Sec. 2, Shipai Rd., Taipei, 11217, Taiwan
- Lei-Chi Wang
- Department of Pathology and Laboratory Medicine, Taipei Veterans General Hospital, No. 201, Sec. 2, Shipai Rd., Taipei, 11217, Taiwan; Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yu-Chieh Lin
- JelloX Biotech Inc., Hsinchu, Taiwan; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan
- Teh-Ying Chou
- Department of Pathology and Laboratory Medicine, Taipei Veterans General Hospital, No. 201, Sec. 2, Shipai Rd., Taipei, 11217, Taiwan; Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan
19
Pan W, Liu Z, Song W, Zhen X, Yuan K, Xu F, Lin GN. An Integrative Segmentation Framework for Cell Nucleus of Fluorescence Microscopy. Genes (Basel) 2022; 13:431. PMID: 35327985; PMCID: PMC8950038; DOI: 10.3390/genes13030431
Abstract
Nucleus segmentation of fluorescence microscopy images is a critical step in quantifying measurements in cell biology. Automatic and accurate nucleus segmentation has powerful applications in analyzing intrinsic characteristics of nucleus morphology. However, existing methods have limited capacity to perform accurate segmentation on challenging samples, such as noisy images and clumped nuclei. In this paper, inspired by the idea of the cascaded U-Net (or W-Net) and its remarkable performance improvement in medical image segmentation, we proposed a novel framework called Attention-enhanced Simplified W-Net (ASW-Net), in which a cascade-like structure with between-net connections was used. Results showed that this lightweight model could reach remarkable segmentation performance on the BBBC039 test set (aggregated Jaccard index, 0.90). In addition, our proposed framework performed better than state-of-the-art methods in terms of segmentation performance. Moreover, we further explored the effectiveness of our designed network by visualizing its deep features. Notably, our proposed framework is open source.
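The aggregated Jaccard index reported above builds on the plain Jaccard index (intersection over union) by aggregating it over matched nucleus instances. A minimal pixel-level sketch of the underlying quantity, not the authors' evaluation code:

```python
def jaccard_index(mask_a, mask_b):
    """Intersection-over-union of two binary masks, given as flat
    lists of 0/1 pixels. The aggregated Jaccard index sums these
    intersections and unions over all matched object instances."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0
```

A score of 0.90, as in the abstract, means predicted and ground-truth nuclei overlap on 90% of their combined area after instance matching.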
Affiliation(s)
- Weihao Pan
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Zhe Liu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Weichen Song
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Xuyang Zhen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Kai Yuan
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Fei Xu
- State Key Laboratory of Functional Materials for Informatics, Shanghai Institute of Microsystem and Information Technology (SIMIT), Chinese Academy of Sciences, Shanghai 200050, China; College of Science, Donghua University, Shanghai 201620, China
- Correspondence: (F.X.); (G.N.L.)
- Guan Ning Lin
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Correspondence: (F.X.); (G.N.L.)
20
Yao K, Sun J, Huang K, Jing L, Liu H, Huang D, Jude C. Analyzing Cell-Scaffold Interaction through Unsupervised 3D Nuclei Segmentation. Int J Bioprint 2022; 8:495. PMID: 35187282; PMCID: PMC8852265; DOI: 10.18063/ijb.v8i1.495
Abstract
Fibrous scaffolds have been extensively used in three-dimensional (3D) cell culture systems to establish in vitro models in cell biology, tissue engineering, and drug screening. It is a common practice to characterize cell behaviors on such scaffolds using confocal laser scanning microscopy (CLSM). As a noninvasive technology, CLSM images can be utilized to describe cell-scaffold interaction under varied morphological features, biomaterial composition, and internal structure. Unfortunately, such information has not been fully translated and delivered to researchers due to the lack of effective cell segmentation methods. We developed herein an end-to-end model called Aligned Disentangled Generative Adversarial Network (AD-GAN) for 3D unsupervised nuclei segmentation of CLSM images. AD-GAN utilizes representation disentanglement to separate content representation (the underlying nuclei spatial structure) from style representation (the rendering of the structure) and align the disentangled content in the latent space. The CLSM images collected from fibrous scaffold-based culturing A549, 3T3, and HeLa cells were utilized for nuclei segmentation study. Compared with existing commercial methods such as Squassh and CellProfiler, our AD-GAN can effectively and efficiently distinguish nuclei with the preserved shape and location information. Building on such information, we can rapidly screen cell-scaffold interaction in terms of adhesion, migration and proliferation, so as to improve scaffold design.
Affiliation(s)
- Kai Yao
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University, 111 Ren'ai Road, Suzhou, Jiangsu 215123, China; School of Engineering, University of Liverpool, The Quadrangle, Brownlow Hill, L69 3GH, UK
- Jie Sun
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University, 111 Ren'ai Road, Suzhou, Jiangsu 215123, China
- Kaizhu Huang
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University, 111 Ren'ai Road, Suzhou, Jiangsu 215123, China
- Linzhi Jing
- National University of Singapore (Suzhou) Research Institute, 377 Linquan Street, Suzhou, Jiangsu 215123, China
- Hang Liu
- Department of Food Science and Technology, National University of Singapore, 3 Science Drive 2, 117542, Singapore
- Dejian Huang
- National University of Singapore (Suzhou) Research Institute, 377 Linquan Street, Suzhou, Jiangsu 215123, China; Department of Food Science and Technology, National University of Singapore, 3 Science Drive 2, 117542, Singapore
- Curran Jude
- School of Engineering, University of Liverpool, The Quadrangle, Brownlow Hill, L69 3GH, UK
21
MUW researcher of the month. Wien Klin Wochenschr 2022; 134:91-93. PMID: 35113205; DOI: 10.1007/s00508-022-02003-4
22
Hollandi R, Moshkov N, Paavolainen L, Tasnadi E, Piccinini F, Horvath P. Nucleus segmentation: towards automated solutions. Trends Cell Biol 2022; 32:295-310. DOI: 10.1016/j.tcb.2021.12.004
23
Bilodeau A, Bouchard C, Lavoie-Cardinal F. Automated Microscopy Image Segmentation and Analysis with Machine Learning. Methods Mol Biol 2022; 2440:349-365. PMID: 35218549; DOI: 10.1007/978-1-0716-2051-9_20
Abstract
The development of automated quantitative image analysis pipelines requires thoughtful consideration to extract meaningful information. Commonly, extraction rules for quantitative parameters are defined and agreed upon beforehand to ensure repeatability between annotators. Machine/Deep Learning (ML/DL) now provides tools to automatically extract the set of rules needed to obtain quantitative information from images (e.g., segmentation, enumeration, classification). Many parameters must be considered in the development of proper ML/DL pipelines. We herein present the important vocabulary and the necessary steps to create a thorough image segmentation pipeline, and discuss technical aspects that should be considered in the development of automated image analysis pipelines through ML/DL.
Affiliation(s)
- Anthony Bilodeau
- Université Laval, Québec, QC, Canada; CERVO Brain Research Center, Québec, QC, Canada
- Catherine Bouchard
- Université Laval, Québec, QC, Canada; CERVO Brain Research Center, Québec, QC, Canada
- Flavie Lavoie-Cardinal
- CERVO Brain Research Center, Québec, QC, Canada; Département de psychiatrie et de neurosciences, Université Laval, Québec, QC, Canada
24
Lazic D, Kromp F, Rifatbegovic F, Repiscak P, Kirr M, Mivalt F, Halbritter F, Bernkopf M, Bileck A, Ussowicz M, Ambros IM, Ambros PF, Gerner C, Ladenstein R, Ostalecki C, Taschner-Mandl S. Landscape of Bone Marrow Metastasis in Human Neuroblastoma Unraveled by Transcriptomics and Deep Multiplex Imaging. Cancers (Basel) 2021; 13:4311. PMID: 34503120; PMCID: PMC8431445; DOI: 10.3390/cancers13174311
Abstract
While the bone marrow attracts tumor cells in many solid cancers leading to poor outcome in affected patients, comprehensive analyses of bone marrow metastases have not been performed on a single-cell level. We here set out to capture tumor heterogeneity and unravel microenvironmental changes in neuroblastoma, a solid cancer with bone marrow involvement. To this end, we employed a multi-omics data mining approach to define a multiplex imaging panel and developed DeepFLEX, a pipeline for subsequent multiplex image analysis, whereby we constructed a single-cell atlas of over 35,000 disseminated tumor cells (DTCs) and cells of their microenvironment in the metastatic bone marrow niche. Further, we independently profiled the transcriptome of a cohort of 38 patients with and without bone marrow metastasis. Our results revealed vast diversity among DTCs and suggest that FAIM2 can act as a complementary marker to capture DTC heterogeneity. Importantly, we demonstrate that malignant bone marrow infiltration is associated with an inflammatory response and at the same time the presence of immuno-suppressive cell types, most prominently an immature neutrophil/granulocytic myeloid-derived suppressor-like cell type. The presented findings indicate that metastatic tumor cells shape the bone marrow microenvironment, warranting deeper investigations of spatio-temporal dynamics at the single-cell level and their clinical relevance.
Affiliation(s)
- Daria Lazic
- St. Anna Children’s Cancer Research Institute (CCRI), 1090 Vienna, Austria
- Florian Kromp
- St. Anna Children’s Cancer Research Institute (CCRI), 1090 Vienna, Austria; Software Competence Center Hagenberg (SCCH), 4232 Hagenberg, Austria
- Fikret Rifatbegovic
- St. Anna Children’s Cancer Research Institute (CCRI), 1090 Vienna, Austria
- Peter Repiscak
- St. Anna Children’s Cancer Research Institute (CCRI), 1090 Vienna, Austria
- Michael Kirr
- Department of Dermatology, University Hospital Erlangen, 91054 Erlangen, Germany
- Filip Mivalt
- St. Anna Children’s Cancer Research Institute (CCRI), 1090 Vienna, Austria
- Florian Halbritter
- St. Anna Children’s Cancer Research Institute (CCRI), 1090 Vienna, Austria
- Marie Bernkopf
- St. Anna Children’s Cancer Research Institute (CCRI), 1090 Vienna, Austria
- Andrea Bileck
- Department of Analytical Chemistry, Faculty of Chemistry, University of Vienna, 1090 Vienna, Austria
- Marek Ussowicz
- Department and Clinic of Pediatric Oncology, Hematology and Bone Marrow Transplantation, Wroclaw Medical University, 50-556 Wroclaw, Poland
- Inge M. Ambros
- St. Anna Children’s Cancer Research Institute (CCRI), 1090 Vienna, Austria
- Peter F. Ambros
- St. Anna Children’s Cancer Research Institute (CCRI), 1090 Vienna, Austria
- Christopher Gerner
- Department of Analytical Chemistry, Faculty of Chemistry, University of Vienna, 1090 Vienna, Austria
- Ruth Ladenstein
- St. Anna Children’s Cancer Research Institute (CCRI), 1090 Vienna, Austria
- Christian Ostalecki
- Department of Dermatology, University Hospital Erlangen, 91054 Erlangen, Germany
- Sabine Taschner-Mandl
- St. Anna Children’s Cancer Research Institute (CCRI), 1090 Vienna, Austria
- Correspondence: Tel.: +43-1-40470-4050
25
Kromp F, Fischer L, Bozsaky E, Ambros IM, Dorr W, Beiske K, Ambros PF, Hanbury A, Taschner-Mandl S. Evaluation of Deep Learning Architectures for Complex Immunofluorescence Nuclear Image Segmentation. IEEE Transactions on Medical Imaging 2021; 40:1934-1949. PMID: 33784615; DOI: 10.1109/tmi.2021.3069558
Abstract
Separating and labeling each nuclear instance (instance-aware segmentation) is the key challenge in nuclear image segmentation. Deep Convolutional Neural Networks have been demonstrated to solve nuclear image segmentation tasks across different imaging modalities, but a systematic comparison on complex immunofluorescence images has not been performed. Deep learning based segmentation requires annotated datasets for training, but annotated fluorescence nuclear image datasets are rare and of limited size and complexity. In this work, we evaluate and compare the segmentation effectiveness of multiple deep learning architectures (U-Net, U-Net ResNet, Cellpose, Mask R-CNN, KG instance segmentation) and two conventional algorithms (Iterative h-min based watershed, Attributed relational graphs) on complex fluorescence nuclear images of various types. We propose and evaluate a novel strategy to create artificial images to extend the training set. Results show that instance-aware segmentation architectures and Cellpose outperform the U-Net architectures and conventional methods on complex images in terms of F1 scores, while the U-Net architectures achieve overall higher mean Dice scores. Training with additional artificially generated images improves recall and F1 scores for complex images, thereby leading to top F1 scores for three out of five sample preparation types. Mask R-CNN trained on artificial images achieves the overall highest F1 score on complex images of similar conditions to the training set images while Cellpose achieves the overall highest F1 score on complex images of new imaging conditions. We provide quantitative results demonstrating that images annotated by under-graduates are sufficient for training instance-aware segmentation architectures to efficiently segment complex fluorescence nuclear images.
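The F1 and Dice scores compared in this abstract are both overlap measures: Dice is typically computed pixel-wise, while instance-level F1 is computed from matched-object counts. A minimal sketch of each, independent of the authors' evaluation code:

```python
def dice(mask_a, mask_b):
    """Pixel-wise Dice coefficient of two binary masks (flat 0/1 lists)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

def instance_f1(tp, fp, fn):
    """Instance-level F1 from counts of matched (tp), spurious (fp),
    and missed (fn) nuclei after instance matching."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0
```

The two metrics can rank methods differently, which is why the abstract reports architectures leading on F1 while others lead on mean Dice.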
26
Mahbod A, Schaefer G, Löw C, Dorffner G, Ecker R, Ellinger I. Investigating the Impact of the Bit Depth of Fluorescence-Stained Images on the Performance of Deep Learning-Based Nuclei Instance Segmentation. Diagnostics (Basel) 2021; 11:967. PMID: 34072131; PMCID: PMC8230326; DOI: 10.3390/diagnostics11060967
Abstract
Nuclei instance segmentation can be considered a key step in the computer-mediated analysis of histological fluorescence-stained (FS) images. Many computer-assisted approaches have been proposed for this task, and among them, supervised deep learning (DL) methods deliver the best performance. An important criterion that can affect DL-based nuclei instance segmentation performance on FS images is the utilised image bit depth, but to our knowledge, no study has been conducted so far to investigate this impact. In this work, we released a fully annotated FS histological image dataset of nuclei at different image magnifications and from five different mouse organs. Moreover, using different pre-processing techniques and one of the state-of-the-art DL-based methods, we investigated the impact of image bit depth (i.e., 8 bit vs. 16 bit) on nuclei instance segmentation performance. The results obtained from our dataset and another publicly available dataset showed very competitive nuclei instance segmentation performance for models trained with 8-bit and 16-bit images. This suggests that processing 8-bit images is sufficient for nuclei instance segmentation of FS images in most cases. The dataset, including the raw image patches as well as the corresponding segmentation masks, is publicly available in the published GitHub repository.
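Converting a 16-bit image to 8 bits, as compared in this study, is commonly done by min-max rescaling. A minimal sketch of one such pre-processing choice (not necessarily the authors' exact technique):

```python
def to_8bit(pixels16):
    """Min-max rescale 16-bit intensities into the 0-255 range.
    A flat, uniform image maps to all zeros to avoid division by zero."""
    lo, hi = min(pixels16), max(pixels16)
    if hi == lo:
        return [0] * len(pixels16)
    return [round(255 * (p - lo) / (hi - lo)) for p in pixels16]
```

The study's finding is that, for nuclei instance segmentation, models trained on such 8-bit conversions perform about as well as those trained on the raw 16-bit data.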
Affiliation(s)
- Amirreza Mahbod
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, A-1090 Vienna, Austria
- Gerald Schaefer
- Department of Computer Science, Loughborough University, Loughborough LE11 3TT, UK
- Christine Löw
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, A-1090 Vienna, Austria
- Georg Dorffner
- Section for Artificial Intelligence and Decision Support, Medical University of Vienna, 1090 Vienna, Austria
- Rupert Ecker
- Department of Research and Development, TissueGnostics GmbH, 1020 Vienna, Austria
- Isabella Ellinger
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, A-1090 Vienna, Austria
27
Dou Y, Tsai YH, Liu CC, Hobson BA, Lein PJ. Co-localization of fluorescent signals using deep learning with Manders overlapping coefficient. Proceedings of SPIE - The International Society for Optical Engineering 2021; 11596:115963C. PMID: 34305257; PMCID: PMC8301216; DOI: 10.1117/12.2580650
Abstract
Object-based co-localization of fluorescent signals allows the assessment of interactions between two (or more) biological entities using spatial information. It relies on object identification with high accuracy to separate fluorescent signals from the background. Object detectors using convolutional neural networks (CNNs) with annotated training samples could facilitate the process by detecting and counting fluorescently labeled cells in fluorescence photomicrographs. However, datasets containing segmented annotations of co-localized cells are generally not available, and creating a new dataset with delineated masks is labor-intensive. Also, the co-localization coefficient is often not used as a component during training with the CNN model, yet it may aid with localizing and detecting objects during training and testing. In this work, we propose to address these issues by using a quantification coefficient for co-localization called the Manders overlapping coefficient (MOC) as a single-layer branch in a CNN. Fully convolutional one-stage detection (FCOS) with a ResNet101 backbone served as the network to evaluate the effectiveness of the novel branch in assisting with bounding box prediction. Training data were sourced from lab-curated fluorescence images of neurons from the rat hippocampus, piriform cortex, somatosensory cortex, and amygdala. Results suggest that the modified FCOS with MOC outperformed the original FCOS model in accuracy of detecting fluorescence signals by 1.1% in mean average precision (mAP). The model can be downloaded from https://github.com/Alphafrey946/Colocalization-MOC.
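The Manders coefficient M1 used above measures the fraction of one channel's total intensity found at pixels where the other channel is above a threshold. A minimal sketch of the coefficient itself, separate from the paper's CNN branch:

```python
def manders_m1(red, green, green_threshold=0):
    """Manders coefficient M1: fraction of total red-channel intensity
    located at pixels whose green-channel value exceeds the threshold.
    Channels are flat lists of non-negative intensities."""
    total = sum(red)
    if total == 0:
        return 0.0
    coloc = sum(r for r, g in zip(red, green) if g > green_threshold)
    return coloc / total
```

M2 is defined symmetrically with the roles of the channels swapped; both range from 0 (no co-occurrence) to 1 (complete co-occurrence).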
Affiliation(s)
- Yimeng Dou
- UW-Madison, Department of Biostatistics and Medical Informatics, Madison, Wisconsin, United States; UC Davis School of Veterinary Medicine, Department of Molecular Biosciences, Davis, California, United States
- Yi-Hua Tsai
- UC Davis School of Veterinary Medicine, Department of Molecular Biosciences, Davis, California, United States
- Chih-Chieh Liu
- UC Davis, Department of Biomedical Engineering, Davis, California, United States
- Brad A. Hobson
- UC Davis, Center for Molecular and Genomic Imaging, Davis, California, United States
- Pamela J. Lein
- UC Davis School of Veterinary Medicine, Department of Molecular Biosciences, Davis, California, United States
28
AI System Engineering—Key Challenges and Lessons Learned. Machine Learning and Knowledge Extraction 2020. DOI: 10.3390/make3010004
Abstract
The main challenges along the development cycle of machine learning systems are discussed, together with lessons learned from past and ongoing research. This is done by taking into account the intrinsic conditions of today's deep learning models, data and software quality issues, and human-centered artificial intelligence (AI) postulates, including confidentiality and ethical aspects. The analysis outlines a fundamental theory-practice gap that underlies the challenges of AI system engineering at the level of data quality assurance, model building, software engineering, and deployment. The aim of this paper is to pinpoint research topics that explore approaches to address these challenges.