1
Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023;90:102969. PMID: 37802010. DOI: 10.1016/j.media.2023.102969.
Abstract
Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation, where (unlabeled) target training data is limited, a setting that previous work on cell identification has seldom explored. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module that explicitly reduces discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets, which are acquired with different imaging modalities, staining protocols, and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance compared with the reference baseline, and it is superior to or on par with fully supervised models trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
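The differentiable, stochastic discriminator-side augmentation described in this abstract can be sketched roughly as follows. This is an illustrative sketch of the general technique (augmenting both real and generated batches with random, gradient-friendly transforms), not the authors' implementation; the function name and parameters are hypothetical, and NumPy stands in for an autodiff framework.

```python
import numpy as np

def diff_augment(batch, rng, brightness=0.2, max_shift=2):
    """Stochastic augmentation applied to BOTH real and generated batches
    right before the discriminator. Each op here has a differentiable
    analogue in an autodiff framework, so generator gradients can flow
    through the augmentation. batch has shape (N, C, H, W)."""
    n = batch.shape[0]
    # per-image random brightness jitter
    out = batch + rng.uniform(-brightness, brightness, size=(n, 1, 1, 1))
    # per-image random circular translation (a stand-in for padded shifts)
    shifts = rng.integers(-max_shift, max_shift + 1, size=(n, 2))
    out = np.stack([np.roll(img, tuple(s), axis=(-2, -1))
                    for img, s in zip(out, shifts)])
    return out
```

Because the same augmentation distribution is applied to real and fake images, the discriminator sees a harder, more varied input stream, which is the mechanism by which such modules reduce discriminator overfitting on limited target data.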
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Xinyi Yang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
2
Honkamaa J, Khan U, Koivukoski S, Valkonen M, Latonen L, Ruusuvuori P, Marttinen P. Deformation equivariant cross-modality image synthesis with paired non-aligned training data. Med Image Anal 2023;90:102940. PMID: 37666115. DOI: 10.1016/j.media.2023.102940.
Abstract
Cross-modality image synthesis is an active research topic with multiple clinically relevant medical applications. Recently, methods allowing training with paired but misaligned data have started to emerge. However, no robust, well-performing methods applicable to a wide range of real-world datasets exist. In this work, we propose a generic solution to the problem of cross-modality image synthesis with paired but non-aligned data by introducing new loss functions that encourage deformation equivariance. The method jointly trains an image synthesis network together with separate registration networks, and it allows adversarial training conditioned on the input even with misaligned data. The work lowers the bar for new clinical applications by allowing effortless training of cross-modality image synthesis networks for more difficult datasets.
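The deformation-equivariance idea in this abstract can be written as a simple penalty: the synthesis network f should commute with a random spatial deformation T, i.e. f(T(x)) ≈ T(f(x)). The sketch below is illustrative only and is not the authors' loss or code; the sampler `random_flip` is a deliberately toy deformation, and all names are hypothetical.

```python
import numpy as np

def equivariance_penalty(f, x, sample_deform, rng):
    """Penalize non-equivariance of a synthesis map f under a random
    spatial deformation T: f(T(x)) should equal T(f(x)). With misaligned
    training pairs, separate registration networks can then absorb the
    misalignment while f stays deformation-equivariant."""
    T = sample_deform(rng)
    return float(np.mean((f(T(x)) - T(f(x))) ** 2))

def random_flip(rng):
    """Toy deformation sampler: a random axis flip of a 2D image."""
    axis = int(rng.integers(0, 2))
    return lambda img: np.flip(img, axis=axis)
```

A purely pointwise intensity mapping commutes exactly with any spatial deformation, so its penalty is zero; a network that entangles intensity translation with spatial warping is penalized.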
Affiliation(s)
- Joel Honkamaa
- Department of Computer Science, Aalto University, Finland
- Umair Khan
- Institute of Biomedicine, University of Turku, Finland
- Sonja Koivukoski
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Mira Valkonen
- Faculty of Medicine and Health Technology, Tampere University, Finland
- Leena Latonen
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Pekka Ruusuvuori
- Institute of Biomedicine, University of Turku, Finland; Faculty of Medicine and Health Technology, Tampere University, Finland
3
Gorman BG, Lifson MA, Vidal NY. Artificial intelligence and frozen section histopathology: A systematic review. J Cutan Pathol 2023;50:852-859. PMID: 37394789. DOI: 10.1111/cup.14481.
Abstract
Frozen sections are a useful pathologic tool, but variable image quality may impede the use of artificial intelligence and machine learning in their interpretation. We aimed to identify current research on machine learning models trained or tested on frozen section images. We searched PubMed and Web of Science for articles presenting new machine learning models, published in any year. Eighteen papers met all inclusion criteria. All papers presented at least one novel model trained or tested on frozen section images. Overall, convolutional neural networks tended to have the best performance. When physicians were able to view the output of the model, they tended to perform better at the tested task than either the model or physicians alone. Models trained on frozen sections performed well when tested on other slide preparations, but models trained only on formalin-fixed tissue performed significantly worse across other modalities. This suggests not only that machine learning can be applied to frozen section image processing, but also that the use of frozen section images may increase model generalizability. Additionally, expert physicians working in concert with artificial intelligence may be the future of frozen section histopathology.
Affiliation(s)
- Benjamin G Gorman
- Mayo Clinic Alix School of Medicine, Rochester, Minnesota, USA
- Mayo Clinic Graduate School of Biomedical Sciences, Rochester, Minnesota, USA
- Mark A Lifson
- Center for Digital Health, Mayo Clinic, Rochester, Minnesota, USA
- Nahid Y Vidal
- Department of Dermatology, Mayo Clinic, Rochester, Minnesota, USA
- Division of Dermatologic Surgery, Mayo Clinic, Rochester, Minnesota, USA
4
Khan U, Koivukoski S, Valkonen M, Latonen L, Ruusuvuori P. The effect of neural network architecture on virtual H&E staining: Systematic assessment of histological feasibility. Patterns (N Y) 2023;4:100725. PMID: 37223268. PMCID: PMC10201298. DOI: 10.1016/j.patter.2023.100725.
Abstract
Conventional histopathology has relied on chemical staining for over a century. The staining process makes tissue sections visible to the human eye through a tedious and labor-intensive procedure that alters the tissue irreversibly, preventing repeated use of the sample. Deep learning-based virtual staining can potentially alleviate these shortcomings. Here, we used standard brightfield microscopy on unstained tissue sections and studied the impact of increased network capacity on the resulting virtually stained H&E images. Using the generative adversarial network model pix2pix as a baseline, we observed that replacing simple convolutions with dense convolution units increased the structural similarity score, peak signal-to-noise ratio, and nuclei reproduction accuracy. We also demonstrated highly accurate reproduction of histology, especially with increased network capacity, and showed applicability to several tissues. We show that network architecture optimization can improve the image translation accuracy of virtual H&E staining, highlighting the potential of virtual staining in streamlining histopathological analysis.
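The "dense convolution units" mentioned in this abstract follow the DenseNet pattern: each unit's new feature maps are concatenated onto its input along the channel axis, so later layers see all earlier features. The sketch below illustrates only that connectivity pattern, not the paper's architecture; a 1x1 convolution (a matmul over channels) stands in for a full 3x3 convolution, and all names are hypothetical.

```python
import numpy as np

def dense_unit(x, weight):
    """DenseNet-style unit: compute G new 'growth' channels and
    concatenate them onto the input, preserving all earlier features.
    x: (C, H, W); weight: (G, C)."""
    new = np.tensordot(weight, x, axes=([1], [0]))  # 1x1 conv -> (G, H, W)
    new = np.maximum(new, 0.0)                      # ReLU
    return np.concatenate([x, new], axis=0)         # (C + G, H, W)
```

Stacking such units grows the channel count by G per unit while keeping every intermediate representation available downstream, which is one plausible reason dense units improve fine-detail reproduction (e.g. nuclei) over plain convolutions.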
Affiliation(s)
- Umair Khan
- University of Turku, Institute of Biomedicine, Turku 20014, Finland
- Sonja Koivukoski
- University of Eastern Finland, Institute of Biomedicine, Kuopio 70211, Finland
- Mira Valkonen
- Tampere University, Faculty of Medicine and Health Technology, Tampere 33100, Finland
- Leena Latonen
- University of Eastern Finland, Institute of Biomedicine, Kuopio 70211, Finland
- Foundation for the Finnish Cancer Institute, Helsinki 00290, Finland
- Pekka Ruusuvuori
- University of Turku, Institute of Biomedicine, Turku 20014, Finland
- Tampere University, Faculty of Medicine and Health Technology, Tampere 33100, Finland
- FICAN West Cancer Centre, Cancer Research Unit, Turku University Hospital, Turku 20500, Finland
5
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. DOI: 10.1007/s10462-022-10372-5.
6
Schirmer EC, Latonen L, Tollis S. Nuclear size rectification: A potential new therapeutic approach to reduce metastasis in cancer. Front Cell Dev Biol 2022;10:1022723. PMID: 36299481. PMCID: PMC9589484. DOI: 10.3389/fcell.2022.1022723.
Abstract
Research on metastasis has recently regained considerable interest with the hope that single-cell technologies might reveal the most critical changes that support tumor spread. However, it is possible that part of the answer has been visible through the microscope for close to 200 years. Changes in nuclear size characteristically occur in many cancer types when cells metastasize. Such changes were initially discounted as contributors to metastatic spread because, depending on tumor type, both increases and decreases in nuclear size could correlate with increased metastasis. However, recent work on nuclear mechanics and the connectivity between chromatin, the nucleoskeleton, and the cytoskeleton indicates that changes in this connectivity can have profound impacts on cell mobility and invasiveness. Critically, a recent study found that reversing tumor type-dependent nuclear size changes correlated with reduced cell migration and invasion. Accordingly, it seems appropriate to revisit possible contributory roles of nuclear size changes in metastasis.
Affiliation(s)
- Eric C. Schirmer
- Institute of Cell Biology, University of Edinburgh, Edinburgh, United Kingdom
- Leena Latonen
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Foundation for the Finnish Cancer Institute, Helsinki, Finland
- Sylvain Tollis
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
7
Mougeot G, Dubos T, Chausse F, Péry E, Graumann K, Tatout C, Evans DE, Desset S. Deep learning - promises for 3D nuclear imaging: a guide for biologists. J Cell Sci 2022;135:jcs258986. PMID: 35420128. PMCID: PMC9016621. DOI: 10.1242/jcs.258986.
Abstract
For the past century, the nucleus has been the focus of extensive investigation in cell biology. However, many questions remain about how its shape and size are regulated during development, in different tissues, or during disease and aging. To track these changes, microscopy has long been the tool of choice. Image analysis has revolutionized this field of research by providing computational tools that translate qualitative images into quantitative parameters. Many tools have been designed to delimit objects in 2D and, eventually, in 3D in order to define their shape, number, or position in nuclear space. Today, the field is driven by deep-learning methods, most of which take advantage of convolutional neural networks. These techniques are remarkably well suited to biomedical images when trained on large datasets and powerful graphics cards. To bring these innovative and promising methods to cell biologists, this Review summarizes the main concepts and terminology of deep learning. Special emphasis is placed on the availability of these methods. We highlight why the quality and characteristics of training image datasets are important and where to find them, as well as how to create, store, and share image datasets. Finally, we describe deep-learning methods well suited for 3D analysis of nuclei and classify them according to their level of usability for biologists. Out of more than 150 published methods, we identify fewer than 12 that biologists can use, and we explain why this is the case. Based on this experience, we propose best practices for sharing deep-learning methods with biologists.
Affiliation(s)
- Guillaume Mougeot
- Université Clermont Auvergne, CNRS, Inserm, GReD, F-63000 Clermont-Ferrand, France
- Department of Biological and Molecular Sciences, Faculty of Health and Life Sciences, Oxford Brookes University, Oxford OX3 0BP, UK
- Tristan Dubos
- Université Clermont Auvergne, CNRS, Inserm, GReD, F-63000 Clermont-Ferrand, France
- Frédéric Chausse
- Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Emilie Péry
- Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Katja Graumann
- Department of Biological and Molecular Sciences, Faculty of Health and Life Sciences, Oxford Brookes University, Oxford OX3 0BP, UK
- Christophe Tatout
- Université Clermont Auvergne, CNRS, Inserm, GReD, F-63000 Clermont-Ferrand, France
- David E Evans
- Department of Biological and Molecular Sciences, Faculty of Health and Life Sciences, Oxford Brookes University, Oxford OX3 0BP, UK
- Sophie Desset
- Université Clermont Auvergne, CNRS, Inserm, GReD, F-63000 Clermont-Ferrand, France