1
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. [PMID: 38420608] [PMCID: PMC10900832] [DOI: 10.1016/j.jpi.2023.100357]
Abstract
Computational Pathology (CPath) is an interdisciplinary science that augments developments of computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop infrastructure and workflows for digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced in problem design all the way to the application and implementation viewpoints. We have catalogued each paper into a model card by examining the key works and challenges faced to lay out the current landscape in CPath. We hope this helps the community to locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath development as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We overview this cycle from the different perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical developments and clinical integration of CPath. For updated information on this survey review paper and access to the original model cards repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
2
Bai Y, Liu X, Wang K, Ji X, Wu X, Gao W. Deep Lossy Plus Residual Coding for Lossless and Near-Lossless Image Compression. IEEE Trans Pattern Anal Mach Intell 2024; 46:3577-3594. [PMID: 38163313] [DOI: 10.1109/tpami.2023.3348486]
Abstract
Lossless and near-lossless image compression is of paramount importance to professional users in many technical fields, such as medicine, remote sensing, precision engineering and scientific research. Yet despite rapidly growing research interest in learning-based image compression, no published method offers both lossless and near-lossless modes. In this paper, we propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression. In the lossless mode, the DLPR coding system first performs lossy compression and then lossless coding of the residuals. We solve the joint lossy and residual compression problem using a variational autoencoder (VAE) approach, and add autoregressive context modeling of the residuals to enhance lossless compression performance. In the near-lossless mode, we quantize the original residuals to satisfy a given ℓ∞ error bound, and propose a scalable near-lossless compression scheme that works for variable ℓ∞ bounds instead of training multiple networks. To expedite the DLPR coding, we increase the degree of algorithm parallelization with a novel design of the coding context, and accelerate the entropy coding with adaptive residual intervals. Experimental results demonstrate that the DLPR coding system achieves both state-of-the-art lossless and near-lossless image compression performance with competitive coding speed.
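The near-lossless mode rests on a standard idea that can be stated concretely: a uniform quantizer with step 2τ+1 guarantees every reconstructed residual stays within the ℓ∞ bound τ, and τ = 0 degenerates to the lossless mode. A minimal sketch of such a quantizer (illustrative only, not the authors' implementation; the function name is invented):

```python
def quantize_residual(r: int, tau: int) -> int:
    """Uniformly quantize residual r so that |r - r_hat| <= tau.

    A step size of 2*tau + 1 maps each residual to the nearest
    reconstruction level; tau = 0 reduces to lossless coding.
    """
    step = 2 * tau + 1
    sign = -1 if r < 0 else 1
    q = (abs(r) + tau) // step       # quantization index (magnitude)
    return sign * q * step           # reconstructed residual


# The ell-infinity guarantee holds for every residual and bound:
for tau in range(4):
    for r in range(-300, 301):
        assert abs(r - quantize_residual(r, tau)) <= tau
```

Coarser bounds shrink the residual alphabet (fewer reconstruction levels), which is what makes the residuals cheaper to entropy-code at larger τ.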
3
Wodzinski M, Marini N, Atzori M, Müller H. RegWSI: Whole slide image registration using combined deep feature- and intensity-based methods: Winner of the ACROBAT 2023 challenge. Comput Methods Programs Biomed 2024; 250:108187. [PMID: 38657383] [DOI: 10.1016/j.cmpb.2024.108187]
Abstract
BACKGROUND AND OBJECTIVE The automatic registration of differently stained whole slide images (WSIs) is crucial for improving diagnosis and prognosis by fusing the complementary information emerging from different visible structures. It is also useful for quickly transferring annotations between consecutive or restained slides, thus significantly reducing annotation time and associated costs. Nevertheless, slide preparation differs for each stain, and the tissue undergoes complex and large deformations. Therefore, a robust, efficient, and accurate registration method is highly desired by the scientific community and hospitals specializing in digital pathology. METHODS We propose a two-step hybrid method consisting of (i) a deep learning- and feature-based initial alignment algorithm, and (ii) an intensity-based nonrigid registration using instance optimization. The proposed method does not require any fine-tuning to a particular dataset and can be used directly for any desired tissue type and stain. The registration time is low, allowing efficient registration even for large datasets. The method was proposed for the ACROBAT 2023 challenge organized during the MICCAI 2023 conference and scored first place. The method is released as open-source software. RESULTS The proposed method is evaluated using three open datasets: (i) the Automatic Nonrigid Histological Image Registration Dataset (ANHIR), (ii) the Automatic Registration of Breast Cancer Tissue Dataset (ACROBAT), and (iii) the Hybrid Restained and Consecutive Histological Serial Sections Dataset (HyReCo). The target registration error (TRE) is used as the evaluation metric. We compare the proposed algorithm to other state-of-the-art solutions, showing considerable improvement. Additionally, we perform several ablation studies concerning the resolution used for registration and the robustness and stability of the initial alignment. The method achieves the most accurate results for the ACROBAT dataset and cell-level registration accuracy for the restained slides from the HyReCo dataset, and is among the best methods evaluated on the ANHIR dataset. CONCLUSIONS The article presents an automatic and robust registration method that outperforms other state-of-the-art solutions. The method does not require any fine-tuning to a particular dataset and can be used out-of-the-box for numerous types of microscopic images. The method is incorporated into the DeeperHistReg framework, allowing others to directly use it to register, transform, and save WSIs at any desired pyramid level (resolution up to 220k x 220k). We provide free access to the software. The results are fully and easily reproducible. The proposed method is a significant contribution to improving WSI registration quality, thus advancing the field of digital pathology.
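Target registration error, the evaluation metric used here, is simply the distance between landmarks mapped through the estimated transform and their annotated counterparts in the target image, averaged over landmark pairs. A minimal sketch of the computation (illustrative, not the challenge's evaluation code):

```python
import math

def tre(transform, source_landmarks, target_landmarks):
    """Mean Euclidean distance between transformed source landmarks
    and their corresponding target landmarks (often reported in
    micrometers or normalized by the image diagonal)."""
    dists = [
        math.dist(transform(p), q)
        for p, q in zip(source_landmarks, target_landmarks)
    ]
    return sum(dists) / len(dists)


# A translation by (3, 4), recovered exactly, gives TRE 0;
# the identity transform leaves a residual of 5 per landmark.
src = [(0.0, 0.0), (1.0, 2.0)]
tgt = [(3.0, 4.0), (4.0, 6.0)]
print(tre(lambda p: (p[0] + 3, p[1] + 4), src, tgt))  # 0.0
print(tre(lambda p: p, src, tgt))                     # 5.0
```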
Affiliation(s)
- Marek Wodzinski
- Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland; Department of Measurement and Electronics, AGH University of Kraków, Krakow, Poland.
- Niccolò Marini
- Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland
- Manfredo Atzori
- Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland; Department of Neuroscience, University of Padova, Padova, Italy
- Henning Müller
- Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland; Medical Faculty, University of Geneva, Geneva, Switzerland
4
Ngo H, Fang H, Rumbut J, Wang H. Federated Fuzzy Clustering for Decentralized Incomplete Longitudinal Behavioral Data. IEEE Internet Things J 2024; 11:14657-14670. [PMID: 38605934] [PMCID: PMC11006372] [DOI: 10.1109/jiot.2023.3343719]
Abstract
The use of medical data for machine learning, including unsupervised methods such as clustering, is often restricted by privacy regulations such as the Health Insurance Portability and Accountability Act (HIPAA). Medical data is sensitive and highly regulated, and anonymization is often insufficient to protect a patient's identity. Traditional clustering algorithms are also unsuitable for longitudinal behavioral health trials, which often have missing data and observe individual behaviors over varying time periods. In this work, we develop a new decentralized federated multiple imputation-based fuzzy clustering algorithm for complex longitudinal behavioral trial data collected from multisite randomized controlled trials over different time periods. Federated learning (FL) preserves privacy by aggregating model parameters instead of data. Unlike previous FL methods, the proposed algorithm requires only two rounds of communication and handles clients with varying numbers of time points for incomplete longitudinal data. The model is evaluated on both empirical longitudinal dietary health data and simulated clusters with different numbers of clients, effect sizes, correlations, and sample sizes. The proposed algorithm converges rapidly and achieves desirable performance on multiple clustering metrics. This new method allows for targeted treatments for various patient groups while preserving their data privacy and enables the potential for broader applications in the Internet of Medical Things.
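The privacy mechanism in FL is parameter aggregation: each site shares only summary statistics such as cluster centroids and assignment counts, and the server combines them with sample-size weighting, so raw patient records never leave the client. A toy sketch of one such aggregation step (names and data shapes are illustrative assumptions, not the paper's algorithm):

```python
def aggregate_centroids(client_updates):
    """Combine per-client cluster centroids into global centroids.

    client_updates: list of (centroids, counts), where centroids is a
    list of k centroid vectors and counts gives the number of local
    samples assigned to each cluster. Only these summaries are shared
    with the server, never the underlying patient data.
    """
    k = len(client_updates[0][0])
    dim = len(client_updates[0][0][0])
    global_centroids = []
    for j in range(k):
        total = sum(counts[j] for _, counts in client_updates)
        global_centroids.append([
            sum(c[j][d] * counts[j] for c, counts in client_updates) / total
            for d in range(dim)
        ])
    return global_centroids


# Two clients, one cluster: weighted mean of (0, 0) with weight 1
# and (3, 3) with weight 3.
print(aggregate_centroids([([[0.0, 0.0]], [1]), ([[3.0, 3.0]], [3])]))
# [[2.25, 2.25]]
```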
Affiliation(s)
- Hieu Ngo
- College of Engineering, University of Massachusetts Dartmouth, North Dartmouth, MA 02747, USA
- Hua Fang
- Department of Computer and Information Science, University of Massachusetts Dartmouth, North Dartmouth, MA 02747, and the Department of Population and Quantitative Health Science, University of Massachusetts Chan Medical School, Worcester, MA 01655, USA
- Joshua Rumbut
- College of Engineering, University of Massachusetts Dartmouth, North Dartmouth, MA 02747, and the Department of Population and Quantitative Health Science, University of Massachusetts Chan Medical School, Worcester, MA 01655, USA
- Honggang Wang
- Department of Graduate Computer Science and Engineering, Katz School of Science and Health, Yeshiva University, New York City, NY 10033, USA
5
Escobar Díaz Guerrero R, Oliveira JL, Popp J, Bocklitz T. MMIR: an open-source software for the registration of multimodal histological images. BMC Med Inform Decis Mak 2024; 24:65. [PMID: 38443881] [PMCID: PMC10916274] [DOI: 10.1186/s12911-024-02424-3]
Abstract
BACKGROUND Multimodal histology image registration is a process that transforms two or more images obtained from different microscopy modalities into a common coordinate system. The combination of information from various modalities can contribute to a comprehensive understanding of tissue specimens, aiding in more accurate diagnoses and improved research insights. Multimodal image registration in histology samples presents a significant challenge due to the inherent differences in characteristics between modalities and the need for tailored optimization algorithms for each. RESULTS We developed MMIR, a cloud-based system for multimodal histological image registration, which consists of three main modules: a project manager, an algorithm manager, and an image visualization system. CONCLUSION Our software solution aims to simplify image registration tasks with a user-friendly approach. It provides effective algorithm management and responsive web interfaces, supports multi-resolution images, and facilitates batch image registration. Moreover, its adaptable architecture allows for the integration of custom algorithms, ensuring that it aligns with the specific requirements of each modality combination. Beyond image registration, our software enables the conversion of segmented annotations from one modality to another.
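Converting segmented annotations between modalities amounts to pushing the annotation's vertex coordinates through the registration transform rather than resampling pixels. A minimal sketch for the affine case (an illustration of the general idea only; MMIR's actual API and transform representation are not shown here):

```python
def transfer_annotation(points, affine):
    """Map annotation vertices from one modality's coordinate system
    to another's via a 2x3 affine [[a, b, tx], [c, d, ty]] estimated
    by image registration."""
    (a, b, tx), (c, d, ty) = affine
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]


# Scale by 2 and shift by (10, 5): a segmented outline follows along.
outline = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(transfer_annotation(outline, [(2, 0, 10), (0, 2, 5)]))
# [(10.0, 5.0), (12.0, 5.0), (12.0, 7.0)]
```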
Affiliation(s)
- Rodrigo Escobar Díaz Guerrero
- BMD Software, PCI - Creative Science Park, 3830-352, Ilhavo, Portugal.
- DETI/IEETA, University of Aveiro, 3810-193, Aveiro, Portugal.
- Leibniz Institute of Photonic Technology Jena, Member of Leibniz research alliance 'Health technologies', Albert-Einstein-Straße 9, 07745, Jena, Germany.
- Juergen Popp
- Leibniz Institute of Photonic Technology Jena, Member of Leibniz research alliance 'Health technologies', Albert-Einstein-Straße 9, 07745, Jena, Germany
- Institute of Physical Chemistry and Abbe Center of Photonics (IPC), Friedrich-Schiller-University, Helmholtzweg 4, 07743, Jena, Germany
- Thomas Bocklitz
- Leibniz Institute of Photonic Technology Jena, Member of Leibniz research alliance 'Health technologies', Albert-Einstein-Straße 9, 07745, Jena, Germany
- Institute of Physical Chemistry and Abbe Center of Photonics (IPC), Friedrich-Schiller-University, Helmholtzweg 4, 07743, Jena, Germany
- Institute of Computer Science, Faculty of Mathematics, Physics & Computer Science, University Bayreuth, Universitätsstraße 30, 95447, Bayreuth, Germany
6
Li R, Chen X, Yang X. Navigating the landscapes of spatial transcriptomics: How computational methods guide the way. Wiley Interdiscip Rev RNA 2024; 15:e1839. [PMID: 38527900] [DOI: 10.1002/wrna.1839]
Abstract
Spatially resolved transcriptomics has been dramatically transforming biological and medical research in various fields. It enables transcriptome profiling at single-cell, multi-cellular, or sub-cellular resolution, while retaining information on the geometric localization of cells in complex tissues. The coupling of cell spatial information and its molecular characteristics generates a novel multi-modal high-throughput data source, which poses new challenges for the development of analytical methods for data mining. Spatial transcriptomic data are often highly complex, noisy, and biased, presenting a series of difficulties, many unresolved, for data analysis and the generation of biological insights. In addition, to keep pace with the ever-evolving spatial transcriptomic experimental technologies, existing analytical theories and tools need to be updated and reformed accordingly. In this review, we provide an overview and discussion of current computational approaches for mining spatial transcriptomics data. Future directions and perspectives on methodology design are proposed to stimulate further discussion and advances in new analytical models and algorithms. This article is categorized under: RNA Methods > RNA Analyses in Cells; RNA Evolution and Genomics > Computational Analyses of RNA; RNA Export and Localization > RNA Localization.
Affiliation(s)
- Runze Li
- MOE Key Laboratory of Bioinformatics, Center for Synthetic & Systems Biology, School of Life Sciences, Tsinghua University, Beijing, China
- Xu Chen
- MOE Key Laboratory of Bioinformatics, Center for Synthetic & Systems Biology, School of Life Sciences, Tsinghua University, Beijing, China
- Xuerui Yang
- MOE Key Laboratory of Bioinformatics, Center for Synthetic & Systems Biology, School of Life Sciences, Tsinghua University, Beijing, China
7
Li H, Xie J, Song J, Jin C, Xin H, Pan X, Ke J, Yuan Y, Shen H, Ning G. CRCS: An automatic image processing pipeline for hormone level analysis of Cushing's disease. Methods 2024; 222:28-40. [PMID: 38159688] [DOI: 10.1016/j.ymeth.2023.12.003]
Abstract
Due to the abnormal secretion of adrenocorticotropic hormone (ACTH) by tumors, Cushing's disease leads to hypercortisolemia, a precursor to a series of metabolic disorders and serious complications. Cushing's disease has a high recurrence rate, a short time to recurrence, and unknown causes of recurrence after surgical resection. Qualitative or quantitative automatic image analysis of histology images can potentially provide insights into Cushing's disease, but to the best of our knowledge no software has been available for this purpose. In this study, we propose a quantitative image analysis-based pipeline, CRCS, which aims to explore the relationship between the expression level of ACTH in normal cell tissues adjacent to tumor cells and the postoperative prognosis of patients. CRCS mainly consists of image-level clustering, cluster-level multi-modal image registration, patch-level image classification, and pixel-level image segmentation on whole slide images (WSIs). On both the image registration and classification tasks, CRCS achieves state-of-the-art performance compared to recently published methods on our collected benchmark dataset. In addition, CRCS achieves an accuracy of 0.83 for the postoperative prognosis of 12 cases. CRCS demonstrates great potential for instrumenting automatic diagnosis and treatment for Cushing's disease.
Affiliation(s)
- Haiyue Li
- Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China
- Jing Xie
- Department of Pathology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Ruijin 2nd Road, Shanghai 200025, China
- Jialin Song
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiao Tong University, Xi'an 710049, China
- Cheng Jin
- Medical Robot Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Hongyi Xin
- University of Michigan - Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Xiaoyong Pan
- Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China
- Jing Ke
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Ye Yuan
- Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China
- Hongbin Shen
- Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China.
- Guang Ning
- State Key Laboratory of Medical Genomes, National Clinical Research Center for Endocrine and Metabolic Diseases, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Laboratory of Endocrinology and Metabolism, Institute of Health Sciences, Shanghai Institutes for Biological Sciences (SIBS), Chinese Academy of Sciences (CAS) & Shanghai Jiao Tong University School of Medicine (SJTUSM), Shanghai, China.
8
Piluso S, Souedet N, Jan C, Hérard AS, Clouchoux C, Delzescaux T. giRAff: an automated atlas segmentation tool adapted to single histological slices. Front Neurosci 2024; 17:1230814. [PMID: 38274499] [PMCID: PMC10808556] [DOI: 10.3389/fnins.2023.1230814]
Abstract
Conventional histology of the brain remains the gold standard in the analysis of animal models. In most biological studies, standard protocols usually involve producing a limited number of histological slices to be analyzed. These slices are often selected from a specific anatomical region of interest or around a specific pathological lesion. Due to the lack of automated solutions for analyzing such single slices, neurobiologists most of the time perform the segmentation of anatomical regions manually. Because the task is long, tedious, and operator-dependent, we propose an automated atlas segmentation method called giRAff, which combines rigid and affine registrations and is suitable for conventional histological protocols involving any number of single slices from a given mouse brain. In particular, the method has been tested on several routine experimental protocols involving different anatomical regions of different sizes and for several brains. For a given set of single slices, the method can automatically identify the corresponding slices in the mouse Allen atlas template with good accuracy and segmentations comparable to those of an expert. This versatile and generic method allows the segmentation of any single slice without additional anatomical context in about 1 min. In short, our proposed giRAff method is an easy-to-use, rapid, and automated atlas segmentation tool compliant with a wide variety of standard histological protocols.
Affiliation(s)
- Sébastien Piluso
- Université Paris-Saclay, CEA, CNRS, MIRCen, Laboratoire des Maladies Neurodégénératives, Fontenay-aux-Roses, France
- WITSEE, Paris, France
- Nicolas Souedet
- Université Paris-Saclay, CEA, CNRS, MIRCen, Laboratoire des Maladies Neurodégénératives, Fontenay-aux-Roses, France
- Caroline Jan
- Université Paris-Saclay, CEA, CNRS, MIRCen, Laboratoire des Maladies Neurodégénératives, Fontenay-aux-Roses, France
- Anne-Sophie Hérard
- Université Paris-Saclay, CEA, CNRS, MIRCen, Laboratoire des Maladies Neurodégénératives, Fontenay-aux-Roses, France
- Thierry Delzescaux
- Université Paris-Saclay, CEA, CNRS, MIRCen, Laboratoire des Maladies Neurodégénératives, Fontenay-aux-Roses, France
9
Zhou G, Tward D, Lange K. A Majorization-Minimization Algorithm for Neuroimage Registration. SIAM J Imaging Sci 2024; 17:273-300. [PMID: 38550750] [PMCID: PMC10977051] [DOI: 10.1137/22m1516907]
Abstract
Intensity-based image registration is critical for neuroimaging tasks such as 3D reconstruction, time-series alignment, and common coordinate mapping. The gradient-based optimization methods commonly used to solve this problem require a careful selection of step length. This limitation imposes substantial time and computational costs. Here we propose a gradient-independent rigid-motion registration algorithm based on the majorization-minimization (MM) principle. Each iteration of our intensity-based MM algorithm reduces to a simple point-set rigid registration problem with a closed-form solution that avoids the step-length issue altogether. The details of the algorithm are presented, and an error bound for its more practical truncated form is derived. The MM algorithm is shown to be more effective than gradient descent on simulated images and Nissl-stained coronal slices of mouse brain. We also compare and contrast the similarities and differences between the MM algorithm and another gradient-free registration algorithm called the block-matching method. Finally, extensions of this algorithm to more complex problems are discussed.
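The appeal of the MM surrogate is that each iteration reduces to point-set rigid registration, which has a closed-form solution. In 2D, the optimal least-squares rotation after centering both point sets is a single atan2 of their cross- and dot-correlations. A sketch of that classical inner step (the Procrustes/Kabsch solution in 2D, not the paper's full MM algorithm):

```python
import math

def rigid_fit_2d(P, Q):
    """Closed-form least-squares rotation + translation mapping
    point set P onto point set Q (2D Procrustes / Kabsch)."""
    n = len(P)
    pcx = sum(x for x, _ in P) / n; pcy = sum(y for _, y in P) / n
    qcx = sum(x for x, _ in Q) / n; qcy = sum(y for _, y in Q) / n
    # Cross- and dot-correlations of the centered point sets.
    s = sum((px - pcx) * (qy - qcy) - (py - pcy) * (qx - qcx)
            for (px, py), (qx, qy) in zip(P, Q))
    c = sum((px - pcx) * (qx - qcx) + (py - pcy) * (qy - qcy)
            for (px, py), (qx, qy) in zip(P, Q))
    theta = math.atan2(s, c)
    # Translation maps the rotated P-centroid onto the Q-centroid.
    tx = qcx - (pcx * math.cos(theta) - pcy * math.sin(theta))
    ty = qcy - (pcx * math.sin(theta) + pcy * math.cos(theta))
    return theta, tx, ty


# Recover a known 90-degree rotation exactly, with no step length.
P = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
Q = [(0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
theta, tx, ty = rigid_fit_2d(P, Q)
print(round(math.degrees(theta)))  # 90
```

Because this step is closed-form, an MM iteration needs no line search, which is exactly the step-length issue the abstract refers to.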
Affiliation(s)
- Gaiting Zhou
- Computational Medicine, UCLA, Los Angeles, CA 90024, USA
- Daniel Tward
- Computational Medicine, UCLA, Los Angeles, CA 90024, USA
- Kenneth Lange
- Computational Medicine, UCLA, Los Angeles, CA 90024, USA
10
Li X, Long M, Huang J, Wu J, Shen H, Zhou F, Hou J, Xu Y, Wang D, Mei L, Liu Y, Hu T, Lei C. An orientation-free ring feature descriptor with stain-variability normalization for pathology image matching. Comput Biol Med 2023; 167:107675. [PMID: 37976825] [DOI: 10.1016/j.compbiomed.2023.107675]
Abstract
Comprehensively analyzing the corresponding regions in images of serial slices stained using different methods is a common but important operation in pathological diagnosis. To help increase the efficiency of this analysis, various image registration methods have been proposed to match the corresponding regions in different images, but their performance is highly influenced by the rotations, deformations, and variations in staining between serial pathology images. In this work, we propose an orientation-free ring feature descriptor with stain-variability normalization for pathology image matching. Specifically, we normalize image staining to similar levels to minimize the impact of staining differences on pathology image matching. To overcome the rotation and deformation issues, we propose a rotation-invariant, orientation-free ring feature descriptor that generates novel adaptive bins from ring features to build feature vectors. We measure the Euclidean distance between feature vectors to evaluate keypoint similarity and achieve pathology image matching. A total of 46 pairs of clinical pathology images in hematoxylin-eosin and immunohistochemistry staining were used to verify the performance of our method. Experimental results indicate that our method meets the pathology image matching accuracy requirement (error < 300 μm) and is especially competent for the large-angle rotation cases common in clinical practice.
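The rotation invariance of a ring descriptor comes from binning pixels by their distance to the keypoint only: a rotation permutes pixels within each ring but leaves every ring's aggregate statistics unchanged. A toy sketch of that idea on a square patch (an illustration of the principle, not the paper's adaptive-bin descriptor):

```python
import math

def ring_descriptor(patch, n_rings=3):
    """Sum pixel intensities over concentric rings around the patch
    center. Binning by distance alone makes the descriptor invariant
    to rotations of the patch about its center."""
    h, w = len(patch), len(patch[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    max_r = math.hypot(cy, cx) + 1e-9
    rings = [0.0] * n_rings
    for y in range(h):
        for x in range(w):
            r = math.hypot(y - cy, x - cx)
            rings[min(int(n_rings * r / max_r), n_rings - 1)] += patch[y][x]
    return rings


def rotate90(patch):
    """Rotate a square patch by 90 degrees (distances to the center
    are preserved, so ring sums should not change)."""
    return [list(row) for row in zip(*patch[::-1])]


patch = [[(r + c) % 5 + 1 for c in range(5)] for r in range(5)]
assert ring_descriptor(patch) == ring_descriptor(rotate90(patch))
```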
Affiliation(s)
- Xiaoxiao Li
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China
- Mengping Long
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China; Department of Pathology, Peking University Cancer Hospital, Beijing 100142, China
- Jin Huang
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China
- Jianghua Wu
- Department of Pathology, Peking University Cancer Hospital, Beijing 100142, China
- Hui Shen
- Department of Hematology, Zhongnan Hospital of Wuhan University, Wuhan 430071, China
- Fuling Zhou
- Department of Hematology, Zhongnan Hospital of Wuhan University, Wuhan 430071, China
- Jinxuan Hou
- Department of Thyroid and Breast Surgery, Zhongnan Hospital of Wuhan University, Wuhan 430071, China
- Yu Xu
- Department of Radiation and Medical Oncology, Zhongnan Hospital of Wuhan University, Wuhan 430071, China
- Du Wang
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China
- Liye Mei
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China; School of Computer Science, Hubei University of Technology, Wuhan 430068, China.
- Yiqiang Liu
- Department of Pathology, Peking University Cancer Hospital, Beijing 100142, China
- Taobo Hu
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China; Department of Breast Surgery, Peking University People's Hospital, Beijing 100044, China
- Cheng Lei
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China; Suzhou Institute of Wuhan University, Suzhou 215000, China; Shenzhen Institute of Wuhan University, Shenzhen 518057, China.
11
Honkamaa J, Khan U, Koivukoski S, Valkonen M, Latonen L, Ruusuvuori P, Marttinen P. Deformation equivariant cross-modality image synthesis with paired non-aligned training data. Med Image Anal 2023; 90:102940. [PMID: 37666115 DOI: 10.1016/j.media.2023.102940] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 08/14/2023] [Accepted: 08/18/2023] [Indexed: 09/06/2023]
Abstract
Cross-modality image synthesis is an active research topic with multiple clinically relevant medical applications. Recently, methods that allow training with paired but misaligned data have started to emerge. However, no robust and well-performing methods applicable to a wide range of real-world datasets exist. In this work, we propose a generic solution to the problem of cross-modality image synthesis with paired but non-aligned data by introducing new loss functions that encourage deformation equivariance. The method consists of joint training of an image synthesis network together with separate registration networks, and it allows adversarial training conditioned on the input even with misaligned data. The work lowers the bar for new clinical applications by allowing effortless training of cross-modality image synthesis networks for more difficult datasets.
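The core idea of a deformation-equivariance loss is that a synthesis map should commute with spatial deformations: deforming the input and then synthesising should give the same result as synthesising and then deforming. A minimal numpy sketch of that penalty, with a toy point-wise "network" and a circular shift standing in for a learned deformation (both hypothetical, not the paper's models):

```python
import numpy as np

def equivariance_penalty(g, deform, x):
    """Deformation-equivariance penalty: a synthesis map g should commute
    with a spatial deformation, i.e. deform(g(x)) should equal g(deform(x))."""
    return float(np.mean((deform(g(x)) - g(deform(x))) ** 2))

g = lambda x: 2.0 * x + 1.0            # toy point-wise "synthesis network"
deform = lambda x: np.roll(x, 3)       # toy stand-in for a spatial deformation
x = np.random.default_rng(0).random((8, 8))

# A point-wise map commutes with any permutation of pixels, so the penalty is 0.
print(equivariance_penalty(g, deform, x))  # 0.0
```

In the paper's setting the penalty would be one term of a training loss, with the deformation sampled or predicted by a registration network rather than fixed.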
Affiliation(s)
- Joel Honkamaa
- Department of Computer Science, Aalto University, Finland
- Umair Khan
- Institute of Biomedicine, University of Turku, Finland
- Sonja Koivukoski
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Mira Valkonen
- Faculty of Medicine and Health Technology, Tampere University, Finland
- Leena Latonen
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Pekka Ruusuvuori
- Institute of Biomedicine, University of Turku, Finland; Faculty of Medicine and Health Technology, Tampere University, Finland
12
Lindemann MC, Glänzer L, Roeth AA, Schmitz-Rode T, Slabu I. Towards Realistic 3D Models of Tumor Vascular Networks. Cancers (Basel) 2023; 15:5352. [PMID: 38001612 PMCID: PMC10670125 DOI: 10.3390/cancers15225352] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Revised: 11/03/2023] [Accepted: 11/03/2023] [Indexed: 11/26/2023] Open
Abstract
For reliable in silico or in vitro investigations in, for example, biosensing and drug delivery applications, accurate models of tumor vascular networks down to the capillary size are essential. Compared to images acquired with conventional medical imaging techniques, digitalized histological tumor slices have a higher resolution, enabling the delineation of capillaries. Volume rendering procedures can then be used to generate a 3D model. However, the preparation of such slices leads to misalignments in relative slice orientation between consecutive slices. Thus, image registration algorithms are necessary to re-align the slices. Here, we present an algorithm for the registration and reconstruction of a vascular network from histologic slices, applied to 169 tumor slices. The registration comprises two steps. First, consecutive images are incrementally pre-aligned using feature- and area-based transformations. Second, using the previous transformations, parallel registration of all images is enabled. Vascular structures are segmented by combining intensity- and color-based thresholds with heuristic analysis. A 3D interpolation technique is used for volume rendering. This results in a 3D vascular network of approximately 400-450 vessels with diameters down to 25-30 µm. Delineation of closely spaced vessel structures was limited in areas of high structural density. Improvements can be achieved by using images with higher resolution and/or machine learning techniques.
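The incremental pre-alignment step described above, where each slice is registered to its predecessor and the pairwise transforms are then chained back to a common reference, can be sketched with homogeneous 3x3 matrices. The translations here are toy stand-ins for the feature- and area-based transforms the paper estimates:

```python
import numpy as np

def translation(dx, dy):
    """3x3 homogeneous matrix for a 2-D translation."""
    m = np.eye(3)
    m[0, 2], m[1, 2] = dx, dy
    return m

def compose_to_reference(pairwise):
    """Given transforms t_i mapping slice i to slice i-1, accumulate
    T_i = t_1 @ t_2 @ ... @ t_i so that every slice maps to slice 0."""
    out, acc = [np.eye(3)], np.eye(3)
    for t in pairwise:
        acc = acc @ t
        out.append(acc)
    return out

# Three consecutive slices, each shifted by (1, 2) w.r.t. the previous one.
chain = compose_to_reference([translation(1, 2), translation(1, 2)])
print(chain[2][:2, 2])  # [2. 4.]  -- slice 2 is offset (2, 4) from slice 0
```

Once every slice carries a transform to the reference frame, the per-slice registrations are independent and can run in parallel, which is the point of the paper's second step.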
Affiliation(s)
- Max C. Lindemann
- Institute of Applied Medical Engineering, Helmholtz Institute, Medical Faculty, RWTH Aachen University, Pauwelsstraße 20, 52074 Aachen, Germany
- Lukas Glänzer
- Institute of Applied Medical Engineering, Helmholtz Institute, Medical Faculty, RWTH Aachen University, Pauwelsstraße 20, 52074 Aachen, Germany
- Anjali A. Roeth
- Department of General, Visceral and Transplant Surgery, RWTH Aachen University Hospital, Pauwelsstrasse 30, 52074 Aachen, Germany
- Department of Surgery, Maastricht University, P. Debyelaan 25, 6229 HX Maastricht, The Netherlands
- Thomas Schmitz-Rode
- Institute of Applied Medical Engineering, Helmholtz Institute, Medical Faculty, RWTH Aachen University, Pauwelsstraße 20, 52074 Aachen, Germany
- Ioana Slabu
- Institute of Applied Medical Engineering, Helmholtz Institute, Medical Faculty, RWTH Aachen University, Pauwelsstraße 20, 52074 Aachen, Germany
13
Lin Y, Liang Z, He Y, Huang W, Guan T. End-to-end affine registration framework for histopathological images with weak annotations. Comput Methods Programs Biomed 2023; 241:107763. [PMID: 37634308 DOI: 10.1016/j.cmpb.2023.107763] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Revised: 08/12/2023] [Accepted: 08/12/2023] [Indexed: 08/29/2023]
Abstract
BACKGROUND AND OBJECTIVE Histopathological image registration is an essential component in digital pathology and biomedical image analysis. Deep-learning-based algorithms have been proposed to achieve fast and accurate affine registration. Some previous studies assume that the pairs are free from sizeable initial position misalignment and large rotation angles before performing the affine transformation. However, large rotation angles are often introduced into image pairs during the production process of real-world pathology images. Reliable initial alignment is important for registration performance. Existing deep-learning-based approaches often use a two-step affine registration pipeline because convolutional neural networks (CNNs) cannot correct large-angle rotations. METHODS In this manuscript, a general framework, ARoNet, is developed to achieve end-to-end affine registration for histopathological images. We use CNNs to extract global image features and fuse them to construct correspondence information for the affine transformation. In ARoNet, a rotation recognition network is implemented to eliminate large rotation misalignment. In addition, a self-supervised learning task is proposed to assist the learning of image representations in an unsupervised manner. RESULTS We applied our model to four datasets, and the results indicate that ARoNet surpasses existing affine registration algorithms in alignment accuracy when large angular misalignments (e.g., 180° rotation) are present, providing accurate affine initialization for subsequent non-rigid alignments. In addition, ARoNet shows advantages in execution time (0.05 s per pair), registration accuracy, and robustness. CONCLUSION We believe that the proposed general framework promises to simplify and speed up the registration process and has potential for clinical applications.
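The rotation-recognition idea, classifying a coarse rotation first so that the affine stage only corrects a small residual, can be illustrated with 90-degree bins. The MSE scoring below is a hypothetical stand-in for the learned rotation-recognition network in ARoNet:

```python
import numpy as np

def coarse_rotation_bin(moving, fixed):
    """Pick the 90-degree rotation bin that best matches `fixed`.
    Stand-in for a learned rotation-recognition network: score each
    candidate rotation by negative mean-squared error and keep the best."""
    scores = [-np.mean((np.rot90(moving, k) - fixed) ** 2) for k in range(4)]
    return int(np.argmax(scores))

def undo_rotation(moving, k):
    # Apply the inverse coarse rotation so a CNN-based affine stage
    # only has to correct a small residual misalignment.
    return np.rot90(moving, k)

fixed = np.arange(16.0).reshape(4, 4)
moving = np.rot90(fixed, 2)            # simulate a 180-degree scan rotation
k = coarse_rotation_bin(moving, fixed)
print(k, np.allclose(undo_rotation(moving, k), fixed))  # 2 True
```

A learned classifier generalises this beyond exact 90-degree multiples, but the pipeline shape is the same: classify the gross rotation, undo it, then regress the remaining affine parameters.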
Affiliation(s)
- Yuanhua Lin
- Shenzhen International Graduate School, Tsinghua University, 518055, Shenzhen, China
- Zhendong Liang
- Shenzhen International Graduate School, Tsinghua University, 518055, Shenzhen, China
- Yonghong He
- Shenzhen International Graduate School, Tsinghua University, 518055, Shenzhen, China
- Wenting Huang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, 518116, Shenzhen, China
- Tian Guan
- Shenzhen International Graduate School, Tsinghua University, 518055, Shenzhen, China
14
Doyle J, Green BF, Eminizer M, Jimenez-Sanchez D, Lu S, Engle EL, Xu H, Ogurtsova A, Lai J, Soto-Diaz S, Roskes JS, Deutsch JS, Taube JM, Sunshine JC, Szalay AS. Whole-Slide Imaging, Mutual Information Registration for Multiplex Immunohistochemistry and Immunofluorescence. J Transl Med 2023; 103:100175. [PMID: 37196983 PMCID: PMC10527458 DOI: 10.1016/j.labinv.2023.100175] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Revised: 03/24/2023] [Accepted: 05/08/2023] [Indexed: 05/19/2023] Open
Abstract
Multiplex immunohistochemistry/immunofluorescence (mIHC/mIF) is a developing technology that facilitates the evaluation of multiple, simultaneous protein expressions at single-cell resolution while preserving tissue architecture. These approaches have shown great potential for biomarker discovery, yet many challenges remain. Importantly, streamlined cross-registration of multiplex immunofluorescence images with additional imaging modalities and immunohistochemistry (IHC) can help increase the plex and/or improve the quality of the data generated by potentiating downstream processes such as cell segmentation. To address this problem, a fully automated process was designed to perform a hierarchical, parallelizable, and deformable registration of multiplexed digital whole-slide images (WSIs). We generalized the calculation of mutual information as a registration criterion to an arbitrary number of dimensions, making it well suited for multiplexed imaging. We also used the self-information of a given IF channel as a criterion to select the optimal channels for registration. Additionally, as precise labeling of cellular membranes in situ is essential for robust cell segmentation, a pan-membrane immunohistochemical staining method was developed for incorporation into mIF panels or for use as an IHC followed by cross-registration. In this study, we demonstrate this process by registering whole-slide 6-plex/7-color mIF images with whole-slide brightfield mIHC images, including a CD3 and a pan-membrane stain. Our algorithm, WSI mutual information registration (WSIMIR), performed highly accurate registration, allowing the retrospective generation of an 8-plex/9-color WSI, and outperformed 2 alternative automated methods for cross-registration by Jaccard index and Dice similarity coefficient (WSIMIR vs automated WARPY, P < .01 and P < .01, respectively; vs HALO + transformix, P = .083 and P = .049, respectively). Furthermore, the addition of a pan-membrane IHC stain cross-registered to an mIF panel facilitated improved automated cell segmentation across mIF WSIs, as measured by significantly increased correct detections, Jaccard index (0.78 vs 0.65), and Dice similarity coefficient (0.88 vs 0.79).
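Mutual information, the registration criterion this paper generalises to many channels, is worth seeing in its basic two-channel form: it is computed from the joint intensity histogram of two aligned images and peaks when one channel predicts the other. A minimal numpy sketch (the histogram estimator is a common textbook version, not the WSIMIR code):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two aligned image channels, estimated
    from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0                                # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# A channel is maximally informative about itself and nearly independent
# of unrelated noise, so MI(img, img) far exceeds MI(img, noise).
print(mutual_information(img, img) > mutual_information(img, rng.random((64, 64))))  # True
```

A registration optimiser moves one image until this quantity is maximised; the paper's contribution is extending the joint histogram from two dimensions to an arbitrary number of channels.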
Affiliation(s)
- Joshua Doyle
- Department of Astronomy and Physics, Johns Hopkins University, Baltimore, Maryland
- Benjamin F Green
- Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland; The Mark Foundation Center for Advanced Genomics and Imaging, Johns Hopkins University, Baltimore, Maryland; Bloomberg~Kimmel Institute for Cancer Immunotherapy and Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore, Maryland
- Margaret Eminizer
- Department of Astronomy and Physics, Johns Hopkins University, Baltimore, Maryland; Institute for Data Intensive Engineering and Science, Johns Hopkins University, Baltimore, Maryland
- Daniel Jimenez-Sanchez
- Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Steve Lu
- Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Elizabeth L Engle
- Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland; The Mark Foundation Center for Advanced Genomics and Imaging, Johns Hopkins University, Baltimore, Maryland; Bloomberg~Kimmel Institute for Cancer Immunotherapy and Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore, Maryland
- Haiying Xu
- Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland; The Mark Foundation Center for Advanced Genomics and Imaging, Johns Hopkins University, Baltimore, Maryland; Bloomberg~Kimmel Institute for Cancer Immunotherapy and Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore, Maryland
- Aleksandra Ogurtsova
- Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland; The Mark Foundation Center for Advanced Genomics and Imaging, Johns Hopkins University, Baltimore, Maryland; Bloomberg~Kimmel Institute for Cancer Immunotherapy and Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore, Maryland
- Jonathan Lai
- Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Sigfredo Soto-Diaz
- Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Jeffrey S Roskes
- Department of Astronomy and Physics, Johns Hopkins University, Baltimore, Maryland; Institute for Data Intensive Engineering and Science, Johns Hopkins University, Baltimore, Maryland
- Julie S Deutsch
- Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Janis M Taube
- Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland; The Mark Foundation Center for Advanced Genomics and Imaging, Johns Hopkins University, Baltimore, Maryland; Bloomberg~Kimmel Institute for Cancer Immunotherapy and Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore, Maryland; Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Joel C Sunshine
- Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland; Bloomberg~Kimmel Institute for Cancer Immunotherapy and Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore, Maryland; Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, Maryland; Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland; Johns Hopkins Center for Translational Immunoengineering, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Alexander S Szalay
- Department of Astronomy and Physics, Johns Hopkins University, Baltimore, Maryland; The Mark Foundation Center for Advanced Genomics and Imaging, Johns Hopkins University, Baltimore, Maryland; Institute for Data Intensive Engineering and Science, Johns Hopkins University, Baltimore, Maryland
15
Gatenbee CD, Baker AM, Prabhakaran S, Swinyard O, Slebos RJC, Mandal G, Mulholland E, Andor N, Marusyk A, Leedham S, Conejo-Garcia JR, Chung CH, Robertson-Tessi M, Graham TA, Anderson ARA. Virtual alignment of pathology image series for multi-gigapixel whole slide images. Nat Commun 2023; 14:4502. [PMID: 37495577 PMCID: PMC10372014 DOI: 10.1038/s41467-023-40218-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Accepted: 07/13/2023] [Indexed: 07/28/2023] Open
Abstract
Interest in spatial omics is on the rise, but the generation of highly multiplexed images remains challenging due to cost, expertise, methodological constraints, and access to technology. An alternative approach is to register collections of whole slide images (WSI), generating spatially aligned datasets. WSI registration is a two-part problem: the first part is the alignment itself, and the second is the application of transformations to huge multi-gigapixel images. To address both challenges, we developed Virtual Alignment of pathoLogy Image Series (VALIS), software that enables the generation of highly multiplexed images by aligning any number of brightfield and/or immunofluorescent WSI, the results of which can be saved in the ome.tiff format. Benchmarking on publicly available datasets indicates that VALIS provides state-of-the-art accuracy in WSI registration and 3D reconstruction. Leveraging existing open-source software tools, VALIS is written in Python, providing a free, fast, scalable, robust, and easy-to-use pipeline for registering multi-gigapixel WSI, facilitating downstream spatial analyses.
Affiliation(s)
- Chandler D Gatenbee
- Department of Integrated Mathematical Oncology, H. Lee Moffitt Cancer Center & Research Institute, 12902 Magnolia Drive, SRB 4, Tampa, FL, 336122, USA
- Ann-Marie Baker
- Evolution and Cancer Laboratory, Centre for Genomics and Computational Biology, Barts Cancer Institute, Queen Mary University of London, London, EC1M 6BQ, UK
- Sandhya Prabhakaran
- Department of Integrated Mathematical Oncology, H. Lee Moffitt Cancer Center & Research Institute, 12902 Magnolia Drive, SRB 4, Tampa, FL, 336122, USA
- Ottilie Swinyard
- Evolution and Cancer Laboratory, Centre for Genomics and Computational Biology, Barts Cancer Institute, Queen Mary University of London, London, EC1M 6BQ, UK
- Robbert J C Slebos
- Department of Head and Neck-Endocrine Oncology, H. Lee Moffitt Cancer Center & Research Institute, 12902 Magnolia Drive, CSB 6, Tampa, FL, USA
- Gunjan Mandal
- Department of Immunology, H. Lee Moffitt Cancer Center & Research Institute, 12902 Magnolia Drive, MRC, Tampa, FL, 336122, USA
- Eoghan Mulholland
- Wellcome Centre for Human Genetics, University of Oxford, Oxford, OX37BN, UK
- Noemi Andor
- Department of Integrated Mathematical Oncology, H. Lee Moffitt Cancer Center & Research Institute, 12902 Magnolia Drive, SRB 4, Tampa, FL, 336122, USA
- Andriy Marusyk
- Department of Cancer Physiology, H. Lee Moffitt Cancer Center & Research Institute, 12902 Magnolia Drive, SRB 4, Tampa, FL, USA
- Simon Leedham
- Wellcome Centre for Human Genetics, University of Oxford, Oxford, OX37BN, UK
- Jose R Conejo-Garcia
- Department of Immunology, H. Lee Moffitt Cancer Center & Research Institute, 12902 Magnolia Drive, MRC, Tampa, FL, 336122, USA
- Christine H Chung
- Department of Head and Neck-Endocrine Oncology, H. Lee Moffitt Cancer Center & Research Institute, 12902 Magnolia Drive, CSB 6, Tampa, FL, USA
- Mark Robertson-Tessi
- Department of Integrated Mathematical Oncology, H. Lee Moffitt Cancer Center & Research Institute, 12902 Magnolia Drive, SRB 4, Tampa, FL, 336122, USA
- Trevor A Graham
- Evolution and Cancer Laboratory, Centre for Genomics and Computational Biology, Barts Cancer Institute, Queen Mary University of London, London, EC1M 6BQ, UK
- Alexander R A Anderson
- Department of Integrated Mathematical Oncology, H. Lee Moffitt Cancer Center & Research Institute, 12902 Magnolia Drive, SRB 4, Tampa, FL, 336122, USA
16
Jurgas A, Wodzinski M, Celniak W, Atzori M, Muller H. Artifact Augmentation for Learning-based Quality Control of Whole Slide Images. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38082977 DOI: 10.1109/embc40787.2023.10340997] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
The acquisition of whole slide images is prone to artifacts that can require human control and re-scanning, both in clinical workflows and in research-oriented settings. Quality control algorithms are a first step to overcome this challenge, as they limit the use of low-quality images. Developing quality control systems in histopathology is not straightforward, in part because of the limited availability of data related to this topic. We address the problem by proposing a tool to augment data with artifacts. The proposed method seamlessly generates and blends artifacts from an external library into a given histopathology dataset. The datasets augmented with the blended artifacts are then used to train an artifact detection network in a supervised way. We use the YOLOv5 model for artifact detection with a slightly modified training pipeline. The proposed tool can be extended into a complete framework for the quality assessment of whole slide images. Clinical relevance: The proposed method may be useful for the initial quality screening of whole slide images. Each year, millions of whole slide images are acquired and digitized worldwide. Many of them contain artifacts that affect subsequent AI-oriented analysis. Therefore, a tool that operates at the acquisition phase and improves the initial quality assessment is crucial to increase the performance of digital pathology algorithms, e.g., for early cancer diagnosis.
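Blending a library artifact into a clean patch is, at its simplest, masked alpha compositing. The sketch below is a minimal illustration of that idea, assuming a precomputed artifact image and footprint mask; the paper's tool does this seamlessly at scale, which this toy version does not attempt:

```python
import numpy as np

def blend_artifact(patch, artifact, mask, alpha=0.8):
    """Blend an artifact image into a clean histology patch.
    `mask` (values in 0..1) marks the artifact footprint; `alpha` sets opacity."""
    w = alpha * mask[..., None]                 # per-pixel blend weight
    return (1.0 - w) * patch + w * artifact

patch = np.ones((4, 4, 3))                      # clean, bright tissue patch
artifact = np.zeros((4, 4, 3))                  # dark blur/ink artifact
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0   # artifact covers the centre
out = blend_artifact(patch, artifact, mask)
print(round(out[2, 2, 0], 2), out[0, 0, 0])  # 0.2 1.0
```

The bounding box of `mask` doubles as the ground-truth annotation, which is what makes this kind of augmentation useful for training a supervised detector such as YOLOv5.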
17
Chen S, Rao BY, Herrlinger S, Losonczy A, Paninski L, Varol E. MULTIMODAL MICROSCOPY IMAGE ALIGNMENT USING SPATIAL AND SHAPE INFORMATION AND A BRANCH-AND-BOUND ALGORITHM. Proc IEEE Int Conf Acoust Speech Signal Process 2023; 2023:10.1109/icassp49357.2023.10096185. [PMID: 37388235 PMCID: PMC10308861 DOI: 10.1109/icassp49357.2023.10096185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/01/2023]
Abstract
Multimodal microscopy experiments that image the same population of cells under different experimental conditions have become a widely used approach in systems and molecular neuroscience. The main obstacle is to align the different imaging modalities to obtain complementary information about the observed cell population (e.g., gene expression and calcium signal). Traditional image registration methods perform poorly when only a small subset of cells are present in both images, as is common in multimodal experiments. We cast multimodal microscopy alignment as a cell subset matching problem. To solve this non-convex problem, we introduce an efficient and globally optimal branch-and-bound algorithm to find subsets of point clouds that are in rotational alignment with each other. In addition, we use complementary information about cell shape and location to compute the matching likelihood of cell pairs in two imaging modalities to further prune the optimization search tree. Finally, we use the maximal set of cells in rigid rotational alignment to seed image deformation fields to obtain a final registration result. Our framework performs better than the state-of-the-art histology alignment approaches regarding matching quality and is faster than manual alignment, providing a viable solution to improve the throughput of multimodal microscopy experiments.
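The subset-matching objective, finding a rotation that brings the largest subset of one point cloud onto the other, can be demonstrated with a brute-force grid search. This is a deliberately naive stand-in for the paper's globally optimal branch-and-bound algorithm, which prunes this same search space; the toy points and tolerance are illustrative:

```python
import numpy as np

def best_rotation(points_a, points_b, tol=0.05, n_angles=360):
    """Grid-search stand-in for the branch-and-bound step: find the rotation
    that brings the largest subset of `points_a` within `tol` of some point
    in `points_b` (partial, subset-to-subset matching)."""
    best = (0, 0.0)
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        rot = points_a @ np.array([[c, -s], [s, c]]).T
        # Count rotated points that land near any point of B.
        d = np.linalg.norm(rot[:, None, :] - points_b[None, :, :], axis=-1)
        hits = int((d.min(axis=1) < tol).sum())
        if hits > best[0]:
            best = (hits, float(theta))
    return best

a = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
theta_true = np.pi / 2
c, s = np.cos(theta_true), np.sin(theta_true)
b = a[:2] @ np.array([[c, -s], [s, c]]).T   # only 2 of 3 cells observed in B
hits, theta = best_rotation(a, b)
print(hits, abs(theta - np.pi / 2) < 0.05)  # 2 True
```

Branch and bound reaches the same optimum without enumerating every angle, by bounding how many matches any sub-interval of rotations could possibly achieve and discarding hopeless branches.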
Affiliation(s)
- Shuonan Chen
- Department of Systems Biology
- Zuckerman Institute
- Columbia University
- Bovey Y Rao
- Department of Neurobiology
- Zuckerman Institute
- Columbia University
- Attila Losonczy
- Department of Neurobiology
- Zuckerman Institute
- Columbia University
- Liam Paninski
- Department of Statistics
- Zuckerman Institute
- Columbia University
- Erdem Varol
- Department of Statistics
- Department of Computer Science & Engineering
- Zuckerman Institute
- Columbia University
- New York University
18
Roy M, Wang F, Teodoro G, Bhattarai S, Bhargava M, Rekha TS, Aneja R, Kong J. Deep learning based registration of serial whole-slide histopathology images in different stains. J Pathol Inform 2023; 14:100311. [PMID: 37214150 PMCID: PMC10193019 DOI: 10.1016/j.jpi.2023.100311] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 04/11/2023] [Accepted: 04/12/2023] [Indexed: 05/24/2023] Open
Abstract
For routine pathology diagnosis and imaging-based biomedical research, whole-slide image (WSI) analyses have been largely limited to a 2D tissue image space. For a more definitive tissue representation that supports fine-resolution spatial and integrative analyses, it is critical to extend such tissue-based investigations to a 3D tissue space with spatially aligned serial tissue WSIs in different stains, such as Hematoxylin and Eosin (H&E) and Immunohistochemistry (IHC) biomarkers. However, such WSI registration is technically challenging because of the overwhelming image scale, complex histology structure changes, and significant differences in tissue appearance across stains. The goal of this study is to register serial sections from multi-stain histopathology whole-slide image blocks. We propose CGNReg, a novel translation-based deep learning registration network that spatially aligns serial WSIs stained in H&E and by IHC biomarkers without prior deformation information for model training. First, synthetic IHC images are produced from H&E slides through a robust image synthesis algorithm. Next, the synthetic and the real IHC images are registered through a fully convolutional network with multi-scaled deformable vector fields and joint loss optimization. We perform the registration at full image resolution, retaining tissue details in the results. Evaluated on a dataset of 76 breast cancer patients, each with 1 H&E and 2 IHC serial WSIs, CGNReg shows promising performance compared with multiple state-of-the-art systems in our evaluation. Our results suggest that CGNReg can produce promising registration results with serial WSIs in different stains, enabling integrative 3D tissue-based biomedical investigations.
Affiliation(s)
- Mousumi Roy
- Department of Computer Science, Stony Brook University, NY 11794, USA
- Fusheng Wang
- Department of Computer Science, Stony Brook University, NY 11794, USA
- Department of Biomedical Informatics, Stony Brook University, NY 11794, USA
- George Teodoro
- Department of Computer Science, Federal University of Minas Gerais, Belo Horizonte 31270-901, Brazil
- Shristi Bhattarai
- Department of Clinical and Diagnostic Sciences, School of Health Profession, University of Alabama at Birmingham, Birmingham, AL 35233, USA
- Mahak Bhargava
- Department of Clinical and Diagnostic Sciences, School of Health Profession, University of Alabama at Birmingham, Birmingham, AL 35233, USA
- T. Subbanna Rekha
- Department of Pathology, JSS Medical College, JSS Academy of Higher Education and Research, Mysuru, Karnataka 570009, India
- Ritu Aneja
- Department of Clinical and Diagnostic Sciences, School of Health Profession, University of Alabama at Birmingham, Birmingham, AL 35233, USA
- Jun Kong
- Department of Mathematics and Statistics, Georgia State University, Atlanta, GA 30303, USA
- Department of Computer Science and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
19
Hering A, Hansen L, Mok TCW, Chung ACS, Siebert H, Hager S, Lange A, Kuckertz S, Heldmann S, Shao W, Vesal S, Rusu M, Sonn G, Estienne T, Vakalopoulou M, Han L, Huang Y, Yap PT, Brudfors M, Balbastre Y, Joutard S, Modat M, Lifshitz G, Raviv D, Lv J, Li Q, Jaouen V, Visvikis D, Fourcade C, Rubeaux M, Pan W, Xu Z, Jian B, De Benetti F, Wodzinski M, Gunnarsson N, Sjolund J, Grzech D, Qiu H, Li Z, Thorley A, Duan J, Grosbrohmer C, Hoopes A, Reinertsen I, Xiao Y, Landman B, Huo Y, Murphy K, Lessmann N, van Ginneken B, Dalca AV, Heinrich MP. Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning. IEEE Trans Med Imaging 2023; 42:697-712. [PMID: 36264729 DOI: 10.1109/tmi.2022.3213983] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration dataset for the comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art of medical image registration. This paper describes the datasets, tasks, evaluation methods, and results of the challenge, as well as further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to a new state of the art. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
20
Awan R, Raza SEA, Lotz J, Weiss N, Rajpoot N. Deep feature based cross-slide registration. Comput Med Imaging Graph 2023; 104:102162. [PMID: 36584537 DOI: 10.1016/j.compmedimag.2022.102162] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2022] [Revised: 11/15/2022] [Accepted: 12/08/2022] [Indexed: 12/23/2022]
Abstract
Registration of multiple sections in a tissue block is an important prerequisite for any cross-slide image analysis. Non-rigid registration methods are capable of finding correspondence by locally transforming a moving image. These methods often rely on an initial guess to roughly align an image pair linearly and globally; this is essential to prevent convergence to a non-optimal minimum. We explore a deep feature based registration (DFBR) method which utilises data-driven descriptors to estimate the global transformation. A multi-stage strategy is adopted to improve the quality of registration. A visualisation tool is developed to view registered pairs of WSIs at different magnifications. With the help of this tool, one can apply a transformation on the fly without the need to generate a transformed moving WSI in pyramidal form. We compare the performance of data-driven descriptors on our dataset with that of hand-crafted descriptors. Our approach can align the images with only small registration errors. The efficacy of our proposed method is evaluated on a subsequent non-rigid registration step. To this end, the first two steps of the ANHIR winner's framework are replaced with DFBR to register image pairs provided by the challenge. The modified framework produces results comparable to those of the challenge-winning team.
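Once descriptors (deep or hand-crafted) yield matched point pairs, the global transform is typically recovered by least squares. A common closed-form solution for the rigid case is the SVD-based orthogonal Procrustes fit sketched below; this is a standard technique, not the DFBR code, and the matched points are synthetic:

```python
import numpy as np

def estimate_rigid(src, dst):
    """Least-squares rigid transform (rotation + translation) from matched
    point pairs, via the SVD-based orthogonal Procrustes solution."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    r = (u @ vt).T
    if np.linalg.det(r) < 0:            # guard against a reflection solution
        vt[-1] *= -1
        r = (u @ vt).T
    t = mu_d - r @ mu_s
    return r, t

# Matched keypoints: dst is src rotated by 30 degrees and shifted.
theta = np.pi / 6
r_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
dst = src @ r_true.T + np.array([4.0, -1.0])
r, t = estimate_rigid(src, dst)
print(np.allclose(r, r_true), np.allclose(t, [4.0, -1.0]))  # True True
```

In practice the matches from descriptor comparison contain outliers, so a robust wrapper such as RANSAC is usually placed around a fit like this before handing the initial alignment to the non-rigid stage.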
Affiliation(s)
- Ruqayya Awan
- Department of Computer Science, University of Warwick, CV4 7AL Coventry, UK.
- Shan E Ahmed Raza
- Department of Computer Science, University of Warwick, CV4 7AL Coventry, UK.
- Johannes Lotz
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany.
- Nick Weiss
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany.
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, CV4 7AL Coventry, UK; Department of Pathology, University Hospitals Coventry, Warwickshire, UK; The Alan Turing Institute, London, UK.
|
21
|
Huang Z, Shao W, Han Z, Alkashash AM, De la Sancha C, Parwani AV, Nitta H, Hou Y, Wang T, Salama P, Rizkalla M, Zhang J, Huang K, Li Z. Artificial intelligence reveals features associated with breast cancer neoadjuvant chemotherapy responses from multi-stain histopathologic images. NPJ Precis Oncol 2023; 7:14. [PMID: 36707660 PMCID: PMC9883475 DOI: 10.1038/s41698-023-00352-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2021] [Accepted: 01/16/2023] [Indexed: 01/28/2023] Open
Abstract
Advances in computational algorithms and tools have made the prediction of cancer patient outcomes using computational pathology feasible. However, predicting clinical outcomes from pre-treatment histopathologic images remains a challenging task, limited by the poor understanding of tumor immune micro-environments. In this study, an automatic, accurate, comprehensive, interpretable, and reproducible whole slide image (WSI) feature extraction pipeline known as IMage-based Pathological REgistration and Segmentation Statistics (IMPRESS) is described. Using both H&E and multiplex IHC (PD-L1, CD8+, and CD163+) images, we investigated whether artificial intelligence (AI)-based algorithms using automatic feature extraction methods can predict neoadjuvant chemotherapy (NAC) outcomes in HER2-positive (HER2+) and triple-negative breast cancer (TNBC) patients. Features derived from the tumor immune micro-environment and clinical data are used to train machine learning models to accurately predict the response to NAC in breast cancer patients (HER2+ AUC = 0.8975; TNBC AUC = 0.7674). The results demonstrate that this method outperforms models trained on features manually generated by pathologists. The developed image features and algorithms were further externally validated by independent cohorts, yielding encouraging results, especially for the HER2+ subtype.
Affiliation(s)
- Zhi Huang
- School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Department of Electrical and Computer Engineering, Indiana University - Purdue University Indianapolis, Indianapolis, IN, 46202, USA
- Wei Shao
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Zhi Han
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Regenstrief Institute, Indianapolis, IN, 46202, USA
- Department of Biostatistics and Health Data Science, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Ahmad Mahmoud Alkashash
- Department of Pathology, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Carlo De la Sancha
- Department of Pathology, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Anil V Parwani
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
- Hiroaki Nitta
- Roche Tissue Diagnostics, 1910 E. Innovation Park Drive, Tucson, AZ, 85755, USA
- Yanjun Hou
- University Hospitals Cleveland Medical Center, Case Western Reserve University, 11100 Euclid Avenue, Cleveland, OH, 44106, USA
- Tongxin Wang
- Department of Computer Science, Indiana University Bloomington, Bloomington, IN, 47408, USA
- Paul Salama
- Department of Electrical and Computer Engineering, Indiana University - Purdue University Indianapolis, Indianapolis, IN, 46202, USA
- Maher Rizkalla
- Department of Electrical and Computer Engineering, Indiana University - Purdue University Indianapolis, Indianapolis, IN, 46202, USA
- Jie Zhang
- Department of Medical and Molecular Genetics, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Kun Huang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Regenstrief Institute, Indianapolis, IN, 46202, USA
- Department of Biostatistics and Health Data Science, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Zaibo Li
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
|
22
|
Nolte P, Dullin C, Svetlove A, Brettmacher M, Rußmann C, Schilling AF, Alves F, Stock B. Current Approaches for Image Fusion of Histological Data with Computed Tomography and Magnetic Resonance Imaging. Radiol Res Pract 2022; 2022:6765895. [PMID: 36408297 PMCID: PMC9668453 DOI: 10.1155/2022/6765895] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Accepted: 08/17/2022] [Indexed: 10/30/2023] Open
Abstract
Classical analysis of biological samples requires the destruction of the tissue's integrity by cutting or grinding it down to thin slices for (immuno)histochemical staining and microscopic analysis. Despite the high specificity encoded in the stained 2D section of the whole tissue, the structural information, especially 3D information, is limited. Computed tomography (CT) or magnetic resonance imaging (MRI) scans performed prior to sectioning, in combination with image registration algorithms, provide an opportunity to regain access to morphological characteristics as well as to relate histological findings to the 3D structure of the local tissue environment. This review provides a summary of prevalent literature addressing the problem of multimodal coregistration of hard and soft tissue in microscopy and tomography. Grouped according to the complexity of the dimensions, including image-to-volume (2D ⟶ 3D), image-to-image (2D ⟶ 2D), and volume-to-volume (3D ⟶ 3D), selected currently applied approaches are investigated by comparing the method accuracy with respect to the limiting resolution of the tomography. Correlated multimodal imaging could become a useful tool, enabling precise histological diagnostics and a priori planning of tissue extraction such as biopsies.
Affiliation(s)
- Philipp Nolte
- Faculty of Engineering and Health, University of Applied Sciences and Arts, Goettingen 37085, Germany
- Institute for Diagnostic and Interventional Radiology, University Medical Center Goettingen, Goettingen 37075, Germany
- Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Goettingen, Goettingen 37075, Germany
- Christian Dullin
- Institute for Diagnostic and Interventional Radiology, University Medical Center Goettingen, Goettingen 37075, Germany
- Translational Molecular Imaging, Max-Planck Institute for Multidisciplinary Sciences, City Campus, 37075 Goettingen, Germany
- Department for Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg 69120, Germany
- Angelika Svetlove
- Institute for Diagnostic and Interventional Radiology, University Medical Center Goettingen, Goettingen 37075, Germany
- Translational Molecular Imaging, Max-Planck Institute for Multidisciplinary Sciences, City Campus, 37075 Goettingen, Germany
- Marcel Brettmacher
- Faculty of Engineering and Health, University of Applied Sciences and Arts, Goettingen 37085, Germany
- Christoph Rußmann
- Faculty of Engineering and Health, University of Applied Sciences and Arts, Goettingen 37085, Germany
- Brigham and Women's Hospital, Harvard Medical School, Boston 02155, MA, USA
- Arndt F. Schilling
- Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Goettingen, Goettingen 37075, Germany
- Frauke Alves
- Institute for Diagnostic and Interventional Radiology, University Medical Center Goettingen, Goettingen 37075, Germany
- Translational Molecular Imaging, Max-Planck Institute for Multidisciplinary Sciences, City Campus, 37075 Goettingen, Germany
- Bernd Stock
- Faculty of Engineering and Health, University of Applied Sciences and Arts, Goettingen 37085, Germany
|
23
|
Lipkova J, Chen RJ, Chen B, Lu MY, Barbieri M, Shao D, Vaidya AJ, Chen C, Zhuang L, Williamson DFK, Shaban M, Chen TY, Mahmood F. Artificial intelligence for multimodal data integration in oncology. Cancer Cell 2022; 40:1095-1110. [PMID: 36220072 PMCID: PMC10655164 DOI: 10.1016/j.ccell.2022.09.012] [Citation(s) in RCA: 74] [Impact Index Per Article: 37.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Revised: 07/12/2022] [Accepted: 09/15/2022] [Indexed: 02/07/2023]
Abstract
In oncology, the patient state is characterized by a whole spectrum of modalities, ranging from radiology, histology, and genomics to electronic health records. Current artificial intelligence (AI) models operate mainly in the realm of a single modality, neglecting the broader clinical context, which inevitably diminishes their potential. Integration of different data modalities provides opportunities to increase robustness and accuracy of diagnostic and prognostic models, bringing AI closer to clinical practice. AI models are also capable of discovering novel patterns within and across modalities suitable for explaining differences in patient outcomes or treatment resistance. The insights gleaned from such models can guide exploration studies and contribute to the discovery of novel biomarkers and therapeutic targets. To support these advances, here we present a synopsis of AI methods and strategies for multimodal data fusion and association discovery. We outline approaches for AI interpretability and directions for AI-driven exploration through multimodal data interconnections. We examine challenges in clinical adoption and discuss emerging solutions.
Affiliation(s)
- Jana Lipkova
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Richard J Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Bowen Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Department of Computer Science, Harvard University, Cambridge, MA, USA
- Ming Y Lu
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
- Matteo Barbieri
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Daniel Shao
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Harvard-MIT Health Sciences and Technology (HST), Cambridge, MA, USA
- Anurag J Vaidya
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Harvard-MIT Health Sciences and Technology (HST), Cambridge, MA, USA
- Chengkuan Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Luoting Zhuang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Drew F K Williamson
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Muhammad Shaban
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Tiffany Y Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA
|
24
|
Hermsen M, Ciompi F, Adefidipe A, Denic A, Dendooven A, Smith BH, van Midden D, Bräsen JH, Kers J, Stegall MD, Bándi P, Nguyen T, Swiderska-Chadaj Z, Smeets B, Hilbrands LB, van der Laak JAWM. Convolutional Neural Networks for the Evaluation of Chronic and Inflammatory Lesions in Kidney Transplant Biopsies. Am J Pathol 2022; 192:1418-1432. [PMID: 35843265 DOI: 10.1016/j.ajpath.2022.06.009] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/21/2021] [Revised: 06/13/2022] [Accepted: 06/22/2022] [Indexed: 06/15/2023]
Abstract
In kidney transplant biopsies, both inflammation and chronic changes are important features that predict long-term graft survival. Quantitative scoring of these features is important for transplant diagnostics and kidney research. However, visual scoring is poorly reproducible and labor intensive. The goal of this study was to investigate the potential of convolutional neural networks (CNNs) to quantify inflammation and chronic features in kidney transplant biopsies. A structure segmentation CNN and a lymphocyte detection CNN were applied to 125 whole-slide image pairs of periodic acid-Schiff- and CD3-stained slides. The CNN results were used to quantify healthy and sclerotic glomeruli, interstitial fibrosis, tubular atrophy, and inflammation within both nonatrophic and atrophic tubuli, and in areas of interstitial fibrosis. The computed tissue features showed high correlation with Banff lesion scores of five pathologists (A.A., A.Dend., J.H.B., J.K., and T.N.). Analyses on a small subset showed that higher CD3+ cell density within scarred regions and higher CD3+ cell counts inside atrophic tubuli correlated moderately with long-term change of estimated glomerular filtration rate. The presented CNNs are valid tools to yield objective quantitative information on glomeruli number, fibrotic tissue, and inflammation within scarred and non-scarred kidney parenchyma in a reproducible manner. CNNs have the potential to improve kidney transplant diagnostics and will benefit the community as a novel method to generate surrogate end points for large-scale clinical studies.
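Quantities such as CD3+ cell density within scarred regions reduce to counting detected cell centroids inside a segmentation mask and normalizing by the mask's area. The sketch below illustrates only that counting step; `cell_density` and the pixel-set mask representation are invented for illustration and are not the study's pipeline.

```python
def cell_density(cell_centroids, region_mask, pixel_area=1.0):
    """Cells per unit area inside a region given as a set of (x, y) pixel coordinates."""
    inside = sum(1 for (x, y) in cell_centroids if (int(x), int(y)) in region_mask)
    area = len(region_mask) * pixel_area
    return inside / area if area else 0.0

# Toy fibrosis region of 100 pixels; 4 of 5 detected CD3+ centroids fall inside it.
region = {(x, y) for x in range(10) for y in range(10)}
cells = [(2.5, 3.1), (7.2, 7.9), (4.0, 4.0), (9.9, 0.1), (15.0, 15.0)]
print(cell_density(cells, region))  # → 0.04
```

In a real pipeline the mask would come from the segmentation CNN, the centroids from the lymphocyte detection CNN, and `pixel_area` from the scanner's microns-per-pixel metadata.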
Affiliation(s)
- Meyke Hermsen
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Adeyemi Adefidipe
- Department of Pathology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, the Netherlands
- Aleksandar Denic
- Division of Nephrology and Hypertension, Mayo Clinic, Rochester, Minnesota
- Amélie Dendooven
- Department of Pathology, Ghent University Hospital, Ghent, Belgium; Faculty of Medicine, University of Antwerp, Wilrijk, Antwerp, Belgium
- Byron H Smith
- William J. von Liebig Center for Transplantation and Clinical Regeneration, Mayo Clinic, Rochester, Minnesota; Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, Minnesota
- Dominique van Midden
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Jan Hinrich Bräsen
- Nephropathology Unit, Institute of Pathology, Hannover Medical School, Hannover, Germany
- Jesper Kers
- Department of Pathology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, the Netherlands; Department of Pathology, Leiden University Medical Center, Leiden, the Netherlands; Center for Analytical Sciences Amsterdam, Van 't Hoff Institute for Molecular Sciences, University of Amsterdam, Amsterdam, the Netherlands
- Mark D Stegall
- Division of Transplantation Surgery, Mayo Clinic, Rochester, Minnesota
- Péter Bándi
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Tri Nguyen
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Zaneta Swiderska-Chadaj
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands; Faculty of Electrical Engineering, Warsaw University of Technology, Warsaw, Poland
- Bart Smeets
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Luuk B Hilbrands
- Department of Nephrology, Radboud University Medical Center, Nijmegen, the Netherlands
- Jeroen A W M van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
|
25
|
Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:6702380. [PMID: 36124675 PMCID: PMC9677480 DOI: 10.1093/bib/bbac367] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 07/27/2022] [Accepted: 08/05/2022] [Indexed: 12/14/2022] Open
Abstract
In routine medical procedures, the time-consuming and expensive process of obtaining test results burdens doctors and patients alike. Digital pathology research applies computational technologies to data management, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more up-to-date and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review focuses on the most popular image data, hematoxylin-eosin stained tissue slide images, as a strategic starting point for addressing the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology plays in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lianhe Zhao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences
- Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zhao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences
|
26
|
Pyatov VA, Sorokin DV. Affine Registration of Histological Images Using Transformer-Based Feature Matching. Pattern Recognit Image Anal 2022. [DOI: 10.1134/s1054661822030324] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
27
|
Ge L, Wei X, Hao Y, Luo J, Xu Y. Unsupervised Histological Image Registration Using Structural Feature Guided Convolutional Neural Network. IEEE Trans Med Imaging 2022; 41:2414-2431. [PMID: 35363611 DOI: 10.1109/tmi.2022.3164088] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Registration of multiple stained images is a fundamental task in histological image analysis. In supervised methods, obtaining ground-truth data with known correspondences is laborious and time-consuming; thus, unsupervised methods are desirable. Unsupervised methods ease the burden of manual annotation but often at the cost of inferior results. In addition, registration of histological images suffers from appearance variance due to multiple staining, repetitive texture, and sections missing during tissue preparation. To deal with these challenges, we propose an unsupervised structural feature guided convolutional neural network (SFG). Structural features are robust to multiple staining. The combination of low-resolution rough structural features and high-resolution fine structural features can overcome repetitive texture and missing sections, respectively. SFG consists of two components of structural consistency constraints according to the formations of structural features, i.e., a dense structural component and a sparse structural component. The dense structural component uses structural feature maps of the whole image as structural consistency constraints, which represent local contextual information. The sparse structural component utilises the distance of automatically obtained matched key points as structural consistency constraints, because the matched key points in an image pair emphasize the matching of significant structures, which implies global information. In addition, a multi-scale strategy is used in both the dense and sparse structural components to make full use of the structural information at low and high resolution to overcome repetitive texture and missing sections. The proposed method was evaluated on a public histological dataset (ANHIR) and ranked first as of Jan 18th, 2022.
|
28
|
Naglah A, Khalifa F, El-baz A, Gondim D. Conditional GANs based system for fibrosis detection and quantification in Hematoxylin and Eosin whole slide images. Med Image Anal 2022. [DOI: 10.1016/j.media.2022.102537] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 04/12/2022] [Accepted: 07/11/2022] [Indexed: 11/22/2022]
|
29
|
Sakamoto H, Nishimura M, Teplov A, Leung G, Ntiamoah P, Cesmecioglu E, Kawata N, Ohnishi T, Kareem I, Shia J, Yagi Y. A pilot study of micro-CT-based whole tissue imaging (WTI) on endoscopic submucosal dissection (ESD) specimens. Sci Rep 2022; 12:9889. [PMID: 35701447 DOI: 10.1038/s41598-022-13907-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Accepted: 05/30/2022] [Indexed: 12/25/2022] Open
Abstract
Endoscopic submucosal dissection can remove large superficial gastrointestinal lesions en bloc. A detailed pathological evaluation of the resected specimen is required to assess the risk of recurrence after treatment. However, the current method of sectioning specimens to a thickness of a few millimeters does not provide information between the sections, which is lost during preparation. In this study, we produced three-dimensional images of the entire dissected lesion for nine samples using a micro-CT imaging system. Although it was difficult to diagnose histological type on micro-CT images, they allowed successful evaluation of the extent of the lesion and its surgical margins. Micro-CT images can depict sites that cannot be observed by the conventional pathological diagnostic process, suggesting that they may be useful in a complementary manner.
|
30
|
Naumov A, Ushakov E, Ivanov A, Midiber K, Khovanskaya T, Konyukova A, Vishnyakova P, Nora S, Mikhaleva L, Fatkhudinov T, Karpulevich E. EndoNuke: Nuclei Detection Dataset for Estrogen and Progesterone Stained IHC Endometrium Scans. Data 2022; 7:75. [DOI: 10.3390/data7060075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
We present EndoNuke, an open dataset consisting of tiles from endometrium immunohistochemistry slides with the nuclei annotated as keypoints. Several experts with varying levels of experience annotated the dataset. In addition to gathering the data and creating the annotations, we performed an agreement study and analyzed the distribution of nuclei staining intensity.
|
31
|
Ghahremani P, Li Y, Kaufman A, Vanguri R, Greenwald N, Angelo M, Hollmann TJ, Nadeem S. Deep learning-inferred multiplex immunofluorescence for immunohistochemical image quantification. NAT MACH INTELL 2022; 4:401-412. [DOI: 10.1038/s42256-022-00471-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
|
33
|
Lee J, Yoo M, Choi J. Recent advances in spatially resolved transcriptomics: challenges and opportunities. BMB Rep 2022; 55:113-124. [PMID: 35168703 PMCID: PMC8972138] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Revised: 02/07/2022] [Accepted: 02/11/2022] [Indexed: 03/09/2024] Open
Abstract
Single-cell RNA sequencing (scRNA-seq) has greatly advanced our understanding of cellular heterogeneity by profiling individual cell transcriptomes. However, cell dissociation from the tissue structure causes a loss of spatial information, which hinders the identification of intercellular communication networks and global transcriptional patterns present in the tissue architecture. To overcome this limitation, novel transcriptomic platforms that preserve spatial information have been actively developed. Significant achievements in imaging technologies have enabled in situ targeted transcriptomic profiling in single cells at single-molecule resolution. In addition, technologies based on mRNA capture followed by sequencing have made possible profiling of the genome-wide transcriptome at 55-100 μm resolution. Unfortunately, neither imaging-based technology nor capture-based method elucidates a complete picture of the spatial transcriptome in a tissue. Therefore, addressing specific biological questions requires balancing experimental throughput and spatial resolution, mandating the efforts to develop computational algorithms that are pivotal to circumvent technology-specific limitations. In this review, we focus on the current state-of-the-art spatially resolved transcriptomic technologies, describe their applications in a variety of biological domains, and explore recent discoveries demonstrating their enormous potential in biomedical research. We further highlight novel integrative computational methodologies with other data modalities that provide a framework to derive biological insight into heterogeneous and complex tissue organization. [BMB Reports 2022; 55(3): 113-124].
Affiliation(s)
- Jongwon Lee
- Department of Biomedical Sciences, Korea University College of Medicine, Seoul 02841, Korea
- Brain Korea 21 Plus Project for Biomedical Science, Korea University College of Medicine, Seoul 02841, Korea
- Minsu Yoo
- Department of Biomedical Sciences, Korea University College of Medicine, Seoul 02841, Korea
- Jungmin Choi
- Department of Biomedical Sciences, Korea University College of Medicine, Seoul 02841, Korea
- Department of Genetics, Yale University School of Medicine, New Haven, CT 06510, USA
34
Hoque MZ, Keskinarkaus A, Nyberg P, Mattila T, Seppänen T. Whole slide image registration via multi-stained feature matching. Comput Biol Med 2022; 144:105301. [DOI: 10.1016/j.compbiomed.2022.105301]
35
Faust K, Lee MK, Dent A, Fiala C, Portante A, Rabindranath M, Alsafwani N, Gao A, Djuric U, Diamandis P. Integrating morphologic and molecular histopathological features through whole slide image registration and deep learning. Neurooncol Adv 2022; 4:vdac001. [PMID: 35156037 PMCID: PMC8826810 DOI: 10.1093/noajnl/vdac001]
Abstract
Background
Modern molecular pathology workflows in neuro-oncology rely heavily on the integration of morphologic and immunohistochemical patterns for analysis, classification, and prognostication. However, despite the recent emergence of digital pathology platforms and artificial intelligence-driven computational image analysis tools, automating the integration of histomorphologic information found across these multiple studies is challenged by the large file sizes of whole slide images (WSIs) and shifts/rotations in tissue sections introduced during slide preparation.
Methods
To address this, we develop a workflow that couples different computer vision tools including scale-invariant feature transform (SIFT) and deep learning to efficiently align and integrate histopathological information found across multiple independent studies. We highlight the utility and automation potential of this workflow in the molecular subclassification and discovery of previously unappreciated spatial patterns in diffuse gliomas.
Results
First, we show how a SIFT-driven computer vision workflow was effective at automated WSI alignment in a cohort of 107 randomly selected surgical neuropathology cases (97/107 (91%) showing appropriate matches, AUC = 0.96). This alignment allows our AI-driven diagnostic workflow to not only differentiate different brain tumor types, but also integrate and carry out molecular subclassification of diffuse gliomas using relevant immunohistochemical biomarkers (IDH1-R132H, ATRX). To highlight the discovery potential of this workflow, we also examined spatial distributions of tumors showing heterogeneous expression of the proliferation marker MIB1 and Olig2. This analysis helped uncover an interesting and previously unappreciated association between Olig2-positive and proliferative areas in some gliomas (r = 0.62).
Conclusion
This efficient neuropathologist-inspired workflow provides a generalizable approach to help automate a variety of advanced immunohistochemically compatible diagnostic and discovery exercises in surgical neuropathology and neuro-oncology.
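The core of such SIFT-driven alignment, once keypoints have been matched across two WSIs, is estimating the aligning transform from the matched coordinates. A minimal least-squares sketch of that step (the point sets and transform below are hypothetical, not data from the cited study):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine (A, t) mapping src points onto dst points.

    src, dst: (N, 2) arrays of matched keypoint coordinates (N >= 3).
    Solves dst ~= src @ A.T + t as one linear least-squares problem.
    """
    n = src.shape[0]
    # Design matrix [x, y, 1]; solve jointly for both output coordinates.
    X = np.hstack([src, np.ones((n, 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # shape (3, 2)
    A = params[:2].T   # 2x2 linear part
    t = params[2]      # translation
    return A, t

# Hypothetical matched keypoints: a known rotation + shift, plus noise.
rng = np.random.default_rng(0)
src = rng.uniform(0, 1000, size=(40, 2))
theta = np.deg2rad(7.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + np.array([12.0, -5.0]) + rng.normal(0, 0.1, size=src.shape)

A, t = estimate_affine(src, dst)
```

With enough well-distributed matches, the fit is robust to small localization noise; production pipelines additionally reject outlier matches (e.g. with RANSAC) before fitting.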
Affiliation(s)
- Kevin Faust
- Department of Computer Science, University of Toronto, 40 St. George Street, Toronto, ON M5S 2E4, Canada
- Laboratory Medicine Program, Department of Pathology, University Health Network, 200 Elizabeth Street, Toronto, ON M5G 2C4, Canada
- Michael K Lee
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON M5S 1A8, Canada
- Anglin Dent
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON M5S 1A8, Canada
- Clare Fiala
- Laboratory Medicine Program, Department of Pathology, University Health Network, 200 Elizabeth Street, Toronto, ON M5G 2C4, Canada
- Alessia Portante
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON M5S 1A8, Canada
- Madhu Rabindranath
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON M5S 1A8, Canada
- Noor Alsafwani
- Laboratory Medicine Program, Department of Pathology, University Health Network, 200 Elizabeth Street, Toronto, ON M5G 2C4, Canada
- Department of Pathology, College of Medicine, Imam Abdulrahman Bin Faisal University, P.O. Box 2208, Dammam, 31441, Saudi Arabia
- Andrew Gao
- Laboratory Medicine Program, Department of Pathology, University Health Network, 200 Elizabeth Street, Toronto, ON M5G 2C4, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON M5S 1A8, Canada
- Ugljesa Djuric
- Princess Margaret Cancer Centre, 101 College Street, Toronto, ON M5G 1L7, Canada
- Phedias Diamandis
- Laboratory Medicine Program, Department of Pathology, University Health Network, 200 Elizabeth Street, Toronto, ON M5G 2C4, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON M5S 1A8, Canada
- Princess Margaret Cancer Centre, 101 College Street, Toronto, ON M5G 1L7, Canada
- Department of Medical Biophysics, University of Toronto, 101 College St, Toronto, ON M5G 1L7, Canada
36
Chiaruttini N, Burri O, Haub P, Guiet R, Sordet-Dessimoz J, Seitz A. An Open-Source Whole Slide Image Registration Workflow at Cellular Precision Using Fiji, QuPath and Elastix. Front Comput Sci 2022. [DOI: 10.3389/fcomp.2021.780026]
Abstract
Image analysis workflows for histology increasingly require the correlation and combination of measurements across several whole slide images. Indeed, for multiplexing, as well as multimodal imaging, it is indispensable that the same sample is imaged multiple times, either through various systems for multimodal imaging, or using the same system but throughout rounds of sample manipulation (e.g. multiple staining sessions). In both cases slight deformations from one image to another are unavoidable, leading to an imperfect superimposition and thus a loss of accuracy, making it difficult to link measurements, in particular at the cellular level. Using pre-existing software components and developing missing ones, we propose a user-friendly workflow which facilitates the nonlinear registration of whole slide images in order to reach sub-cellular resolution. The set of whole slide images to register and analyze is first defined as a QuPath project. Fiji is then used to open the QuPath project and perform the registrations. Each registration is automated by using an elastix backend, or semi-automated by using BigWarp in order to interactively correct the results of the automated registration. These transformations can then be retrieved in QuPath to transfer any regions of interest from an image to the corresponding registered images. In addition, the transformations can be applied in QuPath to produce on-the-fly transformed images that can be displayed on top of the reference image. Thus, relevant data can be combined and analyzed throughout all registered slides, facilitating the analysis of correlative results for multiplexed and multimodal imaging.
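The ROI-transfer step described above (applying a recovered transform to QuPath regions of interest) reduces to mapping polygon vertices through the transform. A minimal sketch using an affine for brevity, whereas the actual workflow also supports nonlinear transforms; the ROI and transform values are hypothetical:

```python
import numpy as np

def transfer_roi(polygon, A, t):
    """Map an ROI polygon from a reference slide into a registered slide.

    polygon: (N, 2) vertex coordinates; A, t: the recovered affine transform.
    """
    return polygon @ A.T + t

def transfer_roi_back(polygon, A, t):
    """Inverse mapping, e.g. to pull annotations back onto the reference."""
    return (polygon - t) @ np.linalg.inv(A).T

# Hypothetical ROI (four vertices) and transform (small rotation + scale).
roi = np.array([[100.0, 100.0], [400.0, 120.0], [380.0, 420.0], [90.0, 400.0]])
A = np.array([[0.98, -0.05], [0.05, 0.98]])
t = np.array([25.0, -10.0])

moved = transfer_roi(roi, A, t)
restored = transfer_roi_back(moved, A, t)
```

Because only vertex coordinates are transformed, region annotations transfer without resampling any pixel data, which is what makes cross-slide measurement linking cheap.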
37
Jessup J, Krueger R, Warchol S, Hoffer J, Muhlich J, Ritch CC, Gaglia G, Coy S, Chen YA, Lin JR, Santagata S, Sorger PK, Pfister H. Scope2Screen: Focus+Context Techniques for Pathology Tumor Assessment in Multivariate Image Data. IEEE Trans Vis Comput Graph 2022; 28:259-269. [PMID: 34606456 PMCID: PMC8805697 DOI: 10.1109/tvcg.2021.3114786]
Abstract
Inspection of tissues using a light microscope is the primary method of diagnosing many diseases, notably cancer. Highly multiplexed tissue imaging builds on this foundation, enabling the collection of up to 60 channels of molecular information plus cell and tissue morphology using antibody staining. This provides unique insight into disease biology and promises to help with the design of patient-specific therapies. However, a substantial gap remains with respect to visualizing the resulting multivariate image data and effectively supporting pathology workflows in digital environments on screen. We, therefore, developed Scope2Screen, a scalable software system for focus+context exploration and annotation of whole-slide, high-plex, tissue images. Our approach scales to analyzing 100 GB images of 10⁹ or more pixels per channel, containing millions of individual cells. A multidisciplinary team of visualization experts, microscopists, and pathologists identified key image exploration and annotation tasks involving finding, magnifying, quantifying, and organizing regions of interest (ROIs) in an intuitive and cohesive manner. Building on a scope-to-screen metaphor, we present interactive lensing techniques that operate at single-cell and tissue levels. Lenses are equipped with task-specific functionality and descriptive statistics, making it possible to analyze image features, cell types, and spatial arrangements (neighborhoods) across image channels and scales. A fast sliding-window search guides users to regions similar to those under the lens; these regions can be analyzed and considered either separately or as part of a larger image collection. A novel snapshot method enables linked lens configurations and image statistics to be saved, restored, and shared with these regions. We validate our designs with domain experts and apply Scope2Screen in two case studies involving lung and colorectal cancers to discover cancer-relevant image features.
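The sliding-window search that guides users to regions similar to the one under the lens can be sketched with cheap per-window descriptors ranked by cosine similarity. This is a toy stand-in for the system's actual search, with hypothetical image and window sizes:

```python
import numpy as np

def window_features(img, size, stride):
    """Per-window mean intensity per channel: a cheap descriptor for
    ranking regions by similarity to the patch under the 'lens'."""
    H, W, _ = img.shape
    feats, coords = [], []
    for y in range(0, H - size + 1, stride):
        for x in range(0, W - size + 1, stride):
            feats.append(img[y:y+size, x:x+size].mean(axis=(0, 1)))
            coords.append((y, x))
    return np.array(feats), coords

def most_similar(img, lens_patch, size, stride):
    """Return window coordinates ranked by cosine similarity to the lens."""
    feats, coords = window_features(img, size, stride)
    q = lens_patch.mean(axis=(0, 1))
    sims = feats @ q / (np.linalg.norm(feats, axis=1) * np.linalg.norm(q) + 1e-12)
    order = np.argsort(-sims)
    return [coords[i] for i in order], sims[order]

# Hypothetical 3-channel image with one distinctively colored planted region.
rng = np.random.default_rng(1)
img = rng.uniform(0.0, 0.2, size=(64, 64, 3))
img[32:48, 16:32] = [0.9, 0.1, 0.8]      # the planted region
lens = img[32:48, 16:32].copy()          # region under the lens

ranked, scores = most_similar(img, lens, size=16, stride=16)
```

Mean-per-channel features are deliberately crude; a real system would use richer descriptors (marker intensities, cell-type counts) but the ranking machinery is the same.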
38
Abstract
Spatial transcriptomics is a rapidly growing field that promises to comprehensively characterize tissue organization and architecture at the single-cell or subcellular resolution. Such information provides a solid foundation for mechanistic understanding of many biological processes in both health and disease that cannot be obtained by using traditional technologies. The development of computational methods plays an important role in extracting biological signals from raw data. Various approaches have been developed to overcome technology-specific limitations such as spatial resolution, gene coverage, sensitivity, and technical biases. Downstream analysis tools formulate spatial organization and cell-cell communications as quantifiable properties, and provide algorithms to derive such properties. Integrative pipelines further assemble multiple tools in one package, allowing biologists to conveniently analyze data from beginning to end. In this review, we summarize the state of the art of spatial transcriptomic data analysis methods and pipelines, and discuss how they operate on different technological platforms.
Affiliation(s)
- Ruben Dries
- Department of Medicine, Boston University School of Medicine, Boston, Massachusetts 02118, USA
- Bioinformatics Graduate Program, Boston University, Boston, Massachusetts 02215, USA
- Section of Computational Biomedicine, Boston University School of Medicine, Boston, Massachusetts 02118, USA
- Jiaji Chen
- Department of Medicine, Boston University School of Medicine, Boston, Massachusetts 02118, USA
- Natalie Del Rossi
- Department of Genetics and Genomic Sciences, Charles Bronfman Institute for Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, New York 10029, USA
- Mohammed Muzamil Khan
- Department of Medicine, Boston University School of Medicine, Boston, Massachusetts 02118, USA
- Bioinformatics Graduate Program, Boston University, Boston, Massachusetts 02215, USA
- Section of Computational Biomedicine, Boston University School of Medicine, Boston, Massachusetts 02118, USA
- Adriana Sistig
- Department of Genetics and Genomic Sciences, Charles Bronfman Institute for Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, New York 10029, USA
- Guo-Cheng Yuan
- Department of Genetics and Genomic Sciences, Charles Bronfman Institute for Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, New York 10029, USA
- Precision Immunology Institute, Icahn School of Medicine at Mount Sinai, New York, New York 10029, USA
39
Korzynska A, Roszkowiak L, Zak J, Siemion K. A review of current systems for annotation of cell and tissue images in digital pathology. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.04.012]
40
Venet L, Pati S, Feldman MD, Nasrallah MP, Yushkevich P, Bakas S. Accurate and Robust Alignment of Differently Stained Histologic Images Based on Greedy Diffeomorphic Registration. Appl Sci (Basel) 2021; 11:1892. [PMID: 34290888 PMCID: PMC8291745 DOI: 10.3390/app11041892]
Abstract
Histopathologic assessment routinely provides rich microscopic information about tissue structure and disease process. However, the sections used are very thin, and essentially capture only 2D representations of a certain tissue sample. Accurate and robust alignment of sequentially cut 2D slices should contribute to more comprehensive assessment accounting for surrounding 3D information. Towards this end, we here propose a two-step diffeomorphic registration approach that aligns differently stained histology slides to each other, starting with an initial affine step followed by estimating a deformation field. It was quantitatively evaluated on ample (n = 481) and diverse data from the automatic non-rigid histological image registration challenge, where it was awarded the second rank. The obtained results demonstrate the ability of the proposed approach to robustly (average robustness = 0.9898) and accurately (average relative target registration error = 0.2%) align differently stained histology slices of various anatomical sites while maintaining reasonable computational efficiency (<1 min per registration). The method was developed by adapting a general-purpose registration algorithm designed for 3D radiographic scans and achieved consistently accurate results for aligning high-resolution 2D histologic images. Accurate alignment of histologic images can contribute to a better understanding of the spatial arrangement and growth patterns of cells, vessels, matrix, nerves, and immune cell interactions.
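The two-step scheme above (affine initialization followed by a dense deformation field) can be illustrated by how the two transforms compose at resampling time. This is a generic numpy/scipy sketch with hypothetical inputs, not the cited greedy diffeomorphic implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, A, t, field):
    """Resample `image` through an affine (A, t) composed with a dense
    displacement `field` of shape (2, H, W): output pixel (y, x) samples
    the input at the affine-mapped position plus the local displacement."""
    H, W = image.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    # Affine step: where each output pixel samples from in the input.
    ys = A[0, 0] * yy + A[0, 1] * xx + t[0]
    xs = A[1, 0] * yy + A[1, 1] * xx + t[1]
    # Deformable step: add the per-pixel displacement field.
    ys += field[0]
    xs += field[1]
    return map_coordinates(image, [ys, xs], order=1, mode='nearest')

# Hypothetical inputs: identity affine, uniform 3-pixel displacement field.
img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0
A = np.eye(2)
t = np.zeros(2)
field = np.zeros((2, 32, 32))
field[0] += 3.0          # every output pixel samples 3 rows lower
out = warp(img, A, t, field)
```

In a real pipeline the affine is estimated first on downsampled images and the displacement field is then optimized as a refinement, exactly the coarse-to-fine order the two steps compose in here.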
Affiliation(s)
- Ludovic Venet
- Center for Biomedical Image Computing & Analytics, University of Pennsylvania, Philadelphia, PA 19104, USA
- Sarthak Pati
- Center for Biomedical Image Computing & Analytics, University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Pathology & Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Michael D. Feldman
- Department of Pathology & Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- MacLean P. Nasrallah
- Department of Pathology & Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Paul Yushkevich
- Center for Biomedical Image Computing & Analytics, University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Spyridon Bakas
- Center for Biomedical Image Computing & Analytics, University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Pathology & Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
41
Wodzinski M, Müller H. DeepHistReg: Unsupervised Deep Learning Registration Framework for Differently Stained Histology Samples. Comput Methods Programs Biomed 2021; 198:105799. [PMID: 33137701 DOI: 10.1016/j.cmpb.2020.105799]
Abstract
BACKGROUND AND OBJECTIVE The use of several stains during histology sample preparation can be useful for fusing complementary information about different tissue structures. It reveals distinct tissue properties that combined may be useful for grading, classification, or 3-D reconstruction. Nevertheless, since the slide preparation is different for each stain and the procedure uses consecutive slices, the tissue undergoes complex and possibly large deformations. Therefore, a nonrigid registration is required before further processing. The nonrigid registration of differently stained histology images is a challenging task because: (i) the registration must be fully automatic, (ii) the histology images are extremely high-resolution, (iii) the registration should be as fast as possible, (iv) there are significant differences in the tissue appearance, and (v) there are not many unique features due to a repetitive texture. METHODS In this article, we propose a deep learning-based solution to the histology registration. We describe a registration framework dedicated to high-resolution histology images that can perform the registration in real-time. The framework consists of an automatic background segmentation, iterative initial rotation search and learning-based affine/nonrigid registration. RESULTS We evaluate our approach using an open dataset provided for the Automatic Non-rigid Histological Image Registration (ANHIR) challenge organized jointly with the IEEE ISBI 2019 conference. We compare our solution to the challenge participants using a server-side evaluation tool provided by the challenge organizers. Following the challenge evaluation criteria, we use the target registration error (TRE) as the evaluation metric. Our algorithm provides registration accuracy close to the best scoring teams (median rTRE 0.19% of the image diagonal) while being significantly faster (the average registration time is about 2 seconds). 
CONCLUSIONS The proposed framework provides results, in terms of the TRE, comparable to the best-performing state-of-the-art methods. However, it is significantly faster, thus potentially more useful in clinical practice where a large number of histology images are being processed. The proposed method is of particular interest to researchers requiring an accurate, real-time, nonrigid registration of high-resolution histology images for whom the processing time of traditional, iterative methods is unacceptable. We provide free access to the software implementation of the method, including training and inference code, as well as pretrained models. Since the ANHIR dataset is open, this makes the results fully and easily reproducible.
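The initial rotation search mentioned in the framework above can be sketched as a brute-force search over angles scored by normalized cross-correlation. This is a minimal illustration of the idea, with hypothetical images, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import rotate

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def initial_rotation(source, target, step=5):
    """Try rotations of `source` in `step`-degree increments and keep the
    one that best matches `target` under NCC, giving a coarse starting
    pose before affine/nonrigid refinement."""
    best_angle, best_score = 0, -np.inf
    for angle in range(0, 360, step):
        rotated = rotate(source, angle, reshape=False, order=1)
        score = ncc(rotated, target)
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle, best_score

# Hypothetical pair: the source is the target rotated by 40 degrees.
rng = np.random.default_rng(2)
target = rng.uniform(size=(48, 48))
target[10:30, 15:35] += 2.0                      # asymmetric structure
source = rotate(target, -40, reshape=False, order=1)

angle, score = initial_rotation(source, target, step=5)
```

A coarse angular step keeps the search cheap; the residual misalignment is then absorbed by the subsequent affine and nonrigid stages.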
Affiliation(s)
- Marek Wodzinski
- Department of Measurement and Electronics, AGH University of Science and Technology, Kraków, Poland
- Henning Müller
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
42
Nicolás-Sáenz L, Guerrero-Aspizua S, Pascau J, Muñoz-Barrutia A. Nonlinear Image Registration and Pixel Classification Pipeline for the Study of Tumor Heterogeneity Maps. Entropy (Basel) 2020; 22:E946. [PMID: 33286715 PMCID: PMC7597219 DOI: 10.3390/e22090946]
Abstract
We present a novel method to assess the variations in protein expression and spatial heterogeneity of tumor biopsies with application in computational pathology. This was done using different antigen stains for each tissue section and proceeding with a complex image registration followed by a final step of color segmentation to detect the exact location of the proteins of interest. For proper assessment, the registration needs to be highly accurate for the careful study of the antigen patterns. However, accurate registration of histopathological images comes with three main problems: the high amount of artifacts due to the complex biopsy preparation, the size of the images, and the complexity of the local morphology. Our method manages to achieve an accurate registration of the tissue cuts and segmentation of the positive antigen areas.
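The final color-segmentation step, which localizes the stained antigen areas, can be approximated by nearest-reference-color classification. The stain reference colors below are hypothetical placeholders, not calibrated values from the cited pipeline:

```python
import numpy as np

def segment_by_stain(image, stain_refs, threshold=0.2):
    """Label each pixel with the index of the nearest reference stain
    color, or -1 when no reference is within `threshold` (RGB distance).
    A minimal stand-in for color segmentation of antigen-positive areas."""
    H, W, _ = image.shape
    flat = image.reshape(-1, 3)
    # Distance of every pixel to every reference color: (N_pixels, N_refs).
    d = np.linalg.norm(flat[:, None, :] - stain_refs[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    labels[d.min(axis=1) > threshold] = -1   # background / unstained
    return labels.reshape(H, W)

# Hypothetical references: brown-ish positive stain vs. blue-ish counterstain.
refs = np.array([[0.55, 0.35, 0.20],    # label 0: antigen-positive
                 [0.30, 0.30, 0.60]])   # label 1: counterstain
img = np.zeros((4, 4, 3))
img[:2] = [0.56, 0.34, 0.21]            # positive area
img[2:] = [0.29, 0.31, 0.58]            # counterstain area
labels = segment_by_stain(img, refs)
```

Real pipelines typically work in a stain-separated color space (e.g. via color deconvolution) rather than raw RGB, but the classification logic is the same.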
Affiliation(s)
- Laura Nicolás-Sáenz
- Departamento de Bioingenieria e Ingenieria Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganes, Spain
- Sara Guerrero-Aspizua
- Departamento de Bioingenieria e Ingenieria Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganes, Spain
- Centre for Biomedical Network Research on Rare Diseases (CIBERER), U714, 28029 Madrid, Spain
- Hospital Fundación Jiménez Díaz e Instituto de Investigación FJD, 28040 Madrid, Spain
- Epithelial Biomedicine Division, CIEMAT, 28040 Madrid, Spain
- Javier Pascau
- Departamento de Bioingenieria e Ingenieria Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganes, Spain
- Instituto de Investigación Sanitaria Gregorio Marañon, 28007 Madrid, Spain
- Arrate Muñoz-Barrutia
- Departamento de Bioingenieria e Ingenieria Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganes, Spain
- Instituto de Investigación Sanitaria Gregorio Marañon, 28007 Madrid, Spain
43
Sheller MJ, Edwards B, Reina GA, Martin J, Pati S, Kotrotsou A, Milchenko M, Xu W, Marcus D, Colen RR, Bakas S. Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Sci Rep 2020; 10:12598. [PMID: 32724046 PMCID: PMC7387485 DOI: 10.1038/s41598-020-69250-1]
Abstract
Several studies underscore the potential of deep learning in identifying complex patterns, leading to diagnostic and prognostic biomarkers. Identifying sufficiently large and diverse datasets, required for training, is a significant challenge in medicine and can rarely be found in individual institutions. Multi-institutional collaborations based on centrally-shared patient data face privacy and ownership challenges. Federated learning is a novel paradigm for data-private multi-institutional collaborations, where model-learning leverages all available data without sharing data between institutions, by distributing the model-training to the data-owners and aggregating their results. We show that federated learning among 10 institutions results in models reaching 99% of the model quality achieved with centralized data, and evaluate generalizability on data from institutions outside the federation. We further investigate the effects of data distribution across collaborating institutions on model quality and learning patterns, indicating that increased access to data through data-private multi-institutional collaborations can benefit model quality more than the errors introduced by the collaborative method. Finally, we compare with other collaborative-learning approaches demonstrating the superiority of federated learning, and discuss practical implementation considerations. Clinical adoption of federated learning is expected to lead to models trained on datasets of unprecedented size, and hence to have a catalytic impact towards precision/personalized medicine.
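The aggregation step at the core of the federated paradigm described above can be sketched in a few lines: each institution shares only its locally trained model weights, never patient data, and a server averages them weighted by local dataset size (a FedAvg-style scheme; the institution counts and weight values below are hypothetical):

```python
import numpy as np

def federated_average(institution_weights, institution_sizes):
    """One aggregation round: average each model layer across institutions,
    weighted by the number of local training samples. Only weights travel;
    the training data never leaves its owner."""
    total = sum(institution_sizes)
    n_layers = len(institution_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = sum(w[layer] * (n / total)
                  for w, n in zip(institution_weights, institution_sizes))
        averaged.append(acc)
    return averaged

# Hypothetical: three institutions, one weight matrix each, unequal data.
w1 = [np.full((2, 2), 1.0)]
w2 = [np.full((2, 2), 2.0)]
w3 = [np.full((2, 2), 4.0)]
global_model = federated_average([w1, w2, w3], institution_sizes=[100, 100, 200])
```

In practice this round alternates with local training: the averaged model is sent back to every institution, trained further on local data, and re-aggregated until convergence.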
Affiliation(s)
- Micah J Sheller
- Intel Corporation, 2200 Mission College Blvd., Santa Clara, CA, 95052, USA
- Brandon Edwards
- Intel Corporation, 2200 Mission College Blvd., Santa Clara, CA, 95052, USA
- G Anthony Reina
- Intel Corporation, 2200 Mission College Blvd., Santa Clara, CA, 95052, USA
- Jason Martin
- Intel Corporation, 2200 Mission College Blvd., Santa Clara, CA, 95052, USA
- Sarthak Pati
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Richards Medical Research Laboratories, Floor 7, 3700 Hamilton Walk, Philadelphia, PA, 19104, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Richards Medical Research Laboratories, Floor 7, 3700 Hamilton Walk, Philadelphia, PA, 19104, USA
- Aikaterini Kotrotsou
- Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, 1400 Pressler St., Houston, TX, 77030, USA
- Department of Cancer Systems Imaging, The University of Texas MD Anderson Cancer Center, 1881 East Rd, 3SCRB4, Houston, TX, 77054, USA
- Mikhail Milchenko
- Department of Radiology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Weilin Xu
- Intel Corporation, 2200 Mission College Blvd., Santa Clara, CA, 95052, USA
- Daniel Marcus
- Department of Radiology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Rivka R Colen
- Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, 1400 Pressler St., Houston, TX, 77030, USA
- Department of Cancer Systems Imaging, The University of Texas MD Anderson Cancer Center, 1881 East Rd, 3SCRB4, Houston, TX, 77054, USA
- Hillman Cancer Center, University of Pittsburgh Medical Center, Pittsburgh, PA, 15232, USA
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Richards Medical Research Laboratories, Floor 7, 3700 Hamilton Walk, Philadelphia, PA, 19104, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Richards Medical Research Laboratories, Floor 7, 3700 Hamilton Walk, Philadelphia, PA, 19104, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Richards Medical Research Laboratories, Floor 7, 3700 Hamilton Walk, Philadelphia, PA, 19104, USA
44
Abstract
The use of different stains for histological sample preparation reveals distinct tissue properties and may result in a more accurate diagnosis. However, as a result of the staining process, the tissue slides are being deformed and registration is required before further processing. The importance of this problem led to organizing an open challenge named Automatic Non-rigid Histological Image Registration Challenge (ANHIR), organized jointly with the IEEE ISBI 2019 conference. The challenge organizers provided several hundred image pairs and a server-side evaluation platform. One of the most difficult sub-problems for the challenge participants was to find an initial, global transform, before attempting to calculate the final, non-rigid deformation field. This article solves the problem by proposing a deep network trained in an unsupervised way with a good generalization. We propose a method that works well for images with different resolutions, aspect ratios, without the necessity to perform image padding, while maintaining a low number of network parameters and fast forward pass time. The proposed method is orders of magnitude faster than the classical approach based on the iterative similarity metric optimization or computer vision descriptors. The success rate is above 98% for both the training set and the evaluation set. We make both the training and inference code freely available.
Affiliation(s)
- Žiga Špiclin
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Jamie McClelland
- Centre for Medical Image Computing, University College London, London, UK
- Jan Kybic
- Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
- Orcun Goksel
- Computer Vision Lab, ETH Zurich, Zurich, Switzerland