101. Xie Y, Zhang J, Lu H, Shen C, Xia Y. SESV: Accurate Medical Image Segmentation by Predicting and Correcting Errors. IEEE Trans Med Imaging 2021;40:286-296. [PMID: 32956049] [DOI: 10.1109/TMI.2020.3025308]
Abstract
Medical image segmentation is an essential task in computer-aided diagnosis. Despite their prevalence and success, deep convolutional neural networks (DCNNs) still need to be improved to produce accurate and robust enough segmentation results for clinical use. In this paper, we propose a novel and generic framework called Segmentation-Emendation-reSegmentation-Verification (SESV) to improve the accuracy of existing DCNNs in medical image segmentation, instead of designing a more accurate segmentation model. Our idea is to predict the segmentation errors produced by an existing model and then correct them. Since predicting segmentation errors is challenging, we design two ways to tolerate the mistakes in the error prediction. First, rather than using a predicted segmentation error map to correct the segmentation mask directly, we only treat the error map as the prior that indicates the locations where segmentation errors are prone to occur, and then concatenate the error map with the image and segmentation mask as the input of a re-segmentation network. Second, we introduce a verification network to determine whether to accept or reject the refined mask produced by the re-segmentation network on a region-by-region basis. The experimental results on the CRAG, ISIC, and IDRiD datasets suggest that using our SESV framework can improve the accuracy of DeepLabv3+ substantially and achieve advanced performance in the segmentation of gland cells, skin lesions, and retinal microaneurysms. Consistent conclusions can also be drawn when using PSPNet, U-Net, and FPN as the segmentation network, respectively. Therefore, our SESV framework is capable of improving the accuracy of different DCNNs on different medical image segmentation tasks.
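To make the framework's data flow concrete, here is a minimal PyTorch sketch of the Segmentation-Emendation-reSegmentation-Verification wiring. The placeholder networks and the soft region gate are illustrative assumptions; the paper's verification stage makes hard accept/reject decisions on a region-by-region basis.

```python
# A minimal sketch of the SESV data flow, assuming generic placeholder
# networks for each stage; the layers below are illustrative, not the
# authors' exact architectures.
import torch
import torch.nn as nn

class SESV(nn.Module):
    def __init__(self, seg_net, emend_net, reseg_net, verify_net):
        super().__init__()
        self.seg = seg_net        # initial segmentation
        self.emend = emend_net    # predicts an error map for the mask
        self.reseg = reseg_net    # refines using image + mask + error prior
        self.verify = verify_net  # accepts/rejects refinements per region

    def forward(self, image):
        mask = torch.sigmoid(self.seg(image))
        # The error map is used only as a spatial prior, not applied directly.
        err = torch.sigmoid(self.emend(torch.cat([image, mask], dim=1)))
        refined = torch.sigmoid(self.reseg(torch.cat([image, mask, err], dim=1)))
        # Soft gate in [0, 1]: 1 keeps the refinement, 0 keeps the original.
        gate = torch.sigmoid(self.verify(torch.cat([image, mask, refined], dim=1)))
        return gate * refined + (1.0 - gate) * mask

conv = lambda c: nn.Conv2d(c, 1, 3, padding=1)   # toy stand-in networks
model = SESV(conv(3), conv(4), conv(5), conv(5))
out = model(torch.randn(1, 3, 64, 64))           # (1, 1, 64, 64) fused mask
```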
102. Xing F, Zhang X, Cornish TC. Artificial intelligence for pathology. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00011-9]
103. Cao X, Chen H, Li Y, Peng Y, Wang S, Cheng L. Uncertainty Aware Temporal-Ensembling Model for Semi-Supervised ABUS Mass Segmentation. IEEE Trans Med Imaging 2021;40:431-443. [PMID: 33021936] [DOI: 10.1109/TMI.2020.3029161]
Abstract
Accurate breast mass segmentation of automated breast ultrasound (ABUS) images plays a crucial role in 3D breast reconstruction, which can assist radiologists in surgery planning. Although convolutional neural networks have great potential for breast mass segmentation owing to the remarkable progress of deep learning, the lack of annotated data limits the performance of deep CNNs. In this article, we present an uncertainty aware temporal ensembling (UATE) model for semi-supervised ABUS mass segmentation. Specifically, a temporal ensembling segmentation (TEs) model is designed to segment breast masses using a few labeled images and a large number of unlabeled images. Because the network output contains both correct and unreliable predictions, treating every prediction equally in the pseudo-label update and loss calculation may degrade network performance. To alleviate this problem, an uncertainty map is estimated for each image. An adaptive ensembling momentum map and an uncertainty aware unsupervised loss are then designed and integrated with the TEs model. The effectiveness of the proposed UATE model is verified mainly on an ABUS dataset of 107 patients with 170 volumes, including 13382 labeled 2D slices. The Jaccard index (JI), Dice similarity coefficient (DSC), pixel-wise accuracy (AC) and Hausdorff distance (HD) of the proposed method on the testing set are 63.65%, 74.25%, 99.21% and 3.81 mm, respectively. Experimental results demonstrate that our semi-supervised method outperforms the fully supervised method and achieves promising results compared with existing semi-supervised methods.
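The uncertainty-weighted unsupervised term can be sketched as follows; the entropy-based uncertainty map and the exact weighting rule are assumptions for illustration, not the paper's precise formulation.

```python
# A sketch of an uncertainty-weighted consistency loss in the spirit of the
# UATE model: low-uncertainty pixels contribute more to the unsupervised term.
import torch

def uncertainty_aware_consistency(student_logits, teacher_logits, eps=1e-8):
    p_s = torch.sigmoid(student_logits)
    p_t = torch.sigmoid(teacher_logits).detach()  # ensembled pseudo label
    # Binary entropy of the teacher as a per-pixel uncertainty estimate.
    u = -(p_t * torch.log(p_t + eps) + (1 - p_t) * torch.log(1 - p_t + eps))
    w = 1.0 - u / torch.log(torch.tensor(2.0))    # confidence weight in [0, 1]
    return (w * (p_s - p_t) ** 2).mean()
```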
104. Dabass M, Vashisth S, Vig R. Attention-Guided deep atrous-residual U-Net architecture for automated gland segmentation in colon histopathology images. Inform Med Unlocked 2021. [DOI: 10.1016/j.imu.2021.100784]
105. Gunesli GN, Sokmensuer C, Gunduz-Demir C. AttentionBoost: Learning What to Attend for Gland Segmentation in Histopathological Images by Boosting Fully Convolutional Networks. IEEE Trans Med Imaging 2020;39:4262-4273. [PMID: 32780699] [DOI: 10.1109/TMI.2020.3015198]
Abstract
Fully convolutional networks (FCNs) are widely used for instance segmentation. One important challenge is to train these networks to generalize well on hard-to-learn pixels, whose correct prediction may greatly affect success. A typical group of such hard-to-learn pixels are boundaries between instances. Many studies have developed strategies to pay more attention to learning these boundary pixels, including designing multi-task networks with an additional boundary-prediction task and increasing the weights of boundary pixels in the loss function. Such strategies require defining what to attend to beforehand and incorporating this predefined attention into the learning model. However, other groups of hard-to-learn pixels may exist, and manually defining and incorporating the appropriate attention for each group may not be feasible. To provide an adaptable solution for learning different groups of hard-to-learn pixels, this article proposes AttentionBoost, a new multi-attention learning model based on adaptive boosting, for the task of gland instance segmentation in histopathological images. AttentionBoost designs a multi-stage network and introduces a new loss adjustment mechanism for an FCN to adaptively learn what to attend to at each stage directly from image data, without requiring any prior definition. This mechanism modulates the attention of each stage to correct the mistakes of previous stages, by separately adjusting the loss weight of each pixel prediction according to how accurate the previous stages are on that pixel. Working on histopathological images of colon tissues, our experiments demonstrate that the proposed AttentionBoost model improves gland segmentation compared to its counterparts.
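The core boosting idea, re-weighting each pixel's loss according to the errors of earlier stages, might look like this in PyTorch; the specific modulation rule and clamp value are illustrative assumptions rather than the paper's formula.

```python
# A sketch of staged, boosted per-pixel loss re-weighting in the spirit of
# AttentionBoost: pixels misclassified by earlier stages get larger weights.
import torch
import torch.nn.functional as F

def staged_boosted_loss(stage_logits, target):
    """stage_logits: list of (N, 1, H, W) outputs from successive FCN stages;
    target: float mask of the same shape."""
    weights = torch.ones_like(target)  # stage 1 starts with uniform attention
    total = 0.0
    for logits in stage_logits:
        bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
        total = total + (weights * bce).mean()
        # Pixels the current stage got wrong receive more attention next stage.
        err = (torch.sigmoid(logits).detach() - target).abs()
        weights = (weights * (1.0 + err)).clamp(max=4.0)
    return total
```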
106. Shi T, Jiang H, Zheng B. A Stacked Generalization U-shape network based on zoom strategy and its application in biomedical image segmentation. Comput Methods Programs Biomed 2020;197:105678. [PMID: 32791449] [DOI: 10.1016/j.cmpb.2020.105678]
Abstract
BACKGROUND AND OBJECTIVE Deep neural network models can learn complex non-linear relationships in data and offer superior flexibility and adaptability. A downside of this flexibility is that such models are sensitive to initial conditions, both in terms of the initial random weights and the statistical noise in the training dataset. The downside of this adaptability is that deep convolutional networks usually show poor robustness or generalization when trained on an extremely limited amount of labeled data, especially in the biomedical imaging informatics field. METHODS In this paper, we develop and test a stacked generalization U-shape network (SG-UNet) based on a zoom strategy applied to biomedical image segmentation. SG-UNet is essentially a stacked generalization architecture consisting of multiple sub-modules, which takes multi-resolution images as input and uses hybrid features to segment regions of interest and detect diseases under multi-supervision. The proposed SG-UNet applies zoomed multi-supervision to search the global feature space without pre-training. In addition, the zoom loss function gradually focuses training on a sparse set of hard samples. RESULTS We evaluated the proposed algorithm against several popular U-shape ensemble network architectures across multi-modal biomedical image segmentation tasks, segmenting malignant rectal cancers, polyps, and glands from three imaging modalities: computed tomography (CT), digital colonoscopy, and histopathology images. The proposed algorithm improves Dice coefficients by 3.116%, 2.676%, and 2.356%, and F2-scores by 3.044%, 2.420%, and 1.928%, on the three imaging modality datasets, respectively. Comparisons using different amounts of rectal cancer CT data show that the proposed algorithm has a slower tendency toward diminishing marginal efficiency. The gland segmentation results also support the feasibility of yielding performance comparable to other state-of-the-art methods. CONCLUSIONS The proposed algorithm can be trained efficiently on small image datasets without additional techniques such as fine-tuning, and achieves higher accuracy with less computational complexity than other stacked ensemble networks for biomedical image segmentation.
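The zoom loss is described as gradually focusing training on a sparse set of hard samples; a focal-style weighting is one standard way to express that idea and is shown here purely as a stand-in for the paper's loss.

```python
# A focal-style hard-example weighting, used here only as an illustrative
# analogue of the described "zoom" focusing behavior, not the paper's loss.
import torch
import torch.nn.functional as F

def hard_example_focused_loss(logits, target, gamma=2.0):
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_correct = torch.exp(-bce)          # probability assigned to the truth
    return ((1.0 - p_correct) ** gamma * bce).mean()  # easy pixels are damped
```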
Affiliation(s)
- Tianyu Shi, Software College, Northeastern University, Shenyang 110819, China
- Huiyan Jiang, Software College, Northeastern University, Shenyang 110819, China; Key Laboratory of Intelligent Computing in Biomedical Image, Ministry of Education, Northeastern University, Shenyang 110819, China
- Bin Zheng, School of Electrical and Computer Engineering, The University of Oklahoma, Norman, OK 73019, USA
107. Graham S, Epstein D, Rajpoot N. Dense Steerable Filter CNNs for Exploiting Rotational Symmetry in Histology Images. IEEE Trans Med Imaging 2020;39:4124-4136. [PMID: 32746153] [DOI: 10.1109/TMI.2020.3013246]
Abstract
Histology images are inherently symmetric under rotation, where each orientation is equally likely to appear. However, this rotational symmetry is not widely utilised as prior knowledge in modern Convolutional Neural Networks (CNNs), resulting in data-hungry models that learn independent features at each orientation. Allowing CNNs to be rotation-equivariant removes the need to learn this set of transformations from the data and instead frees up model capacity, allowing more discriminative features to be learned. This reduction in the number of required parameters also reduces the risk of overfitting. In this paper, we propose Dense Steerable Filter CNNs (DSF-CNNs) that use group convolutions with multiple rotated copies of each filter in a densely connected framework. Each filter is defined as a linear combination of steerable basis filters, enabling exact rotation and decreasing the number of trainable parameters compared to standard filters. We also provide the first in-depth comparison of different rotation-equivariant CNNs for histology image analysis and demonstrate the advantage of encoding rotational symmetry into modern architectures. We show that DSF-CNNs achieve state-of-the-art performance, with significantly fewer parameters, when applied to three different tasks in computational pathology: breast tumour classification, colon gland segmentation, and multi-tissue nuclear segmentation.
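A reduced sketch of the rotated-filter-copy idea follows, restricted to the four 90-degree rotations (the p4 group) for simplicity; the paper's steerable basis formulation additionally enables exact filter rotation at finer angles.

```python
# Rotation-equivariant convolution via rotated copies of a shared filter
# bank, limited to 90-degree rotations; a simplification of steerable filters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class P4Conv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    def forward(self, x):
        # One shared filter bank applied at 4 orientations -> 4x output maps.
        filters = torch.cat([torch.rot90(self.weight, r, dims=(2, 3))
                             for r in range(4)], dim=0)
        return F.conv2d(x, filters, padding=self.weight.shape[-1] // 2)

x = torch.randn(1, 3, 64, 64)
y = P4Conv(3, 8)(x)          # (1, 32, 64, 64): 8 filters x 4 orientations
```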
108. Kudou M, Kosuga T, Otsuji E. Artificial intelligence in gastrointestinal cancer: Recent advances and future perspectives. Artif Intell Gastroenterol 2020;1:71-85. [DOI: 10.35712/aig.v1.i4.71]
Abstract
Artificial intelligence (AI) using machine or deep learning algorithms is attracting increasing attention because its image recognition and prediction performance can be more accurate than human-aided analyses. The application of AI models to gastrointestinal (GI) clinical oncology has been investigated over the past decade. AI can automatically detect and diagnose GI tumors with diagnostic accuracy similar to that of expert clinicians. AI may also predict malignant potential, such as tumor histology, metastasis, patient survival, resistance to cancer treatments, and the molecular biology of tumors, through analyses of radiological or pathological imaging data using complex deep learning models beyond human cognition. The introduction of AI-assisted diagnostic systems into clinical settings is expected in the near future. However, limitations associated with the evaluation of GI tumors by AI models have yet to be resolved. Recent studies on AI-assisted diagnostic models of gastric and colorectal cancers in the endoscopic, pathological, and radiological fields are reviewed herein. The limitations of and future perspectives for the application of AI systems in clinical settings are also discussed. With the establishment of multidisciplinary teams containing AI experts in each medical institution, together with prospective studies, AI-assisted medical systems will become a promising tool for GI cancer.
Affiliation(s)
- Michihiro Kudou, Division of Digestive Surgery, Department of Surgery, Kyoto Prefectural University of Medicine, Kyoto 602-8566, Japan; Department of Surgery, Kyoto Okamoto Memorial Hospital, Kyoto 613-0034, Japan
- Toshiyuki Kosuga, Division of Digestive Surgery, Department of Surgery, Kyoto Prefectural University of Medicine, Kyoto 602-8566, Japan; Department of Surgery, Saiseikai Shiga Hospital, Ritto 520-3046, Japan
- Eigo Otsuji, Division of Digestive Surgery, Department of Surgery, Kyoto Prefectural University of Medicine, Kyoto 602-8566, Japan
109. Vukicevic AM, Radovic M, Zabotti A, Milic V, Hocevar A, Callegher SZ, De Lucia O, De Vita S, Filipovic N. Deep learning segmentation of Primary Sjögren's syndrome affected salivary glands from ultrasonography images. Comput Biol Med 2020;129:104154. [PMID: 33260099] [DOI: 10.1016/j.compbiomed.2020.104154]
Abstract
Salivary gland ultrasonography (SGUS) has proven to be a promising tool for diagnosing various diseases manifesting with abnormalities in salivary glands (SGs), including primary Sjögren's syndrome (pSS). At present, the major obstacle to establishing SGUS as a standardized tool for pSS diagnosis is its low inter-/intra-observer reliability. The aim of this study was to address this problem by proposing a robust deep learning-based solution for the automated segmentation of SGUS images. For this purpose, four architectures were considered: a fully convolutional neural network, fully convolutional "DenseNets" (FCN-DenseNet), U-Net, and LinkNet. During the course of the study, the growing HarmonicSS cohort included 1184 annotated SGUS images. Accordingly, the algorithms were trained using a transfer learning approach. In terms of intersection-over-union (IoU), the top-performing FCN-DenseNet (IoU = 0.85) showed a considerable margin above the inter-observer agreement (IoU = 0.76) and was slightly above the intra-observer agreement (IoU = 0.84) between clinical experts. Considering its accuracy and speed (24.5 frames per second), we conclude that the FCN-DenseNet could have wider applications in clinical practice. Further work on the topic will consider the integration of methods for pSS scoring, with the end goal of establishing SGUS as an effective noninvasive pSS diagnostic tool. To aid this progress, we created inference (frozen model) files for the developed models and made them publicly available.
Affiliation(s)
- Arso M Vukicevic, Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, Kragujevac, Serbia; BioIRC R&D Center, Prvoslava Stojanovica 6, Kragujevac, Serbia
- Milos Radovic, BioIRC R&D Center, Prvoslava Stojanovica 6, Kragujevac, Serbia; Everseen, Milutina Milankovica 1z, Belgrade, Serbia
- Alen Zabotti, Azienda Ospedaliero Universitaria, Santa Maria Della Misericordia di Udine, Udine, Italy
- Vera Milic, Institute of Rheumatology, School of Medicine, University of Belgrade, Serbia
- Alojzija Hocevar, Department of Rheumatology, Ljubljana University Medical Centre, Ljubljana, Slovenia
- Orazio De Lucia, Department of Rheumatology, ASST Centro Traumatologico Ortopedico G. Pini-CTO, Milano, Italy
- Salvatore De Vita, Azienda Ospedaliero Universitaria, Santa Maria Della Misericordia di Udine, Udine, Italy
- Nenad Filipovic, Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, Kragujevac, Serbia; BioIRC R&D Center, Prvoslava Stojanovica 6, Kragujevac, Serbia
110. Wang X, Fang Y, Yang S, Zhu D, Wang M, Zhang J, Tong KY, Han X. A hybrid network for automatic hepatocellular carcinoma segmentation in H&E-stained whole slide images. Med Image Anal 2020;68:101914. [PMID: 33285479] [DOI: 10.1016/j.media.2020.101914]
Abstract
Hepatocellular carcinoma (HCC), as the most common type of primary malignant liver cancer, has become a leading cause of cancer deaths in recent years. Accurate segmentation of HCC lesions is critical for tumor load assessment, surgery planning, and postoperative examination. As the appearance of HCC lesions varies greatly across patients, traditional manual segmentation is a very tedious and time-consuming process, the accuracy of which is also difficult to ensure. Therefore, a fully automated and reliable HCC segmentation system is in high demand. In this work, we present a novel hybrid neural network based on multi-task learning and ensemble learning techniques for accurate HCC segmentation of hematoxylin and eosin (H&E)-stained whole slide images (WSIs). First, three task-specific branches are integrated to enlarge the feature space, based on which the network is able to learn more general features and thus reduce the risk of overfitting. Second, an ensemble learning scheme is leveraged to perform feature aggregation, in which selective kernel modules (SKMs) and spatial and channel-wise squeeze-and-excitation modules (scSEMs) are adopted for capturing the features from different spaces and scales. Our proposed method achieves state-of-the-art performance on three publicly available datasets, with segmentation accuracies of 0.797, 0.923, and 0.765 in the PAIP, CRAG, and UHCMC&CWRU datasets, respectively, which demonstrates its effectiveness in addressing the HCC segmentation problem. To the best of our knowledge, this is also the first work on the pixel-wise HCC segmentation of H&E-stained WSIs.
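A compact sketch of a spatial and channel-wise squeeze-and-excitation (scSE) block of the kind the paper uses for feature aggregation; the reduction ratio below is an arbitrary choice.

```python
# scSE block: channel excitation (global pooling + bottleneck) plus spatial
# excitation (1x1 conv), with the two recalibrated maps summed.
import torch
import torch.nn as nn

class SCSE(nn.Module):
    def __init__(self, ch, r=8):
        super().__init__()
        self.cse = nn.Sequential(                      # channel excitation
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // r, ch, 1), nn.Sigmoid())
        self.sse = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())  # spatial

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)

feat = torch.randn(1, 64, 32, 32)
print(SCSE(64)(feat).shape)    # torch.Size([1, 64, 32, 32])
```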
Affiliation(s)
- Xiyue Wang, College of Computer Science, Sichuan University, Chengdu 610065, China
- Yuqi Fang, Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong; Tencent AI Lab, Shenzhen 518057, China
- Sen Yang, College of Biomedical Engineering, Sichuan University, Chengdu 610065, China; Tencent AI Lab, Shenzhen 518057, China
- Delong Zhu, Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
- Minghui Wang, College of Computer Science, Sichuan University, Chengdu 610065, China
- Jing Zhang, College of Biomedical Engineering, Sichuan University, Chengdu 610065, China
- Kai-Yu Tong, Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
- Xiao Han, Tencent AI Lab, Shenzhen 518057, China
111. Debelee TG, Kebede SR, Schwenker F, Shewarega ZM. Deep Learning in Selected Cancers' Image Analysis-A Survey. J Imaging 2020;6:121. [PMID: 34460565] [PMCID: PMC8321208] [DOI: 10.3390/jimaging6110121]
Abstract
Deep learning algorithms have become the first-choice approach to medical image analysis, face recognition, and emotion recognition. In this survey, several deep-learning-based approaches applied to breast cancer, cervical cancer, brain tumors, and colon and lung cancers are studied and reviewed. Deep learning has been applied to almost all of the imaging modalities used for cervical and breast cancers, and to MRI for brain tumors. The review indicates that deep learning methods have achieved state-of-the-art performance in tumor detection, segmentation, feature extraction, and classification. As presented in this paper, deep learning approaches have been used in three different modes: training from scratch, transfer learning by freezing some layers of the network, and modifying the architecture to reduce the number of parameters in the network. Moreover, the application of deep learning to imaging devices for the detection of various cancers has been studied mainly by researchers affiliated with academic and medical institutes in economically developed countries, whereas such work has received little attention in Africa despite the dramatic rise in cancer risk across the continent.
Affiliation(s)
- Taye Girma Debelee, Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia; College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, 120611 Addis Ababa, Ethiopia
- Samuel Rahimeto Kebede, Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia; Department of Electrical and Computer Engineering, Debreberhan University, 445 Debre Berhan, Ethiopia
- Friedhelm Schwenker, Institute of Neural Information Processing, University of Ulm, 89081 Ulm, Germany
112. Pacal I, Karaboga D, Basturk A, Akay B, Nalbantoglu U. A comprehensive review of deep learning in colon cancer. Comput Biol Med 2020;126:104003. [DOI: 10.1016/j.compbiomed.2020.104003]
113. NuClick: A deep learning framework for interactive segmentation of microscopic images. Med Image Anal 2020;65:101771. [DOI: 10.1016/j.media.2020.101771]
114. Wang Y, Nie H, He X, Liao Z, Zhou Y, Zhou J, Ou C. The emerging role of super enhancer-derived noncoding RNAs in human cancer. Theranostics 2020;10:11049-11062. [PMID: 33042269] [PMCID: PMC7532672] [DOI: 10.7150/thno.49168]
Abstract
Super enhancers (SEs) are large clusters of adjacent enhancers that drive the expression of genes which regulate cellular identity; SE regions can be enriched with a high density of transcription factors, co-factors, and enhancer-associated epigenetic modifications. Through enhanced activation of their target genes, SEs play an important role in various diseases and conditions, including cancer. Recent studies have shown that SEs not only activate the transcriptional expression of coding genes to directly regulate biological functions, but also drive the transcriptional expression of non-coding RNAs (ncRNAs) to indirectly regulate biological functions. SE-derived ncRNAs play critical roles in tumorigenesis, including malignant proliferation, metastasis, drug resistance, and inflammatory response. Moreover, the abnormal expression of SE-derived ncRNAs is closely related to the clinical and pathological characterization of tumors. In this review, we summarize the functions and roles of SE-derived ncRNAs in tumorigenesis and discuss their prospective applications in tumor therapy. A deeper understanding of the potential mechanism underlying the action of SE-derived ncRNAs in tumorigenesis may provide new strategies for the early diagnosis of tumors and targeted therapy.
MESH Headings
- Antineoplastic Agents/pharmacology
- Antineoplastic Agents/therapeutic use
- Biomarkers, Tumor/analysis
- Biomarkers, Tumor/genetics
- Biomarkers, Tumor/metabolism
- Carcinogenesis/drug effects
- Carcinogenesis/genetics
- Cell Proliferation/drug effects
- Cell Proliferation/genetics
- Drug Resistance, Neoplasm/genetics
- Enhancer Elements, Genetic/genetics
- Gene Expression Regulation, Neoplastic/drug effects
- Gene Expression Regulation, Neoplastic/genetics
- Humans
- Molecular Targeted Therapy/methods
- Neoplasms/diagnosis
- Neoplasms/drug therapy
- Neoplasms/genetics
- Neoplasms/pathology
- Precision Medicine/methods
- RNA, Untranslated/analysis
- RNA, Untranslated/genetics
- RNA, Untranslated/metabolism
Affiliation(s)
- Yutong Wang, Department of Pathology, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China
- Hui Nie, Department of Pathology, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China
- Xiaoyun He, Department of Endocrinology, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China
- Zhiming Liao, Department of Pathology, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China
- Yangying Zhou, Department of Oncology, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China
- Jianhua Zhou, Department of Pathology, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China
- Chunlin Ou, Department of Pathology, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China
115. Wan T, Zhao L, Feng H, Li D, Tong C, Qin Z. Robust nuclei segmentation in histopathology using ASPPU-Net and boundary refinement. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.08.103]
116. Thakur N, Yoon H, Chong Y. Current Trends of Artificial Intelligence for Colorectal Cancer Pathology Image Analysis: A Systematic Review. Cancers (Basel) 2020;12:1884. [PMID: 32668721] [PMCID: PMC7408874] [DOI: 10.3390/cancers12071884]
Abstract
Colorectal cancer (CRC) is one of the most common cancers, requiring early pathologic diagnosis using colonoscopy biopsy samples. Recently, artificial intelligence (AI) has made significant progress and shown promising results in the field of medicine despite several limitations. We performed a systematic review of AI use in CRC pathology image analysis to survey the state of the art. Studies published between January 2000 and January 2020 were searched in major online databases, including MEDLINE (PubMed), Cochrane Library, and EMBASE. Query terms included "colorectal neoplasm," "histology," and "artificial intelligence." Of 9000 identified studies, only 30 studies, comprising 40 models, were selected for review. The algorithm features of the models were gland segmentation (n = 25, 62%), tumor classification (n = 8, 20%), tumor microenvironment characterization (n = 4, 10%), and prognosis prediction (n = 3, 8%). Only 20 gland segmentation models met the criteria for quantitative analysis, among which the model proposed by Ding et al. (2019) performed best. Studies with other features were at an elementary stage, although most showed impressive results. Overall, the state of the art is promising for CRC pathological analysis. However, the datasets in most studies were of relatively limited scale and quality for clinical application of this technique. Future studies with larger datasets and high-quality annotations are required for routine practice-level validation.
Affiliation(s)
- Nishant Thakur, Department of Hospital Pathology, Yeouido St. Mary's Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Korea
- Hongjun Yoon, AI Lab, Deepnoid, #1305 E&C Venture Dream Tower 2, 55, Digital-ro 33-Gil, Guro-gu, Seoul 06216, Korea
- Yosep Chong, Department of Hospital Pathology, Yeouido St. Mary's Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Korea
117. Zhao P, Zhang J, Fang W, Deng S. SCAU-Net: Spatial-Channel Attention U-Net for Gland Segmentation. Front Bioeng Biotechnol 2020;8:670. [PMID: 32719781] [PMCID: PMC7347985] [DOI: 10.3389/fbioe.2020.00670]
Abstract
With the development of medical technology, image semantic segmentation is of great significance for the morphological analysis, quantification, and diagnosis of human tissues. However, manual detection and segmentation is time-consuming, and for biomedical images in particular only experts are able to identify tissues and mark their contours. In recent years, deep learning has greatly improved the accuracy of automatic computer segmentation. This paper proposes a deep learning semantic segmentation network named Spatial-Channel Attention U-Net (SCAU-Net), motivated by the current state of medical image research. SCAU-Net has an encoder-decoder-style symmetrical structure integrated with spatial and channel attention as plug-and-play modules. The main idea is to enhance locally relevant features and restrain irrelevant features at the spatial and channel levels. Experiments on the gland datasets GlaS and CRAG show that the proposed SCAU-Net model is superior to the classic U-Net model on the image segmentation task, with a 1% improvement in Dice score and a 1.5% improvement in Jaccard score.
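The plug-and-play idea can be sketched as attention modules applied to a U-Net skip feature; the module designs and their sequential placement below are assumptions for illustration, not SCAU-Net's exact modules.

```python
# Illustrative spatial and channel attention modules inserted on a U-Net
# skip connection; designs are generic stand-ins, not the paper's blocks.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, ch, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                nn.Conv2d(ch, ch // r, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(ch // r, ch, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.fc(x)              # reweight channels

class SpatialAttention(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        return x * self.conv(x)            # reweight spatial locations

skip = torch.randn(1, 64, 32, 32)                        # encoder skip feature
skip = SpatialAttention(64)(ChannelAttention(64)(skip))  # attended skip
```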
Affiliation(s)
- Peng Zhao, First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Jindi Zhang, College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Weijia Fang, First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Shuiguang Deng, First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China; College of Computer Science and Technology, Zhejiang University, Hangzhou, China
118. Yan Z, Yang X, Cheng KT. Enabling a Single Deep Learning Model for Accurate Gland Instance Segmentation: A Shape-Aware Adversarial Learning Framework. IEEE Trans Med Imaging 2020;39:2176-2189. [PMID: 31944936] [DOI: 10.1109/TMI.2020.2966594]
Abstract
Segmenting gland instances in histology images is highly challenging, as it requires not only detecting glands against a complex background but also separating each individual gland instance with accurate boundary detection. However, due to the boundary uncertainty problem in manual annotations, pixel-to-pixel matching based loss functions are too restrictive for simultaneous gland detection and boundary detection. State-of-the-art approaches adopted multi-model schemes, resulting in unnecessarily high model complexity and difficulties in the training process. In this paper, we propose to use a single deep learning model for accurate gland instance segmentation. To address the boundary uncertainty problem, instead of pixel-to-pixel matching, we propose a segment-level shape similarity measure that calculates the curve similarity between each annotated boundary segment and the corresponding detected boundary segment within a fixed searching range. As the segment-level measure allows location variations within a fixed range for shape similarity calculation, it has better tolerance to boundary uncertainty and is more effective for boundary detection. Furthermore, by adjusting the radius of the searching range, the segment-level shape similarity measure is able to deal with different levels of boundary uncertainty. In our framework, images of different scales are down-sampled and integrated to provide both global and local contextual information for training, which is helpful in segmenting gland instances of different sizes. To reduce the variations of multi-scale training images, drawing on adversarial domain adaptation, we propose a pseudo domain adaptation framework for feature alignment. By constructing loss functions based on the segment-level shape similarity measure and combining them with the adversarial loss function, the proposed shape-aware adversarial learning framework enables a single deep learning model for gland instance segmentation. Experimental results on the 2015 MICCAI Gland Challenge dataset demonstrate that the proposed framework achieves state-of-the-art performance with a single deep learning model. As the boundary uncertainty problem widely exists in medical image segmentation, the framework is broadly applicable to other applications.
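A simplified sketch of tolerance-based boundary matching in the spirit of the segment-level measure: a detected boundary point counts as matched if an annotated point lies within a search radius. The paper's curve-similarity formulation is richer than this point-wise version.

```python
# Point-wise boundary matching with a search-radius tolerance; a heavy
# simplification of the paper's segment-level curve similarity.
import numpy as np
from scipy.spatial import cKDTree

def boundary_match_score(pred_pts, gt_pts, radius=5.0):
    """pred_pts, gt_pts: (N, 2) arrays of boundary pixel coordinates."""
    tree = cKDTree(gt_pts)
    d, _ = tree.query(pred_pts)        # nearest annotated point per detection
    return float(np.mean(d <= radius)) # fraction matched within the radius

pred = np.argwhere(np.eye(50, dtype=bool))         # toy diagonal "boundary"
gt = pred + np.array([2, 0])                       # slightly shifted truth
print(boundary_match_score(pred, gt, radius=5.0))  # 1.0: within tolerance
```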
119. Jungo A, Balsiger F, Reyes M. Analyzing the Quality and Challenges of Uncertainty Estimations for Brain Tumor Segmentation. Front Neurosci 2020;14:282. [PMID: 32322186] [PMCID: PMC7156850] [DOI: 10.3389/fnins.2020.00282]
Abstract
Automatic segmentation of brain tumors has the potential to enable volumetric measures and high-throughput analysis in the clinical setting. This potential seems almost within reach, considering the steady increase in segmentation accuracy. However, despite their accuracy, current methods still do not meet the robustness levels required for patient-centered clinical use. In this regard, uncertainty estimates are a promising direction for improving the robustness of automated segmentation systems. Different uncertainty estimation methods have been proposed, but little is known about their usefulness and limitations for brain tumor segmentation. In this study, we present an analysis of the most commonly used uncertainty estimation methods with regard to their benefits and challenges for brain tumor segmentation. We evaluated their quality in terms of calibration, segmentation error localization, and segmentation failure detection. Our results show that the uncertainty methods are typically well-calibrated when evaluated at the dataset level. Evaluated at the subject level, we found notable miscalibrations and limited segmentation error localization (e.g., for correcting segmentations), which hinder the direct use of voxel-wise uncertainties. Nevertheless, voxel-wise uncertainty showed value for detecting failed segmentations when uncertainty estimates are aggregated at the subject level. We therefore suggest careful use of voxel-wise uncertainty measures and highlight the importance of developing solutions that address the subject-level requirements on calibration and segmentation error localization.
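One calibration check of the kind evaluated here is the expected calibration error over voxel-wise probabilities; the binning scheme below is a common convention, not the paper's exact protocol.

```python
# Expected calibration error (ECE) over voxel-wise foreground probabilities,
# with equal-width confidence bins.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    probs, labels = probs.ravel(), labels.ravel()
    conf = np.maximum(probs, 1.0 - probs)            # confidence per voxel
    pred = (probs >= 0.5).astype(int)
    correct = (pred == labels).astype(float)
    bins = np.linspace(0.5, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = (conf > lo) & (conf <= hi)
        if m.any():                                   # weight by bin size
            ece += m.mean() * abs(correct[m].mean() - conf[m].mean())
    return ece

rng = np.random.default_rng(0)
p = rng.uniform(size=10000)
y = (rng.uniform(size=10000) < p).astype(int)        # perfectly calibrated
print(expected_calibration_error(p, y))              # close to 0
```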
Affiliation(s)
- Alain Jungo, Insel Data Science Center, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; ARTORG Center, University of Bern, Bern, Switzerland
- Fabian Balsiger, Insel Data Science Center, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; ARTORG Center, University of Bern, Bern, Switzerland
- Mauricio Reyes, Insel Data Science Center, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; ARTORG Center, University of Bern, Bern, Switzerland
120. Ding H, Pan Z, Cen Q, Li Y, Chen S. Multi-scale fully convolutional network for gland segmentation using three-class classification. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.097]
121. Yang Q, Xu Z, Liao C, Cai J, Huang Y, Chen H, Tao X, Huang Z, Chen J, Dong J, Zhu X. Epithelium segmentation and automated Gleason grading of prostate cancer via deep learning in label-free multiphoton microscopic images. J Biophotonics 2020;13:e201900203. [PMID: 31710780] [DOI: 10.1002/jbio.201900203]
Abstract
In current clinical care practice, the Gleason grading system is one of the most powerful prognostic predictors for prostate cancer (PCa). The grading system is based on the architectural pattern of cancerous epithelium in histological images. However, the standard procedure of histological examination often involves complicated tissue fixation and staining, which are time-consuming and may delay diagnosis and surgery. In this study, label-free multiphoton microscopy (MPM) was used to acquire subcellular-resolution images of unstained prostate tissues. A deep learning architecture (U-Net) was then introduced for epithelium segmentation of prostate tissues in MPM images. The segmentation results were merged with the original MPM images to train a classification network (AlexNet) for automated Gleason grading. The developed method achieved an overall pixel accuracy of 92.3% with a mean F1 score of 0.839 for epithelium segmentation. By merging the segmentation results with the MPM images, the accuracy of Gleason grading improved from 72.42% to 81.13% on the hold-out test set. Our results suggest that MPM in combination with deep learning holds the potential to become a fast and powerful clinical tool for PCa diagnosis.
Affiliation(s)
- Qinqin Yang, Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China; Department of Electronic Science, Xiamen University, Xiamen, China
- Zhexin Xu, Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
- Chenxi Liao, Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
- Jianyong Cai, Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
- Ying Huang, Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
- Hong Chen, Department of Pathology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, China
- Xuan Tao, Department of Pathology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, China
- Zheng Huang, Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
- Jianxin Chen, Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
- Jiyang Dong, Department of Electronic Science, Xiamen University, Xiamen, China
- Xiaoqin Zhu, Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
122. FABnet: feature attention-based network for simultaneous segmentation of microvessels and nerves in routine histology images of oral cancer. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04516-y]
123. Rathore S, Iftikhar MA, Chaddad A, Niazi T, Karasic T, Bilello M. Segmentation and Grade Prediction of Colon Cancer Digital Pathology Images Across Multiple Institutions. Cancers (Basel) 2019;11:1700. [PMID: 31683818] [PMCID: PMC6896042] [DOI: 10.3390/cancers11111700]
Abstract
Distinguishing benign from malignant disease is a primary challenge for colon histopathologists. Current clinical methods rely on qualitative visual analysis of features such as glandular architecture and size that exist on a continuum from benign to malignant. Consequently, discordance between histopathologists is common. To provide more reliable analysis of colon specimens, we propose an end-to-end computational pathology pipeline that encompasses gland segmentation and cancer detection, and then further grades the malignant samples. We propose a multi-step gland segmentation method, which models tissue components as ellipsoids. For cancer detection/grading, we encode cellular morphology, spatial architectural patterns of glands, and texture by extracting multi-scale features: (i) gland-based, extracted from individual glands; (ii) local-patch-based, computed from randomly selected image patches; and (iii) image-based, extracted from whole images; and we employ a hierarchical ensemble-classification method. Using two datasets (Rawalpindi Medical College (RMC), n = 174, and gland segmentation (GlaS), n = 165) with three cancer grades, our method reliably delineated gland regions (RMC = 87.5%, GlaS = 88.4%), detected the presence of malignancy (RMC = 97.6%, GlaS = 98.3%), and predicted tumor grade (RMC = 98.6%, GlaS = 98.6%). Training the model on one dataset and testing it on the other showed strong concordance in cancer detection (Train RMC – Test GlaS = 94.5%, Train GlaS – Test RMC = 93.7%) and grading (Train RMC – Test GlaS = 95%, Train GlaS – Test RMC = 95%), suggesting that the model will be applicable across institutions. With further prospective validation, the techniques demonstrated here may provide a reproducible and easily accessible method to standardize the analysis of colon cancer specimens.
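The hierarchical ensemble-classification step can be sketched as a two-stage cascade, detecting malignancy first and grading only the samples predicted malignant; the features and classifiers below are synthetic placeholders, not the paper's engineered descriptors.

```python
# Two-stage cascade: a malignancy detector gates a grade classifier.
# Features and labels are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))                   # per-image feature vectors
is_malignant = (X[:, 0] > 0).astype(int)         # toy labels
grade = 1 + (X[:, 1] > 0).astype(int) + (X[:, 2] > 0).astype(int)

detector = RandomForestClassifier(random_state=0).fit(X, is_malignant)
grader = RandomForestClassifier(random_state=0).fit(
    X[is_malignant == 1], grade[is_malignant == 1])

def predict(x):
    x = x.reshape(1, -1)
    if detector.predict(x)[0] == 0:              # stage 1: benign vs malignant
        return "benign"
    return f"grade {grader.predict(x)[0]}"       # stage 2: grade malignant

print(predict(X[0]))
```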
Affiliation(s)
- Saima Rathore, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Muhammad Aksam Iftikhar, Department of Computer Science, COMSATS University Islamabad, Lahore Campus, Lahore 54000, Pakistan
- Ahmad Chaddad, Division of Radiation Oncology, Department of Oncology, McGill University, Montreal, QC H3S 1Y9, Canada
- Tamim Niazi, Division of Radiation Oncology, Department of Oncology, McGill University, Montreal, QC H3S 1Y9, Canada
- Thomas Karasic, Department of Medicine, Division of Hematology/Oncology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Michel Bilello, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
124. Binder T, Tantaoui EM, Pati P, Catena R, Set-Aghayan A, Gabrani M. Multi-Organ Gland Segmentation Using Deep Learning. Front Med (Lausanne) 2019;6:173. [PMID: 31428614] [PMCID: PMC6690405] [DOI: 10.3389/fmed.2019.00173]
Abstract
Clinical morphological analysis of histopathology samples is an effective method in cancer diagnosis. Computational pathology methods can be employed to automate this analysis, providing improved objectivity and scalability. More specifically, computational techniques can be used for segmenting glands, an essential factor in cancer diagnosis. Automatic delineation of glands is a challenging task given the large variability in glandular morphology across tissues and pathological subtypes. A deep learning based gland segmentation method can be developed to address this task, but it requires a large number of accurate gland annotations from several tissue slides. Such a large dataset needs to be generated manually by experienced pathologists, which is laborious, time-consuming, expensive, and suffers from annotator subjectivity. So far, deep learning techniques have produced promising results on a few organ-specific gland segmentation tasks; however, the demand for organ-specific gland annotations hinders the extensibility of these techniques to other organs. This work investigates the idea of cross-domain (organ-type) approximation, which aims to reduce the need for organ-specific annotations. Unlike parenchyma, the stromal component of tissues, which lies between the glands, is more consistent across several organs. It is hypothesized that an automatic method that can precisely segment the stroma would pave the way for cross-organ gland segmentation. Two proposed Dense-U-Nets are trained on H&E-stained colon adenocarcinoma samples, one targeting gland segmentation and the other stroma segmentation. The trained networks are evaluated on two independent datasets: an H&E-stained colon adenocarcinoma dataset and an H&E-stained breast invasive cancer dataset. The network targeting stroma segmentation performs similarly to the network targeting gland segmentation on the colon dataset, whereas the former performs significantly better than the latter on the breast dataset, showcasing the higher generalization capacity of the stroma segmentation approach. The networks are evaluated using the Dice coefficient and Hausdorff distance computed between the ground truth gland masks and the predicted gland masks. The conducted experiments validate the efficacy of the proposed stroma segmentation approach toward multi-organ gland segmentation.
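The two reported evaluation measures can be computed as follows on binary masks; this is a generic sketch (the Hausdorff distance here is taken over all foreground points rather than extracted boundaries).

```python
# Dice coefficient and symmetric Hausdorff distance between binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hausdorff(pred, gt):
    p, g = np.argwhere(pred), np.argwhere(gt)    # foreground point sets
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

a = np.zeros((64, 64), dtype=np.uint8); a[10:40, 10:40] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[12:42, 12:42] = 1
print(dice(a, b), hausdorff(a, b))
```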
125. Payer C, Štern D, Feiner M, Bischof H, Urschler M. Segmenting and tracking cell instances with cosine embeddings and recurrent hourglass networks. Med Image Anal 2019;57:106-119. [PMID: 31299493] [DOI: 10.1016/j.media.2019.06.015]
Abstract
Unlike semantic segmentation, instance segmentation assigns unique labels to each individual instance of the same object class. In this work, we propose a novel recurrent fully convolutional network architecture for tracking such instance segmentations over time, which is highly relevant, e.g., in biomedical applications involving cell growth and migration. Our network architecture incorporates convolutional gated recurrent units (ConvGRU) into a stacked hourglass network to utilize temporal information, e.g., from microscopy videos. Moreover, we train our network with a novel embedding loss based on cosine similarities, such that the network predicts unique embeddings for every instance throughout videos, even in the presence of dynamic structural changes due to the mitosis of cells. To create the final tracked instance segmentations, the pixel-wise embeddings are clustered among subsequent video frames using the mean shift algorithm. After showing the performance of instance segmentation on a static in-house dataset of muscle fibers from H&E-stained microscopy images, we also evaluate the proposed recurrent stacked hourglass network on instance segmentation and tracking across six datasets from the ISBI cell tracking challenge, where it delivers state-of-the-art results.
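The final clustering step, grouping pixel-wise embeddings into instances with mean shift, can be sketched as below, with synthetic embeddings standing in for the output of the recurrent hourglass network.

```python
# Clustering unit-norm embeddings into instances with mean shift; the
# embeddings here are synthetic stand-ins for the network's per-pixel output.
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
# Two toy "instances": embeddings scattered around two directions.
e1 = rng.normal(loc=(1.0, 0.0), scale=0.05, size=(200, 2))
e2 = rng.normal(loc=(0.0, 1.0), scale=0.05, size=(200, 2))
emb = normalize(np.vstack([e1, e2]))   # on the unit sphere, cosine ~ dot

labels = MeanShift(bandwidth=0.3).fit_predict(emb)
print(np.unique(labels))               # two clusters -> two instance labels
```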
Affiliation(s)
- Christian Payer, Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Darko Štern, Ludwig Boltzmann Institute for Clinical Forensic Imaging, Graz, Austria
- Marlies Feiner, Division of Phoniatrics, Medical University Graz, Graz, Austria
- Horst Bischof, Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Martin Urschler, Ludwig Boltzmann Institute for Clinical Forensic Imaging, Graz, Austria; Department of Computer Science, The University of Auckland, New Zealand
126. Pell R, Oien K, Robinson M, Pitman H, Rajpoot N, Rittscher J, Snead D, Verrill C. The use of digital pathology and image analysis in clinical trials. J Pathol Clin Res 2019;5:81-90. [PMID: 30767396] [PMCID: PMC6463857] [DOI: 10.1002/cjp2.127]
Abstract
Digital pathology and image analysis potentially provide greater accuracy, reproducibility and standardisation of pathology-based trial entry criteria and endpoints, alongside extracting new insights from both existing and novel features. Image analysis has great potential to identify, extract and quantify features in greater detail in comparison to pathologist assessment, which may produce improved prediction models or perform tasks beyond manual capability. In this article, we provide an overview of the utility of such technologies in clinical trials and provide a discussion of the potential applications, current challenges, limitations and remaining unanswered questions that require addressing prior to routine adoption in such studies. We reiterate the value of central review of pathology in clinical trials, and discuss inherent logistical, cost and performance advantages of using a digital approach. The current and emerging regulatory landscape is outlined. The role of digital platforms and remote learning to improve the training and performance of clinical trial pathologists is discussed. The impact of image analysis on quantitative tissue morphometrics in key areas such as standardisation of immunohistochemical stain interpretation, assessment of tumour cellularity prior to molecular analytical applications and the assessment of novel histological features is described. The standardisation of digital image production, establishment of criteria for digital pathology use in pre-clinical and clinical studies, establishment of performance criteria for image analysis algorithms and liaison with regulatory bodies to facilitate incorporation of image analysis applications into clinical practice are key issues to be addressed to improve digital pathology incorporation into clinical trials.
Affiliation(s)
- Robert Pell, Nuffield Department of Surgical Sciences, University of Oxford, and Oxford NIHR Biomedical Research Centre, Oxford, UK
- Karin Oien, Institute of Cancer Sciences – Pathology, University of Glasgow, Glasgow, UK
- Max Robinson, Centre for Oral Health Research, Newcastle University, Newcastle upon Tyne, UK
- Helen Pitman, Strategy and Initiatives, National Cancer Research Institute, London, UK
- Nasir Rajpoot, Department of Computer Science, University of Warwick, Warwick, UK
- Jens Rittscher, Nuffield Department of Surgical Sciences, University of Oxford, and Oxford NIHR Biomedical Research Centre, Oxford, UK
- David Snead, Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK
- Clare Verrill, Nuffield Department of Surgical Sciences, University of Oxford, and Oxford NIHR Biomedical Research Centre, Oxford, UK
127. Jungo A, Reyes M. Assessing Reliability and Challenges of Uncertainty Estimations for Medical Image Segmentation. Lect Notes Comput Sci 2019. [DOI: 10.1007/978-3-030-32245-8_6]