1. Cano P, Musulen E, Gil D. Diagnosing Helicobacter pylori using autoencoders and limited annotations through anomalous staining patterns in IHC whole slide images. Int J Comput Assist Radiol Surg 2025; 20:765-773. PMID: 39779639. DOI: 10.1007/s11548-024-03313-w.
Abstract
PURPOSE This work addresses the detection of Helicobacter pylori (H. pylori) in histological images with immunohistochemical staining. This analysis is a time-demanding task currently performed by an expert pathologist who visually inspects the samples. Given the effort required to localize the pathogen in images, only a limited number of annotations may be available in an initial setting. Our goal is to design an approach that, using a limited set of annotations, obtains results good enough to serve as a support tool. METHODS We propose using autoencoders to learn the latent patterns of healthy patches and formulate a specific measure of the reconstruction error of the image in HSV space. ROC analysis is used to set the optimal threshold of this measure and the percentage of positive patches in a sample that determines the presence of H. pylori. RESULTS Our method was tested on an in-house database of 245 whole slide images (WSIs), with 117 cases without H. pylori and varying bacterial density in the remaining cases. The database has 1211 annotated patches, of which only 163 are positive. This set of positive annotations was used to train a baseline thresholding method and an SVM on features from pre-trained ResNet-18 and ViT models. A 10-fold cross-validation shows that our method performs best, with 91% accuracy, 86% sensitivity, 96% specificity and 0.97 AUC in the diagnosis of H. pylori. CONCLUSION Unlike classification approaches, our shallow autoencoder with threshold adaptation for the detection of anomalous staining achieves competitive results with a limited set of annotated data. This initial approach is good enough to serve as a guide for fast annotation of infected patches.
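The pipeline this abstract describes — learn the patterns of healthy patches, score patches by reconstruction error, pick the operating threshold by ROC analysis — can be sketched in a few lines. This is an illustrative numpy toy, not the authors' implementation: a linear autoencoder (truncated PCA) stands in for their shallow autoencoder, plain feature vectors stand in for HSV patches, and all data is synthetic.

```python
import numpy as np

def fit_linear_autoencoder(healthy, k=2):
    # Learn the latent subspace of healthy patches (PCA acts as a linear autoencoder).
    mu = healthy.mean(axis=0)
    _, _, vt = np.linalg.svd(healthy - mu, full_matrices=False)
    return mu, vt[:k]

def reconstruction_error(x, mu, comps):
    # Encode, decode, and measure the per-patch squared reconstruction error.
    z = (x - mu) @ comps.T
    x_hat = mu + z @ comps
    return ((x - x_hat) ** 2).sum(axis=1)

def roc_threshold(scores, labels):
    # Sweep candidate thresholds and keep the one maximizing Youden's J (TPR - FPR).
    best_t, best_j = scores.min(), -1.0
    for t in np.sort(scores):
        pred = scores >= t
        tpr = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        fpr = (pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 0.1, size=(200, 8))    # synthetic healthy patch features
anomalous = rng.normal(1.0, 0.3, size=(40, 8))   # synthetic anomalous staining
mu, comps = fit_linear_autoencoder(healthy)
scores = reconstruction_error(np.vstack([healthy, anomalous]), mu, comps)
labels = np.array([0] * 200 + [1] * 40)
t = roc_threshold(scores, labels)
# fraction of truly anomalous patches flagged positive at the chosen threshold
positive_fraction = float((scores[labels == 1] >= t).mean())
```

A whole slide would then be called positive when its fraction of flagged patches exceeds a second, ROC-chosen percentage, as the abstract describes.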
Affiliation(s)
- Pau Cano
- Comp. Sci. Dep, Universitat Autònoma de Barcelona, Campus UAB, Cerdanyola del Vallès, 08193, Catalunya, Spain.
- Computer Vision Center, Campus UAB, Cerdanyola del Vallès, 08193, Catalunya, Spain.
- Eva Musulen
- Pathology Department, Hospital Universitari General de Catalunya-Grupo QuironSalud, Sant Cugat del Vallès, 08195, Catalunya, Spain
- Institut de Recerca contra la Leucèmia Josep Carreras (IJC), Badalona, 08916, Catalunya, Spain
- Debora Gil
- Comp. Sci. Dep, Universitat Autònoma de Barcelona, Campus UAB, Cerdanyola del Vallès, 08193, Catalunya, Spain.
- Computer Vision Center, Campus UAB, Cerdanyola del Vallès, 08193, Catalunya, Spain.
2. Bao LX, Luo ZM, Zhu XL, Xu YY. Automated identification of protein expression intensity and classification of protein cellular locations in mouse brain regions from immunofluorescence images. Med Biol Eng Comput 2024; 62:1105-1119. PMID: 38150111. DOI: 10.1007/s11517-023-02985-x.
Abstract
Knowledge of protein expression in mammalian brains at regional and cellular levels can facilitate understanding of protein functions and associated diseases. As the mouse brain is representative of mammalian brains in cell types and structure, several studies have been conducted to analyze protein expression in mouse brains. However, labeling protein expression using biotechnology is costly and time-consuming. Therefore, automated models that can accurately recognize protein expression are needed. Here, we constructed machine learning models to automatically annotate the protein expression intensity and cellular location in different mouse brain regions from immunofluorescence images. The brain regions and sub-regions were segmented by learning image features with an autoencoder and then performing K-means clustering and registration to align with the anatomical references. The protein expression intensities for those segmented structures were computed from the statistics of the image pixels, and patch-based weakly supervised methods and multi-instance learning were used to classify the cellular locations. Results demonstrated that the models achieved high accuracy in expression intensity estimation, and the F1 score of the cellular location prediction was 74.5%. This work established an automated pipeline for analyzing mouse brain images and provided a foundation for further study of protein expression and functions.
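The segmentation step described here — cluster learned features with K-means, then compute per-region intensity statistics — can be sketched with a small numpy toy. This is not the authors' pipeline: 2-D synthetic points stand in for autoencoder features, the farthest-point initialization is an assumption for determinism, and registration to an anatomical atlas is omitted.

```python
import numpy as np

def kmeans(x, k, iters=20):
    # Plain K-means with deterministic farthest-point initialization.
    centers = [x[0]]
    for _ in range(k - 1):
        d = np.min([((x - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(x[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # assign each point to its nearest centroid, then recompute centroids
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = x[assign == j].mean(axis=0)
    return assign, centers

# two synthetic "brain regions" in a 2-D feature space standing in for
# autoencoder features of image patches
rng = np.random.default_rng(1)
region_a = rng.normal([0.0, 0.0], 0.2, size=(100, 2))
region_b = rng.normal([3.0, 3.0], 0.2, size=(100, 2))
feats = np.vstack([region_a, region_b])
assign, centers = kmeans(feats, k=2)

# per-region "expression intensity" from pixel statistics (stand-in)
intensity = feats.sum(axis=1)
region_means = [float(intensity[assign == j].mean()) for j in range(2)]
```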
Affiliation(s)
- Lin-Xia Bao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510623, China
- Zhuo-Ming Luo
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510623, China
- Xi-Liang Zhu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510623, China
- Ying-Ying Xu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China.
- Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, 510515, China.
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510623, China.
3. Liang H, Zeng H, Dong X. Regional economic forecast using Elman neural networks with wavelet function. PLoS One 2024; 19:e0299657. PMID: 38452027. PMCID: PMC10919664. DOI: 10.1371/journal.pone.0299657.
Abstract
Recently, the economy of Guangdong province has ranked first in the country, maintaining good growth momentum. Predicting the Gross Domestic Product (GDP) of Guangdong province is therefore an important problem: GDP forecasts make it possible to analyze whether the province's economy can maintain high-quality growth. To accurately forecast the economy of Guangdong, this paper proposes an Elman neural network combined with a wavelet function. The wavelet function not only strengthens the forecasting ability of the Elman neural network but also improves its convergence speed. Experimental results indicate that our model forecasts the regional economy well, reaching an accuracy of 0.971. In terms of forecast precision and errors, our model outperforms the competing methods. Moreover, our model achieves strong forecasts for both individual and multiple economic indicators, which suggests that it is independent of specific scenarios in regional economic forecasting. We also find that investment in education has a major positive impact on regional economic development in Guangdong province, and the two are positively correlated. Experimental results also show that our model's training time does not grow exponentially with data volume; consequently, we propose that it is suitable for large-scale datasets. Additionally, we demonstrate that using a wavelet function yields greater gains in forecast accuracy and training cost than using complex network architectures, and that it can simplify the design of complex architectures by reducing the number of trainable parameters.
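An Elman network is a simple recurrent network whose hidden state is fed back through context units at each step. A minimal forward-pass sketch with a wavelet activation is below; this is not the authors' model — the Morlet-style wavelet, the layer sizes, and the synthetic series are all assumptions, and training is omitted.

```python
import numpy as np

def wavelet(x):
    # Morlet-style wavelet used as the hidden activation (an assumption;
    # the abstract does not specify which wavelet function is used).
    return np.cos(1.75 * x) * np.exp(-x ** 2 / 2)

class ElmanWavelet:
    """Minimal Elman network: hidden state fed back through context units."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(0, 0.5, (n_hidden, n_in))
        self.w_ctx = rng.normal(0, 0.5, (n_hidden, n_hidden))  # context feedback
        self.w_out = rng.normal(0, 0.5, (n_out, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, x):
        # new hidden state depends on the input AND the previous hidden state
        self.h = wavelet(self.w_in @ x + self.w_ctx @ self.h)
        return self.w_out @ self.h

# feed a short synthetic GDP-like series one value at a time
series = [1.0, 1.1, 1.25, 1.4]
net = ElmanWavelet(1, 4, 1)
preds = [net.step(np.array([v]))[0] for v in series]
```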
Affiliation(s)
- Huade Liang
- Guangzhou Nanyang Polytechnic College, Guangdong, China
- Huilin Zeng
- Guangzhou Nanyang Polytechnic College, Guangdong, China
- Xiaojuan Dong
- Guangzhou Nanyang Polytechnic College, Guangdong, China
4. Tu C, Du D, Zeng T, Zhang Y. Deep Multi-Dictionary Learning for Survival Prediction With Multi-Zoom Histopathological Whole Slide Images. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:14-25. PMID: 37788195. DOI: 10.1109/tcbb.2023.3321593.
Abstract
Survival prediction based on histopathological whole slide images (WSIs) is of great significance for risk-benefit assessment and clinical decision-making. However, complex microenvironments and heterogeneous tissue structures in WSIs make it challenging to learn informative prognosis-related representations. Additionally, previous studies mainly model mono-scale WSIs, which commonly ignores useful subtle differences that exist across multi-zoom WSIs. To this end, we propose a deep multi-dictionary learning framework for cancer survival prediction with multi-zoom histopathological WSIs. The framework can recognize and learn discriminative clusters (i.e., microenvironments) based on multi-scale deep representations for survival analysis. Specifically, we learn multi-scale features from multi-zoom tiles of WSIs via a stacked deep autoencoder network, followed by grouping different microenvironments with a clustering algorithm. Based on the multi-scale deep features of the clusters, a multi-dictionary learning method with a post-pruning strategy is devised to learn discriminative representations from selected prognosis-related clusters in a task-driven manner. Finally, a survival model (i.e., EN-Cox) is constructed to estimate the risk index of an individual patient. The proposed model is evaluated on three datasets derived from The Cancer Genome Atlas (TCGA), and the experimental results demonstrate that it outperforms several state-of-the-art survival analysis approaches.
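The final step of such a pipeline, a Cox-style risk index, is easy to illustrate. The sketch below computes risk scores and checks them with a simplified concordance index on synthetic data; it is not the paper's EN-Cox model (the elastic-net fitting of the coefficients is omitted, and the features and coefficients are made up).

```python
import numpy as np

def cox_risk_index(x, beta):
    # Cox proportional-hazards risk index: higher exp(x @ beta) = higher risk.
    return np.exp(x @ beta)

def concordance(risk, time, event):
    # Simplified C-index: fraction of comparable pairs ordered correctly by risk.
    ok = total = 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:   # patient i failed before j
                total += 1
                ok += risk[i] > risk[j]
    return ok / total

rng = np.random.default_rng(0)
x = rng.normal(size=(30, 5))                      # cluster-level features (stand-in)
beta = np.array([1.0, -0.5, 0.3, 0.0, 0.2])       # hypothetical coefficients
risk = cox_risk_index(x, beta)
time = rng.exponential(1.0 / risk)                # higher risk -> earlier event
event = np.ones(30, dtype=bool)                   # no censoring in this toy
c = concordance(risk, time, event)
```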
5. Zaki M, Elallam O, Jami O, EL Ghoubali D, Jhilal F, Alidrissi N, Ghazal H, Habib N, Abbad F, Benmoussa A, Bakkali F. Advancing Tumor Cell Classification and Segmentation in Ki-67 Images: A Systematic Review of Deep Learning Approaches. Lect Notes Netw Syst 2024:94-112. DOI: 10.1007/978-3-031-52385-4_9.
6. Cooper M, Ji Z, Krishnan RG. Machine learning in computational histopathology: Challenges and opportunities. Genes Chromosomes Cancer 2023; 62:540-556. PMID: 37314068. DOI: 10.1002/gcc.23177.
Abstract
Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images is an important part of the oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and of deep learning in particular, as a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have resulted in automated models for prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have found success in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
Affiliation(s)
- Michael Cooper
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Zongliang Ji
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Rahul G Krishnan
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
7. Mandair D, Reis-Filho JS, Ashworth A. Biological insights and novel biomarker discovery through deep learning approaches in breast cancer histopathology. NPJ Breast Cancer 2023; 9:21. PMID: 37024522. PMCID: PMC10079681. DOI: 10.1038/s41523-023-00518-1.
Abstract
Breast cancer remains a highly prevalent disease with considerable inter- and intra-tumoral heterogeneity complicating prognostication and treatment decisions. The utilization and depth of genomic, transcriptomic and proteomic data for cancer have exploded over recent times, and the addition of spatial context to this information, by understanding the correlated morphologic and spatial patterns of cells in tissue samples, has created an exciting frontier of research, histo-genomics. At the same time, deep learning (DL), a class of machine learning algorithms employing artificial neural networks, has rapidly progressed in the last decade with a confluence of technical developments: the advent of modern graphics processing units (GPUs), allowing efficient implementation of increasingly complex architectures at scale; advances in the theoretical and practical design of network architectures; and access to larger datasets for training. Together these have led to sweeping advances in image classification and object detection. In this review, we examine recent developments in the application of DL in breast cancer histology, with particular emphasis on those producing biologic insights or novel biomarkers, spanning from the extraction of genomic information to the use of stroma to predict cancer recurrence, with the aim of suggesting avenues for further advancing this exciting field.
Affiliation(s)
- Divneet Mandair
- UCSF Helen Diller Family Comprehensive Cancer Center, San Francisco, CA, 94158, USA
- Alan Ashworth
- UCSF Helen Diller Family Comprehensive Cancer Center, San Francisco, CA, 94158, USA.
8. Wang X, Yu G, Yan Z, Wan L, Wang W, Cui L. Lung Cancer Subtype Diagnosis by Fusing Image-Genomics Data and Hybrid Deep Networks. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:512-523. PMID: 34855599. DOI: 10.1109/tcbb.2021.3132292.
Abstract
Accurate diagnosis of cancer subtypes is crucial for precise treatment, because different cancer subtypes involve different pathology and require different therapies. Although deep learning techniques have achieved great success in computer vision and other fields, they do not work well for lung cancer subtype diagnosis, because the distinction between slide images of different cancer subtypes is ambiguous. Furthermore, they often overfit high-dimensional genomics data with limited samples, and do not fuse the image and genomics data in a sensible way. In this paper, we propose LungDIG, a hybrid deep network approach for Lung cancer subtype Diagnosis by fusing Image-Genomics data. LungDIG first tiles the tissue slide image into small patches and extracts patch-level features by fine-tuning an Inception-V3 model. Since the patches may contain false positives from non-diagnostic regions, it further designs a patch-level feature combination strategy to integrate the extracted patch features while maintaining the diversity between different cancer subtypes. At the same time, it extracts genomics features from Copy Number Variation data with an attention-based nonlinear extractor. Next, it fuses the image and genomics features with an attention-based multilayer perceptron (MLP) to diagnose the cancer subtype. Experiments on TCGA lung cancer data show that LungDIG not only achieves higher accuracy for cancer subtype diagnosis than state-of-the-art methods, but also offers high authenticity and good interpretability.
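The attention-based fusion of two modalities can be sketched compactly: score each modality's feature vector, normalize the scores with a softmax, and weight before combining. This is a toy illustration of the general pattern, not LungDIG itself; the feature sizes, the single attention vector, and the random inputs are all assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

def attention_fuse(img_feat, gen_feat, w_att):
    # Score each modality, softmax the scores, weight and concatenate.
    stack = np.stack([img_feat, gen_feat])   # (2, d)
    scores = stack @ w_att                   # one scalar score per modality
    alpha = softmax(scores)                  # attention weights, sum to 1
    return (alpha[:, None] * stack).reshape(-1), alpha

rng = np.random.default_rng(0)
img_feat = rng.normal(size=8)    # patch-level image features (stand-in)
gen_feat = rng.normal(size=8)    # CNV genomics features (stand-in)
w_att = rng.normal(size=8)       # hypothetical learned attention vector
fused, alpha = attention_fuse(img_feat, gen_feat, w_att)
```

The fused vector would then feed an MLP classifier; training of `w_att` and the MLP is omitted here.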
9. Thalakottor LA, Shirwaikar RD, Pothamsetti PT, Mathews LM. Classification of Histopathological Images from Breast Cancer Patients Using Deep Learning: A Comparative Analysis. Crit Rev Biomed Eng 2023; 51:41-62. PMID: 37581350. DOI: 10.1615/critrevbiomedeng.2023047793.
Abstract
Cancer, a leading cause of mortality, is distinguished by the multi-stage conversion of healthy cells into cancer cells. Discovering the disease early can significantly enhance the chance of survival. Histology is a procedure in which the tissue of interest is first surgically removed from a patient and cut into thin slices. A pathologist then mounts these slices on glass slides, stains them with specialized dyes such as hematoxylin and eosin (H&E), and inspects the slides under a microscope. Unfortunately, manual analysis of histopathology images during breast cancer biopsy is time consuming. The literature suggests that automated techniques based on deep learning algorithms can increase the speed and accuracy of detecting abnormalities within histopathological specimens obtained from breast cancer patients. This paper highlights recent work on such algorithms and provides a comparative study of various deep learning methods. The breast cancer histopathological database (BreakHis) is used for the present study. The images are processed to enhance their inherent features, classified, and the accuracy of each algorithm is evaluated. Three convolutional neural network (CNN) models, visual geometry group (VGG19), densely connected convolutional networks (DenseNet201), and residual neural network (ResNet50V2), were employed for the analysis. Of these, the DenseNet201 model performed best, attaining an accuracy of 91.3%. The paper also reviews classification techniques based on machine learning methods, including CNN-based models, some of which may replace manual breast cancer diagnosis and detection.
Affiliation(s)
- Louie Antony Thalakottor
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
- Rudresh Deepak Shirwaikar
- Department of Computer Engineering, Agnel Institute of Technology and Design (AITD), Goa University, Assagao, Goa, India, 403507
- Pavan Teja Pothamsetti
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
- Lincy Meera Mathews
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
10. Saini M, Susan S. VGGIN-Net: Deep Transfer Network for Imbalanced Breast Cancer Dataset. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:752-762. PMID: 35349449. DOI: 10.1109/tcbb.2022.3163277.
Abstract
In this paper, we present a novel deep neural network architecture based on a transfer learning approach, formed by freezing and concatenating all the layers up to the block4 pooling layer of a pre-trained VGG16 model (at the lower level) with the layers of a randomly initialized naïve Inception block module (at the higher level). We further add batch normalization, flatten, dropout and dense layers to the proposed architecture. Our transfer network, called VGGIN-Net, facilitates the transfer of domain knowledge from the larger ImageNet object dataset to the smaller imbalanced breast cancer dataset. To improve the performance of the proposed model, regularization was applied in the form of dropout and data augmentation. A detailed block-wise fine-tuning was conducted on the proposed deep transfer network for images of different magnification factors. The results of extensive experiments indicate a significant improvement in classification performance after fine-tuning. The proposed architecture with transfer learning and fine-tuning yields the highest accuracies in comparison to other state-of-the-art approaches for classification of the BreakHis breast cancer dataset. The architecture is designed so that it can be effectively transfer-learned on other breast cancer datasets.
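The core transfer-learning pattern here — keep the pre-trained lower layers frozen and train only a new head — can be illustrated with a tiny numpy stand-in. A fixed random projection plays the role of the frozen VGG16 blocks and a logistic-regression head plays the role of the new Inception-plus-dense head; the data is synthetic and nothing below is the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
w_frozen = rng.normal(size=(16, 4))          # frozen "backbone" weights, never updated

def backbone(x):
    # Frozen feature extractor (stand-in for VGG16 up to block4 pooling).
    return np.maximum(w_frozen.T @ x, 0.0)   # ReLU features

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# synthetic two-class data standing in for the imbalanced breast cancer images
x_pos = rng.normal(1.0, 0.5, size=(50, 16))
x_neg = rng.normal(-1.0, 0.5, size=(50, 16))
x = np.vstack([x_pos, x_neg])
y = np.array([1] * 50 + [0] * 50)
feats = np.array([backbone(xi) for xi in x])

# train ONLY the new head by plain gradient descent on the logistic loss
w_head = np.zeros(4)
b = 0.0
for _ in range(300):
    p = sigmoid(feats @ w_head + b)
    g = p - y                                # gradient of the logistic loss
    w_head -= 0.1 * feats.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = float(((sigmoid(feats @ w_head + b) > 0.5) == y).mean())
```

Block-wise fine-tuning, as the abstract describes, would then gradually unfreeze parts of the backbone; that step is omitted here.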
11. Zhao Y, Zhang J, Hu D, Qu H, Tian Y, Cui X. Application of Deep Learning in Histopathology Images of Breast Cancer: A Review. Micromachines 2022; 13:2197. PMID: 36557496. PMCID: PMC9781697. DOI: 10.3390/mi13122197.
Abstract
With the development of artificial intelligence technology and computer hardware, deep learning algorithms have become a powerful auxiliary tool for medical image analysis. This study used statistical methods to analyze research on the detection, segmentation, and classification of breast cancer in pathological images. After analyzing 107 articles on the application of deep learning to pathological images of breast cancer, this study is divided into three directions based on the types of results they report: detection, segmentation, and classification. We introduce and analyze models that performed well in these three directions and summarize related work from recent years. The results obtained demonstrate the significant potential of deep learning for breast cancer pathology images. Furthermore, in the classification and detection of pathological images of breast cancer, the accuracy of deep learning algorithms has surpassed that of pathologists in certain circumstances. Our study provides a comprehensive review of the development of breast cancer pathological imaging research and offers reliable recommendations for the structure of deep learning network models in different application scenarios.
Affiliation(s)
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
- Jie Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Dayu Hu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Hui Qu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Ye Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
12. Multi-instance learning based on spatial continuous category representation for case-level meningioma grading in MRI images. Appl Intell 2022. DOI: 10.1007/s10489-022-04114-x.
13. Wakili MA, Shehu HA, Sharif MH, Sharif MHU, Umar A, Kusetogullari H, Ince IF, Uyaver S. Classification of Breast Cancer Histopathological Images Using DenseNet and Transfer Learning. Comput Intell Neurosci 2022; 2022:8904768. PMID: 36262621. PMCID: PMC9576400. DOI: 10.1155/2022/8904768.
Abstract
Breast cancer is one of the most common invasive cancers in women. Analyzing breast cancer is nontrivial and may lead to disagreements among experts. Although deep learning methods have achieved excellent performance in classification tasks, including breast cancer histopathological images, existing state-of-the-art methods are computationally expensive and may overfit due to extracting features from in-distribution images. In this paper, our contribution is twofold. First, we perform a short survey of deep-learning-based models for classifying histopathological images to investigate the most popular and best-performing training-testing ratios. Our findings reveal that the most popular training-testing ratio for histopathological image classification is 70%:30%, whereas the best performance (e.g., accuracy) is achieved with a training-testing ratio of 80%:20% on an identical dataset. Second, we propose DenTnet, a method for classifying breast cancer histopathological images. DenTnet utilizes the principle of transfer learning, with DenseNet as a backbone model, to address the problem of extracting features from the same distribution. The proposed DenTnet method is shown to be superior to a number of leading deep learning methods in terms of detection accuracy (up to 99.28% on the BreaKHis dataset with a training-testing ratio of 80%:20%), with good generalization ability and computational speed. DenTnet mitigates the limitations of existing methods, including high computational requirements and reliance on an identical feature distribution.
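The defining trait of the DenseNet backbone named here is dense connectivity: each layer receives the concatenation of all earlier feature maps. A minimal numpy sketch of one dense block follows; it uses flat vectors and ReLU "layers" in place of real convolutions, purely to show the connectivity pattern, and is not DenTnet or DenseNet itself.

```python
import numpy as np

def dense_block(x, weight_list):
    # DenseNet-style block: every layer sees all earlier outputs concatenated.
    feats = [x]
    for w in weight_list:
        inp = np.concatenate(feats)              # concat of input + all prior layers
        feats.append(np.maximum(w @ inp, 0.0))   # ReLU "layer" producing growth_rate units
    return np.concatenate(feats)                 # block output keeps everything

rng = np.random.default_rng(0)
x = rng.normal(size=8)       # input feature vector (stand-in for a feature map)
growth = 4                   # each layer adds `growth` new features
ws, width = [], 8
for _ in range(3):           # three layers; input width grows each time
    ws.append(rng.normal(size=(growth, width)))
    width += growth
out = dense_block(x, ws)     # length 8 + 3 * 4 = 20
```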
Affiliation(s)
- Harisu Abdullahi Shehu
- School of Engineering and Computer Science, Victoria University of Wellington, Wellington 6012, New Zealand
- Md. Haidar Sharif
- College of Computer Science and Engineering, University of Hail, Hail 2440, Saudi Arabia
- Md. Haris Uddin Sharif
- School of Computer & Information Sciences, University of the Cumberlands, Williamsburg, KY 40769, USA
- Abubakar Umar
- Abubakar Tafawa Balewa University, Bauchi 740272, Nigeria
- Huseyin Kusetogullari
- Department of Computer Science, Blekinge Institute of Technology, Karlskrona 37141, Sweden
- Ibrahim Furkan Ince
- Department of Digital Game Design, Nisantasi University, 34485 Istanbul, Turkey
- Sahin Uyaver
- Department of Energy Science and Technologies, Turkish-German University, 34820 Istanbul, Turkey
14. Wang Y, Zhang L, Shu X, Feng Y, Yi Z, Lv Q. Feature-Sensitive Deep Convolutional Neural Network for Multi-Instance Breast Cancer Detection. IEEE/ACM Trans Comput Biol Bioinform 2022; 19:2241-2251. PMID: 33600319. DOI: 10.1109/tcbb.2021.3060183.
Abstract
To obtain a well-performing computer-aided detection model for breast cancer, one usually needs an effective and efficient algorithm and a well-labeled dataset to train it. In this paper, we first construct a multi-instance mammography clinic dataset. Each case in the dataset includes a different number of instances captured from different views; each case is labeled according to the pathological report, and all instances of one case share that label. Nevertheless, instances captured from different views may contribute differently to determining the category of the target case. Motivated by this observation, a feature-sensitive deep convolutional neural network with end-to-end training is proposed to detect breast cancer. The proposed method first uses a pre-trained model with custom layers to extract image features. It then adopts a feature fusion module that learns to compute the weight of each feature vector, giving the different instances of each case different influence on the classifier. Lastly, a classifier module classifies the fused features. Experimental results on both our clinic dataset and two public datasets demonstrate the effectiveness of the proposed method.
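The weighted fusion of instance features is the standard attention-pooling pattern from multi-instance learning: score each instance, softmax the scores, and form a weighted sum. The sketch below illustrates that pattern on a synthetic multi-view case; the two-layer scoring function and all weights are made-up stand-ins, not the paper's fusion module.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

def fuse_instances(instances, w, v):
    # Attention pooling: score each instance/view, weight and sum the features.
    scores = np.tanh(instances @ w) @ v      # one scalar score per instance
    alpha = softmax(scores)                  # per-instance weights, sum to 1
    return alpha @ instances, alpha          # weighted sum = case-level feature

rng = np.random.default_rng(0)
case = rng.normal(size=(4, 6))   # one case: 4 views, 6-d features per view (stand-in)
w = rng.normal(size=(6, 3))      # hypothetical attention parameters
v = rng.normal(size=3)
bag_feat, alpha = fuse_instances(case, w, v)
```

In the paper, `w` and `v` would be learned end-to-end together with the classifier, so informative views receive larger weights.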
15. Liu W, Shu X, Zhang L, Li D, Lv Q. Deep Multiscale Multi-Instance Networks With Regional Scoring for Mammogram Classification. IEEE Trans Artif Intell 2022. DOI: 10.1109/tai.2021.3136146.
Affiliation(s)
- Wenjie Liu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Xin Shu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Dong Li
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Qing Lv
- Department of Galactophore Surgery, West China Hospital, Sichuan University, Chengdu, P.R. China
16. Das R, Kaur K, Walia E. Feature Generalization for Breast Cancer Detection in Histopathological Images. Interdiscip Sci 2022; 14:566-581. PMID: 35482216. DOI: 10.1007/s12539-022-00515-1.
Abstract
The recent period has witnessed benchmark performance of transfer learning using deep architectures in computer-aided diagnosis (CAD) of breast cancer. In this setting, the pre-trained neural network needs to be fine-tuned with relevant data to extract useful features from the dataset. However, in addition to the computational overhead, it suffers from the curse of overfitting when features are extracted from smaller datasets. Handcrafted feature extraction techniques, as well as feature extraction using pre-trained deep networks, come to the rescue in such situations and have proved to be much more efficient and lightweight than deep architecture-based transfer learning techniques. This research demonstrates the competence of classifying breast cancer images using feature engineering and representation learning over the established and contemporary notion of using transfer learning techniques. Moreover, it reveals superior feature learning capacity with feature fusion, in contrast to the conventional belief that unknown feature patterns are understood better with representation learning alone. Experiments have been conducted on two popular breast cancer image datasets, namely the KIMIA Path960 and BreakHis datasets, and image-level accuracy is compared across the above-mentioned feature extraction techniques. An image-level accuracy of 97.81% is achieved on the KIMIA Path960 dataset using individual features extracted with the handcrafted (color histogram) technique. Fusion of uniform Local Binary Pattern (uLBP) and color histogram features results in the highest accuracy of 99.17% for the same dataset. Experimentation with the BreakHis dataset yields a highest classification accuracy of 88.41% with color histogram features for images with a 200X magnification factor. Finally, the results are contrasted with those of the state of the art, and superior performance is observed on many occasions with the proposed fusion-based techniques. For the BreakHis dataset, the highest accuracies of 87.60% (with the least standard deviation) and 85.77% are recorded for the 200X and 400X magnification factors, respectively, exceeding the state of the art for those magnifications.
Affiliation(s)
- Rik Das
- Programme of Information Technology, Xavier Institute of Social Service, Ranchi, 834001, Jharkhand, India.
- Kanwalpreet Kaur
- Department of Computer Science, Punjabi University, Patiala, India
- Ekta Walia
- Department of Medical Imaging, University of Saskatchewan, Saskatoon, Canada
|
17
|
Hou C, Li J, Wang W, Sun L, Zhang J. Second-order asymmetric convolution network for breast cancer histopathology image classification. JOURNAL OF BIOPHOTONICS 2022; 15:e202100370. [PMID: 35076187 DOI: 10.1002/jbio.202100370] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Revised: 01/20/2022] [Accepted: 01/24/2022] [Indexed: 06/14/2023]
Abstract
Recently, convolutional neural networks (CNNs) have been widely utilized for breast cancer histopathology image classification. Moreover, prior work has shown that deep high-order statistical models clearly outperform their first-order counterparts in vision tasks. Inspired by this, we explore global deep high-order statistics to distinguish breast cancer histopathology images. To further boost classification performance, we also integrate asymmetric convolution into the second-order network and propose a novel second-order asymmetric convolution network (SoACNet). SoACNet adopts a series of asymmetric convolution blocks to replace each standard square-kernel convolutional layer of the backbone architecture, followed by global covariance pooling to compute second-order statistics of the deep features, leading to a more robust representation of histopathology images. Extensive experiments on the public BreakHis dataset demonstrate the effectiveness of SoACNet for breast cancer histopathology image classification, where it achieves performance competitive with the state of the art.
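The global covariance pooling mentioned above replaces first-order pooling (e.g. global average pooling) with the channel-wise covariance of the final feature map. A minimal numpy sketch of that operation (the feature-map sizes are illustrative; SoACNet itself is a trained network, not shown here):

```python
import numpy as np

def global_covariance_pooling(feat_map):
    """Second-order pooling of a conv feature map.
    feat_map: (C, H, W) array of channel activations.
    Returns the (C, C) covariance of channel features over
    spatial positions, used as the image representation."""
    C = feat_map.shape[0]
    X = feat_map.reshape(C, -1)            # (C, N) with N = H*W positions
    X = X - X.mean(axis=1, keepdims=True)  # center each channel
    return (X @ X.T) / (X.shape[1] - 1)    # unbiased (C, C) covariance

fm = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)  # toy 2-channel map
cov = global_covariance_pooling(fm)
```

The resulting symmetric matrix is typically flattened (often after a matrix square root or log, not shown) and fed to the classifier.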
Affiliation(s)
- Cunqiao Hou
- School of Computer Science and Engineering, Dalian Minzu University, Dalian, China
- Institute of Machine Intelligence and Bio-computing, Dalian Minzu University, Dalian, China
- Jiasen Li
- Information Technology Department, Bank of TianJin, Tianjin, China
- Key Lab of Advanced Design and Intelligent Computing (Ministry of Education), Dalian University, Dalian, China
- Wei Wang
- School of Computer Science and Engineering, Dalian Minzu University, Dalian, China
- Lin Sun
- Information Center, Beijing Tongren Hospital, Beijing, China
- Jianxin Zhang
- School of Computer Science and Engineering, Dalian Minzu University, Dalian, China
- Institute of Machine Intelligence and Bio-computing, Dalian Minzu University, Dalian, China
- Key Lab of Advanced Design and Intelligent Computing (Ministry of Education), Dalian University, Dalian, China
|
18
|
Breast Histopathological Image Classification Method Based on Autoencoder and Siamese Framework. INFORMATION 2022. [DOI: 10.3390/info13030107] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The automated classification of breast cancer histopathological images is one of the important tasks in computer-aided diagnosis (CAD) systems. Due to the small inter-class and large intra-class variances characteristic of breast cancer histopathological images, extracting features for breast cancer classification is difficult. To address this problem, an improved autoencoder (AE) network using a Siamese framework was designed that can learn effective features from histopathological images for CAD breast cancer classification tasks. First, the input image is processed at multiple scales using a Gaussian pyramid to obtain multi-scale features. Second, in the feature extraction stage, a Siamese framework is used to constrain the pre-trained AE so that the extracted features have smaller intra-class variance and larger inter-class variance. Experimental results show that the proposed method reaches a classification accuracy as high as 97.8% on the BreakHis dataset. Compared with algorithms commonly used in breast cancer histopathological classification, this method achieves superior accuracy with faster performance.
|
19
|
Liu W, Zhang L, Dai G, Zhang X, Li G, Yi Z. Deep Neural Network with Structural Similarity Difference and Orientation-based Loss for Position Error Classification in The Radiotherapy of Graves' Ophthalmopathy Patients. IEEE J Biomed Health Inform 2021; 26:2606-2614. [PMID: 34941537 DOI: 10.1109/jbhi.2021.3137451] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Identifying position errors for Graves' ophthalmopathy (GO) patients using electronic portal imaging device (EPID) transmission fluence maps is helpful in monitoring treatment. However, most existing models extract features only from dose difference maps computed from EPID images, which do not fully characterize the positional errors. In addition, position errors have a three-dimensional spatial nature, which has not been explored in previous work. To address these problems, a deep neural network (DNN) model with a structural-similarity-difference and orientation-based loss is proposed in this paper, consisting of a feature extraction network and a feature enhancement network. To capture more information, three types of Structural SIMilarity (SSIM) sub-index maps are computed to enhance the luminance, contrast, and structural features of the EPID images, respectively. These maps and the dose difference maps are fed into different networks to extract radiomic features. To acquire the spatial features of the position errors, an orientation-based loss function is proposed for optimal training. It makes the data distribution more consistent with realistic 3D space by integrating the error deviations of the predicted values in the left-right, superior-inferior, and anterior-posterior directions. Experimental results on a constructed dataset demonstrate the effectiveness of the proposed model compared with related models and existing state-of-the-art methods.
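The three SSIM sub-indices referenced above are the standard luminance, contrast, and structure terms of the SSIM definition. A global (whole-array) sketch of those terms follows; the paper computes per-pixel maps with local windows, and the stabilizing constants `c1`/`c2` here are conventional defaults, not values from the paper:

```python
import numpy as np

def ssim_components(x, y, c1=1e-4, c2=9e-4):
    """Global luminance, contrast, and structure terms of SSIM,
    computed over entire arrays rather than local windows."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    sxy = ((x - mx) * (y - my)).mean()   # cross-covariance
    c3 = c2 / 2                           # conventional choice
    lum = (2 * mx * my + c1) / (mx**2 + my**2 + c1)
    con = (2 * np.sqrt(vx) * np.sqrt(vy) + c2) / (vx + vy + c2)
    struc = (sxy + c3) / (np.sqrt(vx) * np.sqrt(vy) + c3)
    return lum, con, struc

a = np.linspace(0, 1, 16).reshape(4, 4)
lum, con, struc = ssim_components(a, a)  # identical images -> all terms 1
```

Computing the three terms separately, as the paper does, lets each feed a different feature-extraction branch instead of being multiplied into a single SSIM score.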
|
20
|
Hu T, Xie L, Zhang L, Li G, Yi Z. Deep Multimodal Neural Network Based on Data-Feature Fusion for Patient-Specific Quality Assurance. Int J Neural Syst 2021; 32:2150055. [PMID: 34895106 DOI: 10.1142/s0129065721500556] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
Patient-specific quality assurance (QA) for Volumetric Modulated Arc Therapy (VMAT) plans is routinely performed in the clinic. However, it is labor-intensive and time-consuming for medical physicists. QA prediction models can address these shortcomings and improve efficiency. Current approaches mainly focus on single cancers and single-modality data, which limits their applicability in clinical practice. To assess the accuracy of QA results for VMAT plans, this paper presents a new model that learns complementary features from multi-modal data to predict the gamma passing rate (GPR). According to the characteristics of VMAT plans, a feature-data fusion approach is designed to fuse the features of imaging and non-imaging information in the model. In this study, 690 VMAT plans were collected, encompassing more than ten diseases. The model accurately predicts most VMAT plans at all three gamma criteria: 2%/2 mm, 3%/2 mm, and 3%/3 mm. The mean absolute error between the predicted and measured GPR is 2.17%, 1.16%, and 0.71%, respectively, and the maximum deviation is 3.46%, 4.6%, and 8.56%, respectively. The proposed model is effective, and the features of the two modalities significantly influence QA results.
Affiliation(s)
- Ting Hu
- Department of Computer Science and Technology, Sichuan University, Section 4, Southern 1st Ring Rd, Chengdu, Sichuan, P. R. China
- Lizhang Xie
- Department of Computer Science and Technology, Sichuan University, Section 4, Southern 1st Ring Rd, Chengdu, Sichuan, P. R. China
- Lei Zhang
- Department of Computer Science and Technology, Sichuan University, Section 4, Southern 1st Ring Rd, Chengdu, Sichuan, P. R. China
- Guangjun Li
- Department of Radiation Oncology, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, P. R. China
- Zhang Yi
- Department of Computer Science and Technology, Sichuan University, Section 4, Southern 1st Ring Rd, Chengdu, Sichuan, P. R. China
|
21
|
A Systematic Review of Federated Learning in the Healthcare Area: From the Perspective of Data Properties and Applications. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app112311191] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Recent advances in deep learning have produced many success stories in smart healthcare applications, with data-driven insight improving clinical institutions' quality of care. Excellent deep learning models are heavily data-driven: the more data a model is trained on, the more robust and generalizable its performance. However, pooling medical data into centralized storage to train a robust deep learning model faces privacy, ownership, and strict regulatory challenges. Federated learning resolves these challenges with a shared global deep learning model coordinated by a central aggregator server, while patient data remain with the local party, maintaining data anonymity and security. In this study, we first provide a comprehensive, up-to-date review of research employing federated learning in healthcare applications. Second, we evaluate a set of recent challenges from a data-centric perspective in federated learning, such as data partitioning characteristics, data distributions, data protection mechanisms, and benchmark datasets. Finally, we point out several potential challenges and future research directions in healthcare applications.
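The aggregation step at the heart of the federated setup described above is commonly realized as federated averaging (FedAvg): the server combines client model parameters weighted by local dataset size. A minimal sketch, with toy two-parameter "models" standing in for real network weights:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate client model parameters,
    weighted by each client's local dataset size. Raw patient
    data never leaves the clients; only parameters are shared."""
    sizes = np.asarray(client_sizes, dtype=float)
    alphas = sizes / sizes.sum()                      # per-client weights
    return sum(a * w for a, w in zip(alphas, client_weights))

w1 = np.array([1.0, 1.0])   # hypothetical model from hospital A
w2 = np.array([3.0, 5.0])   # hypothetical model from hospital B
global_w = fedavg([w1, w2], client_sizes=[10, 30])   # B counts 3x as much
```

In a full round, the server would broadcast `global_w` back to the clients for further local training, then repeat.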
|
22
|
3PCNNB-Net: Three Parallel CNN Branches for Breast Cancer Classification Through Histopathological Images. J Med Biol Eng 2021. [DOI: 10.1007/s40846-021-00620-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
23
|
Mercan C, Aygunes B, Aksoy S, Mercan E, Shapiro LG, Weaver DL, Elmore JG. Deep Feature Representations for Variable-Sized Regions of Interest in Breast Histopathology. IEEE J Biomed Health Inform 2021; 25:2041-2049. [PMID: 33166257 PMCID: PMC8274968 DOI: 10.1109/jbhi.2020.3036734] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE Modeling variable-sized regions of interest (ROIs) in whole slide images using deep convolutional networks is a challenging task, as these networks typically require fixed-sized inputs that should contain sufficient structural and contextual information for classification. We propose a deep feature extraction framework that builds an ROI-level feature representation via weighted aggregation of the representations of a variable number of fixed-sized patches sampled from nuclei-dense regions in breast histopathology images. METHODS First, initial patch-level feature representations are extracted from both fully-connected-layer activations and pixel-level convolutional-layer activations of a deep network, and the weights are obtained from the class predictions of the same network trained on patch samples. Then, the final patch-level feature representations are computed by concatenating weighted instances of the extracted feature activations. Finally, the ROI-level representation is obtained by fusing the patch-level representations through average pooling. RESULTS Experiments using a well-characterized dataset of 240 slides containing 437 ROIs of variable sizes and shapes, marked by experienced pathologists, yield an accuracy of 72.65% in classifying ROIs into four diagnostic categories that cover the whole histologic spectrum. CONCLUSION The results show that the proposed feature representations are superior to existing approaches and provide accuracies higher than the average accuracy of another set of pathologists. SIGNIFICANCE The proposed generic representation, which can be extracted from any type of deep convolutional architecture, combines the patch appearance information captured by the network activations with the diagnostic relevance predicted by the class-specific scoring of patches for effective modeling of variable-sized ROIs.
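The aggregation pipeline above (class-prediction weights on patch features, then average pooling over patches) can be sketched as follows; the use of the maximum class probability as the per-patch weight is a simplifying assumption for illustration, not the paper's exact class-specific weighting:

```python
import numpy as np

def roi_representation(patch_feats, patch_probs):
    """Weight each patch's feature vector by its class-prediction
    confidence, then average-pool over patches to get one
    fixed-size ROI-level vector from a variable patch count."""
    conf = patch_probs.max(axis=1, keepdims=True)  # (n_patches, 1)
    weighted = patch_feats * conf                  # scale each patch feature
    return weighted.mean(axis=0)                   # (d,) ROI representation

feats = np.ones((3, 4))                            # 3 patches, 4-d features
probs = np.array([[0.8, 0.2], [0.5, 0.5], [0.1, 0.9]])  # toy class scores
roi_vec = roi_representation(feats, probs)
```

Because the pooling is over the patch axis, ROIs with any number of patches map to the same fixed-size representation.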
|
24
|
Lian S, Li L, Lian G, Xiao X, Luo Z, Li S. A Global and Local Enhanced Residual U-Net for Accurate Retinal Vessel Segmentation. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:852-862. [PMID: 31095493 DOI: 10.1109/tcbb.2019.2917188] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Retinal vessel segmentation is a critical procedure for the accurate visualization, diagnosis, early treatment, and surgery planning of ocular diseases. Recent deep learning-based approaches have achieved impressive performance in retinal vessel segmentation. However, they usually apply global image pre-processing and take whole retinal images as input during network training, which has two drawbacks for accurate retinal vessel segmentation. First, these methods fail to utilize local patch information. Second, they overlook the geometric constraint that the retina occupies only a specific area within the whole image or the extracted patch. As a consequence, these global methods struggle with details, such as recognizing small thin vessels and discriminating the optic disk. To address these drawbacks, this study proposes a Global and Local enhanced residual U-nEt (GLUE) for accurate retinal vessel segmentation, which benefits from both globally and locally enhanced information inside the retinal region. Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed method, which consistently improves segmentation accuracy over a conventional U-Net and achieves competitive performance compared with the state of the art.
|
25
|
van der Laak J, Litjens G, Ciompi F. Deep learning in histopathology: the path to the clinic. Nat Med 2021; 27:775-784. [PMID: 33990804 DOI: 10.1038/s41591-021-01343-4] [Citation(s) in RCA: 361] [Impact Index Per Article: 90.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Accepted: 03/31/2021] [Indexed: 02/08/2023]
Abstract
Machine learning techniques have great potential to improve medical diagnostics, offering ways to improve accuracy, reproducibility and speed, and to ease workloads for clinicians. In the field of histopathology, deep learning algorithms have been developed that perform similarly to trained pathologists for tasks such as tumor detection and grading. However, despite these promising results, very few algorithms have reached clinical implementation, challenging the balance between hope and hype for these new techniques. This Review provides an overview of the current state of the field, as well as describing the challenges that still need to be addressed before artificial intelligence in histopathology can achieve clinical value.
Affiliation(s)
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Geert Litjens
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
|
26
|
Wang KS, Yu G, Xu C, Meng XH, Zhou J, Zheng C, Deng Z, Shang L, Liu R, Su S, Zhou X, Li Q, Li J, Wang J, Ma K, Qi J, Hu Z, Tang P, Deng J, Qiu X, Li BY, Shen WD, Quan RP, Yang JT, Huang LY, Xiao Y, Yang ZC, Li Z, Wang SC, Ren H, Liang C, Guo W, Li Y, Xiao H, Gu Y, Yun JP, Huang D, Song Z, Fan X, Chen L, Yan X, Li Z, Huang ZC, Huang J, Luttrell J, Zhang CY, Zhou W, Zhang K, Yi C, Wu C, Shen H, Wang YP, Xiao HM, Deng HW. Accurate diagnosis of colorectal cancer based on histopathology images using artificial intelligence. BMC Med 2021; 19:76. [PMID: 33752648 PMCID: PMC7986569 DOI: 10.1186/s12916-021-01942-5] [Citation(s) in RCA: 75] [Impact Index Per Article: 18.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Accepted: 02/16/2021] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Accurate and robust pathological image analysis for colorectal cancer (CRC) diagnosis is time-consuming and knowledge-intensive, but it is essential for the treatment of CRC patients. The current heavy workload of pathologists in clinics and hospitals can easily lead to inadvertent misdiagnosis of CRC in daily image analysis. METHODS Based on a state-of-the-art transfer-learned deep convolutional neural network in artificial intelligence (AI), we proposed a novel patch aggregation strategy for clinical CRC diagnosis using weakly labeled pathological whole-slide image (WSI) patches. This approach was trained and validated on an unprecedentedly large dataset of 170,099 patches from >14,680 WSIs of >9631 subjects, covering diverse and representative clinical cases from multiple independent sources across China, the USA, and Germany. RESULTS When tested on CRC WSIs from multiple centers, our AI tool agreed consistently and nearly perfectly with experienced expert pathologists (average kappa statistic 0.896) and often outperformed them. The average area under the receiver operating characteristic curve (AUC) of the AI was greater than that of the pathologists (0.988 vs 0.970), the best performance among AI methods applied to CRC diagnosis. Our AI-generated heatmap highlights the image regions containing cancerous tissue and cells. CONCLUSIONS This first-of-its-kind generalizable AI system can handle large numbers of WSIs consistently and robustly, without the potential bias due to fatigue commonly experienced by clinical pathologists. It will drastically alleviate the heavy clinical burden of daily pathology diagnosis and improve treatment for CRC patients. The tool also generalizes to image-recognition-based diagnosis of other cancers.
Affiliation(s)
- K S Wang
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- G Yu
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- C Xu
- Department of Biostatistics and Epidemiology, The University of Oklahoma Health Sciences Center, Oklahoma City, OK, 73104, USA
- X H Meng
- Laboratory of Molecular and Statistical Genetics, College of Life Sciences, Hunan Normal University, Changsha, 410081, Hunan, China
- J Zhou
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- C Zheng
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- Z Deng
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- L Shang
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- R Liu
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- S Su
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- X Zhou
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Q Li
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- J Li
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- J Wang
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- K Ma
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Qi
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- Z Hu
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- P Tang
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Deng
- Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA
- X Qiu
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- B Y Li
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- W D Shen
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- R P Quan
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- J T Yang
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- L Y Huang
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- Y Xiao
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- Z C Yang
- Department of Pharmacology, Xiangya School of Pharmaceutical Sciences, Central South University, Changsha, 410078, Hunan, China
- Z Li
- School of Life Sciences, Central South University, Changsha, 410013, Hunan, China
- S C Wang
- College of Information Science and Engineering, Hunan Normal University, Changsha, 410081, Hunan, China
- H Ren
- Department of Pathology, Gongli Hospital, Second Military Medical University, Shanghai, 200135, China
- Department of Pathology, the Peace Hospital Affiliated to Changzhi Medical College, Changzhi, 046000, China
- C Liang
- Pathological Laboratory of Adicon Medical Laboratory Co., Ltd, Hangzhou, 310023, Zhejiang, China
- W Guo
- Department of Pathology, First Affiliated Hospital of Hunan Normal University, The People's Hospital of Hunan Province, Changsha, 410005, Hunan, China
- Y Li
- Department of Pathology, First Affiliated Hospital of Hunan Normal University, The People's Hospital of Hunan Province, Changsha, 410005, Hunan, China
- H Xiao
- Department of Pathology, the Third Xiangya Hospital, Central South University, Changsha, 410013, Hunan, China
- Y Gu
- Department of Pathology, the Third Xiangya Hospital, Central South University, Changsha, 410013, Hunan, China
- J P Yun
- Department of Pathology, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, China
- D Huang
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Z Song
- Department of Pathology, Chinese PLA General Hospital, Beijing, 100853, China
- X Fan
- Department of Pathology, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
- L Chen
- Department of Pathology, The First Affiliated Hospital, Air Force Medical University, Xi'an, 710032, China
- X Yan
- Institute of Pathology and Southwest Cancer Center, Southwest Hospital, Third Military Medical University, Chongqing, 400038, China
- Z Li
- Department of Pathology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, 510080, China
- Z C Huang
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Huang
- Department of Anatomy and Neurobiology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Luttrell
- School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, 39406, USA
- C Y Zhang
- School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, 39406, USA
- W Zhou
- College of Computing, Michigan Technological University, Houghton, MI, 49931, USA
- K Zhang
- Department of Computer Science, Bioinformatics Facility of Xavier NIH RCMI Cancer Research Center, Xavier University of Louisiana, New Orleans, LA, 70125, USA
- C Yi
- Department of Pathology, Ochsner Medical Center, New Orleans, LA, 70121, USA
- C Wu
- Department of Statistics, Florida State University, Tallahassee, FL, 32306, USA
- H Shen
- Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA
- Division of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University School of Medicine, New Orleans, LA, 70112, USA
- Y P Wang
- Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA
- Department of Biomedical Engineering, Tulane University, New Orleans, LA, 70118, USA
- H M Xiao
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- H W Deng
- Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- Division of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University School of Medicine, New Orleans, LA, 70112, USA
|
27
|
Nasir IM, Rashid M, Shah JH, Sharif M, Awan MYH, Alkinani MH. An Optimized Approach for Breast Cancer Classification for Histopathological Images Based on Hybrid Feature Set. Curr Med Imaging 2021; 17:136-147. [PMID: 32324518 DOI: 10.2174/1573405616666200423085826] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2019] [Revised: 03/05/2020] [Accepted: 03/24/2020] [Indexed: 11/22/2022]
Abstract
BACKGROUND Breast cancer is considered one of the most perilous diseases among females worldwide, and the number of new cases is increasing yearly. Many researchers have proposed efficient algorithms to diagnose breast cancer at early stages, increasing efficiency and performance by utilizing features learned from gold-standard histopathological images. OBJECTIVE Most existing systems have used either traditional handcrafted or deep features, which carry considerable noise and redundancy and ultimately decrease system performance. METHODS A hybrid approach is proposed that fuses and optimizes the properties of handcrafted and deep features to classify breast cancer images. HOG and LBP features are serially fused with those of the pre-trained VGG19 and InceptionV3 models. PCR and ICR are used to evaluate the classification performance of the proposed method. RESULTS The method concentrates on histopathological images to classify breast cancer. Performance is compared with state-of-the-art techniques, with an overall patient-level accuracy of 97.2% and image-level accuracy of 96.7% recorded. CONCLUSION The proposed hybrid method achieves the best performance compared with previous methods and can be used in intelligent healthcare systems for early breast cancer detection.
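Serial fusion of handcrafted and deep descriptors, as used above, amounts to normalizing each feature vector and concatenating them into one longer vector for the classifier. A minimal sketch with toy stand-in vectors (the actual HOG/LBP and VGG19/InceptionV3 extraction is not reproduced here):

```python
import numpy as np

def serial_fuse(handcrafted, deep):
    """Serially fuse a handcrafted descriptor (e.g. HOG/LBP) with a
    deep feature vector: L2-normalize each, then concatenate, so
    neither feature family dominates by raw magnitude."""
    def l2(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([l2(handcrafted), l2(deep)])

hog_like = np.array([3.0, 4.0])        # toy stand-in for HOG/LBP features
deep_like = np.array([0.0, 1.0, 0.0])  # toy stand-in for deep features
fused = serial_fuse(hog_like, deep_like)
```

A feature selection or optimization step (as in the paper) would then prune redundant dimensions of the fused vector before classification.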
Collapse
Affiliation(s)
| | - Muhammad Rashid
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
| | - Jamal Hussain Shah
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
| | - Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
| | | | - Monagi H Alkinani
- College of Computer Science and Engineering, Department of Computer Science and Artificial Intelligence, University of Jeddah, Saudi Arabia
| |
Collapse
|
28
|
Guleria S, Shah TU, Pulido JV, Fasullo M, Ehsan L, Lippman R, Sali R, Mutha P, Cheng L, Brown DE, Syed S. Deep learning systems detect dysplasia with human-like accuracy using histopathology and probe-based confocal laser endomicroscopy. Sci Rep 2021; 11:5086. [PMID: 33658592 DOI: 10.1038/s41598-021-84510-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2020] [Accepted: 02/15/2021] [Indexed: 05/28/2023] Open
Abstract
Probe-based confocal laser endomicroscopy (pCLE) allows real-time diagnosis of dysplasia and cancer in Barrett's esophagus (BE) but is limited by low sensitivity. Even the gold standard of histopathology is hindered by poor agreement between pathologists. We deployed deep-learning-based image and video analysis to improve the diagnostic accuracy of pCLE videos and biopsy images. Blinded experts categorized biopsies and pCLE videos as squamous, non-dysplastic BE, or dysplasia/cancer, and deep learning models were trained to classify the data into these three categories. Biopsy classification used two distinct approaches: a patch-level model and a whole-slide-image-level model. Gradient-weighted class activation maps (Grad-CAMs) were extracted from the pCLE and biopsy models to determine which tissue structures the models deemed relevant. In total, 1970 pCLE videos, 897,931 biopsy patches, and 387 whole-slide images were used to train, test, and validate the models. In pCLE analysis, models achieved a sensitivity for dysplasia of 71% and an overall accuracy of 90% across all classes. For biopsies at the patch level, the model achieved a sensitivity of 72% for dysplasia and an overall accuracy of 90%; the whole-slide-image-level model achieved a sensitivity of 90% for dysplasia and 94% overall accuracy. Grad-CAMs for all models showed activation in medically relevant tissue regions. Our deep learning models achieved high diagnostic accuracy for both pCLE-based and histopathologic diagnosis of esophageal dysplasia and its precursors, similar to human accuracy in prior studies. These machine learning approaches may improve the accuracy and efficiency of current screening protocols.
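One way to see the difference between patch-level and whole-slide-image-level analysis is how per-patch predictions get aggregated into a slide-level call. A hedged sketch (the probabilities, threshold, and aggregation rule are illustrative, not the authors' method):

```python
import numpy as np

# Hypothetical patch-level class probabilities for one slide over the three
# categories used in the study: squamous, non-dysplastic BE, dysplasia/cancer.
patch_probs = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.2, 0.6],
    [0.1, 0.1, 0.8],
])

CLASSES = ["squamous", "non-dysplastic BE", "dysplasia/cancer"]

def slide_level_call(probs, dysplasia_idx=2, threshold=0.25):
    # One simple rule: flag the slide as dysplastic when the fraction of
    # patches whose top class is dysplasia exceeds a threshold; otherwise
    # report the majority top class across patches.
    top = probs.argmax(axis=1)
    frac_dysplastic = np.mean(top == dysplasia_idx)
    if frac_dysplastic >= threshold:
        return CLASSES[dysplasia_idx]
    return CLASSES[np.bincount(top, minlength=len(CLASSES)).argmax()]

print(slide_level_call(patch_probs))  # 'dysplasia/cancer' (2 of 4 patches)
```

A model trained directly at the whole-slide-image level learns this aggregation implicitly rather than relying on a hand-set threshold.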
Collapse
Affiliation(s)
- Shan Guleria
- Rush University Medical Center, Chicago, IL, USA
| | - Tilak U Shah
- Hunter Holmes McGuire Veterans Affairs Medical Center, Richmond, VA, USA
- Division of Gastroenterology, Hepatology and Nutrition, Virginia Commonwealth University, Richmond, VA, USA
| | - J Vincent Pulido
- Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
- Department of Systems & Information Engineering, University of Virginia, Charlottesville, VA, USA
| | - Matthew Fasullo
- Hunter Holmes McGuire Veterans Affairs Medical Center, Richmond, VA, USA
- Division of Gastroenterology, Hepatology and Nutrition, Virginia Commonwealth University, Richmond, VA, USA
| | - Lubaina Ehsan
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Pediatrics, University of Virginia School of Medicine, Charlottesville, VA, USA
| | - Robert Lippman
- Hunter Holmes McGuire Veterans Affairs Medical Center, Richmond, VA, USA
| | - Rasoul Sali
- Department of Systems & Information Engineering, University of Virginia, Charlottesville, VA, USA
| | - Pritesh Mutha
- Hunter Holmes McGuire Veterans Affairs Medical Center, Richmond, VA, USA
- Division of Gastroenterology, Hepatology and Nutrition, Virginia Commonwealth University, Richmond, VA, USA
| | - Lin Cheng
- Rush University Medical Center, Chicago, IL, USA
| | - Donald E Brown
- Department of Systems & Information Engineering, University of Virginia, Charlottesville, VA, USA
| | - Sana Syed
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Pediatrics, University of Virginia School of Medicine, Charlottesville, VA, USA.
| |
Collapse
|
30
|
Breath analysis based early gastric cancer classification from deep stacked sparse autoencoder neural network. Sci Rep 2021; 11:4014. [PMID: 33597551 PMCID: PMC7889910 DOI: 10.1038/s41598-021-83184-2] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Accepted: 01/29/2021] [Indexed: 01/16/2023] Open
Abstract
Deep learning is an emerging tool that is regularly used for disease diagnosis in the medical field, and a new research direction has developed for the detection of early-stage gastric cancer. Computer-aided diagnosis (CAD) systems reduce the mortality rate due to their effectiveness. In this study, we proposed a new feature-extraction method using a stacked sparse autoencoder to extract discriminative features from the unlabeled breath-sample data. A Softmax classifier was then integrated into the proposed feature-extraction method to classify gastric cancer from the breath samples. Specifically, we identified fifty peaks in each spectrum to distinguish early gastric cancer (EGC), advanced gastric cancer (AGC), and healthy persons. This CAD system reduces the distance between input and output by learning the features and preserves the structure of the input dataset of breath samples. After completion of unsupervised training, the autoencoders were cascaded with the Softmax classifier to develop a deep stacked sparse autoencoder neural network. Finally, the developed network was fine-tuned with labeled training data to make the model more reliable and repeatable. The proposed deep stacked sparse autoencoder architecture exhibits excellent results, with an overall accuracy of 98.7% for advanced gastric cancer classification and 97.3% for early gastric cancer detection using breath analysis. Moreover, the developed model produces excellent recall, precision, and F-score values, making it suitable for clinical application.
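A stacked sparse autoencoder is built by greedily training layers like the one sketched below, which minimizes reconstruction error plus a KL-divergence sparsity penalty on the mean hidden activation. This is a generic NumPy illustration, not the authors' network; the 50-dimensional input stands in for the fifty spectral peaks:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SparseAutoencoderLayer:
    # One layer of a stacked sparse autoencoder: sigmoid encoder, linear
    # decoder, squared reconstruction loss + KL sparsity penalty.
    def __init__(self, n_in, n_hidden, rho=0.05, beta=0.1, lr=0.1):
        self.W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.standard_normal((n_hidden, n_in)) * 0.1
        self.b2 = np.zeros(n_in)
        self.rho, self.beta, self.lr = rho, beta, lr

    def step(self, X):
        n = X.shape[0]
        h = sigmoid(X @ self.W1 + self.b1)          # encode
        out = h @ self.W2 + self.b2                 # linear decode
        err = out - X
        loss = 0.5 * np.mean(np.sum(err ** 2, axis=1))
        rho_hat = h.mean(axis=0)                    # mean hidden activation
        # Backprop; the sparsity gradient is added at the hidden layer.
        d_out = err / n
        sparse_grad = self.beta * (-self.rho / rho_hat
                                   + (1 - self.rho) / (1 - rho_hat)) / n
        d_h = (d_out @ self.W2.T + sparse_grad) * h * (1 - h)
        self.W2 -= self.lr * h.T @ d_out
        self.b2 -= self.lr * d_out.sum(axis=0)
        self.W1 -= self.lr * X.T @ d_h
        self.b1 -= self.lr * d_h.sum(axis=0)
        return loss

X = rng.random((200, 50))       # stand-in for 50 peaks per breath spectrum
layer = SparseAutoencoderLayer(50, 16)
losses = [layer.step(X) for _ in range(200)]
print(losses[0] > losses[-1])   # reconstruction error drops over training
```

Stacking then means feeding each trained layer's hidden codes to the next layer, with a Softmax head and supervised fine-tuning at the end, as the abstract describes.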
Collapse
|
31
|
Valous NA, Moraleda RR, Jäger D, Zörnig I, Halama N. Interrogating the microenvironmental landscape of tumors with computational image analysis approaches. Semin Immunol 2020; 48:101411. [PMID: 33168423 DOI: 10.1016/j.smim.2020.101411] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Revised: 08/13/2020] [Accepted: 09/04/2020] [Indexed: 02/07/2023]
Abstract
The tumor microenvironment is an interacting heterogeneous collection of cancer cells, resident as well as infiltrating host cells, secreted factors, and extracellular matrix proteins. With the growing importance of immunotherapies, it has become crucial to be able to characterize the composition and the functional orientation of the microenvironment. The development of novel computational image analysis methodologies may enable the robust quantification and localization of immune and related biomarker-expressing cells within the microenvironment. The aim of the review is to concisely highlight a selection of current and significant contributions pertinent to methodological advances coupled with biomedical or translational applications. A further aim is to concisely present computational advances that, to our knowledge, have currently very limited use for the assessment of the microenvironment but have the potential to enhance image analysis pipelines; on this basis, an example is shown for the detection and segmentation of cells of the microenvironment using a published pipeline and a public dataset. Finally, a general proposal is presented on the conceptual design of automation-optimized computational image analysis workflows in the biomedical and clinical domain.
Collapse
Affiliation(s)
- Nektarios A Valous
- Applied Tumor Immunity Clinical Cooperation Unit, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany.
| | - Rodrigo Rojas Moraleda
- Applied Tumor Immunity Clinical Cooperation Unit, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany.
| | - Dirk Jäger
- Applied Tumor Immunity Clinical Cooperation Unit, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany; Department of Medical Oncology, National Center for Tumor Diseases (NCT), Heidelberg University Hospital (UKHD), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
| | - Inka Zörnig
- Department of Medical Oncology, National Center for Tumor Diseases (NCT), Heidelberg University Hospital (UKHD), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
| | - Niels Halama
- Department of Medical Oncology, National Center for Tumor Diseases (NCT), Heidelberg University Hospital (UKHD), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany; Division of Translational Immunotherapy, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany.
| |
Collapse
|
32
|
Liu S, Zhao L, Wang X, Xin Q, Zhao J, Guttery DS, Zhang YD. Deep Spatio-Temporal Representation and Ensemble Classification for Attention Deficit/Hyperactivity Disorder. IEEE Trans Neural Syst Rehabil Eng 2020; 29:1-10. [PMID: 32833639 DOI: 10.1109/tnsre.2020.3019063] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Attention deficit/hyperactivity disorder (ADHD) is a complex, widespread and heterogeneous neurodevelopmental disease. Traditional diagnosis of ADHD relies on long-term analysis by professional doctors of complex information such as clinical data (electroencephalograms, etc.), patient behavior and psychological tests. In recent years, functional magnetic resonance imaging (fMRI) has developed rapidly and is widely employed in the study of brain cognition due to its non-invasive, radiation-free characteristics. We propose an algorithm based on a convolutional denoising autoencoder (CDAE) and adaptive boosting decision trees (AdaDT) to improve ADHD classification results. First, combining the advantages of convolutional neural networks (CNNs) and the denoising autoencoder (DAE), we developed a convolutional denoising autoencoder to extract the spatial features of fMRI data, obtaining spatial features sorted by time. AdaDT was then used to classify the features extracted by the CDAE. Finally, we validated the algorithm on the ADHD-200 test dataset. The experimental results show that our method offers improved classification compared with state-of-the-art methods in terms of average accuracy, both at each individual site and across all sites; meanwhile, our algorithm maintains a balance between specificity and sensitivity.
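The AdaDT stage boosts decision trees over the autoencoder features. A toy sketch of adaptive boosting with depth-1 trees (decision stumps), not the authors' exact ensemble; the features and labels below are synthetic stand-ins for CDAE outputs:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=20):
    # Classic AdaBoost: reweight samples each round, fit the best
    # weighted decision stump (one feature, one threshold, one sign).
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)       # stump weight
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)              # upweight mistakes
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def predict(stumps, X):
    agg = sum(a * s * np.where(X[:, j] >= t, 1, -1)
              for a, j, t, s in stumps)
    return np.where(agg >= 0, 1, -1)

rng = np.random.default_rng(0)
X = rng.random((80, 4))                        # stand-in for CDAE features
y = np.where(X[:, 0] + X[:, 1] > 1.0, 1, -1)   # hypothetical labels
model = train_adaboost(X, y)
acc = (predict(model, X) == y).mean()
print(round(acc, 2))
```

Real AdaDT would boost full decision trees rather than stumps, but the reweight-and-combine loop is the same.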
Collapse
|
33
|
Shu X, Zhang L, Wang Z, Lv Q, Yi Z. Deep Neural Networks With Region-Based Pooling Structures for Mammographic Image Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2246-2255. [PMID: 31985411 DOI: 10.1109/tmi.2020.2968397] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Breast cancer is one of the most frequently diagnosed solid cancers, and mammography is the most commonly used screening technology for detecting it. Traditional machine learning methods for mammographic image classification or segmentation using manual features require a great quantity of manual segmentation annotation data to train the model and test the results. However, manual labeling is expensive, time-consuming, and laborious, and it greatly increases the cost of system construction. To reduce this cost and the workload of radiologists, an end-to-end full-image mammogram classification method based on deep neural networks is proposed for classifier building, one that can be trained without bounding-box or mask ground-truth labels. The only label required is the classification of the mammographic image, which can be collected relatively easily from diagnostic reports. Because breast lesions usually occupy only a fraction of the area visualized in a mammographic image, we propose region-based pooling structures for convolutional neural networks (CNNs) in place of common pooling methods: the image is divided into regions, and the few regions with the highest probability of malignancy are selected as the representation of the whole mammographic image. The proposed pooling structures can be applied to most CNN-based models and may greatly improve their performance on mammographic image data with the same input. Experimental results on the publicly available INbreast and CBIS datasets indicate that the proposed pooling structures perform satisfactorily compared with previous state-of-the-art mammographic image classifiers and a detection algorithm that uses segmentation annotations.
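The pooling idea (divide the score map into regions, keep only the few highest-scoring ones as the image representation) can be sketched as follows. This is a simplified NumPy illustration with an arbitrary region size and k, not the paper's exact layer:

```python
import numpy as np

def region_topk_pool(score_map, region=(4, 4), k=3):
    # Split a per-pixel malignancy score map into non-overlapping regions,
    # score each region by its mean, and keep only the k highest-scoring
    # region scores as the whole-image representation.
    H, W = score_map.shape
    rh, rw = region
    blocks = score_map[:H - H % rh, :W - W % rw]
    region_scores = blocks.reshape(H // rh, rh, W // rw, rw).mean(axis=(1, 3))
    return np.sort(region_scores.ravel())[::-1][:k]

rng = np.random.default_rng(0)
smap = rng.random((16, 16))
smap[2:6, 2:6] += 2.0          # a small high-scoring "lesion" area
feats = region_topk_pool(smap, region=(4, 4), k=3)
print(feats.shape)             # (3,): a fixed-size vector regardless of H, W
```

Because only the top regions contribute, the image-level representation is dominated by the suspected lesion rather than diluted by the large background area, which is the motivation the abstract gives.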
Collapse
|
34
|
|