51. Coutinho MG, Câmara GB, Barbosa RDM, Fernandes MA. SARS-CoV-2 virus classification based on stacked sparse autoencoder. Comput Struct Biotechnol J 2022; 21:284-298. PMID: 36530948; PMCID: PMC9742810; DOI: 10.1016/j.csbj.2022.12.007.
Abstract
Since December 2019, the world has been intensely affected by the COVID-19 pandemic, caused by SARS-CoV-2. When a novel virus is identified, early elucidation of the taxonomic classification and origin of its genomic sequence is essential for strategic planning, containment, and treatment. Deep learning techniques have been successfully used in many viral classification problems associated with viral infection diagnosis, metagenomics, phylogenetics, and analysis. Motivated by this, the authors propose an efficient viral genome classifier for SARS-CoV-2 based on a deep neural network built from a stacked sparse autoencoder (SSAE). To obtain the best model performance, they use image representations of the complete genome sequences as the SSAE input, relying on a dataset based on a k-mer image representation. Four experiments were performed to provide different levels of taxonomic classification of SARS-CoV-2. The SSAE achieved strong results in all experiments, with classification accuracy between 92% and 100% on the validation set and between 98.9% and 100% when SARS-CoV-2 samples were used as the test set. SARS-CoV-2 samples were not used during training, only during subsequent testing, in which the model inferred the correct classification in the vast majority of cases, indicating that the model can be adapted to classify other emerging viruses. Overall, the results demonstrate the applicability of this deep learning technique to genome classification problems.
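The sketch below illustrates the general stacked-sparse-autoencoder idea on flattened k-mer images; the layer sizes, sparsity weight, and four-class output are assumptions for illustration, not the authors' published architecture.

```python
# Minimal stacked sparse autoencoder sketch (not the authors' exact model).
# Assumes flattened k-mer image vectors of length 4096 and 4 target classes.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        h = self.enc(x)
        return self.dec(h), h

def sparse_ae_loss(x, x_hat, h, sparsity_weight=1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparse codes.
    return nn.functional.mse_loss(x_hat, x) + sparsity_weight * h.abs().mean()

# Stack: train AE1 on inputs, AE2 on AE1 codes (training loops omitted),
# then fine-tune the encoder layers with a softmax classifier on top.
ae1, ae2 = SparseAE(4096, 512), SparseAE(512, 64)
classifier = nn.Sequential(ae1.enc, ae2.enc, nn.Linear(64, 4))

x = torch.rand(8, 4096)                      # a batch of flattened k-mer images
x_hat, h = ae1(x)
print(sparse_ae_loss(x, x_hat, h).item())    # pretraining objective for AE1
print(classifier(x).shape)                   # torch.Size([8, 4]) class scores
```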
Affiliation(s)
- Maria G.F. Coutinho: Laboratory of Machine Learning and Intelligent Instrumentation, IMD/nPITI, Federal University of Rio Grande do Norte, Natal, Brazil
- Gabriel B.M. Câmara: Laboratory of Machine Learning and Intelligent Instrumentation, IMD/nPITI, Federal University of Rio Grande do Norte, Natal, Brazil
- Raquel de M. Barbosa: Department of Pharmacy and Pharmaceutical Technology, University of Granada, 18071 Granada, Spain
- Marcelo A.C. Fernandes: Laboratory of Machine Learning and Intelligent Instrumentation, IMD/nPITI, Federal University of Rio Grande do Norte, Natal, Brazil; Department of Computer and Automation Engineering, Federal University of Rio Grande do Norte, Natal, Brazil
52. Alzoubi I, Bao G, Zheng Y, Wang X, Graeber MB. Artificial intelligence techniques for neuropathological diagnostics and research. Neuropathology 2022. PMID: 36443935; DOI: 10.1111/neup.12880.
Abstract
Artificial intelligence (AI) research began in theoretical neurophysiology, and the resulting classical paper on the McCulloch-Pitts mathematical neuron was written in a psychiatry department almost 80 years ago. However, the application of AI in digital neuropathology is still in its infancy. Rapid progress is now being made, which prompted this article. Human brain diseases represent distinct system states that fall outside the normal spectrum. Many differ not only in functional but also in structural terms, and the morphology of abnormal nervous tissue forms the traditional basis of neuropathological disease classifications. However, only a few countries have the medical specialty of neuropathology, and, given the sheer number of newly developed histological tools that can be applied to the study of brain diseases, a tremendous shortage of qualified hands and eyes at the microscope is obvious. Similarly, in neuroanatomy, human observers no longer have the capacity to process the vast amounts of connectomics data. Therefore, it is reasonable to assume that advances in AI technology and, especially, whole-slide image (WSI) analysis will greatly aid neuropathological practice. In this paper, we discuss machine learning (ML) techniques that are important for understanding WSI analysis, such as traditional ML and deep learning, introduce a recently developed neuropathological AI termed PathoFusion, and present thoughts on some of the challenges that must be overcome before the full potential of AI in digital neuropathology can be realized.
Affiliation(s)
- Islam Alzoubi: School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
- Guoqing Bao: School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
- Yuqi Zheng: Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
- Xiuying Wang: School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
- Manuel B. Graeber: Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
53. Tavolara TE, Gurcan MN, Niazi MKK. Contrastive Multiple Instance Learning: An Unsupervised Framework for Learning Slide-Level Representations of Whole Slide Histopathology Images without Labels. Cancers (Basel) 2022; 14:5778. PMID: 36497258; PMCID: PMC9738801; DOI: 10.3390/cancers14235778.
Abstract
Recent methods in computational pathology have trended toward semi- and weakly supervised approaches requiring only slide-level labels. Yet even slide-level labels may be absent or irrelevant to the application of interest, such as in clinical trials. Hence, we present a fully unsupervised method to learn meaningful, compact representations of whole slide images (WSIs). Our method first trains a tile-wise encoder using SimCLR; subsets of tile-wise embeddings are then extracted and fused via an attention-based multiple-instance learning framework to yield slide-level representations. A contrastive loss attracts embeddings built from the same slide and repels embeddings built from different slides, producing slide-level representations through self-supervision. We applied our method to two tasks, (1) non-small cell lung cancer (NSCLC) subtyping as a classification prototype and (2) breast cancer proliferation scoring (TUPAC16) as a regression prototype, and achieved an AUC of 0.8641 ± 0.0115 and a correlation (R2) of 0.5740 ± 0.0970, respectively. Ablation experiments demonstrate that the resulting unsupervised slide-level feature space can be fine-tuned with small datasets for both tasks. Overall, our method approaches computational pathology in a novel manner, in which meaningful features can be learned from whole-slide images without slide-level annotations. The proposed method stands to benefit computational pathology, as it theoretically enables researchers to exploit completely unlabeled whole-slide images.
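The attention-based multiple-instance pooling step can be sketched as follows; the 512-dimensional tile embeddings, attention width, and the cosine-similarity check are illustrative assumptions, not the authors' released code.

```python
# Sketch of attention-based MIL pooling over tile embeddings (assumed 512-d,
# e.g. from a SimCLR-pretrained tile encoder); not the authors' exact code.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, in_dim=512, attn_dim=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(in_dim, attn_dim), nn.Tanh(),
                                  nn.Linear(attn_dim, 1))

    def forward(self, tiles):                 # tiles: (num_tiles, in_dim)
        scores = self.attn(tiles)             # (num_tiles, 1)
        weights = torch.softmax(scores, dim=0)
        return (weights * tiles).sum(dim=0)   # slide-level embedding (in_dim,)

pool = AttentionMIL()
bag_a = pool(torch.randn(200, 512))           # one subset of tiles from a slide
bag_b = pool(torch.randn(200, 512))           # another subset of the same slide
# A contrastive loss would then pull bag_a and bag_b together while pushing
# apart embeddings built from different slides.
cos = nn.functional.cosine_similarity(bag_a, bag_b, dim=0)
print(cos.item())
```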
Affiliation(s)
- Thomas E. Tavolara: Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC 27101, USA
54. Chen Y, Wang J, Wang C, Liu M, Zou Q. Deep learning models for disease-associated circRNA prediction: a review. Brief Bioinform 2022; 23:bbac364. PMID: 36130259; DOI: 10.1093/bib/bbac364.
Abstract
Emerging evidence indicates that circular RNAs (circRNAs) can provide new insights and potential therapeutic targets for disease diagnosis and treatment. However, traditional biological experiments are expensive and time-consuming. Recently, deep learning, with its more powerful capacity for representation learning, has become a promising technology for predicting disease-associated circRNAs. In this review, we introduce the most popular circRNA-related databases and summarize three types of deep learning-based circRNA-disease association prediction methods: feature-generation-based, type-discrimination-based, and hybrid methods. We further evaluate seven representative models on a benchmark with ground truth for both balanced and imbalanced classification tasks. In addition, we discuss the advantages and limitations of each type of method and highlight suggested applications for future research.
Affiliation(s)
- Yaojia Chen: College of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang, China; Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China
- Jiacheng Wang: Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China
- Chuyu Wang: Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Mingxin Liu: College of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang, China
- Quan Zou: University of Electronic Science and Technology of China, China
55. Mao Y, Chen C, Wang Z, Cheng D, You P, Huang X, Zhang B, Zhao F. Generative adversarial networks with adaptive normalization for synthesizing T2-weighted magnetic resonance images from diffusion-weighted images. Front Neurosci 2022; 16:1058487. PMID: 36452330; PMCID: PMC9704724; DOI: 10.3389/fnins.2022.1058487.
Abstract
Brain imaging technology has recently drawn growing attention in the medical field, with MRI playing a vital role in the clinical diagnosis and lesion analysis of brain diseases. Different MR sequences provide more comprehensive information and help doctors make accurate clinical diagnoses, but acquiring them is particularly costly. Many image-to-image synthesis methods in the medical field are based on supervised learning and therefore require labeled datasets, which are often difficult to obtain. Therefore, we propose an unsupervised generative adversarial network with adaptive normalization (AN-GAN) for synthesizing T2-weighted MR images from rapidly scanned diffusion-weighted imaging (DWI) MR images. In contrast to existing methods, deep semantic information is extracted from the high-frequency content of the original sequence images and added to the feature maps in the deconvolution layers as a modality mask vector. This image fusion operation yields better feature maps and guides the training of the GAN. Furthermore, to better preserve semantic information against common normalization layers, we introduce AN, a conditional normalization layer that modulates the activations using the fused feature map. Experimental results show that our synthesized T2 images have better perceptual quality and detail than those of other state-of-the-art methods.
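The conditional ("adaptive") normalization idea, modulating normalized activations with scale and shift maps predicted from a fused feature map, can be sketched as below; the channel counts and the interpolation step are assumptions, and the paper's AN layer may differ in detail.

```python
# General sketch of a conditional ("adaptive") normalization layer that modulates
# normalized activations with scale/shift maps predicted from a fused feature map.
import torch
import torch.nn as nn

class AdaptiveNorm(nn.Module):
    def __init__(self, num_features, cond_channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        self.to_gamma = nn.Conv2d(cond_channels, num_features, 3, padding=1)
        self.to_beta = nn.Conv2d(cond_channels, num_features, 3, padding=1)

    def forward(self, x, cond):
        # cond: fused feature map carrying semantic information, resized to x.
        cond = nn.functional.interpolate(cond, size=x.shape[-2:], mode="nearest")
        return self.norm(x) * (1 + self.to_gamma(cond)) + self.to_beta(cond)

layer = AdaptiveNorm(num_features=64, cond_channels=8)
x = torch.randn(1, 64, 32, 32)       # decoder activations
cond = torch.randn(1, 8, 64, 64)     # fused high-frequency feature map
print(layer(x, cond).shape)          # torch.Size([1, 64, 32, 32])
```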
Affiliation(s)
- Yanyan Mao: College of Oceanography and Space Informatics, China University of Petroleum, Qingdao, China; School of Computer Science and Technology, Shandong Business and Technology University, Yantai, China
- Chao Chen: School of Computer Science and Technology, Shandong Business and Technology University, Yantai, China
- Zhenjie Wang: College of Oceanography and Space Informatics, China University of Petroleum, Qingdao, China
- Dapeng Cheng: School of Computer Science and Technology, Shandong Business and Technology University, Yantai, China; Shandong Co-Innovation Center of Future Intelligent Computing, Yantai, China
- Panlu You: School of Computer Science and Technology, Shandong Business and Technology University, Yantai, China
- Xingdan Huang: School of Statistics, Shandong Business and Technology University, Yantai, China
- Baosheng Zhang: School of Computer Science and Technology, Shandong Business and Technology University, Yantai, China
- Feng Zhao: School of Computer Science and Technology, Shandong Business and Technology University, Yantai, China; Shandong Co-Innovation Center of Future Intelligent Computing, Yantai, China
56. Yang M, Jiao L, Liu F, Hou B, Yang S, Jian M. DPFL-Nets: Deep Pyramid Feature Learning Networks for Multiscale Change Detection. IEEE Trans Neural Netw Learn Syst 2022; 33:6402-6416. PMID: 34029198; DOI: 10.1109/tnnls.2021.3079627.
Abstract
Due to the complementary properties of different types of sensors, change detection between heterogeneous images is receiving increasing attention from researchers. However, change detection cannot be handled by directly comparing two heterogeneous images, since they exhibit different appearances and statistics. In this article, we propose a deep pyramid feature learning network (DPFL-Net) for change detection, especially between heterogeneous images. DPFL-Net can learn a series of hierarchical features in an unsupervised fashion, containing both spatial details and multiscale contextual information. After being transformed into the same space at each scale successively, the learned pyramid features from the two input images make unchanged pixels closely matched and changed ones dissimilar. We further propose fusion blocks to aggregate multiscale difference images (DIs), generating an enhanced DI with strong separability. Based on the enhanced DI, unchanged areas are predicted and used to train DPFL-Net in the next iteration. Pyramid features and unchanged areas are updated alternately, leading to an unsupervised change detection method. In the feature transformation process, local consistency is introduced to constrain the learned pyramid features, modeling the correlations between neighboring pixels and reducing false alarms. Experimental results demonstrate that the proposed approach achieves superior or at least comparable results to existing state-of-the-art change detection methods in both homogeneous and heterogeneous cases.
57. Chattopadhyay S, Dey A, Singh PK, Oliva D, Cuevas E, Sarkar R. MTRRE-Net: A deep learning model for detection of breast cancer from histopathological images. Comput Biol Med 2022; 150:106155. PMID: 36240595; DOI: 10.1016/j.compbiomed.2022.106155.
Abstract
Histopathological image classification has become one of the most challenging tasks for researchers due to the fine-grained variability of the disease. However, the rapid development of deep learning-based models such as convolutional neural networks (CNNs) has drawn much attention to the classification of complex biomedical images. In this work, we propose a novel end-to-end deep learning model, named the Multi-scale Dual Residual Recurrent Network (MTRRE-Net), for breast cancer classification from histopathological images. The model combines a dual residual block with a recurrent network to overcome the vanishing gradient problem even when the network is significantly deep. The proposed model was evaluated on a publicly available standard dataset, BreaKHis, and achieved impressive accuracy, outperforming state-of-the-art models on all the images considered at various magnification levels.
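For readers unfamiliar with why residual connections counter vanishing gradients, a generic residual block is sketched below; this is a textbook illustration, not the MTRRE-Net dual residual recurrent block itself.

```python
# Generic residual block: the identity skip path lets gradients bypass the
# convolutions, easing vanishing gradients in deep networks.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))   # skip connection carries the gradient

block = ResidualBlock(32)
print(block(torch.randn(2, 32, 64, 64)).shape)
```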
Affiliation(s)
- Soham Chattopadhyay: Department of Electrical Engineering, Jadavpur University, 188, Raja S.C. Mallick Road, Kolkata 700032, West Bengal, India
- Arijit Dey: Department of Computer Science and Engineering, Maulana Abul Kalam Azad University of Technology, Kolkata 700064, West Bengal, India
- Pawan Kumar Singh: Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata 700106, West Bengal, India
- Diego Oliva: División de Tecnologías para la Integración Ciber-Humana, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, 44430, Guadalajara, Jal, Mexico
- Erik Cuevas: División de Tecnologías para la Integración Ciber-Humana, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, 44430, Guadalajara, Jal, Mexico
- Ram Sarkar: Department of Computer Science and Engineering, Jadavpur University, 188, Raja S.C. Mallick Road, Kolkata 700032, West Bengal, India
58. Bouguettaya A, Zarzour H, Kechida A, Taberkit AM. Vehicle Detection From UAV Imagery With Deep Learning: A Review. IEEE Trans Neural Netw Learn Syst 2022; 33:6047-6067. PMID: 34029200; DOI: 10.1109/tnnls.2021.3080276.
Abstract
Vehicle detection from unmanned aerial vehicle (UAV) imagery is one of the most important tasks in a large number of computer vision-based applications. This crucial task needs to be performed with high accuracy and speed. However, it is very challenging due to many characteristics related to aerial images and the hardware used, such as different vehicle sizes, orientations, types, and densities, limited datasets, and inference speed. In recent years, many classical and deep learning-based methods have been proposed in the literature to address these problems. Hand-engineered and shallow learning-based techniques suffer from poor accuracy and poor generalization to complex cases, whereas deep learning-based vehicle detection algorithms achieve better results owing to their powerful learning ability. In this article, we provide a review of vehicle detection from UAV imagery using deep learning techniques. We start by presenting the different types of deep learning architectures, such as convolutional neural networks, recurrent neural networks, autoencoders, and generative adversarial networks, and their contribution to improving vehicle detection. We then investigate the different vehicle detection methods and datasets, along with the encountered challenges and suggested solutions. Finally, we summarize and compare the techniques used to improve vehicle detection from UAV-based images, which could be a useful aid for researchers and developers selecting the most adequate method for their needs.
59. Din SU, Kumar J, Shao J, Mawuli CB, Ndiaye WD. Learning High-Dimensional Evolving Data Streams With Limited Labels. IEEE Trans Cybern 2022; 52:11373-11384. PMID: 34033560; DOI: 10.1109/tcyb.2021.3070420.
Abstract
In the context of streaming data, learning algorithms often need to confront several unique challenges, such as concept drift, label scarcity, and high dimensionality. Several concept drift-aware data stream learning algorithms have been proposed to tackle these issues over the past decades. However, most existing algorithms use a supervised learning framework and require all true class labels to update their models. Unfortunately, in the streaming environment, requiring all labels is infeasible and unrealistic in many real-world applications; learning data streams with minimal labels is a more practical scenario. Considering both the curse of dimensionality and label scarcity, in this article we present a new semisupervised learning technique for streaming data. To address the curse of dimensionality, we employ a denoising autoencoder to transform the high-dimensional feature space into a reduced, compact, and more informative feature representation. Furthermore, we use a cluster-and-label technique to reduce dependency on true class labels. We employ a synchronization-based dynamic clustering technique to summarize the streaming data into a set of dynamic microclusters that are further used for classification. In addition, we employ a disagreement-based learning method to cope with concept drift. Extensive experiments on many real-world datasets demonstrate the superior performance of the proposed method compared to several state-of-the-art methods.
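The dimensionality-reduction step can be sketched with a small denoising autoencoder as below; the layer sizes and the Gaussian corruption level are assumptions, and the clustering and drift-handling components are omitted.

```python
# Sketch of the dimensionality-reduction step: a denoising autoencoder maps
# high-dimensional stream instances to a compact code that downstream
# micro-clustering and classification can work with.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, in_dim=1000, code_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, code_dim))
        self.dec = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x, noise_std=0.1):
        x_noisy = x + noise_std * torch.randn_like(x)   # corrupt the input
        code = self.enc(x_noisy)
        return self.dec(code), code                     # reconstruct the clean input

dae = DenoisingAE()
x = torch.rand(16, 1000)
x_hat, code = dae(x)
loss = nn.functional.mse_loss(x_hat, x)                 # denoising objective
print(code.shape, loss.item())                          # compact codes feed the clusterer
```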
60. Zhang J, Wang W, Guo D, Bai B, Bo T, Fan S. Antidiabetic Effect of Millet Bran Polysaccharides Partially Mediated via Changes in Gut Microbiome. Foods 2022; 11:3406. PMID: 36360018; PMCID: PMC9654906; DOI: 10.3390/foods11213406.
Abstract
Diabetes is a metabolic disease associated with changes in the intestinal flora. In this study, the regulatory effect of millet bran on the intestinal microbiota in a model of type 2 diabetes (T2DM) was investigated in an effort to develop new approaches to prevent and treat diabetes and its complications. The effect of purified millet bran polysaccharide (MBP) at three intragastric doses (400 mg/kg, 200 mg/kg, and 100 mg/kg), combined with a high-fat diet, was determined in a streptozotocin (STZ)-induced model of T2DM. Body weight, fasting blood glucose, and other biophysiological parameters were monitored, and changes in the gut microbiota were analyzed via high-throughput sequencing to establish the effect of MBP on the intestinal flora. The results showed that MBP alleviated symptoms of high-fat diet-induced T2DM, with the high dose producing stronger hypoglycemic effects than the low and medium doses. During gavage, fasting blood glucose (FBG) levels in the MBP group were significantly reduced (p < 0.05), and glucose tolerance was significantly improved (p < 0.05). MBP also significantly increased the activities of CAT, SOD, and GSH-Px. Inflammatory changes in liver cells and islet cells in the MBP group were alleviated, and the anti-inflammatory effect was partially correlated with the MBP dose. After 4 weeks of treatment, the blood lipid indices of the MBP group were significantly improved compared with those of the DM group (p < 0.05). Treatment with MBP (400 mg/kg) increased the levels of beneficial bacteria and decreased harmful bacteria in the intestinal tract, altering the intestinal microbial community and exerting an antidiabetic effect in T2DM by modulating the gut microbiota. The findings suggest that MBP is a potential pharmaceutical supplement for preventing and treating diabetes.
Affiliation(s)
- Jinhua Zhang: College of Life Sciences, Shanxi University, Taiyuan 030006, China; Shanxi Key Laboratory of Research and Utilization of Characteristic Plant Resources, Shanxi University, Taiyuan 030006, China
- Wenjing Wang: College of Life Sciences, Shanxi University, Taiyuan 030006, China
- Dingyi Guo: College of Life Sciences, Shanxi University, Taiyuan 030006, China
- Baoqing Bai: College of Life Sciences, Shanxi University, Taiyuan 030006, China; Shanxi Key Laboratory of Research and Utilization of Characteristic Plant Resources, Shanxi University, Taiyuan 030006, China
- Tao Bo: College of Life Sciences, Shanxi University, Taiyuan 030006, China; Shanxi Key Laboratory of Research and Utilization of Characteristic Plant Resources, Shanxi University, Taiyuan 030006, China
- Sanhong Fan (corresponding author): College of Life Sciences, Shanxi University, Taiyuan 030006, China; Shanxi Key Laboratory of Research and Utilization of Characteristic Plant Resources, Shanxi University, Taiyuan 030006, China
61. Zhou W, Deng Z, Liu Y, Shen H, Deng H, Xiao H. Global Research Trends of Artificial Intelligence on Histopathological Images: A 20-Year Bibliometric Analysis. Int J Environ Res Public Health 2022; 19:11597. PMID: 36141871; PMCID: PMC9517580; DOI: 10.3390/ijerph191811597.
Abstract
Cancer has become a major threat to global health care. With the development of computer science, artificial intelligence (AI) has been widely applied in histopathological images (HI) analysis. This study analyzed the publications of AI in HI from 2001 to 2021 by bibliometrics, exploring the research status and the potential popular directions in the future. A total of 2844 publications from the Web of Science Core Collection were included in the bibliometric analysis. The country/region, institution, author, journal, keyword, and references were analyzed by using VOSviewer and CiteSpace. The results showed that the number of publications has grown rapidly in the last five years. The USA is the most productive and influential country with 937 publications and 23,010 citations, and most of the authors and institutions with higher numbers of publications and citations are from the USA. Keyword analysis showed that breast cancer, prostate cancer, colorectal cancer, and lung cancer are the tumor types of greatest concern. Co-citation analysis showed that classification and nucleus segmentation are the main research directions of AI-based HI studies. Transfer learning and self-supervised learning in HI is on the rise. This study performed the first bibliometric analysis of AI in HI from multiple indicators, providing insights for researchers to identify key cancer types and understand the research trends of AI application in HI.
Affiliation(s)
- Wentong Zhou: Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Ziheng Deng: Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Yong Liu: Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Hui Shen: Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, School of Medicine, Tulane University School, New Orleans, LA 70112, USA
- Hongwen Deng: Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, School of Medicine, Tulane University School, New Orleans, LA 70112, USA
- Hongmei Xiao: Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
62. NDG-CAM: Nuclei Detection in Histopathology Images with Semantic Segmentation Networks and Grad-CAM. Bioengineering (Basel) 2022; 9:475. PMID: 36135021; PMCID: PMC9495364; DOI: 10.3390/bioengineering9090475.
Abstract
Nuclei identification is a fundamental task in many areas of biomedical image analysis related to computational pathology applications. Nowadays, deep learning is the primary approach by which to segment the nuclei, but accuracy is closely linked to the amount of histological ground truth data for training. In addition, it is known that most of the hematoxylin and eosin (H&E)-stained microscopy nuclei images contain complex and irregular visual characteristics. Moreover, conventional semantic segmentation architectures grounded on convolutional neural networks (CNNs) are unable to recognize distinct overlapping and clustered nuclei. To overcome these problems, we present an innovative method based on gradient-weighted class activation mapping (Grad-CAM) saliency maps for image segmentation. The proposed solution is comprised of two steps. The first is the semantic segmentation obtained by the use of a CNN; then, the detection step is based on the calculation of local maxima of the Grad-CAM analysis evaluated on the nucleus class, allowing us to determine the positions of the nuclei centroids. This approach, which we denote as NDG-CAM, has performance in line with state-of-the-art methods, especially in isolating the different nuclei instances, and can be generalized for different organs and tissues. Experimental results demonstrated a precision of 0.833, recall of 0.815 and a Dice coefficient of 0.824 on the publicly available validation set. When used in combined mode with instance segmentation architectures such as Mask R-CNN, the method manages to surpass state-of-the-art approaches, with precision of 0.838, recall of 0.934 and a Dice coefficient of 0.884. Furthermore, performance on the external, locally collected validation set, with a Dice coefficient of 0.914 for the combined model, shows the generalization capability of the implemented pipeline, which has the ability to detect nuclei not only related to tumor or normal epithelium but also to other cytotypes.
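The detection step, taking local maxima of a nucleus-class saliency map as centroids, can be sketched as follows; the synthetic blob map stands in for a real Grad-CAM output, and the distance and threshold values are assumptions.

```python
# Sketch of the detection step only: given a nucleus-class saliency map (e.g. from
# Grad-CAM on the segmentation CNN), nuclei centroids are taken as its local maxima.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max

rng = np.random.default_rng(0)
saliency = np.zeros((256, 256), dtype=float)
rows, cols = rng.integers(10, 246, 20), rng.integers(10, 246, 20)
saliency[rows, cols] = 1.0
saliency = gaussian_filter(saliency, sigma=4)        # blob-like responses, as in a CAM

# Local maxima above a threshold, at least 10 px apart, become centroid detections.
centroids = peak_local_max(saliency, min_distance=10,
                           threshold_abs=0.2 * saliency.max())
print(centroids.shape)                               # (num_detected_nuclei, 2) row/col coords
```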
63. Shen Z, Hu J, Wu H, Chen Z, Wu W, Lin J, Xu Z, Kong J, Lin T. Global research trends and foci of artificial intelligence-based tumor pathology: a scientometric study. J Transl Med 2022; 20:409. PMID: 36068536; PMCID: PMC9450455; DOI: 10.1186/s12967-022-03615-0.
Abstract
BACKGROUND With the development of digital pathology and the renewal of deep learning algorithms, artificial intelligence (AI) is widely applied in tumor pathology. Previous research has demonstrated that AI-based tumor pathology may help to solve the challenges faced by traditional pathology. This technology has attracted the attention of scholars in many fields, and a large number of articles have been published. This study summarizes the knowledge structure of AI-based tumor pathology through bibliometric analysis and discusses potential research trends and foci. METHODS Publications related to AI-based tumor pathology from 1999 to 2021 were selected from the Web of Science Core Collection. VOSviewer and CiteSpace were used to perform and visualize co-authorship, co-citation, and co-occurrence analyses of countries, institutions, authors, references, and keywords in this field. RESULTS A total of 2753 papers were included. The number of papers on AI-based tumor pathology has increased continuously since 1999. The United States made the largest contribution in this field in terms of publications (1138, 41.34%), H-index (85), and total citations (35,539). We identified Harvard Medical School and Madabhushi Anant as the most productive institution and author, respectively, while Jemal Ahmedin was the most co-cited author. Scientific Reports was the most prominent journal, and Lecture Notes in Computer Science had the highest total link strength. According to the analysis of references and keywords, "breast cancer histopathology", "convolutional neural network", and "histopathological image" were identified as the major future research foci. CONCLUSIONS AI-based tumor pathology is in a stage of vigorous development and has a bright prospect. International and transboundary cooperation among countries and institutions should be strengthened in the future. It is foreseeable that more research foci will lie in the interpretability of deep learning-based models and the development of multi-modal fusion models.
Affiliation(s)
- Zefeng Shen: Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Jintao Hu: Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Haiyang Wu: Graduate School of Tianjin Medical University, No. 22 Qixiangtai Road, Tianjin, 300070, China
- Zeshi Chen: Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Weixia Wu: Zhujiang Hospital, Southern Medical University, 253 Gongye Road M, Guangzhou, 510282, China
- Junyi Lin: Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Zixin Xu: Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Jianqiu Kong: Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou, China
- Tianxin Lin: Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou, China
64. Gong X, Zhang T, Chen CLP, Liu Z. Research Review for Broad Learning System: Algorithms, Theory, and Applications. IEEE Trans Cybern 2022; 52:8922-8950. PMID: 33729975; DOI: 10.1109/tcyb.2021.3061094.
Abstract
In recent years, the broad learning system (BLS) has emerged as an approach poised to revolutionize conventional artificial intelligence methods. It represents a step toward building more efficient and effective machine-learning methods that can be extended to a broader range of research fields. In this survey, we provide a comprehensive overview of the BLS in data mining and neural networks for the first time, summarizing various BLS methods from the perspectives of algorithms, theory, applications, and open research questions. First, we introduce the basic BLS architecture, its universal approximation capability, and its essence from a theoretical perspective. Furthermore, we review the various improvements to BLS based on the current state of theoretical research, which further improve its flexibility, stability, and accuracy under general or specific conditions, including classification, regression, semisupervised, and unsupervised tasks. Due to its remarkable efficiency, impressive generalization performance, and easy extendibility, BLS has been applied in different domains. Next, we illustrate BLS's practical advances in areas such as computer vision, biomedical engineering, control, and natural language processing. Finally, open research problems and promising directions for BLS are pointed out.
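A minimal sketch of the core BLS construction (random mapped-feature nodes, random enhancement nodes, and a closed-form ridge-regression readout) is given below; the node counts and regularizer are assumptions, and sparse-coded feature mappings and incremental updates are omitted.

```python
# Minimal numpy sketch of the broad learning system idea: random mapped-feature
# nodes, random enhancement nodes, and output weights solved by ridge regression.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))                    # toy inputs
Y = np.eye(3)[rng.integers(0, 3, 200)]                # one-hot toy labels

def bls_features(X, n_feature=40, n_enhance=60):
    Wf = rng.standard_normal((X.shape[1], n_feature))
    Z = X @ Wf                                        # mapped feature nodes
    We = rng.standard_normal((n_feature, n_enhance))
    H = np.tanh(Z @ We)                               # enhancement nodes
    return np.hstack([Z, H])

A = bls_features(X)
lam = 1e-2                                            # ridge regularizer
W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)   # output weights
pred = (A @ W).argmax(axis=1)
print("train accuracy:", (pred == Y.argmax(axis=1)).mean())
```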
65. Shmatko A, Ghaffari Laleh N, Gerstung M, Kather JN. Artificial intelligence in histopathology: enhancing cancer research and clinical oncology. Nat Cancer 2022; 3:1026-1038. PMID: 36138135; DOI: 10.1038/s43018-022-00436-4.
Abstract
Artificial intelligence (AI) methods have multiplied our capabilities to extract quantitative information from digital histopathology images. AI is expected to reduce workload for human experts, improve the objectivity and consistency of pathology reports, and have a clinical impact by extracting hidden information from routinely available data. Here, we describe how AI can be used to predict cancer outcome, treatment response, genetic alterations and gene expression from digitized histopathology slides. We summarize the underlying technologies and emerging approaches, noting limitations, including the need for data sharing and standards. Finally, we discuss the broader implications of AI in cancer research and oncology.
Affiliation(s)
- Artem Shmatko: Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany; European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK
- Moritz Gerstung: Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany; European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK
- Jakob Nikolas Kather: Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany; Medical Oncology, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany; Pathology and Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK; Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
66. El-Fiqi H, Wang M, Kasmarik K, Bezerianos A, Tan KC, Abbass HA. Weighted Gate Layer Autoencoders. IEEE Trans Cybern 2022; 52:7242-7253. PMID: 33502995; DOI: 10.1109/tcyb.2021.3049583.
Abstract
A single dataset can hide a significant number of relationships among its features. Learning these relationships simultaneously avoids the time complexity associated with running the learning algorithm for every possible relationship, and affords the learner the ability to recover missing data and substitute erroneous values using the available data. In our previous research, we introduced gate-layer autoencoders (GLAEs), an architecture that enables a single model to approximate multiple relationships simultaneously. A GLAE controls what an autoencoder learns in a time series by switching certain input gates on and off, thus allowing or disallowing data to flow through the network to increase the network's robustness. However, GLAE is limited to binary gates. In this article, we generalize the architecture to weighted gate layer autoencoders (WGLAE) through the addition of a weight layer that updates the error according to which variables are more critical and encourages the network to learn these variables. This weight layer can also be used as an output gate and uses additional control parameters that allow the network to represent different models that learn through gating the inputs. We compare the architecture against similar architectures in the literature and demonstrate that the proposed architecture produces more robust autoencoders with the ability to reconstruct both incomplete synthetic and real data with high accuracy.
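The gating idea can be sketched as below: input gates hide a random subset of variables, and a per-variable weight vector emphasizes the more critical ones in the reconstruction loss. This is a simplified illustration under assumed details, not the exact WGLAE formulation.

```python
# Sketch of a gated autoencoder with a per-variable weighting of the loss.
import torch
import torch.nn as nn

in_dim = 10
ae = nn.Sequential(nn.Linear(in_dim, 6), nn.ReLU(), nn.Linear(6, in_dim))

x = torch.rand(32, in_dim)
gate = (torch.rand(32, in_dim) > 0.3).float()      # 1 = variable visible, 0 = hidden
weights = torch.linspace(0.5, 2.0, in_dim)         # assumed importance of each variable

x_hat = ae(x * gate)                               # reconstruct all variables from gated input
loss = (weights * (x_hat - x) ** 2).mean()         # weighted reconstruction error
print(loss.item())
```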
67. Xie J, Pu X, He J, Qiu Y, Lu C, Gao W, Wang X, Lu H, Shi J, Xu Y, Madabhushi A, Fan X, Chen J, Xu J. Survival prediction on intrahepatic cholangiocarcinoma with histomorphological analysis on the whole slide images. Comput Biol Med 2022; 146:105520. PMID: 35537220; DOI: 10.1016/j.compbiomed.2022.105520.
Abstract
Intrahepatic cholangiocarcinoma (ICC) is a cancer that originates from the liver's secondary ductal epithelium or its branches. Due to the lack of early-stage clinical symptoms and very high mortality, the 5-year postoperative survival rate is only about 35%. A critical step toward improving patients' survival is accurately predicting their survival status and giving appropriate treatment. The tumor microenvironment of ICC is the immediate environment on which tumor cell growth depends; the differentiation of tumor glands, the stroma status, and the tumor-infiltrating lymphocytes in this environment are strictly related to tumor progression, so it is crucial to develop a computerized system for characterizing it. This work aims to develop quantitative histomorphological features that describe the lymphocyte density distribution at the cell level and the different tumor components at the tissue level in H&E-stained whole slide images (WSIs), and to explore whether these features can stratify patients' survival. The study comprised 127 patients diagnosed with ICC after surgery; 78 cases were randomly chosen as the modeling set and the remaining 49 cases formed the testing set. Deep learning-based models were developed for tissue segmentation and lymphocyte detection in the WSIs. A total of 107 features, including different types of graph features on the WSIs, were extracted by exploring the histomorphological patterns of the identified tumor tissue and lymphocytes. The top three discriminative features were chosen with the mRMR algorithm via 5-fold cross-validation to predict patients' survival. The model's performance was evaluated on the independent testing set, achieving an AUC of 0.6818 and a log-rank test p-value of 0.03. A multivariable Cox test controlling for TNM staging, γ-glutamyltransferase, and peritumoral Glisson's sheath invasion showed that our model could independently predict survival risk, with a p-value of 0.048 and an HR (95% confidence interval) of 2.90 (1.01-8.32). These results indicate that the tissue-level composition and the cell-level global arrangement of lymphocytes can distinguish ICC patients' survival risk.
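The survival-modeling step can be sketched with a Cox proportional hazards fit over a few selected features, as below; the feature names and synthetic data are stand-ins for the mRMR-selected histomorphological features, and the sketch assumes the pandas and lifelines packages.

```python
# Sketch of the survival-modeling step: a Cox proportional hazards model over a few
# selected histomorphological features (synthetic stand-ins here).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 127
df = pd.DataFrame({
    "lymphocyte_density": rng.normal(size=n),     # hypothetical selected features
    "stroma_fraction": rng.normal(size=n),
    "graph_feature": rng.normal(size=n),
    "time_months": rng.exponential(24, size=n),   # follow-up time
    "event": rng.integers(0, 2, size=n),          # 1 = death observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()                                # hazard ratios and p-values
risk = cph.predict_partial_hazard(df)              # per-patient risk score
print(risk.head())
```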
Affiliation(s)
- Jiawei Xie: Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Xiaohong Pu: Dept. of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
- Jian He: Dept. of Nuclear Medicine, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
- Yudong Qiu: Dept. of Hepatopancreatobiliary Surgery, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
- Cheng Lu: Dept. of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106, USA
- Wei Gao: Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Xiangxue Wang: Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Haoda Lu: Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Jiong Shi: Dept. of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
- Yuemei Xu: Dept. of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
- Anant Madabhushi: Dept. of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106, USA; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, 44106, USA
- Xiangshan Fan: Dept. of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
- Jun Chen: Dept. of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
- Jun Xu: Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
68. Wang Z, Zhu X, Li A, Wang Y, Meng G, Wang M. Global and local attentional feature alignment for domain adaptive nuclei detection in histopathology images. Artif Intell Med 2022; 132:102341. DOI: 10.1016/j.artmed.2022.102341.
69. Zhao H, Wu H, Wang X. OIAE: Overall Improved Autoencoder with Powerful Image Reconstruction and Discriminative Feature Extraction. Cognit Comput 2022. DOI: 10.1007/s12559-022-10000-y.
70. MITNET: a novel dataset and a two-stage deep learning approach for mitosis recognition in whole slide images of breast cancer tissue. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07441-9.
Abstract
Mitosis assessment of breast cancer has strong prognostic importance and is visually evaluated by pathologists; the inter- and intra-observer variability of this assessment is high. In this paper, a two-stage deep learning approach, named MITNET, is applied to automatically detect nuclei and classify mitoses in whole slide images (WSIs) of breast cancer. Moreover, this paper introduces two new datasets. The first dataset is used to detect nuclei in the WSIs and contains 139,124 annotated nuclei in 1749 patches extracted from 115 WSIs of breast cancer tissue; the second dataset consists of 4908 mitotic and 4908 non-mitotic cell image samples extracted from 214 WSIs and is used for mitosis classification. The created datasets are used to train the MITNET network, which consists of two deep learning architectures, MITNET-det and MITNET-rec, to isolate nuclei and identify mitoses in WSIs. In the MITNET-det architecture, CSPDarknet and a Path Aggregation Network (PANet) are used to extract and fuse features from nucleus images, and a detection strategy based on scaled-YOLOv4 (You Only Look Once) is employed to detect nuclei at three different scales. In the classification part, the detected isolated nucleus images are passed through the proposed MITNET-rec deep learning architecture to identify mitoses in the WSIs. Various deep learning classifiers and the proposed classifier are trained on publicly available mitosis datasets (MIDOG and ATYPIA) and then validated on our created dataset. The results show that deep learning-based classifiers trained on MIDOG and ATYPIA have difficulty recognizing mitoses in our dataset, indicating that the created mitosis dataset has unique features and characteristics. Moreover, the proposed classifier significantly outperforms state-of-the-art classifiers, achieving a 68.7% F1-score on the MIDOG dataset and a 49.0% F1-score on the created mitosis dataset. The experimental results also reveal that the overall MITNET framework detects nuclei in WSIs with high detection rates and recognizes mitotic cells with a high F1-score, improving the accuracy of pathologists' decisions.
71. Liu L, Wang Y, Chang J, Zhang P, Liang G, Zhang H. LLRHNet: Multiple Lesions Segmentation Using Local-Long Range Features. Front Neuroinform 2022; 16:859973. PMID: 35600503; PMCID: PMC9119082; DOI: 10.3389/fninf.2022.859973.
Abstract
Encoder-decoder-based deep convolutional neural networks (CNNs) have brought great improvements to medical image segmentation tasks. However, due to the inherent locality of convolution, CNNs generally have limitations in obtaining features across layers and long-range features from medical images. In this study, we develop a local-long range hybrid features network (LLRHNet), which inherits the merits of an iterative aggregation mechanism and transformer technology, as a medical image segmentation model. LLRHNet adopts an encoder-decoder architecture as the backbone, which iteratively aggregates projection and up-sampling to fuse local low- and high-resolution features across isolated layers. The transformer uses a multi-head self-attention mechanism to extract long-range features from the tokenized image patches, and these features are fused with the local-range features extracted by the down-sampling operations in the backbone network. The hybrid features assist the cascaded up-sampling operations in locating the target tissues. LLRHNet is evaluated on two multiple-lesion medical image datasets: a public liver-related segmentation dataset (3DIRCADb) and an in-house stroke and white matter hyperintensity (SWMH) segmentation dataset. Experimental results show that LLRHNet achieves state-of-the-art performance on both datasets.
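The hybrid local/long-range idea can be sketched by fusing convolutional features with multi-head self-attention over tokenized feature positions, as below; the shapes and the concatenation-based fusion are assumptions rather than the exact LLRHNet design.

```python
# Sketch of the hybrid idea: local features from a convolutional path fused with
# long-range features from multi-head self-attention over tokenized positions.
import torch
import torch.nn as nn

conv_path = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

x = torch.randn(1, 1, 64, 64)                 # a single-channel image slice
local = conv_path(x)                          # (1, 64, 16, 16) local features
tokens = local.flatten(2).transpose(1, 2)     # (1, 256, 64) one token per position
long_range, _ = attn(tokens, tokens, tokens)  # self-attention mixes all positions
long_range = long_range.transpose(1, 2).reshape_as(local)
fused = torch.cat([local, long_range], dim=1) # hybrid features for the decoder
print(fused.shape)                            # torch.Size([1, 128, 16, 16])
```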
Affiliation(s)
- Liangliang Liu: College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
- Ying Wang: College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
- Jing Chang: College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
- Pei Zhang: College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
- Gongbo Liang: Department of Computer Science, Eastern Kentucky University, Richmond, KY, United States
- Hui Zhang: College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
72. Hou R, Peng Y, Grimm LJ, Ren Y, Mazurowski MA, Marks JR, King LM, Maley CC, Hwang ES, Lo JY. Anomaly Detection of Calcifications in Mammography Based on 11,000 Negative Cases. IEEE Trans Biomed Eng 2022; 69:1639-1650. PMID: 34788216; DOI: 10.1109/tbme.2021.3126281.
Abstract
In mammography, calcifications are one of the most common signs of breast cancer. Detection of such lesions is an active area of research for computer-aided diagnosis and machine learning algorithms. Due to limited numbers of positive cases, many supervised detection models suffer from overfitting and fail to generalize. We present a one-class, semi-supervised framework using a deep convolutional autoencoder trained with over 50,000 images from 11,000 negative-only cases. Since the model learned from only normal breast parenchymal features, calcifications produced large signals when comparing the residuals between input and reconstruction output images. As a key advancement, a structural dissimilarity index was used to suppress non-structural noises. Our selected model achieved pixel-based AUROC of 0.959 and AUPRC of 0.676 during validation, where calcification masks were defined in a semi-automated process. Although not trained directly on any cancers, detection performance of calcification lesions on 1,883 testing images (645 malignant and 1238 negative) achieved 75% sensitivity at 2.5 false positives per image. Performance plateaued early when trained with only a fraction of the cases, and greater model complexity or a larger dataset did not improve performance. This study demonstrates the potential of this anomaly detection approach to detect mammographic calcifications in a semi-supervised manner with efficient use of a small number of labeled images, and may facilitate new clinical applications such as computer-aided triage and quality improvement.
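The scoring step, combining the reconstruction residual with a structural-dissimilarity map to suppress non-structural noise, can be sketched as follows; the synthetic patch and the multiplicative weighting are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of the scoring step: compare a mammogram patch with its autoencoder
# reconstruction and use a structural-dissimilarity map (from SSIM) to suppress
# non-structural noise in the residual. The arrays below are synthetic stand-ins.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
patch = rng.random((128, 128)).astype(np.float32)          # input image patch
recon = patch + 0.05 * rng.standard_normal((128, 128)).astype(np.float32)
recon = np.clip(recon, 0, 1)

residual = np.abs(patch - recon)                           # raw reconstruction error
_, ssim_map = structural_similarity(patch, recon, data_range=1.0, full=True)
dssim_map = (1.0 - ssim_map) / 2.0                         # structural dissimilarity
anomaly_map = residual * dssim_map                         # noise-suppressed anomaly signal
print(float(anomaly_map.max()))
```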
73
|
Quality-relevant feature extraction method based on teacher-student uncertainty autoencoder and its application to soft sensors. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2021.12.131] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
74. CellNet: A Lightweight Model towards Accurate LOC-Based High-Speed Cell Detection. Electronics 2022. DOI: 10.3390/electronics11091407.
Abstract
Label-free cell separation and sorting in microfluidic systems, an essential technique for modern cancer diagnosis, has made high-throughput single-cell analysis a reality. However, designing an efficient cell detection model is challenging. Traditional cell detection methods are hampered by occlusion boundaries and weak textures, resulting in poor performance, while modern detection models based on convolutional neural networks (CNNs) achieve promising results at the cost of a large number of parameters and floating point operations (FLOPs). In this work, we present a lightweight yet powerful cell detection model named CellNet, which includes two efficient modules: CellConv blocks and the h-swish nonlinearity. CellConv is proposed as an effective feature extractor to replace computationally expensive convolutional layers, whereas the h-swish function is introduced to increase the nonlinearity of the compact model. To boost the prediction and localization ability of the detection model, we re-designed its multi-task loss function. In comparison with other efficient object detection methods, our approach achieved a state-of-the-art 98.70% mean average precision (mAP) on our custom sea urchin embryo dataset with only 0.08 M parameters and 0.10 B FLOPs, reducing the model size by 39.5× and the computational cost by 4.6×. We deployed CellNet on different platforms to verify its efficiency: the inference speed on a graphics processing unit (GPU) was 500.0 fps compared with 87.7 fps on a CPU. Additionally, CellNet is 769.5 times smaller and 420 fps faster than YOLOv3. Extensive experimental results demonstrate that CellNet achieves an excellent efficiency/accuracy trade-off on resource-constrained platforms.
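The h-swish nonlinearity mentioned above has a standard closed form, x·ReLU6(x + 3)/6, sketched below; this shows the activation itself, not the CellConv blocks, which are specific to the paper.

```python
# The h-swish nonlinearity in its standard form x * ReLU6(x + 3) / 6,
# a cheap approximation of swish used in lightweight detection models.
import torch
import torch.nn as nn

class HSwish(nn.Module):
    def forward(self, x):
        return x * nn.functional.relu6(x + 3.0) / 6.0

act = HSwish()
x = torch.linspace(-4, 4, 9)
print(act(x))                      # smooth, near-zero for large negative inputs
```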
75. Hu H, Qiao S, Hao Y, Bai Y, Cheng R, Zhang W, Zhang G. Breast cancer histopathological images recognition based on two-stage nuclei segmentation strategy. PLoS One 2022; 17:e0266973. PMID: 35482728; PMCID: PMC9049370; DOI: 10.1371/journal.pone.0266973.
Abstract
Pathological examination is the gold standard for breast cancer diagnosis. The recognition of histopathological images of breast cancer has attracted a lot of attention in the field of medical image processing. In this paper, based on the Bioimaging 2015 dataset, a two-stage nuclei segmentation strategy, namely watershed segmentation of histopathological images after stain separation, is proposed to enable carcinoma versus non-carcinoma recognition. Firstly, stain separation is performed on the breast cancer histopathological images. Then the marker-based watershed segmentation method is applied to the stain-separated images to achieve nuclei segmentation. Next, the completed local binary pattern is used to extract texture features from the nuclei regions (images after nuclei segmentation), and color features are extracted from the stain-separated images using the color auto-correlation method. Finally, the two kinds of features are fused, and a support vector machine is used for carcinoma and non-carcinoma recognition. The experimental results show that the two-stage nuclei segmentation strategy proposed in this paper has significant advantages in the recognition of carcinoma and non-carcinoma on breast cancer histopathological images, reaching a recognition accuracy of 91.67%. The proposed method is also applied to the ICIAR 2018 dataset to realize the automatic recognition of carcinoma and non-carcinoma, where the recognition accuracy reaches 92.50%.
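The two-stage idea (stain separation, then marker-based watershed) can be sketched with standard scikit-image building blocks, as below. The input is a synthetic RGB array, and the thresholds and distances are illustrative, not the paper's settings.

```python
# Minimal sketch of a two-stage pipeline in the spirit of the paper: H&E stain
# separation followed by marker-based watershed on the haematoxylin channel.
# A synthetic RGB patch stands in for a real histopathology image.
import numpy as np
from scipy import ndimage as ndi
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

rng = np.random.default_rng(1)
rgb = rng.random((256, 256, 3))                 # placeholder for an H&E image patch

hed = rgb2hed(rgb)                              # stage 1: colour deconvolution
haem = hed[..., 0]                              # haematoxylin channel highlights nuclei

binary = haem > threshold_otsu(haem)            # candidate-nuclei foreground mask
distance = ndi.distance_transform_edt(binary)   # distance map for marker extraction
coords = peak_local_max(distance, min_distance=5, labels=binary)
markers = np.zeros_like(haem, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

labels = watershed(-distance, markers, mask=binary)   # stage 2: marker-based watershed
print("segmented objects:", labels.max())
```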
Collapse
Affiliation(s)
- Hongping Hu
- School of Science, North University of China, Taiyuan, China
| | - Shichang Qiao
- School of Science, North University of China, Taiyuan, China
| | - Yan Hao
- School of Information and Communication Engineering, North University of China, Taiyuan, China
| | - Yanping Bai
- School of Science, North University of China, Taiyuan, China
| | - Rong Cheng
- School of Science, North University of China, Taiyuan, China
| | - Wendong Zhang
- School of Instrument and Electronics, State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
| | - Guojun Zhang
- School of Instrument and Electronics, State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
| |
Collapse
|
76
|
Alqahtani A. Application of Artificial Intelligence in Discovery and Development of Anticancer and Antidiabetic Therapeutic Agents. EVIDENCE-BASED COMPLEMENTARY AND ALTERNATIVE MEDICINE : ECAM 2022; 2022:6201067. [PMID: 35509623 PMCID: PMC9060979 DOI: 10.1155/2022/6201067] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Revised: 03/17/2022] [Accepted: 04/05/2022] [Indexed: 11/18/2022]
Abstract
Spectacular developments in molecular and cellular biology have led to important discoveries in cancer research. Cancer is one of the major causes of morbidity and mortality globally, while diabetes is among the most prevalent metabolic disorders. Artificial intelligence (AI) has been considered the engine of the fourth industrial revolution. The major hurdles in drug discovery and development are the time and expenditure required to sustain the drug research pipeline. Large amounts of data can be explored and generated by AI, which can then be converted into useful knowledge. Because of this, the world's largest drug companies have already begun to use AI in their drug development research. In the present era, AI has enormous potential for the rapid discovery and development of new anticancer drugs. Clinical studies, electronic medical records, high-resolution medical imaging, and genomic assessments are just a few of the tools that could aid drug development. Large data sets are available to researchers in the pharmaceutical and medical fields, which can be analyzed by advanced AI systems. This review looked at how computational biology and AI technologies may be utilized in cancer precision drug development by combining knowledge of cancer medicines, drug resistance, and structural biology. This review also provides a realistic assessment of the potential of AI in understanding and managing diabetes.
Collapse
Affiliation(s)
- Amal Alqahtani
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, 31541, Saudi Arabia
- Department of Basic Sciences, Deanship of Preparatory Year and Supporting Studies, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 34212, Saudi Arabia
| |
Collapse
|
77
|
Scarpiniti M, Sarv Ahrabi S, Baccarelli E, Piazzo L, Momenzadeh A. A novel unsupervised approach based on the hidden features of Deep Denoising Autoencoders for COVID-19 disease detection. EXPERT SYSTEMS WITH APPLICATIONS 2022; 192:116366. [PMID: 34937995 PMCID: PMC8675154 DOI: 10.1016/j.eswa.2021.116366] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 10/15/2021] [Accepted: 11/30/2021] [Indexed: 05/02/2023]
Abstract
Chest imaging can represent a powerful tool for detecting the Coronavirus disease 2019 (COVID-19). Among the available technologies, the chest Computed Tomography (CT) scan is an effective approach for reliable and early detection of the disease. However, it can be difficult for human inspection to rapidly identify anomalous areas in CT images belonging to the COVID-19 disease. Hence, suitable automatic algorithms are needed that can quickly and precisely identify the disease, ideally using few labeled input data, because large amounts of CT scans are not usually available for COVID-19. The method proposed in this paper exploits the compact and meaningful hidden representation provided by a Deep Denoising Convolutional Autoencoder (DDCAE). Specifically, the proposed DDCAE, trained on some target CT scans in an unsupervised way, is used to build a robust statistical representation in the form of a target histogram. A suitable statistical distance measures how far this target histogram is from a companion histogram evaluated on an unknown test scan: if this distance is greater than a threshold, the test image is labeled as an anomaly, i.e., the scan belongs to a patient affected by COVID-19. Experimental results and comparisons with other state-of-the-art methods show the effectiveness of the proposed approach, which reaches a top accuracy of 100% and similarly high values for other metrics. In conclusion, by using a statistical representation of the hidden features provided by DDCAEs, the developed architecture is able to differentiate COVID-19 from normal and pneumonia scans with high reliability and at low computational cost.
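The histogram-and-distance mechanism can be illustrated with a small, self-contained sketch in which a fixed random projection stands in for the trained DDCAE encoder and the Jensen-Shannon divergence stands in for the paper's statistical distance; the threshold value is purely illustrative.

```python
# Minimal sketch of the histogram-and-distance idea: hidden features from a set of
# target scans are pooled into a reference histogram; a test scan is flagged as
# anomalous when its histogram lies too far from the reference. The encoder is
# replaced by a fixed random projection so the example runs end to end.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4096, 64))                    # stand-in for the trained DDCAE encoder

def encode(scan: np.ndarray) -> np.ndarray:
    return np.tanh(scan.reshape(-1) @ W)           # compact hidden representation

def hist(features: np.ndarray, bins=np.linspace(-1, 1, 51)) -> np.ndarray:
    h, _ = np.histogram(features, bins=bins, density=True)
    return h / (h.sum() + 1e-12)

def jensen_shannon(p: np.ndarray, q: np.ndarray) -> float:
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / (b + 1e-12) + 1e-12))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

target_scans = [rng.random((64, 64)) for _ in range(20)]          # "normal" CT slices
target_hist = hist(np.concatenate([encode(s) for s in target_scans]))

test_scan = rng.random((64, 64)) + 0.5                            # shifted distribution
distance = jensen_shannon(target_hist, hist(encode(test_scan)))
threshold = 0.05                                                  # tuned on validation data
print("anomalous (possible COVID-19 pattern)?", distance > threshold)
```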
Collapse
Affiliation(s)
- Michele Scarpiniti
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
| | - Sima Sarv Ahrabi
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
| | - Enzo Baccarelli
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
| | - Lorenzo Piazzo
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
| | - Alireza Momenzadeh
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
| |
Collapse
|
78
|
Liang S, Lu H, Zang M, Wang X, Jiao Y, Zhao T, Xu EY, Xu J. Deep SED-Net with interactive learning for multiple testicular cell types segmentation and cell composition analysis in mouse seminiferous tubules. Cytometry A 2022; 101:658-674. [PMID: 35388957 DOI: 10.1002/cyto.a.24556] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 03/05/2022] [Accepted: 04/01/2022] [Indexed: 11/06/2022]
Abstract
The development of mouse spermatozoa is a continuous process from spermatogonia, through spermatocytes and spermatids, to mature sperm. These developing germ cells (spermatogonia, spermatocytes, spermatids), together with supporting Sertoli cells, are all enclosed inside the seminiferous tubules of the testis; their identification is key to testis histology and pathology analysis. Automated segmentation of all these cells is a challenging task because of their dynamic changes at different stages. Accurate segmentation of testicular cells is critical for developing computerized spermatogenesis staging. In this paper, we present a novel segmentation model, SED-Net, which incorporates a Squeeze-and-Excitation (SE) module and a Dense unit. The SE module optimizes and obtains features from different channels, whereas the Dense unit uses fewer parameters to enhance the use of features. A human-in-the-loop strategy, named deep interactive learning, is developed to achieve better segmentation performance while reducing the workload of manual annotation and the time consumed. Across a cohort of 274 seminiferous tubules from Stages VI to VIII, SED-Net achieved a pixel accuracy of 0.930, a mean pixel accuracy of 0.866, a mean intersection over union of 0.710, and a frequency-weighted intersection over union of 0.878 for the four types of testicular cell segmentation. There is no significant difference between manually annotated tubules and SED-Net segmentation results in the cell composition analysis for tubules from Stages VI to VIII. In addition, we performed cell composition analysis on 2346 segmented seminiferous tubule images from 12 segmented testicular section results. The results quantify cells of the various testicular cell types across the 12 stages and reflect the cell variation tendency across these stages during the development of mouse spermatozoa. The method not only enables analysis of cell morphology and staging during the development of mouse spermatozoa but could also potentially be applied to the study of reproductive diseases such as infertility.
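A Squeeze-and-Excitation module of the kind SED-Net incorporates is a well-known building block; a minimal PyTorch version is sketched below with an illustrative reduction ratio, without any claim to match the authors' exact implementation.

```python
# Minimal PyTorch sketch of a Squeeze-and-Excitation (SE) block like the one SED-Net
# plugs into its encoder; channel count and reduction ratio are illustrative.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # excitation: per-channel gates
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight feature channels

feat = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(feat).shape)                       # torch.Size([2, 64, 32, 32])
```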
Collapse
Affiliation(s)
- Shi Liang
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
| | - Haoda Lu
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
| | - Min Zang
- State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing, China
| | - Xiangxue Wang
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
| | - Yiping Jiao
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
| | - Tingting Zhao
- State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing, China
| | - Eugene Yujun Xu
- State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing, China.,Department of Neurology, Center for Reproductive Sciences, Northwestern University Feinberg School of Medicine, IL, USA
| | - Jun Xu
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
| |
Collapse
|
79
|
Unsupervised Segmentation in NSCLC: How to Map the Output of Unsupervised Segmentation to Meaningful Histological Labels by Linear Combination? APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12083718] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Background: Segmentation is, in many Pathomics projects, an initial step. Usually, in supervised settings, well-annotated and large datasets are required. Given the rarity of such datasets, unsupervised learning concepts appear to be a potential solution. Against this background, we tested, on a small dataset of lung cancer tissue microarrays (TMA), whether a model (i) can first be trained in a previously published unsupervised setting and (ii) can secondly be modified and retrained to produce meaningful labels, and (iii) we finally compared this approach to standard segmentation models. Methods: (ad i) First, a convolutional neural network (CNN) segmentation model is trained in an unsupervised fashion, as recently described by Kanezaki et al. (ad ii) Second, the model is modified by adding a remapping block and is retrained on an annotated dataset in a supervised setting. (ad iii) Third, the segmentation results are compared to standard segmentation models trained on the same dataset. Results: (ad i–ii) By adding an additional mapping-block layer and by retraining, models previously trained in an unsupervised manner can produce meaningful labels. (ad iii) The segmentation quality is inferior to that of standard segmentation models trained on the same dataset. Conclusions: For histological images, unsupervised training combined with subsequent supervised training offers no benefit here.
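The remapping-block idea, attaching a small trainable layer that maps unsupervised cluster channels onto meaningful classes, can be sketched as follows; the backbone here is a single untrained convolution used only as a stand-in, and the channel counts are assumptions.

```python
# Minimal sketch of a "remapping block": a 1x1 convolution is appended to a network
# trained without labels (producing K arbitrary cluster channels) and is then trained
# with supervision to map those clusters onto C meaningful tissue classes.
import torch
import torch.nn as nn

k_clusters, n_classes = 32, 4
unsup_backbone = nn.Conv2d(3, k_clusters, kernel_size=3, padding=1)  # stand-in for the unsupervised CNN
remap = nn.Conv2d(k_clusters, n_classes, kernel_size=1)              # trainable remapping block

x = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    clusters = unsup_backbone(x)             # frozen unsupervised features / cluster maps
logits = remap(clusters)                     # supervised fine-tuning acts on this layer only
print(logits.shape)                          # torch.Size([1, 4, 64, 64])
```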
Collapse
|
80
|
Long Z, Jinsong W. A hybrid method of entropy and SSAE-SVM based DDoS detection and mitigation mechanism in SDN. Comput Secur 2022. [DOI: 10.1016/j.cose.2022.102604] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
81
|
Chen RJ, Lu MY, Wang J, Williamson DFK, Rodig SJ, Lindeman NI, Mahmood F. Pathomic Fusion: An Integrated Framework for Fusing Histopathology and Genomic Features for Cancer Diagnosis and Prognosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:757-770. [PMID: 32881682 DOI: 10.1109/tmi.2020.3021387] [Citation(s) in RCA: 183] [Impact Index Per Article: 61.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Cancer diagnosis, prognosis, and therapeutic response predictions are based on morphological information from histology slides and molecular profiles from genomic data. However, most deep learning-based objective outcome prediction and grading paradigms are based on histology or genomics alone and do not make use of the complementary information in an intuitive manner. In this work, we propose Pathomic Fusion, an interpretable strategy for end-to-end multimodal fusion of histology image and genomic (mutations, CNV, RNA-Seq) features for survival outcome prediction. Our approach models pairwise feature interactions across modalities by taking the Kronecker product of unimodal feature representations, and controls the expressiveness of each representation via a gating-based attention mechanism. Following supervised learning, we are able to interpret and saliently localize features across each modality, and understand how feature importance shifts when conditioning on multimodal input. We validate our approach using glioma and clear cell renal cell carcinoma datasets from The Cancer Genome Atlas (TCGA), which contains paired whole-slide image, genotype, and transcriptome data with ground truth survival and histologic grade labels. In a 15-fold cross-validation, our results demonstrate that the proposed multimodal fusion paradigm improves prognostic determinations over ground truth grading and molecular subtyping, as well as over unimodal deep networks trained on histology and genomic data alone. The proposed method establishes insight and theory on how to train deep networks on multimodal biomedical data in an intuitive manner, which will be useful for other problems in medicine that seek to combine heterogeneous data streams for understanding diseases and predicting response and resistance to treatment. Code and trained models are made available at: https://github.com/mahmoodlab/PathomicFusion.
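A gated Kronecker-product fusion step in the spirit of this approach can be sketched in a few lines of PyTorch; the gating layers, feature dimensions and output head below are illustrative simplifications, not the released PathomicFusion code.

```python
# Minimal sketch of gated Kronecker-product fusion: each unimodal vector is gated,
# a constant 1 is appended so unimodal terms survive the outer product, and the
# flattened Kronecker product feeds a small predictor. Dimensions are illustrative.
import torch
import torch.nn as nn

class GatedKroneckerFusion(nn.Module):
    def __init__(self, dim_h: int, dim_g: int, out_dim: int = 1):
        super().__init__()
        self.gate_h = nn.Sequential(nn.Linear(dim_h + dim_g, dim_h), nn.Sigmoid())
        self.gate_g = nn.Sequential(nn.Linear(dim_h + dim_g, dim_g), nn.Sigmoid())
        self.head = nn.Linear((dim_h + 1) * (dim_g + 1), out_dim)

    def forward(self, h: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([h, g], dim=1)
        h = h * self.gate_h(joint)                   # attention-style gating per modality
        g = g * self.gate_g(joint)
        ones = torch.ones(h.size(0), 1, device=h.device)
        h1, g1 = torch.cat([h, ones], 1), torch.cat([g, ones], 1)
        fused = torch.bmm(h1.unsqueeze(2), g1.unsqueeze(1)).flatten(1)  # outer product
        return self.head(fused)

hist_feat, gen_feat = torch.randn(4, 32), torch.randn(4, 16)
print(GatedKroneckerFusion(32, 16)(hist_feat, gen_feat).shape)   # torch.Size([4, 1])
```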
Collapse
|
82
|
SVseg: Stacked Sparse Autoencoder-Based Patch Classification Modeling for Vertebrae Segmentation. MATHEMATICS 2022. [DOI: 10.3390/math10050796] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Precise vertebrae segmentation is essential for the image-related analysis of spine pathologies such as vertebral compression fractures and other abnormalities, as well as for clinical diagnostic treatment and surgical planning. An automatic and objective system for vertebra segmentation is required, but its development is likely to run into difficulties such as low segmentation accuracy and the requirement of prior knowledge or human intervention. Recently, vertebral segmentation methods have focused on deep learning-based techniques. To mitigate the challenges involved, we propose deep learning primitives and stacked Sparse autoencoder-based patch classification modeling for Vertebrae segmentation (SVseg) from Computed Tomography (CT) images. After data preprocessing, we extract overlapping patches from CT images as input to train the model. The stacked sparse autoencoder learns high-level features from unlabeled image patches in an unsupervised way. Furthermore, we employ supervised learning to refine the feature representation to improve the discriminability of learned features. These high-level features are fed into a logistic regression classifier to fine-tune the model. A sigmoid classifier is added to the network to discriminate the vertebrae patches from non-vertebrae patches by selecting the class with the highest probabilities. We validated our proposed SVseg model on the publicly available MICCAI Computational Spine Imaging (CSI) dataset. After configuration optimization, our proposed SVseg model achieved impressive performance, with 87.39% in Dice Similarity Coefficient (DSC), 77.60% in Jaccard Similarity Coefficient (JSC), 91.53% in precision (PRE), and 90.88% in sensitivity (SEN). The experimental results demonstrated the method’s efficiency and significant potential for diagnosing and treating clinical spinal diseases.
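One layer of a stacked sparse autoencoder of the kind used here can be sketched as follows; the KL sparsity target, layer widths and patch size are assumptions chosen only to keep the example small and runnable.

```python
# Minimal sketch of one sparse-autoencoder layer of the kind stacked in SVseg: a
# KL-divergence sparsity penalty pushes the mean hidden activation towards a small
# target rho. Patch size and layer widths are illustrative only.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, in_dim=1024, hid_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        h = self.enc(x)
        return self.dec(h), h

def sparsity_penalty(h: torch.Tensor, rho: float = 0.05) -> torch.Tensor:
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)    # mean activation per hidden unit
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

model = SparseAE()
patches = torch.rand(64, 1024)                       # 32x32 CT patches, flattened
recon, hidden = model(patches)
loss = nn.functional.mse_loss(recon, patches) + 1e-3 * sparsity_penalty(hidden)
loss.backward()                                      # one unsupervised pre-training step
print(float(loss))
```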
Collapse
|
83
|
Jiang H, Li S, Li H. Parallel ‘same’ and ‘valid’ convolutional block and input-collaboration strategy for histopathological image classification. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108417] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
84
|
Ciga O, Xu T, Martel AL. Self supervised contrastive learning for digital histopathology. MACHINE LEARNING WITH APPLICATIONS 2022. [DOI: 10.1016/j.mlwa.2021.100198] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022] Open
|
85
|
Wu Y, Cheng M, Huang S, Pei Z, Zuo Y, Liu J, Yang K, Zhu Q, Zhang J, Hong H, Zhang D, Huang K, Cheng L, Shao W. Recent Advances of Deep Learning for Computational Histopathology: Principles and Applications. Cancers (Basel) 2022; 14:1199. [PMID: 35267505 PMCID: PMC8909166 DOI: 10.3390/cancers14051199] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 02/16/2022] [Accepted: 02/22/2022] [Indexed: 01/10/2023] Open
Abstract
With the remarkable success of digital histopathology, we have witnessed a rapid expansion of the use of computational methods for the analysis of digital pathology and biopsy image patches. However, the unprecedented scale and heterogeneous patterns of histopathological images have presented critical computational bottlenecks requiring new computational histopathology tools. Recently, deep learning technology has been extremely successful in the field of computer vision, which has also boosted considerable interest in digital pathology applications. Deep learning and its extensions have opened several avenues to tackle many challenging histopathological image analysis problems including color normalization, image segmentation, and the diagnosis/prognosis of human cancers. In this paper, we provide a comprehensive up-to-date review of the deep learning methods for digital H&E-stained pathology image analysis. Specifically, we first describe recent literature that uses deep learning for color normalization, which is one essential research direction for H&E-stained histopathological image analysis. Following the discussion of color normalization, we review applications of deep learning methods for various H&E-stained image analysis tasks such as nuclei and tissue segmentation. We also summarize several key clinical studies that use deep learning for the diagnosis and prognosis of human cancers from H&E-stained histopathological images. Finally, online resources and open research problems on pathological image analysis are also provided in this review for the convenience of researchers who are interested in this exciting field.
Collapse
Affiliation(s)
- Yawen Wu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Michael Cheng
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; (M.C.); (J.Z.); (K.H.)
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
| | - Shuo Huang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Zongxiang Pei
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Yingli Zuo
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Jianxin Liu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Kai Yang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Qi Zhu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Jie Zhang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; (M.C.); (J.Z.); (K.H.)
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
| | - Honghai Hong
- Department of Clinical Laboratory, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510006, China;
| | - Daoqiang Zhang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Kun Huang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; (M.C.); (J.Z.); (K.H.)
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
| | - Liang Cheng
- Departments of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
| | - Wei Shao
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| |
Collapse
|
86
|
Convolutional Blur Attention Network for Cell Nuclei Segmentation. SENSORS 2022; 22:s22041586. [PMID: 35214488 PMCID: PMC8878074 DOI: 10.3390/s22041586] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Revised: 02/11/2022] [Accepted: 02/14/2022] [Indexed: 01/27/2023]
Abstract
Accurately segmented nuclei are important, not only for cancer classification, but also for predicting treatment effectiveness and other biomedical applications. However, the diversity of cell types, various external factors, and illumination conditions make nucleus segmentation a challenging task. In this work, we present a new deep learning-based method for cell nucleus segmentation. The proposed convolutional blur attention (CBA) network consists of downsampling and upsampling procedures. A blur attention module and a blur pooling operation are used to retain feature salience and avoid noise generation in the downsampling procedure. A pyramid blur pooling (PBP) module is proposed to capture multi-scale information in the upsampling procedure. The proposed method has been compared with several prior segmentation models, namely U-Net, ENet, SegNet, LinkNet, and Mask RCNN, on the 2018 Data Science Bowl (DSB) challenge dataset and the multi-organ nucleus segmentation (MoNuSeg) dataset at MICCAI 2018. The Dice similarity coefficient and evaluation metrics such as F1 score, recall, precision, and average Jaccard index (AJI) were used to evaluate the segmentation efficiency of these models. Overall, the proposed method has the best performance, with an AJI of 0.8429 on the DSB dataset and 0.7985 on MoNuSeg.
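Blur pooling, the anti-aliased downsampling operation at the heart of the CBA design, can be sketched with a fixed binomial filter followed by a strided depthwise convolution, as below; kernel size and channel count are illustrative.

```python
# Minimal sketch of blur pooling as used in an anti-aliased downsampling path: a
# fixed low-pass (binomial) filter is applied before striding so that downsampling
# does not alias high-frequency noise. Kernel size and channels are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = torch.outer(k, k)
        kernel = kernel / kernel.sum()
        self.register_buffer("kernel", kernel.expand(channels, 1, 3, 3).clone())
        self.stride = stride
        self.channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")          # keep borders well-defined
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)

feat = torch.randn(1, 32, 64, 64)
print(BlurPool2d(32)(feat).shape)                           # torch.Size([1, 32, 32, 32])
```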
Collapse
|
87
|
Xu X, Ren W. A hybrid model of stacked autoencoder and modified particle swarm optimization for multivariate chaotic time series forecasting. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2021.108321] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
88
|
McAlpine ED, Michelow P, Celik T. The Utility of Unsupervised Machine Learning in Anatomic Pathology. Am J Clin Pathol 2022; 157:5-14. [PMID: 34302331 DOI: 10.1093/ajcp/aqab085] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 04/18/2021] [Indexed: 01/29/2023] Open
Abstract
OBJECTIVES Developing accurate supervised machine learning algorithms is hampered by the lack of representative annotated datasets. Most data in anatomic pathology are unlabeled, and creating large, annotated datasets is a time-consuming and laborious process. Unsupervised learning, which does not require annotated data, has the potential to assist with this challenge. This review aims to introduce the concept of unsupervised learning and illustrate how clustering, generative adversarial networks (GANs), and autoencoders have the potential to address the lack of annotated data in anatomic pathology. METHODS A review of unsupervised learning with examples from the literature was carried out. RESULTS Clustering can be used as part of semisupervised learning, where labels are propagated from a subset of annotated data points to the remaining unlabeled data points in a dataset. GANs may assist by generating large amounts of synthetic data and performing color normalization. Autoencoders allow training of a network on a large, unlabeled dataset and transferring learned representations to a classifier using a smaller, labeled subset (unsupervised pretraining). CONCLUSIONS Unsupervised machine learning techniques such as clustering, GANs, and autoencoders, used individually or in combination, may help address the lack of annotated data in pathology and improve the process of developing supervised learning models.
Collapse
Affiliation(s)
- Ewen D McAlpine
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- National Health Laboratory Service, Johannesburg, South Africa
| | - Pamela Michelow
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- National Health Laboratory Service, Johannesburg, South Africa
| | - Turgay Celik
- School of Electrical and Information Engineering, University of the Witwatersrand, Johannesburg, South Africa
- Wits Institute of Data Science, University of the Witwatersrand, Johannesburg, South Africa
| |
Collapse
|
89
|
S. AL-Malaise AL-Ghamdi A, Ragab M, J. Alsolami F, Choudhry H, Rizqallah Alzahrani I. Intelligent Forensic Investigation Using Optimal Stacked Autoencoder for Critical Industrial Infrastructures. COMPUTERS, MATERIALS & CONTINUA 2022; 72:2275-2289. [DOI: 10.32604/cmc.2022.026226] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/19/2021] [Accepted: 01/24/2022] [Indexed: 10/28/2024]
|
90
|
Improving geometric P-norm-based glioma segmentation through deep convolutional autoencoder encapsulation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103232] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
91
|
Li R, Wu X, Li A, Wang M. OUP accepted manuscript. Bioinformatics 2022; 38:2587-2594. [PMID: 35188177 PMCID: PMC9048674 DOI: 10.1093/bioinformatics/btac113] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Revised: 01/20/2022] [Accepted: 02/17/2022] [Indexed: 12/03/2022] Open
Abstract
Motivation Cancer survival prediction can greatly assist clinicians in planning patient treatments and improving their quality of life. Recent evidence suggests the fusion of multimodal data, such as genomic data and pathological images, is crucial for understanding cancer heterogeneity and enhancing survival prediction. As a powerful multimodal fusion technique, the Kronecker product has shown its superiority in predicting survival. However, this technique introduces a large number of parameters that may lead to high computational cost and a risk of overfitting, thus limiting its applicability and improvement in performance. Another limitation of existing approaches using the Kronecker product is that they only mine relations a single time to learn the multimodal representation and therefore face significant challenges in deeply mining rich information from multimodal data for accurate survival prediction. Results To address the above limitations, we present a novel hierarchical multimodal fusion approach named HFBSurv that employs a factorized bilinear model to fuse genomic and image features step by step. Specifically, with a multiple-fusion strategy, HFBSurv decomposes the fusion problem into different levels, each of which integrates and passes information progressively from the low level to the high level, thus leading to a more specialized fusion procedure and an expressive multimodal representation. In this hierarchical framework, both modality-specific and cross-modality attentional factorized bilinear modules are designed not only to capture and quantify complex relations from multimodal data, but also to dramatically reduce computational complexity. Extensive experiments demonstrate that our method performs an effective hierarchical fusion of multimodal data and achieves consistently better performance than other methods for survival prediction. Availability and implementation HFBSurv is freely available at https://github.com/Liruiqing-ustc/HFBSurv. Supplementary information Supplementary data are available at Bioinformatics online.
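A single factorized bilinear fusion step of the sort HFBSurv stacks hierarchically can be sketched as a low-rank surrogate for the full Kronecker interaction; the projection rank, dropout rate and feature dimensions below are assumptions made for illustration, and the released HFBSurv code should be consulted for the real configuration.

```python
# Minimal sketch of factorized bilinear fusion: instead of a full bilinear
# (Kronecker) interaction, both modalities are projected to a shared k-dimensional
# space and combined by an element-wise product, keeping the parameter count low.
import torch
import torch.nn as nn

class FactorizedBilinear(nn.Module):
    def __init__(self, dim_x: int, dim_y: int, k: int = 64, out_dim: int = 32):
        super().__init__()
        self.proj_x = nn.Linear(dim_x, k)
        self.proj_y = nn.Linear(dim_y, k)
        self.out = nn.Linear(k, out_dim)
        self.drop = nn.Dropout(0.1)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        z = self.proj_x(x) * self.proj_y(y)     # low-rank surrogate for the outer product
        return self.out(self.drop(torch.relu(z)))

genomic, image = torch.randn(8, 80), torch.randn(8, 128)
print(FactorizedBilinear(80, 128)(genomic, image).shape)    # torch.Size([8, 32])
```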
Collapse
Affiliation(s)
- Ruiqing Li
- School of Information Science and Technology, University of Science and Technology of China, Hefei AH230027, China
| | - Xingqi Wu
- School of Information Science and Technology, University of Science and Technology of China, Hefei AH230027, China
| | - Ao Li
- School of Information Science and Technology, University of Science and Technology of China, Hefei AH230027, China
| | | |
Collapse
|
92
|
Liang H, Cheng Z, Zhong H, Qu A, Chen L. A region-based convolutional network for nuclei detection and segmentation in microscopy images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103276] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
93
|
Deep Learning on Histopathology Images for Breast Cancer Classification: A Bibliometric Analysis. Healthcare (Basel) 2021; 10:healthcare10010010. [PMID: 35052174 PMCID: PMC8775465 DOI: 10.3390/healthcare10010010] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 12/07/2021] [Accepted: 12/12/2021] [Indexed: 12/16/2022] Open
Abstract
Medical imaging is gaining significant attention in healthcare, including for breast cancer. Breast cancer is the most common cause of cancer-related death among women worldwide. Currently, histopathology image analysis is the clinical gold standard in cancer diagnosis. However, the manual process of microscopic examination involves laborious work and can be misleading due to human error. Therefore, this study explored the research status and development trends of deep learning for breast cancer image classification using bibliometric analysis. Relevant literature was obtained from the Scopus database between 2014 and 2021. The VOSviewer and Bibliometrix tools were used for analysis through various visualization forms. This study is concerned with the annual publication trends and the co-authorship networks among countries, authors, and scientific journals. The co-occurrence network of the authors’ keywords was analyzed for potential future directions of the field. Authors started to contribute to publications in 2016, and the research domain has maintained its growth rate since. The United States and China have strong research collaboration strengths. Only a few studies use bibliometric analysis in this research area. This study provides a recent review of this fast-growing field to highlight its status and trends using scientific visualization. It is hoped that the findings will assist researchers in identifying and exploring potential emerging areas in the related field.
Collapse
|
94
|
Xie X, Wang X, Liang Y, Yang J, Wu Y, Li L, Sun X, Bing P, He B, Tian G, Shi X. Evaluating Cancer-Related Biomarkers Based on Pathological Images: A Systematic Review. Front Oncol 2021; 11:763527. [PMID: 34900711 PMCID: PMC8660076 DOI: 10.3389/fonc.2021.763527] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Accepted: 10/18/2021] [Indexed: 12/12/2022] Open
Abstract
Many diseases are accompanied by changes in certain biochemical indicators, called biomarkers, in cells or tissues. A variety of biomarkers, including proteins, nucleic acids, antibodies, and peptides, have been identified. Tumor biomarkers have been widely used in cancer risk assessment, early screening, diagnosis, prognosis, treatment, and progression monitoring. For example, the number of circulating tumor cells (CTCs) is a prognostic indicator of breast cancer overall survival, and tumor mutation burden (TMB) can be used to predict the efficacy of immune checkpoint inhibitors. Currently, clinical methods such as polymerase chain reaction (PCR) and next generation sequencing (NGS) are mainly adopted to evaluate these biomarkers, and they are time-consuming and expensive. Pathological image analysis is an essential tool in medical research, disease diagnosis, and treatment, functioning by extracting important physiological and pathological information or knowledge from medical images. Recently, deep learning-based analysis of pathological images and morphology to predict tumor biomarkers has attracted great attention from both the medical image and machine learning communities, as this combination not only reduces the burden on pathologists but also saves costs and time. Therefore, it is necessary to summarize the current process of analyzing pathological images and the key steps and methods used in each stage, including: (1) pre-processing of pathological images, (2) image segmentation, (3) feature extraction, and (4) feature model construction. This will help people choose better and more appropriate medical image processing methods when predicting tumor biomarkers.
Collapse
Affiliation(s)
- Xiaoliang Xie
- Department of Colorectal Surgery, General Hospital of Ningxia Medical University, Yinchuan, China.,College of Clinical Medicine, Ningxia Medical University, Yinchuan, China
| | - Xulin Wang
- Department of Oncology Surgery, Central Hospital of Jia Mu Si City, Jia Mu Si, China
| | - Yuebin Liang
- Geneis Beijing Co., Ltd., Beijing, China.,Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
| | - Jingya Yang
- Geneis Beijing Co., Ltd., Beijing, China.,Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China.,School of Electrical and Information Engineering, Anhui University of Technology, Ma'anshan, China
| | - Yan Wu
- Geneis Beijing Co., Ltd., Beijing, China.,Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
| | - Li Li
- Beijing Shanghe Jiye Biotech Co., Ltd., Bejing, China
| | - Xin Sun
- Department of Medical Affairs, Central Hospital of Jia Mu Si City, Jia Mu Si, China
| | - Pingping Bing
- Academician Workstation, Changsha Medical University, Changsha, China
| | - Binsheng He
- Academician Workstation, Changsha Medical University, Changsha, China
| | - Geng Tian
- Geneis Beijing Co., Ltd., Beijing, China.,Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China.,IBMC-BGI Center, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
| | - Xiaoli Shi
- Geneis Beijing Co., Ltd., Beijing, China.,Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
| |
Collapse
|
95
|
Intelligent Recognition Algorithm-Based Color Doppler Ultrasound in the Treatment of Dangerous Placenta Previa. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:9886521. [PMID: 34880982 PMCID: PMC8648457 DOI: 10.1155/2021/9886521] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Accepted: 09/28/2021] [Indexed: 11/17/2022]
Abstract
The study focused on the clinical diagnostic value of color Doppler ultrasound guided by intelligent recognition algorithms in patients with dangerous placenta previa. A total of 58 patients with placenta previa and placenta accreta admitted to the hospital for treatment were selected as research subjects. Color Doppler ultrasound under the guidance of the intelligent recognition algorithm was compared with two-dimensional ultrasound for specificity, sensitivity, and accuracy. The color Doppler ultrasound results showed that, of the 58 patients, there were 32 cases of complete placenta previa and 26 cases of incomplete placenta previa, which were consistent with the surgical pathology results. It was found that patients with malignant placenta previa and placenta accreta had a thickened placenta, an absent posterior placental space, a myometrium <2 mm, and an increased incidence of cervical enlargement (P < 0.05). In conclusion, the recognition accuracy of color Doppler ultrasound under the guidance of the intelligent recognition algorithm is more than 90%, and it can effectively identify dangerous placenta previa, assisting doctors in the diagnosis and treatment of this condition.
Collapse
|
96
|
Zafar MM, Rauf Z, Sohail A, Khan AR, Obaidullah M, Khan SH, Lee YS, Khan A. Detection of tumour infiltrating lymphocytes in CD3 and CD8 stained histopathological images using a two-phase deep CNN. Photodiagnosis Photodyn Ther 2021; 37:102676. [PMID: 34890783 DOI: 10.1016/j.pdpdt.2021.102676] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 11/10/2021] [Accepted: 12/06/2021] [Indexed: 12/26/2022]
Abstract
BACKGROUND The immuno-score, a prognostic measure for cancer employed in determining tumor grade and type, is generated by counting the number of Tumour-Infiltrating Lymphocytes (TILs) in CD3- and CD8-stained histopathological tissue samples. Significant stain variations and heterogeneity in the spatial distribution and density of lymphocytes make automated counting of TILs a challenging task. METHODS This work addresses the aforementioned challenges by developing a pipeline, "Two-Phase Deep Convolutional Neural Network based Lymphocyte Counter (TDC-LC)", to detect lymphocytes in CD3- and CD8-stained histology images. The proposed pipeline works sequentially, removing hard negative examples (artifacts) in the first phase using a custom CNN, "LSATM-Net", that exploits the idea of a split, asymmetric transform, and merge. In the second phase, instance segmentation is performed to detect lymphocytes and generate a lymphocyte count for the remaining samples. Furthermore, the effectiveness of the proposed pipeline is measured by comparing it with state-of-the-art single- and two-stage detectors. The inference code is available at the GitHub repository https://github.com/m-mohsin-zafar/tdc-lc. RESULTS The empirical evaluation on samples from the LYSTO dataset shows that the proposed LSATM-Net can learn variations in the images and precisely remove the hard negative stain artifacts with an F-score of 0.74. The detection analysis shows that the proposed TDC-LC outperforms the existing models in identifying and counting lymphocytes with high Recall (0.87) and F-score (0.89). Moreover, the commendable performance of the proposed TDC-LC in different organs suggests good generalization. CONCLUSION The promising performance of the proposed pipeline suggests that it can serve as an automated system for detecting and counting lymphocytes from patches of tissue samples, thereby reducing the burden on pathologists.
Collapse
Affiliation(s)
- Muhammad Mohsin Zafar
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; Faculty of Computer Science and Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Topi 23640, District Swabi, Khyber Pakhtunkhwa, Pakistan
| | - Zunaira Rauf
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
| | - Anabia Sohail
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
| | - Abdul Rehman Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; Center for Mathematical Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
| | - Muhammad Obaidullah
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; Center for Mathematical Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
| | - Saddam Hussain Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
| | - Yeon Soo Lee
- Deparment of Biomedical Engineering, College of Medical Sciences, Catholic University of Daegu, South Korea.
| | - Asifullah Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; Deparment of Biomedical Engineering, College of Medical Sciences, Catholic University of Daegu, South Korea; Center for Mathematical Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan.
| |
Collapse
|
97
|
Mridha MF, Hamid MA, Monowar MM, Keya AJ, Ohi AQ, Islam MR, Kim JM. A Comprehensive Survey on Deep-Learning-Based Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:6116. [PMID: 34885225 PMCID: PMC8656730 DOI: 10.3390/cancers13236116] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 11/25/2021] [Accepted: 12/01/2021] [Indexed: 12/11/2022] Open
Abstract
Breast cancer is now the most frequently diagnosed cancer in women, and its incidence is gradually increasing. Encouragingly, there is a good chance of recovery from breast cancer if it is identified and treated at an early stage. Therefore, several researchers have established deep-learning-based automated methods, valued for their efficiency and accuracy, for predicting the growth of cancer cells using medical imaging modalities. To date, only a few review studies on breast cancer diagnosis are available, and they summarize only some of the existing work. Moreover, these studies were unable to address emerging architectures and modalities in breast cancer diagnosis. This review focuses on the evolving architectures of deep learning for breast cancer detection. In what follows, this survey presents existing deep-learning-based architectures, analyzes the strengths and limitations of the existing studies, examines the datasets used, and reviews image pre-processing techniques. Furthermore, a concrete review of diverse imaging modalities, performance metrics and results, challenges, and research directions for future researchers is presented.
Collapse
Affiliation(s)
- Muhammad Firoz Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Md. Abdul Hamid
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
| | - Muhammad Mostafa Monowar
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
| | - Ashfia Jannat Keya
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Abu Quwsar Ohi
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh;
| | - Jong-Myon Kim
- Department of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan 680-749, Korea
| |
Collapse
|
98
|
Rashmi R, Prasad K, Udupa CBK. Breast histopathological image analysis using image processing techniques for diagnostic purposes: A methodological review. J Med Syst 2021; 46:7. [PMID: 34860316 PMCID: PMC8642363 DOI: 10.1007/s10916-021-01786-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Accepted: 10/21/2021] [Indexed: 12/24/2022]
Abstract
Breast cancer in women is the second most common cancer worldwide. Early detection of breast cancer can reduce the risk to human life. Non-invasive techniques such as mammograms and ultrasound imaging are popularly used to detect the tumour. However, histopathological analysis is necessary to determine the malignancy of the tumour, as it analyses the image at the cellular level. Manual analysis of these slides is time consuming, tedious, subjective, and susceptible to human error. Also, at times the interpretation of these images is inconsistent between laboratories. Hence, a Computer-Aided Diagnostic system that can act as a decision support system is the need of the hour. Moreover, recent developments in computational power and memory capacity have led to the application of computer tools and medical image processing techniques to process and analyze breast cancer histopathological images. This review paper summarizes various traditional and deep learning-based methods developed to analyze breast cancer histopathological images. Initially, the characteristics of breast cancer histopathological images are discussed. A detailed discussion of the various potential regions of interest is presented, which is crucial for the development of Computer-Aided Diagnostic systems. We summarize the recent trends and choices made during the selection of medical image processing techniques. Finally, a detailed discussion of the various challenges involved in the analysis of BCHI is presented along with the future scope.
Collapse
Affiliation(s)
- R Rashmi
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
| | - Keerthana Prasad
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
| | | |
Collapse
|
99
|
Shao W, Wang T, Huang Z, Han Z, Zhang J, Huang K. Weakly Supervised Deep Ordinal Cox Model for Survival Prediction From Whole-Slide Pathological Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3739-3747. [PMID: 34264823 DOI: 10.1109/tmi.2021.3097319] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Whole-Slide Histopathology Images (WSIs) are generally considered the gold standard for cancer diagnosis and prognosis. Given the large inter-operator variation among pathologists, there is an imperative need to develop machine learning models based on WSIs for consistently predicting patient prognosis. The existing WSI-based prediction methods do not utilize an ordinal ranking loss to train the prognosis model, and thus cannot model the strong ordinal information among different patients in an efficient way. Another challenge is that a WSI is of large size (e.g., 100,000-by-100,000 pixels) with heterogeneous patterns but is often annotated with only a single WSI-level label, which further complicates the training process. To address these challenges, we consider the ordinal characteristic of the survival process by adding a ranking-based regularization term to the Cox model and propose a weakly supervised deep ordinal Cox model (BDOCOX) for survival prediction from WSIs. Here, we generate large numbers of bags from WSIs, where each bag comprises image patches representing the heterogeneous patterns of the WSI and is assumed to match the WSI-level label for training the proposed model. The effectiveness of the proposed method is well validated by theoretical analysis as well as by the prognosis and patient stratification results on three cancer datasets from The Cancer Genome Atlas (TCGA).
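The combination of a Cox partial-likelihood term with a ranking-based regularizer can be illustrated with the following PyTorch sketch; it omits the bag-level aggregation over WSI patches and uses an illustrative margin and weighting, so it is a conceptual sketch rather than the authors' BDOCOX implementation.

```python
# Minimal sketch of a Cox partial-likelihood loss with an added ranking penalty,
# illustrating the ordinal idea behind ranking-regularized survival models.
import torch

def cox_ranking_loss(risk, time, event, margin=0.1, lam=0.5):
    """risk: predicted risk scores; time: survival times; event: 1 = event observed."""
    order = torch.argsort(time, descending=True)          # longest survivor first
    risk, time, event = risk[order], time[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)          # risk set is a prefix after sorting
    cox = -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

    # ranking term: a patient with an earlier observed event should have a higher risk
    t_i, t_j = time.unsqueeze(1), time.unsqueeze(0)
    r_i, r_j = risk.unsqueeze(1), risk.unsqueeze(0)
    comparable = (t_i < t_j) & (event.unsqueeze(1) == 1)
    pairs = torch.relu(margin - (r_i - r_j))[comparable]
    rank = pairs.mean() if pairs.numel() > 0 else risk.new_zeros(())
    return cox + lam * rank

risk = torch.randn(16, requires_grad=True)
time = torch.rand(16) * 60.0
event = (torch.rand(16) > 0.4).float()
loss = cox_ranking_loss(risk, time, event)
loss.backward()
print(float(loss))
```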
Collapse
|
100
|
Sangeetha K, Prakash S. An Early Breast Cancer Detection System Using Stacked Auto Encoder Deep Neural Network with Particle Swarm Optimization Based Classification Method. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2021. [DOI: 10.1166/jmihi.2021.3886] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
The demand for early detection and diagnosis of breast cancer over the last few decades has opened new research avenues. As stated by the World Health Organization (WHO), a successful treatment plan can be specified for an individual suffering from breast cancer if this non-communicable disease is diagnosed at an early stage. Around the world, mortality can be reduced by early diagnosis and cure of the disease. Digital mammography is the most popular screening method for the early detection of breast cancer and other abnormalities of human breast tissue. Early detection is assisted by periodic clinical check-ups and self-tests, and it significantly enhances the chance of survival. Researchers have investigated deep learning (DL) methods for mammograms (MGs) because of the limitations of traditional computer-aided detection (CAD) systems, the extreme importance of early breast cancer detection, and the high impact of false diagnoses on patients. Hence, there is a need for a noninvasive cancer detection system that is efficient, accurate, fast, and robust. The proposed work has two stages. In the first stage, a Histogram Rehabilitated Local Contrast Enhancement (HRLCE) technique with two processing stages is used for contrast enhancement; it enhances the potential of contrast enhancement while preserving the local details of the image. For cancer classification, Particle Swarm Optimization (PSO) and stacked autoencoders (SAE) are combined with a DNN-based framework, termed the SAE-PSO-DNN model. The SAE-DNN parameters, with two hidden layers, are tuned using PSO, and Limited-memory BFGS (LBFGS) is used as the feature reduction technique. Specificity, sensitivity, normalized root mean square error (NRMSE), and accuracy are used to evaluate the SAE-PSO-DNN model. The experimental results show that the SAE-PSO-DNN model produces around 92% accuracy, which is far better than Convolutional Neural Network (CNN) and Support Vector Machine (SVM) techniques.
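The particle swarm optimization step used to tune the SAE-DNN can be illustrated with a generic PSO loop over a toy fitness function; in the paper the fitness would be a validation error of the SAE-DNN, and all constants below (swarm size, inertia, acceleration factors) are illustrative.

```python
# Minimal particle swarm optimization (PSO) sketch of the kind used to tune model
# hyperparameters; the "fitness" here is a toy quadratic instead of a validation
# loss, purely to keep the example self-contained.
import numpy as np

def fitness(params: np.ndarray) -> float:
    return float(np.sum((params - 0.3) ** 2))        # stand-in for validation error

rng = np.random.default_rng(0)
n_particles, dim, iters = 20, 4, 50
w, c1, c2 = 0.7, 1.5, 1.5                            # inertia and acceleration factors

pos = rng.uniform(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best parameters found:", np.round(gbest, 3))  # should approach 0.3 in each dimension
```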
Collapse
Affiliation(s)
- K. Sangeetha
- Computer Science Engineering Department, SNS College of Technology, Coimbatore 641035, India
| | - S. Prakash
- Computer Science Engineering Department, Sri Shakthi Institute of Engineering and Technology, Coimbatore 641062, India
| |
Collapse
|