101. Kumar A, Vishwakarma A, Bajaj V. CRCCN-Net: Automated framework for classification of colorectal tissue using histopathological images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104172]

102. Ragab M. Optimal Deep Transfer Learning Based Colorectal Cancer Detection and Classification Model. Comput Mater Contin 2023; 74:3279-3295. [DOI: 10.32604/cmc.2023.031037]

103. Srivastava R. Applications of artificial intelligence multiomics in precision oncology. J Cancer Res Clin Oncol 2023; 149:503-510. [PMID: 35796775] [DOI: 10.1007/s00432-022-04161-4]
Abstract
Cancer is the second leading cause of death worldwide and depends on oncogenic mutations and non-mutated genes for its survival. Recent advancements in next-generation sequencing (NGS) have transformed the health care sector with big data and machine learning (ML) approaches. NGS data can detect abnormalities and mutations in oncogenes. These multi-omics analyses are used for risk prediction, early diagnosis, accurate prognosis, and identification of biomarkers in cancer patients. The availability of these cancer data and their analysis may provide insights into the biology of the disease, which can be used for the personalized treatment of cancer patients. Bioinformatics tools are delivering this promise by managing, integrating, and analyzing these complex datasets. The clinical outcomes of cancer patients are improved by various innovative methods applied particularly to diagnosis and therapeutics. ML-based artificial intelligence (AI) applications are solving these issues to a great extent. AI techniques are used to update patients on a personalized basis about their treatment procedures, progress, recovery, therapies used, and dietary and lifestyle changes, along with survival summaries of previously recovered cancer patients. In this way, patients become more aware of their disease and the entire clinical treatment procedure. Although the technology has its own advantages and disadvantages, we hope that the day is not far off when AI techniques will provide personalized treatment to cancer patients, tailored to their needs, much more quickly.
Affiliation(s)
- Ruby Srivastava, CSIR-Centre for Cellular and Molecular Biology, Hyderabad, India.

104. Moyes A, Gault R, Zhang K, Ming J, Crookes D, Wang J. Multi-channel auto-encoders for learning domain invariant representations enabling superior classification of histopathology images. Med Image Anal 2023; 83:102640. [PMID: 36260951] [DOI: 10.1016/j.media.2022.102640]
Abstract
Domain shift is a problem commonly encountered when developing automated histopathology pipelines. The performance of machine learning models such as convolutional neural networks within automated histopathology pipelines is often diminished when applying them to novel data domains due to factors arising from differing staining and scanning protocols. The Dual-Channel Auto-Encoder (DCAE) model was previously shown to produce feature representations that are less sensitive to appearance variation introduced by different digital slide scanners. In this work, the Multi-Channel Auto-Encoder (MCAE) model is presented as an extension to DCAE which learns from more than two domains of data. Experimental results show that the MCAE model produces feature representations that are less sensitive to inter-domain variations than the comparative StaNoSA method when tested on a novel synthetic dataset. This was apparent when applying the MCAE, DCAE, and StaNoSA models to three different classification tasks from unseen domains. The results of this experiment show that the MCAE model outperforms the other models. These results show that the MCAE model is able to generalise better to novel data, including data from unseen domains, than existing approaches by actively learning normalised feature representations.
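As a rough illustration of the shared-representation idea summarized above (a generic sketch of a shared encoder with one decoder per staining/scanning domain, not the published MCAE architecture; layer sizes and names are hypothetical):

```python
import torch
import torch.nn as nn

# Generic multi-domain auto-encoder: a shared encoder learns a code that all
# domains must reconstruct from, while per-domain decoders absorb appearance
# differences. Illustrative only; not the MCAE implementation.
class MultiDomainAutoEncoder(nn.Module):
    def __init__(self, n_domains=3, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.ConvTranspose2d(latent_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )
            for _ in range(n_domains)
        ])

    def forward(self, x, domain_idx):
        z = self.encoder(x)                  # intended to be domain-invariant
        return self.decoders[domain_idx](z)  # domain-specific reconstruction

model = MultiDomainAutoEncoder()
x = torch.rand(2, 3, 64, 64)                 # patches from, say, domain 1
loss = nn.functional.mse_loss(model(x, domain_idx=1), x)
loss.backward()
```

Reconstructing patches from every domain through the same encoder is one common way to encourage features that downstream classifiers can reuse across scanners.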
Affiliation(s)
- Andrew Moyes, School of Electronics, Electrical Engineering and Computer Science, Queen's University, Belfast, 18 Malone Road, BT9 6RT, UK.
- Richard Gault, School of Electronics, Electrical Engineering and Computer Science, Queen's University, Belfast, 18 Malone Road, BT9 6RT, UK.
- Kun Zhang, School of Electrical Engineering, Nantong University, Nantong, China.
- Ji Ming, School of Electronics, Electrical Engineering and Computer Science, Queen's University, Belfast, 18 Malone Road, BT9 6RT, UK.
- Danny Crookes, School of Electronics, Electrical Engineering and Computer Science, Queen's University, Belfast, 18 Malone Road, BT9 6RT, UK.
- Jing Wang, Second People's Hospital of Nantong, China.

105. Dogar GM, Shahzad M, Fraz MM. Attention augmented distance regression and classification network for nuclei instance segmentation and type classification in histology images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104199]

106. Juhong A, Li B, Yao CY, Yang CW, Agnew DW, Lei YL, Huang X, Piyawattanametha W, Qiu Z. Super-resolution and segmentation deep learning for breast cancer histopathology image analysis. Biomed Opt Express 2023; 14:18-36. [PMID: 36698665] [PMCID: PMC9841988] [DOI: 10.1364/boe.463839]
Abstract
Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images. However, such images are typically very large, so they are not conveniently managed and transferred across a computer network or stored in a limited computer storage system. As a result, image compression is commonly used to reduce image size, resulting in poor image resolution. Here, we demonstrate custom convolution neural networks (CNNs) for both super-resolution enhancement of low-resolution images and characterization of both cells and nuclei from hematoxylin and eosin (H&E) stained breast cancer histopathological images, using a combination of generator and discriminator networks, a super-resolution generative adversarial network based on aggregated residual transformation (SRGAN-ResNeXt), to facilitate cancer diagnosis in low-resource settings. The results show a large enhancement in image quality, with the peak signal-to-noise ratio and structural similarity of our network results exceeding 30 dB and 0.93, respectively. The derived performance is superior to the results obtained from both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, another custom CNN is used to perform image segmentation on the high-resolution breast cancer images generated by our model, with an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 for the H&E image segmentation results. Finally, we propose the jointly trained SRGAN-ResNeXt and Inception U-net models, which apply the weights from the individually trained SRGAN-ResNeXt and Inception U-net models as pre-trained weights for transfer learning. The jointly trained model's results are progressively improved and promising. We anticipate these custom CNNs can help address the inaccessibility of advanced microscopes or whole slide imaging (WSI) systems by producing high-resolution images from low-performance microscopes located in remote, resource-constrained settings.
Affiliation(s)
- Aniwat Juhong, Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA; Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Bo Li, Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA; Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Cheng-You Yao, Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA; Department of Biomedical Engineering, Michigan State University, East Lansing, MI 48824, USA
- Chia-Wei Yang, Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA; Department of Chemistry, Michigan State University, East Lansing, MI 48824, USA
- Dalen W. Agnew, College of Veterinary Medicine, Michigan State University, East Lansing, MI 48824, USA
- Yu Leo Lei, Department of Periodontics Oral Medicine, University of Michigan, Ann Arbor, MI 48104, USA
- Xuefei Huang, Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA; Department of Biomedical Engineering, Michigan State University, East Lansing, MI 48824, USA; Department of Chemistry, Michigan State University, East Lansing, MI 48824, USA
- Wibool Piyawattanametha, Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA; Department of Biomedical Engineering, School of Engineering, King Mongkut’s Institute of Technology Ladkrabang (KMITL), Bangkok 10520, Thailand
- Zhen Qiu, Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA; Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA; Department of Biomedical Engineering, Michigan State University, East Lansing, MI 48824, USA

107. Liang Y, Yin Z, Liu H, Zeng H, Wang J, Liu J, Che N. Weakly Supervised Deep Nuclei Segmentation With Sparsely Annotated Bounding Boxes for DNA Image Cytometry. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:785-795. [PMID: 34951851] [DOI: 10.1109/tcbb.2021.3138189]
Abstract
Nuclei segmentation is an essential step in DNA ploidy analysis by image-based cytometry (DNA-ICM), which is widely used in cytopathology and allows an objective measurement of DNA content (ploidy). Routine fully supervised learning-based methods require pixel-wise labels that are often tedious and expensive to obtain. In this paper, we propose a novel weakly supervised nuclei segmentation framework which exploits only sparsely annotated bounding boxes, without any segmentation labels. The key is to integrate traditional image segmentation and self-training into fully supervised instance segmentation. We first leverage traditional segmentation to generate coarse masks for each box-annotated nucleus to supervise the training of a teacher model, which is then responsible for both refining these coarse masks and generating pseudo labels for unlabeled nuclei. These pseudo labels and refined masks, along with the original manually annotated bounding boxes, jointly supervise the training of the student model. The teacher and student share the same architecture, and the student is initialized from the teacher. We have extensively evaluated our method on both our DNA-ICM dataset and a public cytopathological dataset. Without bells and whistles, our method outperforms all existing weakly supervised entries on both datasets. Code and our DNA-ICM dataset are publicly available at https://github.com/CVIU-CSU/Weakly-Supervised-Nuclei-Segmentation.
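As a rough sketch of the teacher-student self-training scheme described above (not the authors' released code; the placeholder network, threshold, and dummy tensors below are hypothetical):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in segmentation network; the paper uses a full instance
# segmentation model, so treat this purely as a placeholder.
def make_model():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1),  # one-channel foreground logit map
    )

teacher, student = make_model(), make_model()
student.load_state_dict(teacher.state_dict())       # student initialized from teacher
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

# Dummy data: labelled crops with coarse masks derived from box annotations,
# plus unlabelled crops; real inputs would come from a DataLoader.
labelled_x = torch.rand(4, 3, 64, 64)
coarse_masks = (torch.rand(4, 1, 64, 64) > 0.5).float()
unlabelled_x = torch.rand(4, 3, 64, 64)

for step in range(10):
    with torch.no_grad():                            # teacher produces pseudo labels
        pseudo = (torch.sigmoid(teacher(unlabelled_x)) > 0.5).float()
    optimizer.zero_grad()
    loss = (F.binary_cross_entropy_with_logits(student(labelled_x), coarse_masks)
            + F.binary_cross_entropy_with_logits(student(unlabelled_x), pseudo))
    loss.backward()
    optimizer.step()
```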

108. A multi-task deep learning framework for perineural invasion recognition in gastric cancer whole slide images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104261]

109. Role of AI and digital pathology for colorectal immuno-oncology. Br J Cancer 2023; 128:3-11. [PMID: 36183010] [DOI: 10.1038/s41416-022-01986-1]
Abstract
Immunotherapy deals with therapeutic interventions to arrest the progression of tumours using the immune system. These include checkpoint inhibitors, T-cell manipulation, cytokines, oncolytic viruses and tumour vaccines. In this paper, we present a survey of the latest developments in immunotherapy in colorectal cancer (CRC) and the role of artificial intelligence (AI) in this context. Among immuno-oncology (IO) biomarkers, microsatellite instability (MSI) is perhaps the most popular globally. We first discuss the MSI status of tumours, its implications for patient management, and its relationship to immune response. In recent years, several promising studies have used AI to predict the MSI status of patients from digital whole-slide images (WSIs) of routine diagnostic slides. We present a survey of AI literature on the prediction of MSI and tumour mutation burden from digitised WSIs of haematoxylin and eosin-stained diagnostic slides. We discuss AI approaches in detail and elaborate on their contributions, limitations and key takeaways to drive future research. We further expand this survey to other IO-related biomarkers, such as immune cell infiltrates, and to alternative data modalities, such as immunohistochemistry and gene expression. Finally, we underline possible future directions in immunotherapy for CRC and the promise of AI to accelerate this exploration for patient benefit.

110. Naser A, Aydemir O. Classification of pleasant and unpleasant odor imagery EEG signals. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-08171-8]

111. Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5]

112. Deep Learning for Skin Melanocytic Tumors in Whole-Slide Images: A Systematic Review. Cancers (Basel) 2022; 15:cancers15010042. [PMID: 36612037] [PMCID: PMC9817526] [DOI: 10.3390/cancers15010042]
Abstract
Artificial Intelligence (AI) has shown promising performance as a support tool in clinical pathology workflows. In addition to the well-known interobserver variability between dermatopathologists, melanomas present a significant challenge in their histological interpretation. This study aims to analyze all previously published studies on whole-slide images of melanocytic tumors that rely on deep learning techniques for automatic image analysis. Embase, PubMed, Web of Science, and Virtual Health Library were used to search for relevant studies for the systematic review, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. Articles from 2015 to July 2022 were included, with an emphasis placed on the artificial intelligence methods used. Twenty-eight studies that fulfilled the inclusion criteria were grouped into four categories based on their clinical objectives: pathologists versus deep learning models (n = 10), diagnostic prediction (n = 7), prognosis (n = 5), and histological features (n = 6). These were then analyzed to draw conclusions on the general parameters and conditions of AI in pathology, as well as the factors necessary for better performance in real scenarios.

113. Abbas A, Gaber MM, Abdelsamea MM. XDecompo: Explainable Decomposition Approach in Convolutional Neural Networks for Tumour Image Classification. Sensors (Basel) 2022; 22:9875. [PMID: 36560243] [PMCID: PMC9782528] [DOI: 10.3390/s22249875]
Abstract
Of the various tumour types, colorectal cancer and brain tumours are still considered among the most serious and deadly diseases in the world. Therefore, many researchers are interested in improving the accuracy and reliability of diagnostic medical machine learning models. In computer-aided diagnosis, self-supervised learning has been proven to be an effective solution when dealing with datasets with insufficient data annotations. However, medical image datasets often suffer from data irregularities, making the recognition task even more challenging. The class decomposition approach has provided a robust solution to such a challenging problem by simplifying the learning of class boundaries of a dataset. In this paper, we propose a robust self-supervised model, called XDecompo, to improve the transferability of features from the pretext task to the downstream task. XDecompo has been designed based on an affinity propagation-based class decomposition to effectively encourage learning of the class boundaries in the downstream task. XDecompo has an explainable component to highlight important pixels that contribute to classification and explain the effect of class decomposition on improving the speciality of extracted features. We also explore the generalisability of XDecompo in handling different medical datasets, such as histopathology for colorectal cancer and brain tumour images. The quantitative results demonstrate the robustness of XDecompo with high accuracy of 96.16% and 94.30% for CRC and brain tumour images, respectively. XDecompo has demonstrated its generalization capability and achieved high classification accuracy (both quantitatively and qualitatively) in different medical image datasets, compared with other models. Moreover, a post hoc explainable method has been used to validate the feature transferability, demonstrating highly accurate feature representations.
Affiliation(s)
- Asmaa Abbas, School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK
- Mohamed Medhat Gaber, School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK; Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
- Mohammed M. Abdelsamea, School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK; Department of Computer Science, Faculty of Computers and Information, University of Assiut, Assiut 71515, Egypt

114. Cohen M, Puntonet J, Sanchez J, Kierszbaum E, Crema M, Soyer P, Dion E. Artificial intelligence vs. radiologist: accuracy of wrist fracture detection on radiographs. Eur Radiol 2022; 33:3974-3983. [PMID: 36515712] [DOI: 10.1007/s00330-022-09349-3]
Abstract
OBJECTIVE To compare the performance of artificial intelligence (AI) with that of radiologists in wrist fracture detection on radiographs. METHODS This retrospective study included 637 patients (1917 radiographs) with wrist trauma between January 2017 and December 2019. The AI software used was a deep neural network algorithm. Ground truth was established by three senior musculoskeletal radiologists who compared the initial radiology reports (IRR) made by non-specialized radiologists, the results of AI, and the combination of AI and IRR (AI+IRR). RESULTS A total of 318 fractures were reported by the senior radiologists in 247 patients. Sensitivity of AI (83%; 95% CI: 78-87%) was significantly greater than that of IRR (76%; 95% CI: 70-81%) (p < 0.001). Specificities were similar for AI (96%; 95% CI: 93-97%) and for IRR (96%; 95% CI: 94-98%) (p = 0.80). The combination of AI+IRR had a significantly greater sensitivity (88%; 95% CI: 84-92%) compared to AI and IRR (p < 0.001) and a lower specificity (92%; 95% CI: 89-95%) (p < 0.001). The sensitivity for scaphoid fracture detection was acceptable for AI (84%) and IRR (80%) but poor for the detection of other carpal bone fractures (41% for AI and 26% for IRR). CONCLUSIONS Performance of AI in wrist fracture detection on radiographs is better than that of non-specialized radiologists. The combination of AI and radiologist analysis yields the best performance. KEY POINTS • Artificial intelligence has better performance for wrist fracture detection compared to non-expert radiologists in daily practice. • Performance of artificial intelligence greatly differs depending on the anatomical area. • Sensitivity of artificial intelligence for the detection of carpal bone fractures is 56%.
Affiliation(s)
- Mathieu Cohen, Department of Radiology, Hotel Dieu Hospital, Assistance Publique-Hopitaux de Paris, Paris, France; Université Paris Cité, F-75006, Paris, France
- Julien Puntonet, Department of Radiology, Hotel Dieu Hospital, Assistance Publique-Hopitaux de Paris, Paris, France; Université Paris Cité, F-75006, Paris, France
- Julien Sanchez, Université Paris Cité, F-75006, Paris, France; Institute of Sports Imaging, French National Institute of Sports (INSEP), Paris, France
- Michel Crema, Department of Radiology, Hotel Dieu Hospital, Assistance Publique-Hopitaux de Paris, Paris, France; Institute of Sports Imaging, French National Institute of Sports (INSEP), Paris, France
- Philippe Soyer, Université Paris Cité, F-75006, Paris, France; Department of Radiology, Cochin Hospital, Assistance Publique-Hopitaux de Paris, 75014, Paris, France
- Elisabeth Dion, Department of Radiology, Hotel Dieu Hospital, Assistance Publique-Hopitaux de Paris, Paris, France; Université Paris Cité, F-75006, Paris, France

115. Artificial intelligence for prediction of response to cancer immunotherapy. Semin Cancer Biol 2022; 87:137-147. [PMID: 36372326] [DOI: 10.1016/j.semcancer.2022.11.008]
Abstract
Artificial intelligence (AI) refers to the application of machines, including machine learning and deep learning, to imitate intelligent behaviors and solve complex tasks with minimal human intervention. The use of AI in medicine improves health-care systems in multiple areas, such as diagnostic confirmation, risk stratification, analysis, prognosis prediction, treatment surveillance, and virtual health support, and it has considerable potential to revolutionize and reshape medicine. In terms of immunotherapy, AI has been applied indirectly, to uncover underlying immune signatures associated with response to immunotherapy, as well as directly, to predict responses to immunotherapy. The AI-based analysis of high-throughput sequences and medical images can provide useful information for the management of cancer immunotherapy, given its abilities in selecting appropriate subjects, improving therapeutic regimens, and predicting individualized prognosis. In the present review, we aim to evaluate a broad framework of AI-based computational approaches for the prediction of response to cancer immunotherapy in both indirect and direct manners. Furthermore, we summarize our perspectives on the challenges and opportunities of further AI applications in cancer immunotherapy relating to clinical practicability.

116. Zhang W, Zhang J, Yang S, Wang X, Yang W, Huang J, Wang W, Han X. Knowledge-Based Representation Learning for Nucleus Instance Classification From Histopathological Images. IEEE Trans Med Imaging 2022; 41:3939-3951. [PMID: 36037453] [DOI: 10.1109/tmi.2022.3201981]
Abstract
The classification of nuclei in H&E-stained histopathological images is a fundamental step in the quantitative analysis of digital pathology. Most existing methods employ multi-class classification on the detected nucleus instances, while the annotation scale greatly limits their performance. Moreover, they often downplay the contextual information surrounding nucleus instances that is critical for classification. To explicitly provide contextual information to the classification model, we design a new structured input consisting of a content-rich image patch and a target instance mask. The image patch provides rich contextual information, while the target instance mask indicates the location of the instance to be classified and emphasizes its shape. Benefiting from our structured input format, we propose Structured Triplet for representation learning, a triplet learning framework on unlabelled nucleus instances with customized positive and negative sampling strategies. We pre-train a feature extraction model based on this framework with a large-scale unlabeled dataset, making it possible to train an effective classification model with limited annotated data. We also add two auxiliary branches, namely the attribute learning branch and the conventional self-supervised learning branch, to further improve its performance. As part of this work, we will release a new dataset of H&E-stained pathology images with nucleus instance masks, containing 20,187 patches of size 1024 × 1024, where each patch comes from a different whole-slide image. The model pre-trained on this dataset with our framework significantly reduces the burden of extensive labeling. We show a substantial improvement in nucleus classification accuracy compared with the state-of-the-art methods.
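To illustrate triplet representation learning on a structured (patch, instance-mask) input of the kind described above, a minimal PyTorch-style sketch follows; the encoder, shapes, and sampling are hypothetical stand-ins rather than the paper's implementation:

```python
import torch
import torch.nn as nn

# Generic sketch: an image patch is concatenated with a binary mask of the
# target instance (4 input channels) and embedded; a triplet margin loss
# pulls embeddings of matching instances together and pushes others apart.
class PatchEncoder(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, patch, mask):
        x = torch.cat([patch, mask], dim=1)          # (B, 4, H, W)
        return self.fc(self.features(x).flatten(1))  # (B, embed_dim)

encoder = PatchEncoder()
triplet_loss = nn.TripletMarginLoss(margin=1.0)

# Dummy anchor / positive / negative batches; in practice these come from
# customized sampling over unlabelled nucleus instances.
def embed(batch_size=8):
    return encoder(torch.rand(batch_size, 3, 64, 64),
                   torch.rand(batch_size, 1, 64, 64))

loss = triplet_loss(embed(), embed(), embed())
loss.backward()
```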

117. Yang M, Xie Z, Wang Z, Yuan Y, Zhang J. Su-MICL: Severity-Guided Multiple Instance Curriculum Learning for Histopathology Image Interpretable Classification. IEEE Trans Med Imaging 2022; 41:3533-3543. [PMID: 35786552] [DOI: 10.1109/tmi.2022.3188326]
Abstract
Histopathology image classification plays a critical role in clinical diagnosis. However, due to the absence of clinical interpretability, most existing image-level classifiers remain impractical. To acquire the essential interpretability, lesion-level diagnosis is also provided, relying on detailed lesion-level annotations. Although the multiple-instance learning (MIL)-based approach can identify lesions by utilizing only image-level annotations, it requires overly strict prior information and has limited accuracy in lesion-level tasks. Here, we present a novel severity-guided multiple instance curriculum learning (Su-MICL) strategy to avoid tedious labeling. The proposed Su-MICL operates under a MIL framework with a commonly neglected prior, disease severity, which defines the learning difficulty of training images. Based on the difficulty degree, a curriculum is developed to train a model using images from easy to hard. The experimental results for two histopathology image datasets demonstrate that Su-MICL achieves comparable performance to state-of-the-art weakly supervised methods for image-level classification, and its performance in identifying lesions is closest to that of the supervised learning method. Without tedious lesion labeling, the Su-MICL approach can provide an interpretable diagnosis, as well as effective insight to aid histopathology image diagnosis.

118. Liu G, Zhao J, Tian G, Li S, Lu Y. Visualizing knowledge evolution trends and research hotspots of artificial intelligence in colorectal cancer: A bibliometric analysis. Front Oncol 2022; 12:925924. [PMID: 36518311] [PMCID: PMC9742812] [DOI: 10.3389/fonc.2022.925924]
Abstract
BACKGROUND In recent years, the rapid development of artificial intelligence (AI) technology has created new diagnostic and therapeutic opportunities for colorectal cancer (CRC). Numerous academic and clinical studies have demonstrated that high-level auxiliary diagnosis and treatment systems based on AI technology can significantly improve the readability of medical data, objectively provide a reliable and comprehensive reference for physicians, reduce the experience gap between physicians, and aid physicians in making more accurate diagnostic decisions. In this study, we used bibliometric techniques to visually analyze the literature on AI in the CRC field and summarize the current situation and research hotspots in this field. METHODS The relevant literature on AI in the field of CRC research was obtained from the Web of Science Core Collection (WoSCC) database. The software CiteSpace was utilized to analyze the number of papers, countries, institutions, authors, journals, cited literature, and keywords of the included literature and to generate a visual knowledge map. The present study aims to evaluate the origin, current hotspots, and research trends of AI in CRC using bibliometric analysis. RESULTS As of March 2022, 64 nations/regions, 230 institutions, 245 journals, and 300 authors had published 562 AI-related articles in the field of CRC. Since 2016, the number of publications has increased exponentially each year. China and the United States were the largest contributors, with the most productive research institutions and the closest collaborative relationship. The World Journal of Gastroenterology is the most widely published journal in this field. Diagnosis and treatment research, gene and immunology research, intestinal polyp research, tumor grading research, gastrointestinal endoscopy research, and prognosis research comprised the six topics derived from high-frequency keyword cluster analysis. CONCLUSION In recent years, research in this field has been a popular topic of discussion. The results of our bibliometric analysis allow us to better understand the current situation and trends of this research field, and the quantitative indicators can serve as a guide for research and application by scholars worldwide.
Affiliation(s)
- Guangwei Liu, Department of Gastrointestinal Surgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China; Shandong Key Laboratory of Digital Medicine and Computer Assisted Surgery, Qingdao University, Qingdao, Shandong, China
- Jun Zhao, Department of Pharmacy, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Guangye Tian, School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
- Shuai Li, State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- Yun Lu, Department of Gastrointestinal Surgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China; Shandong Key Laboratory of Digital Medicine and Computer Assisted Surgery, Qingdao University, Qingdao, Shandong, China

119. Tharwat M, Sakr NA, El-Sappagh S, Soliman H, Kwak KS, Elmogy M. Colon Cancer Diagnosis Based on Machine Learning and Deep Learning: Modalities and Analysis Techniques. Sensors (Basel) 2022; 22:9250. [PMID: 36501951] [PMCID: PMC9739266] [DOI: 10.3390/s22239250]
Abstract
The treatment and diagnosis of colon cancer are considered to be social and economic challenges due to the high mortality rates. Every year, around the world, almost half a million people contract cancer, including colon cancer. Determining the grade of colon cancer mainly depends on analyzing the gland's structure by tissue region, which has led to the existence of various tests for screening that can be utilized to investigate polyp images and colorectal cancer. This article presents a comprehensive survey on the diagnosis of colon cancer. This covers many aspects related to colon cancer, such as its symptoms and grades as well as the available imaging modalities (particularly, histopathology images used for analysis) in addition to common diagnosis systems. Furthermore, the most widely used datasets and performance evaluation metrics are discussed. We provide a comprehensive review of the current studies on colon cancer, classified into deep-learning (DL) and machine-learning (ML) techniques, and we identify their main strengths and limitations. These techniques provide extensive support for identifying the early stages of cancer that lead to early treatment of the disease and produce a lower mortality rate compared with the rate produced after symptoms develop. In addition, these methods can help to prevent colorectal cancer from progressing through the removal of pre-malignant polyps, which can be achieved using screening tests to make the disease easier to diagnose. Finally, the existing challenges and future research directions that open the way for future work in this field are presented.
Affiliation(s)
- Mai Tharwat, Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Nehal A. Sakr, Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Shaker El-Sappagh, Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13512, Egypt; Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
- Hassan Soliman, Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Kyung-Sup Kwak, Department of Information and Communication Engineering, Inha University, Incheon 22212, Republic of Korea
- Mohammed Elmogy, Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt

120. Alzoubi I, Bao G, Zheng Y, Wang X, Graeber MB. Artificial intelligence techniques for neuropathological diagnostics and research. Neuropathology 2022. [PMID: 36443935] [DOI: 10.1111/neup.12880]
Abstract
Artificial intelligence (AI) research began in theoretical neurophysiology, and the resulting classical paper on the McCulloch-Pitts mathematical neuron was written in a psychiatry department almost 80 years ago. However, the application of AI in digital neuropathology is still in its infancy. Rapid progress is now being made, which prompted this article. Human brain diseases represent distinct system states that fall outside the normal spectrum. Many differ not only in functional but also in structural terms, and the morphology of abnormal nervous tissue forms the traditional basis of neuropathological disease classifications. However, only a few countries have the medical specialty of neuropathology, and, given the sheer number of newly developed histological tools that can be applied to the study of brain diseases, a tremendous shortage of qualified hands and eyes at the microscope is obvious. Similarly, in neuroanatomy, human observers no longer have the capacity to process the vast amounts of connectomics data. Therefore, it is reasonable to assume that advances in AI technology and, especially, whole-slide image (WSI) analysis will greatly aid neuropathological practice. In this paper, we discuss machine learning (ML) techniques that are important for understanding WSI analysis, such as traditional ML and deep learning, introduce a recently developed neuropathological AI termed PathoFusion, and present thoughts on some of the challenges that must be overcome before the full potential of AI in digital neuropathology can be realized.
Affiliation(s)
- Islam Alzoubi, School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
- Guoqing Bao, School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
- Yuqi Zheng, Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
- Xiuying Wang, School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
- Manuel B. Graeber, Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia

121. Chen Y, Jia Y, Zhang X, Bai J, Li X, Ma M, Sun Z, Pei Z. TSHVNet: Simultaneous Nuclear Instance Segmentation and Classification in Histopathological Images Based on Multiattention Mechanisms. Biomed Res Int 2022; 2022:7921922. [PMID: 36457339] [PMCID: PMC9708332] [DOI: 10.1155/2022/7921922]
Abstract
Accurate nuclear instance segmentation and classification in histopathologic images are the foundation of cancer diagnosis and prognosis. Several challenges restrict the development of accurate simultaneous nuclear instance segmentation and classification. Firstly, the visual appearances of nuclei of different categories can be similar, making it difficult to distinguish different types of nuclei. Secondly, it is difficult to separate highly clustered nuclear instances. Thirdly, few current studies have considered the global dependencies among diverse nuclear instances. In this article, we propose a novel deep learning framework named TSHVNet, which integrates multiattention modules (i.e., Transformer and SimAM) into the state-of-the-art HoVer-Net for more accurate nuclear instance segmentation and classification. Specifically, the Transformer attention module is employed on the trunk of the HoVer-Net to model the long-distance relationships of diverse nuclear instances. The SimAM attention modules are deployed on both the trunk and branches to apply 3D channel and spatial attention and assign appropriate weights to neurons. Finally, we validate the proposed method on two public datasets: PanNuke and CoNSeP. The comparison results show the outstanding performance of the proposed TSHVNet network among state-of-the-art methods. Particularly, as compared to the original HoVer-Net, the performance of nuclear instance segmentation evaluated by the PQ index has shown 1.4% and 2.8% increases on the CoNSeP and PanNuke datasets, respectively, and the performance of nuclear classification measured by F1_score has increased by 2.4% and 2.5% on the CoNSeP and PanNuke datasets, respectively. Therefore, the proposed multiattention-based TSHVNet is of great potential in simultaneous nuclear instance segmentation and classification.
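For reference, SimAM is a parameter-free attention that re-weights each activation by an energy-based term; a minimal PyTorch-style sketch of the published SimAM formulation (independent of the TSHVNet code) is:

```python
import torch

def simam(x: torch.Tensor, e_lambda: float = 1e-4) -> torch.Tensor:
    """Parameter-free SimAM attention over a (B, C, H, W) feature map."""
    b, c, h, w = x.shape
    n = h * w - 1
    # Squared deviation of each activation from its channel mean.
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
    # Per-channel variance estimate.
    v = d.sum(dim=(2, 3), keepdim=True) / n
    # Inverse energy: more distinctive neurons receive larger weights.
    e_inv = d / (4 * (v + e_lambda)) + 0.5
    return x * torch.sigmoid(e_inv)

# Example: re-weight a feature map inside any convolutional trunk or branch.
features = torch.rand(2, 64, 32, 32)
attended = simam(features)
```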
Affiliation(s)
- Yuli Chen, Yuhang Jia, Xinxin Zhang, Jiayang Bai, Xue Li, Miao Ma, Zengguo Sun, and Zhao Pei, School of Computer Science, Shaanxi Normal University, Xi'an 710119, China

122. Wu H, Souedet N, Jan C, Clouchoux C, Delzescaux T. A general deep learning framework for neuron instance segmentation based on Efficient UNet and morphological post-processing. Comput Biol Med 2022; 150:106180. [PMID: 36244305] [DOI: 10.1016/j.compbiomed.2022.106180]
Abstract
Recent studies have demonstrated the superiority of deep learning in medical image analysis, especially in cell instance segmentation, a fundamental step for many biological studies. However, the excellent performance of the neural networks requires training on large, unbiased dataset and annotations, which is labor-intensive and expertise-demanding. This paper presents an end-to-end framework to automatically detect and segment NeuN stained neuronal cells on histological images using only point annotations. Unlike traditional nuclei segmentation with point annotation, we propose using point annotation and binary segmentation to synthesize pixel-level annotations. The synthetic masks are used as the ground truth to train the neural network, a U-Net-like architecture with a state-of-the-art network, EfficientNet, as the encoder. Validation results show the superiority of our model compared to other recent methods. In addition, we investigated multiple post-processing schemes and proposed an original strategy to convert the probability map into segmented instances using ultimate erosion and dynamic reconstruction. This approach is easy to configure and outperforms other classical post-processing techniques. This work aims to develop a robust and efficient framework for analyzing neurons using optical microscopic data, which can be used in preclinical biological studies and, more specifically, in the context of neurodegenerative diseases. Code is available at: https://github.com/MIRCen/NeuronInstanceSeg.
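As a generic illustration of turning a probability map into labelled instances with morphological post-processing (in the spirit of the erosion-and-reconstruction strategy described above, not the authors' exact procedure; the foreground threshold and the h parameter are hypothetical):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_maxima
from skimage.segmentation import watershed
from skimage.measure import label

def probability_map_to_instances(prob, fg_thresh=0.5, h=2.0):
    """Split a foreground probability map into labelled instances."""
    foreground = prob > fg_thresh
    # Distance transform: bright ridges at the centres of touching cells.
    distance = ndi.distance_transform_edt(foreground)
    # Robust local maxima (h-maxima) provide one marker per neuron.
    markers = label(h_maxima(distance, h))
    # Grow the markers back out to the foreground boundary.
    return watershed(-distance, markers, mask=foreground)

# Example with a synthetic probability map of two touching objects.
prob = np.zeros((64, 64))
prob[10:30, 10:30] = 0.9
prob[28:50, 28:50] = 0.9
instances = probability_map_to_instances(prob)
print(instances.max(), "instances found")
```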
Affiliation(s)
- Huaqian Wu, CEA-CNRS-UMR 9199, MIRCen, Fontenay-aux-Roses, France
- Caroline Jan, CEA-CNRS-UMR 9199, MIRCen, Fontenay-aux-Roses, France

123. Li H, Guan Y. Multilevel Modeling of Joint Damage in Rheumatoid Arthritis. Adv Intell Syst 2022; 4:2200184. [PMID: 37808948] [PMCID: PMC10557461] [DOI: 10.1002/aisy.202200184]
Abstract
While most deep learning approaches are developed for single images, in real-world applications images are often obtained as a series to inform decision making. Due to hardware (memory) and software (algorithm) limitations, few methods have so far been developed to integrate multiple images. In this study, we present an approach that seamlessly integrates deep learning and traditional machine learning models to study multiple images and score joint damage in rheumatoid arthritis. This method allows the quantification of joint space narrowing to approach the clinical upper limit. Beyond predictive performance, we integrate the multilevel interconnections across joints and damage types into the machine learning model and reveal the cross-regulation map of joint damage in rheumatoid arthritis.
Affiliation(s)
- Hongyang Li, Department of Computational Medicine and Bioinformatics, University of Michigan, 100 Washtenaw Avenue, Ann Arbor, MI 48109, USA
- Yuanfang Guan, Department of Computational Medicine and Bioinformatics, University of Michigan, 100 Washtenaw Avenue, Ann Arbor, MI 48109, USA

124. Jiang L, Zou B, Liu S, Yang W, Wang M, Huang E. Recognition of abnormal human behavior in dual-channel convolutional 3D construction site based on deep learning. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07881-3]

125. Xin B, Yang Y, Xie X, Shang J, Liu Z, Peng S. Detecting and Classifying Nuclei Using Multi-Scale Fully Convolutional Network. J Comput Biol 2022; 29:1095-1103. [PMID: 35984993] [DOI: 10.1089/cmb.2022.0111]
Abstract
The detection and classification of nuclei play an important role in histopathological analysis. They aim to determine the distribution of nuclei in histopathology images for subsequent analysis and research. However, it is very challenging to detect and localize nuclei in histopathology images because nuclei occupy only a few pixels, which makes them difficult to detect. Most automatic detection machine learning algorithms use patches, which are small pieces of images containing a single cell, as training data, and then apply a sliding-window strategy to detect nuclei in histopathology images. These methods require preprocessing of the dataset, which is very tedious work, and it is also difficult to localize the detected results on the original images. Fully convolutional network-based deep learning methods are able to take images as raw inputs and output results of corresponding size, which makes them well suited for nuclei detection and classification tasks. In this study, we propose a novel multi-scale fully convolutional network, named Cell Fully Convolutional Network (CFCN), with dilated convolution for fine-grained nuclei classification and localization in histology images. We trained CFCN on a typical histology image dataset, and the experimental results show that CFCN outperforms the other state-of-the-art nuclei classification models, with an F1 score of 0.750.
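To illustrate the multi-scale dilated-convolution idea behind a fully convolutional nuclei classifier such as the one described above (a generic sketch; channel counts, dilation rates, and the five-class head are hypothetical):

```python
import torch
import torch.nn as nn

# Generic multi-scale block: parallel 3x3 convolutions with increasing
# dilation rates enlarge the receptive field without losing resolution,
# and the branches are fused by a 1x1 convolution.
class MultiScaleDilatedBlock(nn.Module):
    def __init__(self, in_ch=64, branch_ch=32, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(branch_ch * len(dilations), in_ch, kernel_size=1)

    def forward(self, x):
        y = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return self.fuse(y)  # same spatial size as the input

# A fully convolutional head can then output dense per-pixel class scores.
block = MultiScaleDilatedBlock()
scores = nn.Conv2d(64, 5, kernel_size=1)(block(torch.rand(1, 64, 128, 128)))
print(scores.shape)  # torch.Size([1, 5, 128, 128])
```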
Affiliation(s)
- Bin Xin, College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
- Yaning Yang, College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
- Xiaolan Xie, College of Information Science and Engineering, Guilin University of Technology, Guilin, China
- Jiandong Shang, National Supercomputing Center in Zhengzhou, Zhengzhou University, Henan, China
- Zhengyu Liu, Department of Cardiology, Hunan Provincial People's Hospital, Changsha, China; Department of Epidemiological Research, Hunan Provincial People's Hospital, Changsha, China
- Shaoliang Peng, College of Computer Science and Electronic Engineering, Hunan University, Changsha, China

126. Li Y, Wu X, Yang P, Jiang G, Luo Y. Machine Learning for Lung Cancer Diagnosis, Treatment, and Prognosis. Genomics Proteomics Bioinformatics 2022; 20:850-866. [PMID: 36462630] [PMCID: PMC10025752] [DOI: 10.1016/j.gpb.2022.11.003]
Abstract
The recent development of imaging and sequencing technologies enables systematic advances in the clinical study of lung cancer. Meanwhile, the human mind is limited in effectively handling and fully utilizing the accumulation of such enormous amounts of data. Machine learning-based approaches play a critical role in integrating and analyzing these large and complex datasets, which have extensively characterized lung cancer through the use of different perspectives from these accrued data. In this review, we provide an overview of machine learning-based approaches that strengthen the varying aspects of lung cancer diagnosis and therapy, including early detection, auxiliary diagnosis, prognosis prediction, and immunotherapy practice. Moreover, we highlight the challenges and opportunities for future applications of machine learning in lung cancer.
Affiliation(s)
- Yawei Li, Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- Xin Wu, Department of Medicine, University of Illinois at Chicago, Chicago, IL 60612, USA
- Ping Yang, Department of Quantitative Health Sciences, Mayo Clinic, Rochester, MN 55905 / Scottsdale, AZ 85259, USA
- Guoqian Jiang, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN 55905, USA
- Yuan Luo, Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA

127. Nofallah S, Mokhtari M, Wu W, Mehta S, Knezevich S, May CJ, Chang OH, Lee AC, Elmore JG, Shapiro LG. Segmenting Skin Biopsy Images with Coarse and Sparse Annotations using U-Net. J Digit Imaging 2022; 35:1238-1249. [PMID: 35501416] [PMCID: PMC9060411] [DOI: 10.1007/s10278-022-00641-8]
Abstract
The number of melanoma diagnoses has increased dramatically over the past three decades, outpacing almost all other cancers. Nearly 1 in 4 skin biopsies is of melanocytic lesions, highlighting the clinical and public health importance of correct diagnosis. Deep learning image analysis methods may improve and complement current diagnostic and prognostic capabilities. The histologic evaluation of melanocytic lesions, including melanoma and its precursors, involves determining whether the melanocytic population involves the epidermis, dermis, or both. Semantic segmentation of clinically important structures in skin biopsies is a crucial step towards an accurate diagnosis. While training a segmentation model requires ground-truth labels, annotation of large images is a labor-intensive task. This issue becomes especially pronounced in a medical image dataset in which expert annotation is the gold standard. In this paper, we propose a two-stage segmentation pipeline using coarse and sparse annotations on a small region of the whole slide image as the training set. Segmentation results on whole slide images show promising performance for the proposed pipeline.
Affiliation(s)
- Mojgan Mokhtari, Pathology Department, Isfahan University of Medical Sciences, Isfahan, Iran
- Wenjun Wu, University of Washington, Seattle, WA, 98195, USA
- Sachin Mehta, University of Washington, Seattle, WA, 98195, USA
- Caitlin J May, Dermatopathology Northwest, Bellevue, WA, 98005, USA
- Annie C Lee, David Geffen School of Medicine, UCLA, Los Angeles, CA, 90024, USA
- Joann G Elmore, David Geffen School of Medicine, UCLA, Los Angeles, CA, 90024, USA

128. Syed AH, Khan T. Evolution of research trends in artificial intelligence for breast cancer diagnosis and prognosis over the past two decades: A bibliometric analysis. Front Oncol 2022; 12:854927. [PMID: 36267967] [PMCID: PMC9578338] [DOI: 10.3389/fonc.2022.854927]
Abstract
Objective In recent years, among the available tools, the concurrent application of Artificial Intelligence (AI) has improved the diagnostic performance of breast cancer screening. In this context, the present study intends to provide a comprehensive overview of the evolution of AI for breast cancer diagnosis and prognosis research using bibliometric analysis. Methodology In the present study, relevant peer-reviewed research articles published from 2000 to 2021 were downloaded from the Scopus and Web of Science (WOS) databases and later quantitatively analyzed and visualized using Bibliometrix (R package). Finally, open challenge areas were identified for future research work. Results The present study revealed that the number of studies published on AI for breast cancer detection and survival prediction increased from 12 to 546 between 2000 and 2021. The United States of America (USA), the Republic of China, and India are the most productive publication-wise in this field. Furthermore, the USA leads in terms of total citations; however, Hungary and Holland take the lead positions in average citations per year. Wang J is the most productive author, and Zhan J is the most relevant author in this field. Stanford University in the USA is the most relevant affiliation by the number of published articles. The top 10 most relevant sources are Q1 journals, with PLOS ONE and Computers in Biology and Medicine the leading journals in this field. The most trending topics related to our study, transfer learning and deep learning, were identified. Conclusion The present findings provide insight and research directions for policymakers and academic researchers for future collaboration and research in AI for breast cancer patients.
Affiliation(s)
- Asif Hassan Syed, Department of Computer Science, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia
- Tabrej Khan, Department of Information Systems, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia

129. Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:6702380. [PMID: 36124675] [PMCID: PMC9677480] [DOI: 10.1093/bib/bbac367]
Abstract
In common medical procedures, the time-consuming and expensive nature of obtaining test results plagues doctors and patients. Digital pathology research allows using computational technologies to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more up-to-date and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review investigates using the most popular image data, hematoxylin-eosin stained tissue slide images, to find a strategic solution for the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology has in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China,University of Chinese Academy of Sciences, Beijing 100049, China
| | - Lianhe Zhao
- Corresponding authors: Yi Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356; E-mail: ; Lianhe Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324; E-mail:
| | - Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China,University of Chinese Academy of Sciences, Beijing 100049, China
| | - Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China,University of Chinese Academy of Sciences, Beijing 100049, China
| | - Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
| | - Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
| | - Yi Zhao
- Corresponding authors: Yi Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356; E-mail: ; Lianhe Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324; E-mail:
130
Zhou W, Deng Z, Liu Y, Shen H, Deng H, Xiao H. Global Research Trends of Artificial Intelligence on Histopathological Images: A 20-Year Bibliometric Analysis. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:11597. [PMID: 36141871 PMCID: PMC9517580 DOI: 10.3390/ijerph191811597] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/03/2022] [Revised: 08/31/2022] [Accepted: 09/01/2022] [Indexed: 06/13/2023]
Abstract
Cancer has become a major threat to global health care. With the development of computer science, artificial intelligence (AI) has been widely applied in histopathological images (HI) analysis. This study analyzed the publications of AI in HI from 2001 to 2021 by bibliometrics, exploring the research status and the potential popular directions in the future. A total of 2844 publications from the Web of Science Core Collection were included in the bibliometric analysis. The country/region, institution, author, journal, keyword, and references were analyzed by using VOSviewer and CiteSpace. The results showed that the number of publications has grown rapidly in the last five years. The USA is the most productive and influential country with 937 publications and 23,010 citations, and most of the authors and institutions with higher numbers of publications and citations are from the USA. Keyword analysis showed that breast cancer, prostate cancer, colorectal cancer, and lung cancer are the tumor types of greatest concern. Co-citation analysis showed that classification and nucleus segmentation are the main research directions of AI-based HI studies. Transfer learning and self-supervised learning in HI is on the rise. This study performed the first bibliometric analysis of AI in HI from multiple indicators, providing insights for researchers to identify key cancer types and understand the research trends of AI application in HI.
Affiliation(s)
- Wentong Zhou
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
| | - Ziheng Deng
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
| | - Yong Liu
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
| | - Hui Shen
- Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, School of Medicine, Tulane University School, New Orleans, LA 70112, USA
| | - Hongwen Deng
- Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, School of Medicine, Tulane University School, New Orleans, LA 70112, USA
| | - Hongmei Xiao
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
131
Sobhani F, Muralidhar S, Hamidinekoo A, Hall AH, King LM, Marks JR, Maley C, Horlings HM, Hwang ES, Yuan Y. Spatial interplay of tissue hypoxia and T-cell regulation in ductal carcinoma in situ. NPJ Breast Cancer 2022; 8:105. [PMID: 36109587 PMCID: PMC9477879 DOI: 10.1038/s41523-022-00419-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Accepted: 03/21/2022] [Indexed: 11/09/2022] Open
Abstract
Hypoxia promotes aggressive tumor phenotypes and mediates the recruitment of suppressive T cells in invasive breast carcinomas. We investigated the role of hypoxia in relation to T-cell regulation in ductal carcinoma in situ (DCIS). We designed a deep learning system tailored for the tissue architecture complexity of DCIS, and compared pure DCIS cases with the synchronous DCIS and invasive components within invasive ductal carcinoma cases. Single-cell classification was applied in tandem with a new method for DCIS ductal segmentation in dual-stained CA9 and FOXP3, whole-tumor section digital pathology images. Pure DCIS typically has an intermediate level of colocalization of FOXP3+ and CA9+ cells, but in invasive carcinoma cases, the FOXP3+ (T-regulatory) cells may have relocated from the DCIS and into the invasive parts of the tumor, leading to high levels of colocalization in the invasive parts but low levels in the synchronous DCIS component. This may be due to invasive, hypoxic tumors evolving to recruit T-regulatory cells in order to evade immune predation. Our data support the notion that hypoxia promotes immune tolerance through recruitment of T-regulatory cells, and furthermore indicate a spatial pattern of relocalization of T-regulatory cells from DCIS to hypoxic tumor cells. Spatial colocalization of hypoxic and T-regulatory cells may be a key event and useful marker of DCIS progression.
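The study scores how often FOXP3+ (T-regulatory) cells sit close to CA9+ (hypoxic) cells. Below is a minimal sketch of one way to quantify such spatial colocalization from detected cell centroids; the radius, the toy coordinates and the function name are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative colocalization score: fraction of population-B cells lying within a
# fixed radius of any population-A cell, computed from centroid coordinates.
import numpy as np
from scipy.spatial import cKDTree

def colocalization_fraction(coords_a, coords_b, radius_um=50.0):
    """coords_a, coords_b: (N, 2) arrays of cell centroids in micrometres."""
    if len(coords_a) == 0 or len(coords_b) == 0:
        return 0.0
    tree = cKDTree(coords_a)
    dist, _ = tree.query(coords_b, k=1)       # nearest A-cell for each B-cell
    return float(np.mean(dist <= radius_um))

# toy example with random coordinates standing in for detected CA9+/FOXP3+ cells
rng = np.random.default_rng(0)
ca9 = rng.uniform(0, 1000, size=(200, 2))
foxp3 = rng.uniform(0, 1000, size=(150, 2))
print(colocalization_fraction(ca9, foxp3))
```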
Affiliation(s)
- Faranak Sobhani
- Centre for Evolution and Cancer, Institute of Cancer Research, London, UK.
- Division of Molecular Pathology, Institute of Cancer Research, London, UK.
| | - Sathya Muralidhar
- Centre for Evolution and Cancer, Institute of Cancer Research, London, UK
- Division of Molecular Pathology, Institute of Cancer Research, London, UK
| | - Azam Hamidinekoo
- Centre for Evolution and Cancer, Institute of Cancer Research, London, UK
- Division of Molecular Pathology, Institute of Cancer Research, London, UK
| | - Allison H Hall
- Department of Pathology, Duke University School of Medicine, Durham, NC, USA
| | - Lorraine M King
- Department of Surgery, Duke University School of Medicine, Durham, NC, USA
| | - Jeffrey R Marks
- Department of Surgery, Duke University School of Medicine, Durham, NC, USA
| | - Carlo Maley
- Arizona Cancer Evolution Center, Biodesign Institute and School of Life Sciences, Arizona State University, Tempe, AZ, USA
| | - Hugo M Horlings
- Department of Pathology, The Netherlands Cancer Institute, Plesmanlaan, 121 1066CX, Amsterdam, The Netherlands
| | - E Shelley Hwang
- Department of Surgery, Duke University School of Medicine, Durham, NC, USA
| | - Yinyin Yuan
- Centre for Evolution and Cancer, Institute of Cancer Research, London, UK.
- Division of Molecular Pathology, Institute of Cancer Research, London, UK.
132
NDG-CAM: Nuclei Detection in Histopathology Images with Semantic Segmentation Networks and Grad-CAM. Bioengineering (Basel) 2022; 9:bioengineering9090475. [PMID: 36135021 PMCID: PMC9495364 DOI: 10.3390/bioengineering9090475] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Revised: 09/07/2022] [Accepted: 09/13/2022] [Indexed: 11/17/2022] Open
Abstract
Nuclei identification is a fundamental task in many areas of biomedical image analysis related to computational pathology applications. Nowadays, deep learning is the primary approach by which to segment the nuclei, but accuracy is closely linked to the amount of histological ground truth data for training. In addition, it is known that most of the hematoxylin and eosin (H&E)-stained microscopy nuclei images contain complex and irregular visual characteristics. Moreover, conventional semantic segmentation architectures grounded on convolutional neural networks (CNNs) are unable to recognize distinct overlapping and clustered nuclei. To overcome these problems, we present an innovative method based on gradient-weighted class activation mapping (Grad-CAM) saliency maps for image segmentation. The proposed solution is comprised of two steps. The first is the semantic segmentation obtained by the use of a CNN; then, the detection step is based on the calculation of local maxima of the Grad-CAM analysis evaluated on the nucleus class, allowing us to determine the positions of the nuclei centroids. This approach, which we denote as NDG-CAM, has performance in line with state-of-the-art methods, especially in isolating the different nuclei instances, and can be generalized for different organs and tissues. Experimental results demonstrated a precision of 0.833, recall of 0.815 and a Dice coefficient of 0.824 on the publicly available validation set. When used in combined mode with instance segmentation architectures such as Mask R-CNN, the method manages to surpass state-of-the-art approaches, with precision of 0.838, recall of 0.934 and a Dice coefficient of 0.884. Furthermore, performance on the external, locally collected validation set, with a Dice coefficient of 0.914 for the combined model, shows the generalization capability of the implemented pipeline, which has the ability to detect nuclei not only related to tumor or normal epithelium but also to other cytotypes.
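A minimal sketch of the detection idea described above, namely taking local maxima of a nucleus-class saliency/probability map as candidate centroids; the synthetic map and all parameter values are assumptions for illustration, not the NDG-CAM implementation.

```python
# Local maxima of a saliency map used as candidate nucleus centroids.
# In NDG-CAM the map would come from Grad-CAM evaluated on the nucleus class
# of a trained segmentation CNN; here a synthetic blob image stands in for it.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max

rng = np.random.default_rng(1)
saliency = np.zeros((256, 256))
for y, x in rng.integers(20, 236, size=(15, 2)):
    saliency[y, x] = 1.0
saliency = gaussian_filter(saliency, sigma=4)   # blurred blobs ~ "nuclei"

# local maxima above a threshold -> centroid coordinates as (row, col) pairs
centroids = peak_local_max(saliency, min_distance=8,
                           threshold_abs=0.2 * saliency.max())
print(centroids.shape, centroids[:5])
```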
133
Kavitha MS, Gangadaran P, Jackson A, Venmathi Maran BA, Kurita T, Ahn BC. Deep Neural Network Models for Colon Cancer Screening. Cancers (Basel) 2022; 14:3707. [PMID: 35954370 PMCID: PMC9367621 DOI: 10.3390/cancers14153707] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Revised: 07/26/2022] [Accepted: 07/27/2022] [Indexed: 12/24/2022] Open
Abstract
Early detection of colorectal cancer can significantly facilitate clinicians' decision-making and reduce their workload. This can be achieved using automatic systems that analyze endoscopic and histological images. Recently, the success of deep learning has motivated the development of image- and video-based polyp identification and segmentation. Currently, most diagnostic colonoscopy rooms utilize artificial intelligence methods that are considered to perform well in predicting invasive cancer. Convolutional neural network-based architectures, together with image patching and preprocessing, are widely used. Furthermore, transfer learning and end-to-end learning techniques have been adopted for detection and localization tasks, which improves accuracy and reduces user dependence when datasets are limited. However, explainable deep networks that provide transparency, interpretability, reliability, and fairness in clinical diagnostics are preferred. In this review, we summarize the latest advances in such models, with or without transparency, for the prediction of colorectal cancer and also address the knowledge gap in the upcoming technology.
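As a hedged illustration of the transfer-learning pattern the review discusses (an ImageNet-pretrained CNN fine-tuned on a small labelled set of colorectal images), a minimal PyTorch sketch follows; the class count, input size and dummy batch are placeholders rather than settings from any of the reviewed studies.

```python
# Fine-tuning an ImageNet-pretrained backbone on a small two-class dataset.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2                                   # e.g. polyp vs. normal (assumed)
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# freeze the pretrained feature extractor, replace only the classification head
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# one illustrative training step on a dummy batch of 224x224 RGB patches
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```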
Affiliation(s)
- Muthu Subash Kavitha
- School of Information and Data Sciences, Nagasaki University, Nagasaki 852-8521, Japan;
| | - Prakash Gangadaran
- BK21 FOUR KNU Convergence Educational Program of Biomedical Sciences for Creative Future Talents, School of Medicine, Kyungpook National University, Daegu 41944, Korea;
- Department of Nuclear Medicine, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu 41944, Korea
| | - Aurelia Jackson
- Borneo Marine Research Institute, Universiti Malaysia Sabah, Kota Kinabalu 88400, Malaysia; (A.J.); (B.A.V.M.)
| | - Balu Alagar Venmathi Maran
- Borneo Marine Research Institute, Universiti Malaysia Sabah, Kota Kinabalu 88400, Malaysia; (A.J.); (B.A.V.M.)
| | - Takio Kurita
- Graduate School of Advanced Science and Engineering, Hiroshima University, Higashi-Hiroshima 739-8521, Japan;
| | - Byeong-Cheol Ahn
- BK21 FOUR KNU Convergence Educational Program of Biomedical Sciences for Creative Future Talents, School of Medicine, Kyungpook National University, Daegu 41944, Korea;
- Department of Nuclear Medicine, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu 41944, Korea
134
Belciug S. Learning deep neural networks' architectures using differential evolution. Case study: Medical imaging processing. Comput Biol Med 2022; 146:105623. [PMID: 35751202 PMCID: PMC9112664 DOI: 10.1016/j.compbiomed.2022.105623] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2021] [Revised: 03/03/2022] [Accepted: 03/31/2022] [Indexed: 02/07/2023]
Abstract
The COVID-19 pandemic has changed the way we practice medicine. Cancer patient and obstetric care landscapes have been distorted. Delaying cancer diagnosis or maternal-fetal monitoring has increased the number of preventable deaths and pregnancy complications. One solution is using Artificial Intelligence to help medical personnel establish a diagnosis in a faster and more accurate manner. Deep learning is the state-of-the-art solution for image classification, but researchers typically design fixed deep neural network architectures by hand and afterwards verify their performance. The goal of this paper is to propose a method for learning deep network architectures automatically. As the number of possible architectures increases exponentially with the number of convolutional layers in the network, we propose a differential evolution algorithm to traverse the search space. First, we encode the network structure as a candidate solution in the form of a fixed-length integer array, followed by the initialization of the differential evolution method. A set of random individuals is generated, followed by mutation, recombination, and selection; at each generation, the individuals with the poorest loss values are eliminated and replaced with more competitive individuals. The model has been tested on three cancer datasets containing MRI scans and histopathological images and on two maternal-fetal screening ultrasound datasets. The proposed method has been compared and statistically benchmarked against four state-of-the-art deep learning networks: VGG16, ResNet50, Inception V3, and DenseNet169. The experimental results showed that the model is competitive with other state-of-the-art models, obtaining accuracies between 78.73% and 99.50% depending on the dataset.
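A toy sketch of the search strategy described above, with candidate architectures encoded as fixed-length integer vectors and evolved by differential evolution; the cheap stand-in fitness function replaces the expensive train-and-evaluate step used in the paper, and all hyperparameters are illustrative assumptions.

```python
# Differential evolution over integer-encoded architectures (toy fitness function).
import numpy as np

rng = np.random.default_rng(42)
GENES, POP, GENS, F, CR = 5, 10, 20, 0.8, 0.9
LOW, HIGH = 0, 8                      # e.g. 0 = layer absent, 1-8 = filter-count index

def fitness(genome):
    # placeholder objective: prefer moderately deep nets (real use: validation loss)
    return -abs(np.count_nonzero(genome) - 3) - 0.01 * genome.sum()

pop = rng.integers(LOW, HIGH + 1, size=(POP, GENES))
for _ in range(GENS):
    for i in range(POP):
        a, b, c = pop[rng.choice([j for j in range(POP) if j != i], 3, replace=False)]
        mutant = np.clip(np.round(a + F * (b - c)), LOW, HIGH).astype(int)  # mutation
        cross = rng.random(GENES) < CR                                      # recombination
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) >= fitness(pop[i]):                               # greedy selection
            pop[i] = trial

best = max(pop, key=fitness)
print("best encoding:", best)
```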
135
Xie J, Pu X, He J, Qiu Y, Lu C, Gao W, Wang X, Lu H, Shi J, Xu Y, Madabhushi A, Fan X, Chen J, Xu J. Survival prediction on intrahepatic cholangiocarcinoma with histomorphological analysis on the whole slide images. Comput Biol Med 2022; 146:105520. [PMID: 35537220 DOI: 10.1016/j.compbiomed.2022.105520] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Revised: 04/07/2022] [Accepted: 04/11/2022] [Indexed: 01/06/2023]
Abstract
Intrahepatic cholangiocarcinoma (ICC) is a cancer that originates from the liver's secondary ductal epithelium or its branches. Due to the lack of early-stage clinical symptoms and very high mortality, the 5-year postoperative survival rate is only about 35%. A critical step toward improving patients' survival is accurately predicting their survival status and giving appropriate treatment. The tumor microenvironment of ICC is the immediate environment on which tumor cell growth depends. The differentiation of tumor glands, the stroma status, and the tumor-infiltrating lymphocytes in this environment are closely related to tumor progression, so it is crucial to develop a computerized system for characterizing the tumor environment. This work aims to develop quantitative histomorphological features that describe the lymphocyte density distribution at the cell level and the different tissue components at the tumor level in H&E-stained whole slide images (WSIs), and to explore whether these features can stratify patients' survival. The study comprised 127 patients diagnosed with ICC after surgery; 78 cases were randomly chosen as the modeling set and the remaining 49 cases formed the testing set. Deep learning-based models were developed for tissue segmentation and lymphocyte detection in the WSIs. A total of 107 features, including different types of graph features, were extracted from the WSIs by exploring the histomorphological patterns of the identified tumor tissue and lymphocytes. The top 3 discriminative features were chosen with the mRMR algorithm via 5-fold cross-validation to predict patients' survival. The model's performance was evaluated on the independent testing set, where it achieved an AUC of 0.6818 and a log-rank test p-value of 0.03. A multivariable Cox test was used to control for TNM staging, γ-glutamyltransferase, and peritumoral Glisson's sheath invasion; it showed that our model could independently predict survival risk with a p-value of 0.048 and an HR (95% confidence interval) of 2.90 (1.01-8.32). These results indicate that the tissue-level composition and the global cell-level arrangement of lymphocytes can distinguish ICC patients' survival risk.
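A minimal sketch of the survival-modelling step described above, assuming a per-patient table of image-derived features plus follow-up time and event status (the column names and synthetic data are hypothetical); the paper additionally applies mRMR feature selection before fitting and learns a risk threshold on the modelling set.

```python
# Cox regression on image-derived features, then a median-risk split with a log-rank test.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "feat_lymph_density": rng.normal(size=n),    # illustrative histomorphological features
    "feat_stroma_ratio": rng.normal(size=n),
    "time_months": rng.exponential(30, size=n),
    "event": rng.integers(0, 2, size=n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
df["risk"] = cph.predict_partial_hazard(df)

high = df[df["risk"] > df["risk"].median()]
low = df[df["risk"] <= df["risk"].median()]
res = logrank_test(high["time_months"], low["time_months"],
                   event_observed_A=high["event"], event_observed_B=low["event"])
print(cph.hazard_ratios_)
print("log-rank p-value:", res.p_value)
```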
Affiliation(s)
- Jiawei Xie
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
| | - Xiaohong Pu
- Dept. of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
| | - Jian He
- Dept. of Nuclear Medicine, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
| | - Yudong Qiu
- Dept. of Hepatopancreatobiliary Surgery, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
| | - Cheng Lu
- Dept. of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106, USA
| | - Wei Gao
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
| | - Xiangxue Wang
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
| | - Haoda Lu
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
| | - Jiong Shi
- Dept. of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
| | - Yuemei Xu
- Dept. of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
| | - Anant Madabhushi
- Dept. of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106, USA; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, 44106, USA
| | - Xiangshan Fan
- Dept. of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
| | - Jun Chen
- Dept. of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China.
| | - Jun Xu
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China.
136
Predicting brain structural network using functional connectivity. Med Image Anal 2022; 79:102463. [PMID: 35490597 DOI: 10.1016/j.media.2022.102463] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Revised: 03/06/2022] [Accepted: 04/15/2022] [Indexed: 12/13/2022]
Abstract
Uncovering the non-trivial brain structure-function relationship is fundamentally important for revealing organizational principles of human brain. However, it is challenging to infer a reliable relationship between individual brain structure and function, e.g., the relations between individual brain structural connectivity (SC) and functional connectivity (FC). Brain structure-function displays a distributed and heterogeneous pattern, that is, many functional relationships arise from non-overlapping sets of anatomical connections. This complex relation can be interwoven with widely existed individual structural and functional variations. Motivated by the advances of generative adversarial network (GAN) and graph convolutional network (GCN) in the deep learning field, in this work, we proposed a multi-GCN based GAN (MGCN-GAN) to infer individual SC based on corresponding FC by automatically learning the complex associations between individual brain structural and functional networks. The generator of MGCN-GAN is composed of multiple multi-layer GCNs which are designed to model complex indirect connections in brain network. The discriminator of MGCN-GAN is a single multi-layer GCN which aims to distinguish the predicted SC from real SC. To overcome the inherent unstable behavior of GAN, we designed a new structure-preserving (SP) loss function to guide the generator to learn the intrinsic SC patterns more effectively. Using Human Connectome Project (HCP) dataset and Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset as test beds, our MGCN-GAN model can generate reliable individual SC from FC. This result implies that there may exist a common regulation between specific brain structural and functional architectures across different individuals.
137
Wang Z, Zhu X, Li A, Wang Y, Meng G, Wang M. Global and local attentional feature alignment for domain adaptive nuclei detection in histopathology images. Artif Intell Med 2022; 132:102341. [DOI: 10.1016/j.artmed.2022.102341] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Revised: 06/08/2022] [Accepted: 06/27/2022] [Indexed: 11/02/2022]
138
Shen Y, Ke J. Sampling Based Tumor Recognition in Whole-Slide Histology Image With Deep Learning Approaches. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2022; 19:2431-2441. [PMID: 33630739 DOI: 10.1109/tcbb.2021.3062230] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Histopathological identification of tumor tissue is one of the routine pathological diagnoses for pathologists. Recently, a variety of deep learning-based applications have been applied successfully in computational pathology. Nevertheless, highly efficient and spatially correlated processing of individual patches has always attracted attention in whole-slide image (WSI) analysis. In this paper, we propose a high-throughput system to detect tumor regions in colorectal cancer histology slides precisely. We train a deep convolutional neural network (CNN) model and design a Monte Carlo (MC) adaptive sampling method to estimate the most representative patches in a WSI. Two conditional random field (CRF) models, namely a correction CRF and a prediction CRF, are integrated to model the spatial dependencies of patches. We use three datasets of colorectal cancer from The Cancer Genome Atlas (TCGA) to evaluate the performance of the system. The overall diagnostic time can be reduced by 56.7 to 71.7 percent on slides with varying tumor distributions, with an increase in classification accuracy.
139
Nourse R, Cartledge S, Tegegne T, Gurrin C, Maddison R. Now you see it! Using wearable cameras to gain insights into the lived experience of cardiovascular conditions. Eur J Cardiovasc Nurs 2022; 21:750-755. [PMID: 35714119 DOI: 10.1093/eurjcn/zvac053] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/01/2022] [Revised: 05/27/2022] [Accepted: 06/01/2022] [Indexed: 11/14/2022]
Abstract
Wearable cameras offer an innovative way to discover new insights into the lived experience of people with cardiovascular conditions. Wearable cameras can be used alone or supplement more traditional research methods, such as interviews and participant observations. This paper provides an overview of the benefits of using wearable cameras for data collection and outlines some key considerations for researchers and clinicians interested in this method. We provide a case study describing a study design using wearable cameras and how the data were used.
Affiliation(s)
- Rebecca Nourse
- Institute for Physical Activity and Nutrition, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia
| | - Susie Cartledge
- Institute for Physical Activity and Nutrition, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia.,School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
| | - Teketo Tegegne
- Institute for Physical Activity and Nutrition, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia
| | - Cathal Gurrin
- School of Computing, Dublin City University, Dublin, Ireland
| | - Ralph Maddison
- Institute for Physical Activity and Nutrition, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia
140
Mund A, Brunner AD, Mann M. Unbiased spatial proteomics with single-cell resolution in tissues. Mol Cell 2022; 82:2335-2349. [PMID: 35714588 DOI: 10.1016/j.molcel.2022.05.022] [Citation(s) in RCA: 118] [Impact Index Per Article: 39.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2022] [Revised: 05/05/2022] [Accepted: 05/18/2022] [Indexed: 12/19/2022]
Abstract
Mass spectrometry (MS)-based proteomics has become a powerful technology to quantify the entire complement of proteins in cells or tissues. Here, we review challenges and recent advances in the LC-MS-based analysis of minute protein amounts, down to the level of single cells. Application of this technology revealed that single-cell transcriptomes are dominated by stochastic noise due to the very low number of transcripts per cell, whereas the single-cell proteome appears to be complete. The spatial organization of cells in tissues can be studied by emerging technologies, including multiplexed imaging and spatial transcriptomics, which can now be combined with ultra-sensitive proteomics. Combined with high-content imaging, artificial intelligence and single-cell laser microdissection, MS-based proteomics provides an unbiased molecular readout close to the functional level. Potential applications range from basic biological questions to precision medicine.
Affiliation(s)
- Andreas Mund
- Proteomics Program, The Novo Nordisk Foundation Center for Protein Research, University of Copenhagen, Faculty of Health and Medical Sciences, Blegdamsvej 3B, 2200 Copenhagen, Denmark
| | - Andreas-David Brunner
- Department of Proteomics and Signal Transduction, Max Planck Institute of Biochemistry, Martinsried, Germany; Boehringer Ingelheim Pharma GmbH & Co. KG, Drug Discovery Sciences, Birkendorfer Str. 65, D-88397, Biberach Riss, Germany
| | - Matthias Mann
- Proteomics Program, The Novo Nordisk Foundation Center for Protein Research, University of Copenhagen, Faculty of Health and Medical Sciences, Blegdamsvej 3B, 2200 Copenhagen, Denmark; Department of Proteomics and Signal Transduction, Max Planck Institute of Biochemistry, Martinsried, Germany.
141
Shvetsov N, Grønnesby M, Pedersen E, Møllersen K, Busund LTR, Schwienbacher R, Bongo LA, Kilvaer TK. A Pragmatic Machine Learning Approach to Quantify Tumor-Infiltrating Lymphocytes in Whole Slide Images. Cancers (Basel) 2022; 14:cancers14122974. [PMID: 35740648 PMCID: PMC9221016 DOI: 10.3390/cancers14122974] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Revised: 06/06/2022] [Accepted: 06/10/2022] [Indexed: 11/16/2022] Open
Abstract
Simple Summary Tumor tissues sampled from patients contain prognostic and predictive information beyond what is currently being used in clinical practice. Large-scale digitization enables new ways of exploiting this information. The most promising analysis pipelines include deep learning/artificial intelligence (AI). However, to ensure success, AI often requires a time-consuming curation of data. In our approach, we repurposed AI pipelines and training data for cell segmentation and classification to identify tissue-infiltrating lymphocytes (TILs) in lung cancer tissue. We showed that our approach is able to identify TILs and provide prognostic information in an unseen dataset from lung cancer patients. Our methods can be adapted in myriad ways and may help pave the way for the large-scale deployment of digital pathology. Abstract Increased levels of tumor-infiltrating lymphocytes (TILs) indicate favorable outcomes in many types of cancer. The manual quantification of immune cells is inaccurate and time-consuming for pathologists. Our aim is to leverage a computational solution to automatically quantify TILs in standard diagnostic hematoxylin and eosin-stained sections (H&E slides) from lung cancer patients. Our approach is to transfer an open-source machine learning method for the segmentation and classification of nuclei in H&E slides trained on public data to TIL quantification without manual labeling of the data. Our results show that the resulting TIL quantification correlates to the patient prognosis and compares favorably to the current state-of-the-art method for immune cell detection in non-small cell lung cancer (current standard CD8 cells in DAB-stained TMAs HR 0.34, 95% CI 0.17–0.68 vs. TILs in HE WSIs: HoVer-Net PanNuke Aug Model HR 0.30, 95% CI 0.15–0.60 and HoVer-Net MoNuSAC Aug model HR 0.27, 95% CI 0.14–0.53). Our approach bridges the gap between machine learning research, translational clinical research and clinical implementation. However, further validation is warranted before implementation in a clinical setting.
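A small sketch of the quantification step, assuming per-nucleus class labels such as those produced by a HoVer-Net-style model are already available (the slide IDs, class names and synthetic data below are placeholders): the per-slide TIL fraction is computed and dichotomized for downstream survival analysis.

```python
# Per-slide TIL score from per-nucleus class predictions, then a median split.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
nuclei = pd.DataFrame({
    "slide_id": rng.choice(["s1", "s2", "s3"], size=5000),
    "cell_type": rng.choice(["tumor", "lymphocyte", "stroma", "other"],
                            size=5000, p=[0.5, 0.2, 0.2, 0.1]),
})

til_score = (nuclei.groupby("slide_id")["cell_type"]
             .apply(lambda s: (s == "lymphocyte").mean())
             .rename("til_fraction"))

# dichotomize at the cohort median for downstream survival analysis
high_til = (til_score > til_score.median()).rename("high_til")
print(pd.concat([til_score, high_til], axis=1))
```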
Affiliation(s)
- Nikita Shvetsov
- Department of Computer Science, UiT The Arctic University of Norway, N-9038 Tromsø, Norway; (N.S.); (E.P.); (L.A.B.)
| | - Morten Grønnesby
- Department of Medical Biology, UiT The Arctic University of Norway, N-9038 Tromsø, Norway; (M.G.); (L.-T.R.B.); (R.S.)
| | - Edvard Pedersen
- Department of Computer Science, UiT The Arctic University of Norway, N-9038 Tromsø, Norway; (N.S.); (E.P.); (L.A.B.)
| | - Kajsa Møllersen
- Department of Community Medicine, UiT The Arctic University of Norway, N-9038 Tromsø, Norway;
| | - Lill-Tove Rasmussen Busund
- Department of Medical Biology, UiT The Arctic University of Norway, N-9038 Tromsø, Norway; (M.G.); (L.-T.R.B.); (R.S.)
- Department of Clinical Pathology, University Hospital of North Norway, N-9038 Tromsø, Norway
| | - Ruth Schwienbacher
- Department of Medical Biology, UiT The Arctic University of Norway, N-9038 Tromsø, Norway; (M.G.); (L.-T.R.B.); (R.S.)
- Department of Clinical Pathology, University Hospital of North Norway, N-9038 Tromsø, Norway
| | - Lars Ailo Bongo
- Department of Computer Science, UiT The Arctic University of Norway, N-9038 Tromsø, Norway; (N.S.); (E.P.); (L.A.B.)
| | - Thomas Karsten Kilvaer
- Department of Oncology, University Hospital of North Norway, N-9038 Tromsø, Norway
- Department of Clinical Medicine, UiT The Arctic University of Norway, N-9038 Tromsø, Norway
- Correspondence:
142
Wang CW, Chang CC, Lee YC, Lin YJ, Lo SC, Hsu PC, Liou YA, Wang CH, Chao TK. Weakly supervised deep learning for prediction of treatment effectiveness on ovarian cancer from histopathology images. Comput Med Imaging Graph 2022; 99:102093. [PMID: 35752000 DOI: 10.1016/j.compmedimag.2022.102093] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2021] [Revised: 05/13/2022] [Accepted: 06/03/2022] [Indexed: 11/30/2022]
Abstract
Despite the progress made during the last two decades in the surgery and chemotherapy of ovarian cancer, more than 70% of patients with advanced disease experience recurrence and die. Surgical debulking of tumors following chemotherapy is the conventional treatment for advanced carcinoma, but patients receiving such treatment remain at great risk of recurrence and of developing drug resistance, and only about 30% of the women affected will be cured. Bevacizumab is a humanized monoclonal antibody that blocks VEGF signaling in cancer, inhibits angiogenesis, and causes tumor shrinkage, and it has recently been approved by the FDA as a monotherapy for advanced ovarian cancer in combination with chemotherapy. Considering the cost, the potential toxicity, and the finding that only a portion of patients will benefit from these drugs, the identification of new predictive methods for the treatment of ovarian cancer remains an urgent unmet medical need. In this study, we develop weakly supervised deep learning approaches to accurately predict the therapeutic effect of bevacizumab in ovarian cancer patients from histopathological hematoxylin and eosin-stained whole slide images, without any pathologist-provided locally annotated regions. To the authors' best knowledge, this is the first model demonstrated to be effective for predicting the response of patients with epithelial ovarian cancer to bevacizumab. Quantitative evaluation on a whole-section dataset shows that the proposed method achieves high accuracy (0.882 ± 0.06), precision (0.921 ± 0.04), recall (0.912 ± 0.03), and F-measure (0.917 ± 0.07) using 5-fold cross-validation, and outperforms two state-of-the-art deep learning approaches (Coudray et al., 2018; Campanella et al., 2019). For an independent TMA testing set, the three proposed methods obtain promising results with high recall (sensitivity) of 0.946, 0.893, and 0.964, respectively. The results suggest that the proposed method could be useful for guiding treatment by assisting in filtering out patients without a positive therapeutic response, sparing them further treatment, while keeping patients with a positive response in the treatment process. Furthermore, according to Cox proportional hazards analysis, patients predicted by the proposed model to be non-responders had a much higher risk of cancer recurrence (hazard ratio = 13.727) than patients predicted to be responders, with statistical significance (p < 0.05).
Affiliation(s)
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan; Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei, Taiwan
| | - Cheng-Chang Chang
- Department of Gynecology and Obstetrics, Tri-Service General Hospital, Taipei, Taiwan; Graduate Institute of Medical Sciences, National Defense Medical Center, Taipei, Taiwan
| | - Yu-Ching Lee
- Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei, Taiwan
| | - Yi-Jia Lin
- Department of Pathology, Tri-Service General Hospital, Taipei, Taiwan; Institute of Pathology and Parasitology, National Defense Medical Center, Taipei, Taiwan
| | - Shih-Chang Lo
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
| | - Po-Chao Hsu
- Department of Gynecology and Obstetrics, Tri-Service General Hospital, Taipei, Taiwan; Graduate Institute of Medical Sciences, National Defense Medical Center, Taipei, Taiwan
| | - Yi-An Liou
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
| | - Chih-Hung Wang
- Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - Tai-Kuang Chao
- Department of Pathology, Tri-Service General Hospital, Taipei, Taiwan; Institute of Pathology and Parasitology, National Defense Medical Center, Taipei, Taiwan.
143
Bai T, Zhang Z, Guo S, Zhao C, Luo X. Semi-Supervised Cell Detection with Reliable Pseudo-Labels. J Comput Biol 2022; 29:1061-1073. [PMID: 35704885 DOI: 10.1089/cmb.2022.0108] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Pathological images play an important role in the diagnosis, treatment, and prognosis of cancer. Usually, pathological images contain complex environments and cells of different shapes. Pathologists consume a lot of time and labor costs when analyzing and discriminating the cells in the images. Therefore, fully annotated pathological image data sets are not easy to obtain. In view of the problem of insufficient labeled data, we input a large number of unlabeled images into the pretrained model to generate accurate pseudo-labels. In this article, we propose two methods to improve the quality of pseudo-labels, namely, the pseudo-labeling based on adaptive threshold and the pseudo-labeling based on cell count. These two pseudo-labeling methods take into account the distribution of cells in different pathological images when removing background noise, and ensure that accurate pseudo-labels are generated for each unlabeled image. Meanwhile, when pseudo-labels are used for model retraining, we perform data distillation on the feature maps of unlabeled images through an attention mechanism, which further improves the quality of training data. In addition, we also propose a multi-task learning model, which learns the cell detection task and the cell count task simultaneously, and improves the performance of cell detection through feature sharing. We verified the above methods on three different data sets, and the results show that the detection effect of the model with a large number of unlabeled images involved in retraining is improved by 9%-13% compared with the model that only uses a small number of labeled images for pretraining. Moreover, our methods have good applicability on the three data sets.
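A hedged sketch of an adaptive-threshold pseudo-labelling rule in the spirit of the first method above: each unlabelled image receives its own confidence cut-off derived from its score distribution, so sparsely and densely populated images are filtered differently; the floor and quantile values are assumptions, not the paper's settings.

```python
# Per-image adaptive thresholding of detection confidences to produce pseudo-labels.
import numpy as np

def adaptive_pseudo_labels(scores, floor=0.3, quantile=0.8):
    """scores: per-candidate detection confidences for one unlabelled image.
    Keep candidates above max(floor, per-image quantile of the scores)."""
    scores = np.asarray(scores, dtype=float)
    if scores.size == 0:
        return np.zeros(0, dtype=bool)
    thr = max(floor, float(np.quantile(scores, quantile)))
    return scores >= thr

dense_image = np.random.default_rng(0).uniform(0.2, 0.9, size=200)   # many cells
sparse_image = np.random.default_rng(1).uniform(0.0, 0.5, size=20)   # few, faint cells
print(adaptive_pseudo_labels(dense_image).sum(),
      adaptive_pseudo_labels(sparse_image).sum())
```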
Affiliation(s)
- Tian Bai
- College of Computer Science and Technology, Jilin University, Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun, China
| | - Zhenting Zhang
- College of Computer Science and Technology, Jilin University, Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun, China
| | - Shuyu Guo
- College of Computer Science and Technology, Jilin University, Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun, China
| | - Chen Zhao
- College of Computer Science and Technology, Jilin University, Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun, China
| | - Xiao Luo
- Department of Breast Surgery, China-Japan Union Hospital of Jilin University, Changchun, China
144
MITNET: a novel dataset and a two-stage deep learning approach for mitosis recognition in whole slide images of breast cancer tissue. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07441-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Mitosis assessment of breast cancer has a strong prognostic importance and is visually evaluated by pathologists. The inter- and intra-observer variability of this assessment is high. In this paper, a two-stage deep learning approach, named MITNET, has been applied to automatically detect nuclei and classify mitoses in whole slide images (WSI) of breast cancer. Moreover, this paper introduces two new datasets. The first dataset, used for nucleus detection in the WSIs, contains 139,124 annotated nuclei in 1749 patches extracted from 115 WSIs of breast cancer tissue; the second dataset consists of 4908 mitotic and 4908 non-mitotic cell image samples extracted from 214 WSIs and is used for mitosis classification. The created datasets are used to train the MITNET network, which consists of two deep learning architectures, called MITNET-det and MITNET-rec, to isolate nuclei and identify mitoses in WSIs. In the MITNET-det architecture, CSPDarknet and a Path Aggregation Network (PANet) are used to extract and fuse features from nucleus images, and a You Only Look Once (scaled-YOLOv4) detection strategy is then employed to detect nuclei at three different scales. In the classification part, the detected isolated nucleus images are passed through the proposed MITNET-rec deep learning architecture to identify mitoses in the WSIs. Various deep learning classifiers and the proposed classifier are trained with publicly available mitosis datasets (MIDOG and ATYPIA) and then validated on our created dataset. The results verify that deep learning-based classifiers trained on MIDOG and ATYPIA have difficulty recognizing mitoses on our dataset, which shows that the created mitosis dataset has unique features and characteristics. Besides this, the proposed classifier significantly outperforms the state-of-the-art classifiers, achieving a 68.7% F1-score and a 49.0% F1-score on the MIDOG and the created mitosis datasets, respectively. Moreover, the experimental results reveal that the overall MITNET framework detects nuclei in WSIs with high detection rates and recognizes mitotic cells with a high F1-score, which can improve the accuracy of pathologists' decisions.
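As a generic illustration of the two-stage detect-then-classify design (not the MITNET implementation), the sketch below crops candidate nucleus boxes from a patch and passes the crops to a small stand-in classifier; in MITNET the boxes would come from the scaled-YOLOv4-based MITNET-det and the classifier would be MITNET-rec.

```python
# Stage 1 output (boxes) is faked here; stage 2 classifies each cropped nucleus.
import torch
import torch.nn as nn

classifier = nn.Sequential(               # stand-in for the mitosis classifier
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 2),
)

wsi_patch = torch.rand(3, 512, 512)                       # one H&E patch
boxes = [(40, 60, 104, 124), (300, 310, 364, 374)]        # fake (x1, y1, x2, y2) detections

crops = []
for x1, y1, x2, y2 in boxes:
    crop = wsi_patch[:, y1:y2, x1:x2]                     # crop the detected nucleus
    crop = nn.functional.interpolate(crop.unsqueeze(0), size=(64, 64))
    crops.append(crop)

logits = classifier(torch.cat(crops, dim=0))
is_mitosis = logits.argmax(dim=1)                         # 1 = mitotic (by convention here)
print(is_mitosis.tolist())
```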
145
Artificial Intelligence-Based Tissue Phenotyping in Colorectal Cancer Histopathology Using Visual and Semantic Features Aggregation. MATHEMATICS 2022. [DOI: 10.3390/math10111909] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Tissue phenotyping of the tumor microenvironment has a decisive role in digital profiling of intra-tumor heterogeneity, epigenetics, and progression of cancer. Most of the existing methods for tissue phenotyping often rely on time-consuming and error-prone manual procedures. Recently, with the advent of advanced technologies, these procedures have been automated using artificial intelligence techniques. In this paper, a novel deep histology heterogeneous feature aggregation network (HHFA-Net) is proposed based on visual and semantic information fusion for the detection of tissue phenotypes in colorectal cancer (CRC). We adopted and tested various data augmentation techniques to avoid computationally expensive stain normalization procedures and handle limited and imbalanced data problems. Three publicly available datasets are used in the experiments: CRC tissue phenotyping (CRC-TP), CRC histology (CRCH), and colon cancer histology (CCH). The proposed HHFA-Net achieves higher accuracies than the state-of-the-art methods for tissue phenotyping in CRC histopathology images.
146
Hoorali F, Khosravi H, Moradi B. Automatic microscopic diagnosis of diseases using an improved UNet++ architecture. Tissue Cell 2022; 76:101816. [DOI: 10.1016/j.tice.2022.101816] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 05/02/2022] [Accepted: 05/04/2022] [Indexed: 12/01/2022]
147
Bai T, Xu J, Zhang Z, Guo S, Luo X. Context-aware learning for cancer cell nucleus recognition in pathology images. Bioinformatics 2022; 38:2892-2898. [PMID: 35561198 DOI: 10.1093/bioinformatics/btac167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 02/28/2022] [Accepted: 03/17/2022] [Indexed: 11/13/2022] Open
Abstract
MOTIVATION Nucleus identification supports many quantitative analysis studies that rely on nuclei positions or categories. Contextual information in pathology images refers to information near the to-be-recognized cell, which can be very helpful for nucleus subtyping. Current CNN-based methods do not explicitly encode contextual information within the input images and point annotations. RESULTS In this article, we propose a novel framework with context to locate and classify nuclei in microscopy image data. Specifically, first we use state-of-the-art network architectures to extract multi-scale feature representations from multi-field-of-view, multi-resolution input images and then conduct feature aggregation on-the-fly with stacked convolutional operations. Then, two auxiliary tasks are added to the model to effectively utilize the contextual information. One for predicting the frequencies of nuclei, and the other for extracting the regional distribution information of the same kind of nuclei. The entire framework is trained in an end-to-end, pixel-to-pixel fashion. We evaluate our method on two histopathological image datasets with different tissue and stain preparations, and experimental results demonstrate that our method outperforms other recent state-of-the-art models in nucleus identification. AVAILABILITY AND IMPLEMENTATION The source code of our method is freely available at https://github.com/qjxjy123/DonRabbit. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
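A small sketch of the multi-field-of-view, multi-resolution input described above: concentric crops of increasing extent around one location are resized to a common shape so that a network sees both the nucleus and its surrounding context; the window sizes and output size are illustrative choices, not the authors' settings.

```python
# Build a stack of concentric, progressively wider crops around one location.
import numpy as np
from skimage.transform import resize

def multi_fov_stack(image, center, windows=(64, 128, 256), out_size=64):
    """image: (H, W, 3) array; center: (row, col). Returns (len(windows), out, out, 3)."""
    r, c = center
    views = []
    for w in windows:
        half = w // 2
        r0, c0 = max(r - half, 0), max(c - half, 0)
        crop = image[r0:r0 + w, c0:c0 + w]                 # wider crop = more context
        views.append(resize(crop, (out_size, out_size, 3), preserve_range=True))
    return np.stack(views)

img = np.random.rand(1024, 1024, 3)          # stand-in for an H&E tile
stack = multi_fov_stack(img, center=(500, 500))
print(stack.shape)                            # (3, 64, 64, 3)
```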
Affiliation(s)
- Tian Bai
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
| | - Jiayu Xu
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
| | - Zhenting Zhang
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
| | - Shuyu Guo
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
| | - Xiao Luo
- Department of Breast Surgery, China-Japan Union Hospital of Jilin University, 130033 Changchun, China
148
Lin A, Qi C, Li M, Guan R, Imyanitov EN, Mitiushkina NV, Cheng Q, Liu Z, Wang X, Lyu Q, Zhang J, Luo P. Deep Learning Analysis of the Adipose Tissue and the Prediction of Prognosis in Colorectal Cancer. Front Nutr 2022; 9:869263. [PMID: 35634419 PMCID: PMC9131178 DOI: 10.3389/fnut.2022.869263] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 04/11/2022] [Indexed: 11/18/2022] Open
Abstract
Research has shown that the lipid microenvironment surrounding colorectal cancer (CRC) is closely associated with the occurrence, development, and metastasis of CRC. According to pathological images from the National Center for Tumor diseases (NCT), the University Medical Center Mannheim (UMM) database and the ImageNet data set, a model called VGG19 was pre-trained. A deep convolutional neural network (CNN), VGG19CRC, was trained by the migration learning method. According to the VGG19CRC model, adipose tissue scores were calculated for TCGA-CRC hematoxylin and eosin (H&E) images and images from patients at Zhujiang Hospital of Southern Medical University and First People's Hospital of Chenzhou. Kaplan-Meier (KM) analysis was used to compare the overall survival (OS) of patients. The XCell and MCP-Counter algorithms were used to evaluate the immune cell scores of the patients. Gene set enrichment analysis (GSEA) and single-sample GSEA (ssGSEA) were used to analyze upregulated and downregulated pathways. In TCGA-CRC, patients with high-adipocytes (high-ADI) CRC had significantly shorter OS times than those with low-ADI CRC. In a validation queue from Zhujiang Hospital of Southern Medical University (Local-CRC1), patients with high-ADI had worse OS than CRC patients with low-ADI. In another validation queue from First People's Hospital of Chenzhou (Local-CRC2), patients with low-ADI CRC had significantly longer OS than patients with high-ADI CRC. We developed a deep convolution network to segment various tissues from pathological H&E images of CRC and automatically quantify ADI. This allowed us to further analyze and predict the survival of CRC patients according to information from their segmented pathological tissue images, such as tissue components and the tumor microenvironment.
Affiliation(s)
- Anqi Lin
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Chang Qi
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Mujiao Li
- College of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Department of Information, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Rui Guan
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Evgeny N. Imyanitov
- Department of Tumor Growth Biology, N.N. Petrov Institute of Oncology, St. Petersburg, Russia
| | - Natalia V. Mitiushkina
- Department of Tumor Growth Biology, N.N. Petrov Institute of Oncology, St. Petersburg, Russia
| | - Quan Cheng
- Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China
| | - Zaoqu Liu
- Department of Interventional Radiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Xiaojun Wang
- First People's Hospital of Chenzhou City, Chenzhou, China
| | - Qingwen Lyu
- Department of Information, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- *Correspondence: Qingwen Lyu
| | - Jian Zhang
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Jian Zhang
| | - Peng Luo
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Peng Luo
149
Rani P, Dutta K, Kumar V. Artificial intelligence techniques for prediction of drug synergy in malignant diseases: Past, present, and future. Comput Biol Med 2022; 144:105334. [DOI: 10.1016/j.compbiomed.2022.105334] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2021] [Revised: 02/13/2022] [Accepted: 02/13/2022] [Indexed: 12/22/2022]
150
Xu C, Zhang Y, Fan X, Lan X, Ye X, Wu T. An efficient fluorescence in situ hybridization (FISH)-based circulating genetically abnormal cells (CACs) identification method based on Multi-scale MobileNet-YOLO-V4. Quant Imaging Med Surg 2022; 12:2961-2976. [PMID: 35502367 PMCID: PMC9014158 DOI: 10.21037/qims-21-909] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2021] [Accepted: 02/11/2022] [Indexed: 11/06/2023]
Abstract
BACKGROUND Circulating tumor cells (CTCs) acting as "liquid biopsy" of cancer are cells that have been shed from the primary tumor, which cause the development of a secondary tumor in a distant organ site, leading to cancer metastasis. Recent research suggests that CTCs with abnormalities in gene copy numbers in mononuclear cell-enriched peripheral blood samples, namely circulating genetically abnormal cells (CACs), could be used as a non-invasive decision tool to detect patients with benign pulmonary nodules. Such cells are identified by counting the fluorescence signals of fluorescence in situ hybridization (FISH). However, owing to the rarity of CACs in the blood, identification of CACs using this technique is time-consuming and is a drawback of this method. METHODS This study has proposed an efficient and automatic FISH-based CACs identification approach which is based on a combination of the high accuracy of You Only Look Once (YOLO)-V4 and the lightweight and rapidness of MobileNet-V3. The backbone of YOLO-V4 was replaced with MobileNet-V3 to improve the detection efficiency and prevent overfitting, and the architecture of YOLO-V4 was optimized by utilizing a new feature map with a larger scale to enable the enhanced detection ability for small targets. RESULTS We trained and tested the proposed model using a dataset containing more than 7,000 cells based on five-fold cross-validation. All the images in the dataset were 2,448×2,048 (pixels) in size. The number of cells in each image was >70. The accuracy of four-color fluorescence signals detection for our proposed model were all approximately 98%, and the mean average precision (mAP) were close to 100%. The final outcome of the developed method was the type of cells, i.e., normal cells, CACs, gaining cells or deletion cells. The method had a CACs identification accuracy of 93.86% (similar to an expert pathologist), and a detection speed that was about 500 times greater than that of a pathologist. CONCLUSIONS The developed method could greatly increase the review accuracy, enhance the efficiency of reviewers, and reduce the review turnaround time during CACs identification.
Affiliation(s)
- Chao Xu
- China Telecommunication Technology Labs, China Academy of Information and Communications Technology, Beijing, China
| | - Yi Zhang
- China Telecommunication Technology Labs, China Academy of Information and Communications Technology, Beijing, China
| | - Xianjun Fan
- Department of Product Development, Zhuhai Sanmed Biotech Ltd., Zhuhai, China
| | - Xingjie Lan
- Department of Data Operation, Zhuhai Sanmed Biotech Ltd., Zhuhai, China
| | - Xin Ye
- Department of Product Development, Zhuhai Sanmed Biotech Ltd., Zhuhai, China
| | - Tongning Wu
- China Telecommunication Technology Labs, China Academy of Information and Communications Technology, Beijing, China