51. Thakur N, Alam MR, Abdul-Ghafar J, Chong Y. Recent Application of Artificial Intelligence in Non-Gynecological Cancer Cytopathology: A Systematic Review. Cancers (Basel) 2022; 14:3529. [PMID: 35884593] [PMCID: PMC9316753] [DOI: 10.3390/cancers14143529]
Abstract
State-of-the-art artificial intelligence (AI) has recently gained considerable interest in the healthcare sector and has provided solutions to problems through automated diagnosis. Cytological examination is a crucial step in the initial diagnosis of cancer, although it shows limited diagnostic efficacy. Recently, AI applications in the processing of cytopathological images have shown promising results despite the elementary level of the technology. Here, we performed a systematic review with a quantitative analysis of recent AI applications in non-gynecological (non-GYN) cancer cytology to understand the current technical status. We searched the major online databases, including MEDLINE, Cochrane Library, and EMBASE, for relevant English articles published from January 2010 to January 2021. The search query terms were: "artificial intelligence", "image processing", "deep learning", "cytopathology", and "fine-needle aspiration cytology". Out of 17,000 studies, only 26 studies (26 models) were included in the full-text review, and 13 studies were included in the quantitative analysis. The AI models fell into eight classes according to target organ: thyroid (n = 11, 39%), urinary bladder (n = 6, 21%), lung (n = 4, 14%), breast (n = 2, 7%), pleural effusion (n = 2, 7%), ovary (n = 1, 4%), pancreas (n = 1, 4%), and prostate (n = 1, 4%). Most of the studies focused on classification and segmentation tasks. Although most of the studies showed impressive results, the sizes of the training and validation datasets were limited. Overall, AI is promising for non-GYN cancer cytopathology analysis, as it is for histopathology and gynecological cytology. However, the lack of well-annotated, large-scale datasets with Z-stacking and external cross-validation was the major limitation found across all studies. Future studies with larger, high-quality annotated datasets and external validation are required.
Affiliation(s)
- Yosep Chong
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea; (N.T.); (M.R.A.); (J.A.-G.)
52. Chen H, Liu J, Hua C, Feng J, Pang B, Cao D, Li C. Accurate classification of white blood cells by coupling pre-trained ResNet and DenseNet with SCAM mechanism. BMC Bioinformatics 2022; 23:282. [PMID: 35840897] [PMCID: PMC9287918] [DOI: 10.1186/s12859-022-04824-6]
Abstract
BACKGROUND Counting the different kinds of white blood cells (WBCs) provides a good quantitative description of a person's health status and forms a critical basis for the early treatment of several diseases, so correct classification of WBCs is crucial. Unfortunately, manual microscopic evaluation is complicated, time-consuming, and subjective, which limits its statistical reliability. Hence, automatic and accurate identification of WBCs is of great benefit. However, the similarity between WBC samples and the imbalance and insufficiency of samples in the field of medical computer vision pose challenges to intelligent and accurate classification of WBCs. To tackle these challenges, this study proposes a deep learning framework that couples pre-trained ResNet and DenseNet with SCAM (spatial and channel attention module) for accurately classifying WBCs. RESULTS In the proposed network, ResNet and DenseNet enable information reuse and new information exploration, respectively, which are both important and compatible for learning good representations. Meanwhile, the SCAM module sequentially infers attention maps from the two separate dimensions of space and channel to emphasize important information or suppress unnecessary information, further enhancing the representation power of the model and helping to overcome the limitation of sample similarity. Moreover, data augmentation and transfer learning are used to handle data imbalance and insufficiency. In addition, the mixup approach is adopted to model the vicinity relation across training samples of different categories and increase the generalizability of the model. By comparison with five representative networks on our developed LDWBC dataset and the publicly available LISC, BCCD, and Raabin WBC datasets, our model achieves the best overall performance. We also implement occlusion testing with the gradient-weighted class activation mapping (Grad-CAM) algorithm to improve the interpretability of our model. CONCLUSION The proposed method has great potential for application in intelligent and accurate classification of WBCs.
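As a concrete illustration of the attention idea, the following PyTorch sketch implements a CBAM-style block that infers channel attention and then spatial attention, matching the sequential space/channel scheme the abstract describes; the reduction ratio, pooling choices, and kernel size are assumptions, not the paper's exact SCAM design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to average- and max-pooled channel descriptors
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        return torch.sigmoid(avg + mx)[:, :, None, None]

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Stack channel-wise mean and max maps, then learn a spatial mask
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return torch.sigmoid(self.conv(s))

class SCAMBlock(nn.Module):
    """Sequential channel-then-spatial attention (CBAM-style sketch)."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)   # reweight channels
        x = x * self.sa(x)   # reweight spatial positions
        return x
```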
Affiliation(s)
- Hua Chen
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
- Juan Liu
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
- Chunbing Hua
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
- Jing Feng
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
- Baochuan Pang
- Landing Artificial Intelligence Center for Pathological Diagnosis, Wuhan, 430072, China
- Dehua Cao
- Landing Artificial Intelligence Center for Pathological Diagnosis, Wuhan, 430072, China
- Cheng Li
- Landing Artificial Intelligence Center for Pathological Diagnosis, Wuhan, 430072, China
53. Zak J, Grzeszczyk MK, Pater A, Roszkowiak L, Siemion K, Korzynska A. Cell image augmentation for classification task using GANs on Pap smear dataset. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.07.003]
54. Kupas D, Harangi B. Classification of Pap-smear cell images using deep convolutional neural network accelerated by hand-crafted features. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1452-1455. [PMID: 36083935] [DOI: 10.1109/embc48229.2022.9871171]
Abstract
The classification of cells extracted from Pap smears is in most cases done using neural network architectures. Nevertheless, the importance of features extracted with digital image processing is also discussed in many related articles. Decision support systems and automated analysis tools for Pap smears often use these kinds of manually extracted, global features based on clinical expert opinion. In this paper, a solution is introduced in which 29 different contextual features are combined with local features learned by a neural network to increase classification performance. The weight distribution between the features is also investigated, leading to the conclusion that the numerical features indeed form an important part of the learning process. Furthermore, extensive testing of the presented methods is done using a dataset annotated by clinical experts. An increase of 3.2% in F1-score can be observed when using the combination of contextual and local features. Clinical Relevance - The analysis of images extracted from digital Pap tests using modern machine learning tools is discussed in many scientific papers. Manual classification of the cells is time-consuming and expensive, requiring a large amount of manual labor, and its results can be uncertain due to interobserver variability. Considering this, any result that leads to a more reliable, highly accurate classification method is valuable in the field of cervical cancer screening.
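The core idea, concatenating a small vector of hand-crafted features with CNN-learned features before the classifier, can be sketched as follows; the ResNet-18 backbone, embedding size, and head dimensions are illustrative assumptions (the paper uses 29 contextual features, but its network details are not given here).

```python
import torch
import torch.nn as nn
from torchvision import models

class FusionClassifier(nn.Module):
    """CNN embedding concatenated with hand-crafted features before the head."""
    def __init__(self, n_handcrafted: int = 29, n_classes: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Keep everything up to (and including) global average pooling
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])
        self.head = nn.Sequential(
            nn.Linear(512 + n_handcrafted, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, n_classes),
        )

    def forward(self, image, handcrafted):
        emb = self.backbone(image).flatten(1)          # (B, 512)
        fused = torch.cat([emb, handcrafted], dim=1)   # (B, 512 + 29)
        return self.head(fused)

# toy usage
model = FusionClassifier()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 29))
```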
55. Multi-class nucleus detection and classification using deep convolutional neural network with enhanced high dimensional dissimilarity translation model on cervical cells. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.06.003]
56. van der Kamp A, Waterlander TJ, de Bel T, van der Laak J, van den Heuvel-Eibrink MM, Mavinkurve-Groothuis AMC, de Krijger RR. Artificial Intelligence in Pediatric Pathology: The Extinction of a Medical Profession or the Key to a Bright Future? Pediatr Dev Pathol 2022; 25:380-387. [PMID: 35238696] [DOI: 10.1177/10935266211059809]
Abstract
Artificial Intelligence (AI) has attracted increasing interest over the past decade. While digital image analysis (DIA) is already being used in radiology, it is still in its infancy in pathology. One of the reasons is that large-scale digitization of glass slides has only recently become available. With the advent of digital slide scanners, which digitize glass slides into whole slide images, many labs are now in a transition phase towards digital pathology; however, only a few departments worldwide are currently fully digital. Digital pathology provides the ability to annotate large datasets and train computers to develop and validate robust algorithms, similar to radiology. In this opinionated overview, we give a brief introduction to AI in pathology, discuss the potential positive and negative implications, and speculate about the future role of AI in the field of pediatric pathology.
Affiliation(s)
- Ananda van der Kamp
- Princess Máxima Center for Pediatric Oncology, Utrecht, the Netherlands
- Tomas J Waterlander
- Princess Máxima Center for Pediatric Oncology, Utrecht, the Netherlands
- Thomas de Bel
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Ronald R de Krijger
- Princess Máxima Center for Pediatric Oncology, Utrecht, the Netherlands; Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
57. Yin HL, Jiang Y, Xu Z, Jia HH, Lin GW. Combined diagnosis of multiparametric MRI-based deep learning models facilitates differentiating triple-negative breast cancer from fibroadenoma magnetic resonance BI-RADS 4 lesions. J Cancer Res Clin Oncol 2022; 149:2575-2584. [PMID: 35771263] [DOI: 10.1007/s00432-022-04142-7]
Abstract
PURPOSE To investigate the value of the combined diagnosis of multiparametric MRI-based deep learning models to differentiate triple-negative breast cancer (TNBC) from fibroadenoma magnetic resonance Breast Imaging-Reporting and Data System category 4 (BI-RADS 4) lesions and to evaluate whether the combined diagnosis of these models could improve the diagnostic performance of radiologists. METHODS A total of 319 female patients with 319 pathologically confirmed BI-RADS 4 lesions were randomly divided into training, validation, and testing sets in this retrospective study. The three models were established based on contrast-enhanced T1-weighted imaging, diffusion-weighted imaging, and T2-weighted imaging using the training and validation sets. The artificial intelligence (AI) combination score was calculated according to the results of three models. The diagnostic performances of four radiologists with and without AI assistance were compared with the AI combination score on the testing set. The area under the curve (AUC), sensitivity, specificity, accuracy, and weighted kappa value were calculated to assess the performance. RESULTS The AI combination score yielded an excellent performance (AUC = 0.944) on the testing set. With AI assistance, the AUC for the diagnosis of junior radiologist 1 (JR1) increased from 0.833 to 0.885, and that for JR2 increased from 0.823 to 0.876. The AUCs of senior radiologist 1 (SR1) and SR2 slightly increased from 0.901 and 0.950 to 0.925 and 0.975 after AI assistance, respectively. CONCLUSION Combined diagnosis of multiparametric MRI-based deep learning models to differentiate TNBC from fibroadenoma magnetic resonance BI-RADS 4 lesions can achieve comparable performance to that of SRs and improve the diagnostic performance of JRs.
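A minimal sketch of how per-sequence model outputs might be combined into a single score: the paper's exact combination rule is not given in the abstract, so simple probability averaging is used here as a stand-in, with placeholder predictions and labels.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical malignancy probabilities from the three sequence-specific models
# (contrast-enhanced T1, DWI, T2); all values below are placeholders.
p_t1, p_dwi, p_t2 = np.random.rand(3, 50)
y = np.array([0, 1] * 25)  # placeholder labels (0 = fibroadenoma, 1 = TNBC)

combo_score = (p_t1 + p_dwi + p_t2) / 3.0  # assumed averaging rule
print("AUC of combined score:", roc_auc_score(y, combo_score))
```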
Affiliation(s)
- Hao-Lin Yin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Jing'an District, 221# Yan'anxi Road, Shanghai, 200040, China
- Yu Jiang
- Department of Radiology, West China Hospital of Sichuan University, 37# Guo Xue Xiang, Chengdu, Sichuan, China
- Zihan Xu
- Lung Cancer Center, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital of Sichuan University, 37# Guo Xue Xiang, Chengdu, Sichuan, China
- Hui-Hui Jia
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Jing'an District, 221# Yan'anxi Road, Shanghai, 200040, China
- Guang-Wu Lin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Jing'an District, 221# Yan'anxi Road, Shanghai, 200040, China
58. Pramanik R, Biswas M, Sen S, Souza Júnior LAD, Papa JP, Sarkar R. A fuzzy distance-based ensemble of deep models for cervical cancer detection. Comput Methods Programs Biomed 2022; 219:106776. [PMID: 35398621] [DOI: 10.1016/j.cmpb.2022.106776]
Abstract
BACKGROUND AND OBJECTIVE Cervical cancer is one of the leading causes of women's death. Like any other disease, early detection of cervical cancer and treatment under the best possible medical advice are the paramount steps that should be taken to minimize the after-effects of contracting this disease. Pap smear images are one of the most effective ways to detect the presence of this type of cancer. This article proposes a fuzzy distance-based ensemble approach composed of deep learning models for cervical cancer detection in Pap smear images. METHODS We employ three transfer learning models for this task: Inception V3, MobileNet V2, and Inception ResNet V2, with additional layers to learn data-specific features. To aggregate the outcomes of these models, we propose a novel ensemble method based on the minimization of error values between the observed and the ground truth. For samples with multiple predictions, we first take three distance measures, i.e., Euclidean, Manhattan (City-Block), and Cosine, for each class from their corresponding best possible solution. We then defuzzify these distance measures using the product rule to calculate the final predictions. RESULTS In the current experiments, we achieved accuracies of 95.30%, 93.92%, and 96.44% when Inception V3, MobileNet V2, and Inception ResNet V2 were run individually. After applying the proposed ensemble technique, the performance reaches 96.96%, which is higher than that of the individual models. CONCLUSION Experimental outcomes on three publicly available datasets show that the proposed model presents competitive results compared to state-of-the-art methods. The proposed approach provides an end-to-end classification technique to detect cervical cancer from Pap smear images, which may help medical professionals better treat cervical cancer and thus increase the overall efficiency of the whole testing process. The source code of the proposed work can be found at github.com/rishavpramanik/CervicalFuzzyDistanceEnsemble.
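One way to read the ensemble rule is: compare each model's softmax output against the ideal one-hot prediction for every class using Euclidean, Manhattan, and cosine distances, aggregate the distances with a product rule, and pick the class with the smallest aggregated distance. The NumPy sketch below follows that interpretation; the paper's exact defuzzification may differ.

```python
import numpy as np

def fuzzy_distance_ensemble(probs_per_model: np.ndarray, eps: float = 1e-12) -> int:
    """probs_per_model: (n_models, n_classes) class probabilities for one sample.
    Returns the index of the predicted class."""
    n_models, n_classes = probs_per_model.shape
    score = np.ones(n_classes)
    for c in range(n_classes):
        ideal = np.zeros(n_classes)
        ideal[c] = 1.0  # "best possible solution" for class c
        for p in probs_per_model:
            euclid = np.linalg.norm(p - ideal)
            manhattan = np.abs(p - ideal).sum()
            cosine = 1.0 - p @ ideal / (np.linalg.norm(p) * np.linalg.norm(ideal) + eps)
            score[c] *= euclid * manhattan * cosine  # product rule over distances
    return int(np.argmin(score))  # smallest aggregated distance wins

# toy usage: three models, three classes
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.5, 0.3, 0.2]])
print(fuzzy_distance_ensemble(probs))  # -> 0
```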
Affiliation(s)
- Rishav Pramanik
- Department of Computer Science and Engineering, Jadavpur University, 188 Raja S C Mallick Rd, Kolkata, 700032, West Bengal, India
- Momojit Biswas
- Department of Metallurgical and Material Engineering, Jadavpur University, 188 Raja S C Mallick Rd, Kolkata, 700032, West Bengal, India
- Shibaprasad Sen
- Department of Computer Science and Technology, University of Engineering and Management, Kolkata, 700160, West Bengal, India
- Luis Antonio de Souza Júnior
- Department of Computing, São Carlos Federal University-UFScar, São Carlos, São Paulo, Brazil; Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Bavaria, Germany
- João Paulo Papa
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Bavaria, Germany; Department of Computing, São Paulo State University, Av. Eng. Luiz Edmundo Carrijo Coube, 14-01, Bauru, São Paulo, Brazil
- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, 188 Raja S C Mallick Rd, Kolkata, 700032, West Bengal, India
59. Bai T, Xu J, Zhang Z, Guo S, Luo X. Context-aware learning for cancer cell nucleus recognition in pathology images. Bioinformatics 2022; 38:2892-2898. [PMID: 35561198] [DOI: 10.1093/bioinformatics/btac167]
Abstract
MOTIVATION Nucleus identification supports many quantitative analysis studies that rely on nuclei positions or categories. Contextual information in pathology images refers to information near the to-be-recognized cell, which can be very helpful for nucleus subtyping. Current CNN-based methods do not explicitly encode contextual information within the input images and point annotations. RESULTS In this article, we propose a novel context-aware framework to locate and classify nuclei in microscopy image data. Specifically, we first use state-of-the-art network architectures to extract multi-scale feature representations from multi-field-of-view, multi-resolution input images and then conduct feature aggregation on-the-fly with stacked convolutional operations. Two auxiliary tasks are then added to the model to effectively utilize the contextual information: one predicts the frequencies of nuclei, and the other extracts the regional distribution information of the same kind of nuclei. The entire framework is trained in an end-to-end, pixel-to-pixel fashion. We evaluate our method on two histopathological image datasets with different tissue and stain preparations, and experimental results demonstrate that our method outperforms other recent state-of-the-art models in nucleus identification. AVAILABILITY AND IMPLEMENTATION The source code of our method is freely available at https://github.com/qjxjy123/DonRabbit. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
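A minimal sketch of the multi-task objective implied by the two auxiliary tasks; the head designs, loss choices, and weights here are illustrative assumptions, not the paper's formulation.

```python
import torch.nn as nn

# Main nucleus-recognition loss plus two weighted auxiliary losses.
main_loss_fn = nn.CrossEntropyLoss()   # nucleus classification
aux1_loss_fn = nn.MSELoss()            # e.g., predicted nucleus frequencies
aux2_loss_fn = nn.MSELoss()            # e.g., regional distribution maps
lambda1, lambda2 = 0.5, 0.5            # assumed weights

def total_loss(main_out, y, aux1_out, t1, aux2_out, t2):
    """Combine the primary loss with the two context-aware auxiliary losses."""
    return (main_loss_fn(main_out, y)
            + lambda1 * aux1_loss_fn(aux1_out, t1)
            + lambda2 * aux2_loss_fn(aux2_out, t2))
```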
Affiliation(s)
- Tian Bai
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
- Jiayu Xu
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
- Zhenting Zhang
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
- Shuyu Guo
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
- Xiao Luo
- Department of Breast Surgery, China-Japan Union Hospital of Jilin University, 130033 Changchun, China
60. Chen W, Shen W, Gao L, Li X. Hybrid Loss-Constrained Lightweight Convolutional Neural Networks for Cervical Cell Classification. Sensors (Basel) 2022; 22:3272. [PMID: 35590961] [PMCID: PMC9101629] [DOI: 10.3390/s22093272]
Abstract
Artificial intelligence (AI) technologies have resulted in remarkable achievements and conferred massive benefits to computer-aided systems in medical imaging. However, the worldwide usage of AI-based automation-assisted cervical cancer screening systems is hindered by computational cost and resource limitations, so a highly economical and efficient model with enhanced classification ability is much more desirable. This paper proposes a hybrid loss function with label smoothing to improve the distinguishing power of lightweight convolutional neural networks (CNNs) for cervical cell classification. The results strengthen our confidence in hybrid loss-constrained lightweight CNNs, which can achieve satisfactory accuracy at much lower computational cost on the SIPaKMeD dataset. In particular, ShuffleNetV2 obtained a classification result (96.18% accuracy, 96.30% precision, 96.23% recall, and 99.08% specificity) comparable to that of DenseNet-121 (96.79% accuracy) with only one-seventh of the memory usage, one-sixth of the number of parameters, and one-fiftieth of the total FLOPs. GhostNet achieved an improved classification result (96.39% accuracy, 96.42% precision, 96.39% recall, and 99.09% specificity) with one-half of the memory usage, one-quarter of the number of parameters, and one-fiftieth of the total FLOPs compared with DenseNet-121. The proposed lightweight CNNs are likely to lead to an easily applicable and cost-efficient automation-assisted system for cervical cancer diagnosis and prevention.
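A sketch of one plausible hybrid objective with label smoothing, blending smoothed cross-entropy with a focal term; the focal component and the 0.5/0.5 weighting are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridLoss(nn.Module):
    """Label-smoothed cross-entropy blended with focal loss (illustrative)."""
    def __init__(self, smoothing: float = 0.1, gamma: float = 2.0, alpha: float = 0.5):
        super().__init__()
        self.ce = nn.CrossEntropyLoss(label_smoothing=smoothing)
        self.gamma, self.alpha = gamma, alpha

    def forward(self, logits, target):
        ce_plain = F.cross_entropy(logits, target, reduction="none")
        p_t = torch.exp(-ce_plain)                       # prob of the true class
        focal = ((1 - p_t) ** self.gamma * ce_plain).mean()
        return self.alpha * self.ce(logits, target) + (1 - self.alpha) * focal

# toy usage
loss = HybridLoss()(torch.randn(8, 5), torch.randint(0, 5, (8,)))
```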
61. Shinde S, Kalbhor M, Wajire P. DeepCyto: a hybrid framework for cervical cancer classification by using deep feature fusion of cytology images. Math Biosci Eng 2022; 19:6415-6434. [PMID: 35730264] [DOI: 10.3934/mbe.2022301]
Abstract
Cervical cancer is the second most common cancer in women. It affects the cervix, the lower part of the uterus that opens into the vagina. The preferred diagnostic test for screening cervical cancer is the Pap smear test. The Pap smear is a time-consuming test, as it requires detailed analysis by expert cytologists; cytologists can screen around 100 to 1000 slides depending upon the availability of advanced equipment. For this reason, an artificial intelligence (AI)-based computer-aided diagnosis system for the classification of Pap smear images is needed. Some AI-based solutions have been proposed in the literature, but an effective and accurate system is still under research. In this paper, a deep learning-based hybrid methodology named DeepCyto is proposed for the classification of Pap smear cytology images. DeepCyto extracts feature fusion vectors from pre-trained models and passes them to two workflows. Workflow-1 applies principal component analysis and a machine learning ensemble to classify the Pap smear images. Workflow-2 takes the feature fusion vectors as input and applies an artificial neural network for classification. The experiments are performed on three benchmark datasets, namely Herlev, SIPaKMeD, and LBCs. The performance measures of accuracy, precision, recall, and F1-score are used to evaluate the effectiveness of DeepCyto. The experimental results show that Workflow-2 gives the best performance on all three datasets, even with a smaller number of epochs. Moreover, the performance of DeepCyto Workflow-2 on the multi-cell images of LBCs is better than on the single-cell images of the other datasets. Thus, DeepCyto is an efficient method for accurate feature extraction as well as Pap smear image classification.
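Workflow-1 can be outlined in scikit-learn as feature fusion followed by PCA and a soft-voting ensemble; the particular estimators, component count, and placeholder data below are assumptions, since the abstract does not name them.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Fused deep features are assumed extracted upstream (e.g., concatenated
# embeddings from several pretrained CNNs); random data stands in here.
X = np.random.rand(200, 1024)            # placeholder feature fusion vectors
y = np.random.randint(0, 5, size=200)    # placeholder class labels

workflow1 = make_pipeline(
    StandardScaler(),
    PCA(n_components=64),                # dimensionality reduction
    VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier()),
                    ("svm", SVC(probability=True))],
        voting="soft",                   # average predicted probabilities
    ),
)
workflow1.fit(X, y)
```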
Affiliation(s)
- Swati Shinde
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, Maharashtra, India
- Madhura Kalbhor
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, Maharashtra, India
- Pankaj Wajire
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, Maharashtra, India
62. Zhu J, Liu M, Li X. Progress on deep learning in digital pathology of breast cancer: a narrative review. Gland Surg 2022; 11:751-766. [PMID: 35531111] [PMCID: PMC9068546] [DOI: 10.21037/gs-22-11]
Abstract
BACKGROUND AND OBJECTIVE Pathology is the gold standard for breast cancer diagnosis and has important guiding value in formulating the clinical treatment plan and predicting the prognosis. However, traditional microscopic examination of tissue sections is time consuming and labor intensive, with unavoidable subjective variation. Deep learning (DL) can evaluate and extract the most important information from images with less need for human instruction, providing a promising approach to assist in the pathological diagnosis of breast cancer. This review aims to provide an informative and up-to-date summary of DL-based diagnostic systems for breast cancer pathology image analysis and to discuss the advantages of and challenges to the routine clinical application of digital pathology. METHODS A PubMed search with keywords ("breast neoplasm" or "breast cancer") and ("pathology" or "histopathology") and ("artificial intelligence" or "deep learning") was conducted. Relevant publications in English published from January 2000 to October 2021 were screened manually by their title, abstract, and even full text to determine their true relevance. References from the searched articles and other supplementary articles were also studied. KEY CONTENT AND FINDINGS DL-based computerized image analysis has obtained impressive achievements in breast cancer pathology diagnosis, classification, grading, staging, and prognostic prediction, providing powerful methods for faster, more reproducible, and more precise diagnoses. However, all artificial intelligence (AI)-assisted pathology diagnostic models are still in the experimental stage, and improving their economic efficiency and clinical adaptability remains the focus of further research. CONCLUSIONS Having searched PubMed and other databases and summarized the application of DL-based AI models in breast cancer pathology, we conclude that DL is undoubtedly a promising tool for assisting pathologists in routine work, but further studies are needed to realize the digitization and automation of clinical pathology.
Affiliation(s)
- Jingjin Zhu
- School of Medicine, Nankai University, Tianjin, China
- Mei Liu
- Department of Pathology, Chinese People's Liberation Army General Hospital, Beijing, China
- Xiru Li
- Department of General Surgery, Chinese People's Liberation Army General Hospital, Beijing, China
63. Wang X, Kittaka M, He Y, Zhang Y, Ueki Y, Kihara D. OC_Finder: Osteoclast Segmentation, Counting, and Classification Using Watershed and Deep Learning. Front Bioinform 2022; 2:819570. [PMID: 35474753] [PMCID: PMC9038109] [DOI: 10.3389/fbinf.2022.819570]
Abstract
Osteoclasts are multinucleated cells that exclusively resorb bone matrix proteins and minerals on the bone surface. They differentiate from monocyte/macrophage lineage cells in the presence of osteoclastogenic cytokines such as the receptor activator of nuclear factor-κB ligand (RANKL) and stain positive for tartrate-resistant acid phosphatase (TRAP). In vitro osteoclast formation assays are commonly used to assess the capacity of osteoclast precursor cells to differentiate into osteoclasts, wherein the number of TRAP-positive multinucleated cells is counted as osteoclasts. Osteoclasts are manually identified on cell culture dishes by eye, which is a labor-intensive process; moreover, the manual procedure is not objective and lacks reproducibility. To accelerate the process and reduce the workload of counting osteoclasts, we developed OC_Finder, a fully automated system for identifying osteoclasts in microscopic images. OC_Finder consists of cell image segmentation with a watershed algorithm and cell classification using deep learning. OC_Finder detected osteoclasts differentiated from wild-type and Sh3bp2KI/+ precursor cells with 99.4% accuracy for segmentation and 98.1% accuracy for classification. The number of osteoclasts classified by OC_Finder was at the same accuracy level as manual counting by a human expert. OC_Finder also showed consistent performance on additional datasets collected with different microscopes with different settings by different operators. Together, the successful development of OC_Finder suggests that deep learning is a useful tool for performing prompt, accurate, and unbiased classification and detection of specific cell types in microscopic images.
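The segmentation stage can be sketched with the standard distance-transform watershed from scikit-image; the min_distance value and the downstream CNN classification step are assumptions, not OC_Finder's exact parameters.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_cells(binary_mask: np.ndarray) -> np.ndarray:
    """Separate touching cells with a distance-transform watershed.

    binary_mask: boolean foreground mask from thresholding.
    Returns an integer label image, one label per cell candidate.
    """
    distance = ndi.distance_transform_edt(binary_mask)
    # Local maxima of the distance map seed one marker per cell
    coords = peak_local_max(distance, min_distance=10, labels=binary_mask)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=binary_mask)

# Each labeled region can then be cropped and passed to a CNN classifier.
```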
Affiliation(s)
- Xiao Wang
- Department of Computer Science, Purdue University, West Lafayette, IN, United States
- Mizuho Kittaka
- Department of Biomedical Sciences and Comprehensive Care, Indiana University School of Dentistry, Indianapolis, IN, United States
- Indiana Center for Musculoskeletal Health, Indiana University School of Medicine, Indianapolis, IN, United States
- Yilin He
- School of Software Engineering, Shandong University, Jinan, China
- Yiwei Zhang
- Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY, United States
- Yasuyoshi Ueki
- Department of Biomedical Sciences and Comprehensive Care, Indiana University School of Dentistry, Indianapolis, IN, United States
- Indiana Center for Musculoskeletal Health, Indiana University School of Medicine, Indianapolis, IN, United States
- Daisuke Kihara
- Department of Computer Science, Purdue University, West Lafayette, IN, United States
- Department of Biological Sciences, Purdue University, West Lafayette, IN, United States
- Purdue Cancer Research Institute, Purdue University, West Lafayette, IN, United States
64. Mahmoud HAH, AlArfaj AA, Hafez AM. A Fast Hybrid Classification Algorithm with Feature Reduction for Medical Images. Appl Bionics Biomech 2022; 2022:1367366. [PMID: 35360292] [PMCID: PMC8964210] [DOI: 10.1155/2022/1367366]
Abstract
In this paper, we introduce a fast hybrid fuzzy classification algorithm with feature reduction for medical images, incorporating the quantum-based grasshopper computing algorithm (QGH) with feature extraction using the fuzzy clustering technique (C-means). QGH integrates quantum computing into machine learning and intelligence applications. Our objective is to apply the QGH method specifically to image processing-based cervical cancer detection. Many features found in the cells imaged in the Pap smear lab test, such as color, geometry, and texture, are crucial in cancer diagnosis. The proposed technique extracts the best features from more than 2600 public Pap smear images and further applies a feature reduction technique to reduce the feature space. Our performance evaluation assesses the influence of the extracted features on classification precision using two experimental setups. The first setup uses all the extracted features, which leads to classification without feature bias. The second setup is a fusion technique that utilizes QGH with the fuzzy C-means algorithm to choose the best features. In these setups, we assess accuracy with respect to the selected best features and the different categories of the cancer. In the last setup, we utilize the fusion technique together with statistical techniques to establish a qualitative agreement with the feature selection across the experimental setups.
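For reference, plain fuzzy C-means, the clustering component named in the abstract, can be written in a few lines of NumPy; this is the textbook algorithm, not the paper's QGH-coupled variant.

```python
import numpy as np

def fuzzy_c_means(X: np.ndarray, n_clusters: int = 3, m: float = 2.0,
                  n_iter: int = 100, seed: int = 0):
    """Standard fuzzy C-means: returns cluster centers and the membership
    matrix U of shape (n_samples, n_clusters)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances from every sample to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        # Membership update: u_ik proportional to d_ik^(-2/(m-1))
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# toy usage
centers, U = fuzzy_c_means(np.random.rand(100, 4))
```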
Affiliation(s)
- Hanan Ahmed Hosni Mahmoud
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Abeer Abdulaziz AlArfaj
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Alaaeldin M. Hafez
- Department of Information Systems, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
65. Tao X, Chu X, Guo B, Pan Q, Ji S, Lou W, Lv C, Xie G, Hua K. Scrutinizing high-risk patients from ASC-US cytology via a deep learning model. Cancer Cytopathol 2022; 130:407-414. [PMID: 35290728] [DOI: 10.1002/cncy.22560]
Abstract
BACKGROUND Atypical squamous cells of undetermined significance (ASC-US) is the most frequent but ambiguous abnormal Papanicolaou (Pap) interpretation and is generally triaged by high-risk human papillomavirus (hrHPV) testing before colposcopy. This study aimed to evaluate the performance of an artificial intelligence (AI)-based triage system in predicting cervical intraepithelial neoplasia 2+ lesions (CIN2+) from ASC-US cytology. METHODS More than 60,000 images were used to train the proposed deep learning-based ASC-US triage system, in which both cell-level and slide-level information were extracted. In total, 1967 consecutive ASC-US Paps from 2017 to 2019 were included in this study. Histological follow-ups were retrieved to compare the triage performance between the AI system and hrHPV testing in 622 patients with simultaneous hrHPV testing. RESULTS In the triage of women with ASC-US cytology for CIN2+, our system attained equivalent sensitivity (92.9%; 95% confidence interval [CI], 75.0%-98.8%) and higher specificity (49.7%; 95% CI, 45.6%-53.8%) compared with hrHPV testing (sensitivity: 89.3%; 95% CI, 70.6%-97.2%; specificity: 34.3%; 95% CI, 30.6%-38.3%) without requiring additional patient examination or testing. Additionally, the independence of this system from hrHPV testing (κ = 0.138) indicates that the two methods could serve as alternative ways to triage ASC-US. CONCLUSION This de novo deep learning-based system can triage ASC-US cytology for CIN2+ with performance superior to hrHPV testing and without incurring additional expense.
Affiliation(s)
- Xiang Tao
- Department of Pathology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
- Xiao Chu
- Ping An Healthcare Technology, Shanghai, China
- Bingxue Guo
- Ping An Healthcare Technology, Shanghai, China
- Qiuzhi Pan
- Department of Pathology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
- Shuting Ji
- Department of Pathology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
- Wenjie Lou
- Ping An Healthcare Technology, Shanghai, China
- Guotong Xie
- Ping An Healthcare Technology, Shanghai, China; Ping An Healthcare and Technology Company Limited, Shanghai, China; Ping An International Smart City Technology, Shanghai, China
- Keqin Hua
- Department of Obstetrics and Gynecology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
66. Pantanowitz L. Improving the Pap test with artificial intelligence. Cancer Cytopathol 2022; 130:402-404. [PMID: 35291050] [DOI: 10.1002/cncy.22561]
Affiliation(s)
- Liron Pantanowitz
- Department of Pathology, University of Michigan, Ann Arbor, Michigan
67. Hou X, Shen G, Zhou L, Li Y, Wang T, Ma X. Artificial Intelligence in Cervical Cancer Screening and Diagnosis. Front Oncol 2022; 12:851367. [PMID: 35359358] [PMCID: PMC8963491] [DOI: 10.3389/fonc.2022.851367]
Abstract
Cervical cancer remains a leading cause of cancer death in women, seriously threatening their physical and mental health. It is an easily preventable cancer with early screening and diagnosis. Although technical advancements have significantly improved the early diagnosis of cervical cancer, accurate diagnosis remains difficult owing to various factors. In recent years, artificial intelligence (AI)-based medical diagnostic applications have been on the rise and have excellent applicability in the screening and diagnosis of cervical cancer. Their benefits include reduced time consumption, reduced need for professional and technical personnel, and no bias owing to subjective factors. We, thus, aimed to discuss how AI can be used in cervical cancer screening and diagnosis, particularly to improve the accuracy of early diagnosis. The application and challenges of using AI in the diagnosis and treatment of cervical cancer are also discussed.
Affiliation(s)
- Xin Hou
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Guangyang Shen
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Liqiang Zhou
- Cancer Centre and Center of Reproduction, Development and Aging, Faculty of Health Sciences, University of Macau, Macau, Macau SAR, China
- Yinuo Li
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Tian Wang
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Xiangyi Ma
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
68. Fekri-Ershad S, Ramakrishnan S. Cervical cancer diagnosis based on modified uniform local ternary patterns and feed forward multilayer network optimized by genetic algorithm. Comput Biol Med 2022; 144:105392. [PMID: 35299043] [DOI: 10.1016/j.compbiomed.2022.105392]
Abstract
Cervical cancer is one of the most common types of cancer in women, and early and accurate diagnosis can save the patient's life. Pap smear testing is nowadays commonly used to diagnose cervical cancer. The type, structure, and size of the cervical cells in Pap smear images are major factors that specialist doctors use to diagnose abnormality. Various image processing-based approaches have been proposed to acquire Pap smear images and diagnose cervical cancer from them, and accuracy is usually the primary objective in evaluating the performance of these systems. In this paper, a two-stage method for Pap smear image classification is presented. The aim of the first stage is to extract texture information of the cytoplasm and nucleus jointly. For this purpose, the Pap smear image is first segmented using an appropriate threshold. Then, a texture descriptor termed modified uniform local ternary patterns (MULTP) is proposed to describe the local textural features. In the second stage, an optimized multi-layer feed-forward neural network is used to classify the Pap smear images. The proposed deep neural network is optimized using a genetic algorithm in terms of the number of hidden layers and hidden nodes; in this respect, an innovative chromosome representation and cross-over process are proposed to handle these parameters. The performance of the proposed method is evaluated on the Herlev database and compared with many other efficient methods in this scope under the same validation conditions. The results show that the detection accuracy of the proposed method is higher than that of the compared methods. Insensitivity to image rotation is one of the major advantages of the proposed method, and its low run time makes it suitable for online use. The proposed texture descriptor, MULTP, is a general operator that can be used in many computer vision problems to describe the texture properties of an image, and the proposed optimization algorithm can be used in deep networks to improve performance.
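The starting point for MULTP is the classic local ternary pattern, which codes each 3x3 neighbor as +1/0/-1 against the center pixel with a tolerance t and splits the result into "upper" and "lower" binary codes. A basic NumPy version is sketched below; the paper's modifications to this operator are not reproduced here.

```python
import numpy as np

def local_ternary_pattern(img: np.ndarray, t: int = 5):
    """Plain 3x3 local ternary pattern over a grayscale image.

    Returns two (H-2, W-2) code maps: 'upper' collects neighbors coded +1
    (brighter than center + t), 'lower' collects neighbors coded -1."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1].astype(int)
    upper = np.zeros_like(center)
    lower = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(int)
        upper += (nb >= center + t).astype(int) << bit   # +1-coded neighbors
        lower += (nb <= center - t).astype(int) << bit   # -1-coded neighbors
    return upper, lower

# toy usage: histograms of the two code maps form the texture feature
up, lo = local_ternary_pattern(np.random.randint(0, 256, (64, 64)))
```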
Affiliation(s)
- Shervan Fekri-Ershad
- Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran; Big Data Research Center, Najafabad Branch, Islamic Azad University, Najafabad, Iran
- S Ramakrishnan
- Department of Information Technology, Dr. Mahalingam College of Engineering and Technology, Pollachi, 642003, India
69. Yin HL, Jiang Y, Huang WJ, Li SH, Lin GW. A Magnetic Resonance Angiography-Based Study Comparing Machine Learning and Clinical Evaluation: Screening Intracranial Regions Associated with the Hemorrhagic Stroke of Adult Moyamoya Disease. J Stroke Cerebrovasc Dis 2022; 31:106382. [PMID: 35183983] [DOI: 10.1016/j.jstrokecerebrovasdis.2022.106382]
Abstract
OBJECTIVES Moyamoya disease patients with hemorrhagic stroke usually have a poor prognosis. This study aimed to determine whether hemorrhagic moyamoya disease could be distinguished from MRA images using transfer deep learning and to screen potential regions of MRA images that contain rich distinguishing information in moyamoya disease. MATERIALS AND METHODS A total of 116 adult patients with bilateral moyamoya disease suffering from hemorrhagic or ischemic complications were retrospectively screened. Based on original MRA images at the level of the basal cistern, basal ganglia, and centrum semiovale, we adopted a pretrained ResNet18 to build three models for differentiating hemorrhagic moyamoya disease. Grad-CAM was applied to visualize the regions of interest. RESULTS For the test set, the accuracies of model differentiation in the basal cistern, basal ganglia, and centrum semiovale were 93.3%, 91.5%, and 86.4%, respectively. Visualization of the regions of interest demonstrated that the models focused on the deep and periventricular white matter and abnormal collateral vessels in hemorrhagic moyamoya disease. CONCLUSION A transfer learning model based on MRA images of the basal cistern and basal ganglia showed good ability to differentiate between patients with hemorrhagic and those with ischemic moyamoya disease. The deep and periventricular white matter and collateral vessels at the level of the basal cistern and basal ganglia may contain rich distinguishing information.
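The Grad-CAM visualization step can be reproduced with forward/backward hooks on a pretrained ResNet-18; the target layer, preprocessing, and placeholder input below are assumptions rather than the study's exact settings.

```python
import torch
from torchvision import models

# Minimal Grad-CAM over a pretrained ResNet-18.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
feats, grads = {}, {}
layer = model.layer4  # assumed target layer (last conv block)
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)       # placeholder preprocessed image
score = model(x)[0].max()             # score of the top predicted class
score.backward()

# Channel weights = global average of the gradients, then weighted sum + ReLU
weights = grads["a"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * feats["a"]).sum(dim=1)).squeeze(0)
cam /= cam.max().clamp(min=1e-8)      # normalized 7x7 heatmap, upsample to view
```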
Affiliation(s)
- Hao-Lin Yin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, No. 221 Yan'anxi Road, Jing'an District, Shanghai 200040, China
- Yu Jiang
- Department of Radiology, West China Hospital, Sichuan University, 37# Guo Xue Xiang, Chengdu, Sichuan 610041, China
- Wen-Jun Huang
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, No. 221 Yan'anxi Road, Jing'an District, Shanghai 200040, China
- Shi-Hong Li
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, No. 221 Yan'anxi Road, Jing'an District, Shanghai 200040, China
- Guang-Wu Lin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, No. 221 Yan'anxi Road, Jing'an District, Shanghai 200040, China
71. Lu H, Tian S, Yu L, Liu L, Cheng J, Wu W, Kang X, Zhang D. DCACNet: Dual context aggregation and attention-guided cross deconvolution network for medical image segmentation. Comput Methods Programs Biomed 2022; 214:106566. [PMID: 34890992] [DOI: 10.1016/j.cmpb.2021.106566]
Abstract
BACKGROUND AND OBJECTIVE Segmentation is a key step in biomedical image analysis tasks. Recently, convolutional neural networks (CNNs) have been increasingly applied in the field of medical image processing; however, standard models still have some drawbacks. Due to the significant loss of spatial information at the coding stage, it is often difficult to restore the details of low-level visual features using simple deconvolution, and the generated feature maps are sparse, which results in performance degradation. This prompted us to study whether it is possible to better preserve the deep feature information of the image in order to solve the sparsity problem of image segmentation models. METHODS In this study, we (1) build a reliable deep learning network framework, named DCACNet, to improve segmentation performance for medical images; (2) propose a multiscale cross-fusion encoding network to extract features; (3) build a dual context aggregation module to fuse the context features at different scales and capture more fine-grained deep features; and (4) propose an attention-guided cross deconvolution decoding network to generate dense feature maps. We demonstrate the effectiveness of the proposed method on two publicly available datasets. RESULTS DCACNet was trained and tested on the prepared datasets, and the experimental results show that our proposed model has better segmentation performance than previous models. For 4-class segmentation (CHAOS dataset), the mean DSC reached 91.03%. For 2-class segmentation (Herlev dataset), the accuracy, precision, sensitivity, specificity, and Dice score reached 96.77%, 90.40%, 94.20%, 97.50%, and 97.69%, respectively. The experimental results show that DCACNet can improve segmentation performance for medical images. CONCLUSION DCACNet achieved promising results on the prepared datasets and improved segmentation performance. It can better retain the deep feature information of the image than other models and solves the sparsity problem of medical image segmentation models.
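For reference, the headline metric, the Dice similarity coefficient (DSC), is computed as follows for a pair of binary masks; this is the standard definition, not code from the paper.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient: 2*|A intersect B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# toy usage
print(dice_score(np.ones((4, 4)), np.eye(4)))  # -> 0.4
```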
Affiliation(s)
- Hongchun Lu
- School of Software, Xinjiang University, Urumqi, Xinjiang 830046, China; School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan 610031, China
- Shengwei Tian
- School of Software, Xinjiang University, Urumqi, Xinjiang 830046, China
- Long Yu
- Network Center, Xinjiang University, Urumqi, Xinjiang 830046, China
- Lu Liu
- School of Teacher Education, Jining University, Qufu, Shandong 273199, China
- Junlong Cheng
- College of Computer Science, Sichuan University, Chengdu 610065, China
- Weidong Wu
- People's Hospital of Xinjiang Uygur Autonomous Region, Xinjiang Key Laboratory of Dermatology Research, China
- Xiaojing Kang
- People's Hospital of Xinjiang Uygur Autonomous Region, Xinjiang Key Laboratory of Dermatology Research, China
- Dezhi Zhang
- People's Hospital of Xinjiang Uygur Autonomous Region, Xinjiang Key Laboratory of Dermatology Research, China
72. Watson ER, Taherian Fard A, Mar JC. Computational Methods for Single-Cell Imaging and Omics Data Integration. Front Mol Biosci 2022; 8:768106. [PMID: 35111809] [PMCID: PMC8801747] [DOI: 10.3389/fmolb.2021.768106]
Abstract
Integrating single-cell omics and single-cell imaging allows for a more effective characterisation of the underlying mechanisms that drive a phenotype at the tissue level, creating a comprehensive profile at the cellular level. Although the use of imaging data is well established in biomedical research, its primary application has been to observe phenotypes at the tissue or organ level, often using medical imaging techniques such as MRI, CT, and PET. These imaging technologies complement omics-based data in biomedical research because they are helpful for identifying associations between genotype and phenotype, along with functional changes occurring at the tissue level; single-cell imaging can act as an intermediary between these levels. Meanwhile, new technologies continue to arrive that can be used to interrogate the genome of single cells and its related omics datasets. As these two areas, single-cell imaging and single-cell omics, each advance independently with the development of novel techniques, the opportunity to integrate these data types becomes more and more attractive. This review outlines some of the technologies and methods currently available for generating, processing, and analysing single-cell omics and imaging data, and how they could be integrated to further our understanding of complex biological phenomena like ageing. We include an emphasis on machine learning algorithms because of their ability to identify complex patterns in large multidimensional data.
Affiliation(s)
- Atefeh Taherian Fard
- Australian Institute for Bioengineering and Nanotechnology, The University of Queensland, Brisbane, QLD, Australia
- Jessica Cara Mar
- Australian Institute for Bioengineering and Nanotechnology, The University of Queensland, Brisbane, QLD, Australia
73. Zhu Z, Lu S, Wang SH, Górriz JM, Zhang YD. BCNet: A Novel Network for Blood Cell Classification. Front Cell Dev Biol 2022; 9:813996. [PMID: 35047515] [PMCID: PMC8762289] [DOI: 10.3389/fcell.2021.813996]
Abstract
Aims: Most blood diseases, such as chronic anemia, leukemia (commonly known as blood cancer), and hematopoietic dysfunction, are caused by environmental pollution, substandard decoration materials, radiation exposure, and long-term use of certain drugs. Thus, it is imperative to classify blood cell images. Most cell classification is based on manual features, machine learning classifiers, or deep convolutional neural network models. However, manual feature extraction is a very tedious process and usually gives unsatisfactory results. On the other hand, a deep convolutional neural network is usually composed of many layers, each with many parameters, so it needs considerable time to produce results. Another problem is that medical datasets are relatively small, which may lead to overfitting. Methods: To address these problems, we propose seven models for the automatic classification of blood cells: BCARENet, BCR5RENet, BCMV2RENet, BCRRNet, BCRENet, BCRSNet, and BCNet, of which BCNet is the best. The backbone model in our method is ResNet-18, pre-trained on the ImageNet set. To improve performance, we replace the last four layers of the transferred ResNet-18 model with three randomized neural networks (RNNs): RVFL, ELM, and SNN. The final outputs of BCNet are generated by ensembling the predictions of the three randomized neural networks by majority voting. We use four multi-classification indexes to evaluate our model. Results: The accuracy, average precision, average F1-score, and average recall are 96.78%, 97.07%, 96.78%, and 96.77%, respectively. Conclusion: We compare our model with state-of-the-art methods; the results of the proposed BCNet model are much better than those of other state-of-the-art methods.
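The final ensembling step, majority voting over the three randomized-network heads, can be sketched as follows; the tie-breaking rule (falling back to the first head) is an assumption.

```python
import numpy as np

def majority_vote(pred_a: np.ndarray, pred_b: np.ndarray, pred_c: np.ndarray) -> np.ndarray:
    """Majority voting over three classifier heads (e.g., RVFL/ELM/SNN on a
    frozen backbone); ties fall back to the first head's prediction."""
    preds = np.stack([pred_a, pred_b, pred_c])   # (3, n_samples)
    out = []
    for column in preds.T:
        vals, counts = np.unique(column, return_counts=True)
        out.append(vals[np.argmax(counts)] if counts.max() > 1 else column[0])
    return np.array(out)

# toy usage
print(majority_vote(np.array([0, 1, 2]), np.array([0, 1, 1]), np.array([1, 1, 2])))
# -> [0 1 2]
```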
Affiliation(s)
- Ziquan Zhu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, United Kingdom
- Siyuan Lu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, United Kingdom
- Shui-Hua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, United Kingdom
- Juan Manuel Górriz
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, United Kingdom; Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology, Guilin, China
74. Nambu Y, Mariya T, Shinkai S, Umemoto M, Asanuma H, Sato I, Hirohashi Y, Torigoe T, Fujino Y, Saito T. A screening assistance system for cervical cytology of squamous cell atypia based on a two-step combined CNN algorithm with label smoothing. Cancer Med 2022; 11:520-529. [PMID: 34841722] [PMCID: PMC8729059] [DOI: 10.1002/cam4.4460]
Abstract
BACKGROUND Although many cervical cytology diagnostic support systems have been developed, it is challenging to classify overlapping cell clusters with a variety of patterns in the same way that humans do. In this study, we developed a fast and accurate system for the detection and classification of atypical cell clusters by using a two-step algorithm based on two different deep learning algorithms. METHODS We created 919 cell images from liquid-based cervical cytological samples collected at Sapporo Medical University and annotated them based on the Bethesda system as a dataset for machine learning. Most of the images captured overlapping and crowded cells, and images were oversampled by digital processing. The detection system consists of two steps: (1) detection of atypical cells using You Only Look Once v4 (YOLOv4) and (2) classification of the detected cells using ResNeSt. A label smoothing algorithm was used for the dataset in the second classification step. This method annotates multiple correct classes from a single cell image with a smooth probability distribution. RESULTS The first step, cell detection by YOLOv4, was able to detect all atypical cells above ASC-US without any observed false negatives. The detected cell images were then analyzed in the second step, cell classification by the ResNeSt algorithm, which exhibited average accuracy and F-measure values of 90.5% and 70.5%, respectively. The oversampling of the training image and label smoothing algorithm contributed to the improvement of the system's accuracy. CONCLUSION This system combines two deep learning algorithms to enable accurate detection and classification of cell clusters based on the Bethesda system, which has been difficult to achieve in the past. We will conduct further research and development of this system as a platform for augmented reality microscopes for cytological diagnosis.
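As a quick illustration of the label-smoothing step: in current PyTorch (>= 1.10) this behaviour is available directly in the cross-entropy loss, so training the second-stage classifier with smoothed targets can be sketched as follows. The smoothing factor and class count are illustrative values, not the paper's.

```python
# Minimal sketch: cross-entropy with label smoothing for the second-stage
# cell classifier; 0.1 and 5 classes are illustrative, not the paper's values.
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

logits = torch.randn(8, 5, requires_grad=True)  # stand-in for ResNeSt outputs
targets = torch.randint(0, 5, (8,))             # hard Bethesda-class labels
loss = criterion(logits, targets)               # targets smoothed internally
loss.backward()
```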
Affiliation(s)
- Yuta Nambu
- Department of Media Architecture, Future University Hakodate, Hakodate, Japan
- Tasuku Mariya
- Department of Obstetrics and Gynecology, Sapporo Medical University School of Medicine, Sapporo, Japan
- Shota Shinkai
- Department of Obstetrics and Gynecology, Sapporo Medical University School of Medicine, Sapporo, Japan
- Mina Umemoto
- Department of Obstetrics and Gynecology, Sapporo Medical University School of Medicine, Sapporo, Japan
- Hiroko Asanuma
- Department of Pathology 1st, Sapporo Medical University School of Medicine, Sapporo, Japan
- Ikuma Sato
- Department of Media Architecture, Future University Hakodate, Hakodate, Japan
- Yoshihiko Hirohashi
- Department of Pathology 1st, Sapporo Medical University School of Medicine, Sapporo, Japan
- Toshihiko Torigoe
- Department of Pathology 1st, Sapporo Medical University School of Medicine, Sapporo, Japan
- Yuichi Fujino
- Department of Media Architecture, Future University Hakodate, Hakodate, Japan
- Tsuyoshi Saito
- Department of Obstetrics and Gynecology, Sapporo Medical University School of Medicine, Sapporo, Japan
75
Qin J, He Y, Ge J, Liang Y. A multi-task feature fusion model for cervical cell classification. IEEE J Biomed Health Inform 2022; 26:4668-4678. [DOI: 10.1109/jbhi.2022.3180989]
Affiliation(s)
- Jian Qin
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Yongjun He
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Jinping Ge
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Yiqin Liang
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
76
Classification of cervical cells leveraging simultaneous super-resolution and ordinal regression. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2021.108208]
77
Lightweight convolutional neural network with knowledge distillation for cervical cells classification. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103177]
78
AIM and Cervical Cancer. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_253]
79
Saranya A, Kottursamy K, AlZubi AA, Bashir AK. Analyzing fibrous tissue pattern in fibrous dysplasia bone images using deep R-CNN networks for segmentation. Soft Comput 2021; 26:7519-7533. [PMID: 34867079 PMCID: PMC8634752 DOI: 10.1007/s00500-021-06519-1]
Abstract
Predictive health monitoring systems help to detect threats to human health at an early stage, and evolving deep learning techniques for medical image analysis provide efficient feedback quickly. Fibrous dysplasia (FD) is a genetic disorder triggered by a mutation in the guanine nucleotide-binding protein with alpha-stimulating activity involved in human bone genesis. It slowly occupies the bone marrow, converts bone cells into fibrous tissue, weakens the bone structure, and leads to permanent disability. This paper studies techniques for analyzing FD bone images with deep networks. A linear regression model is also fitted to predict bone abnormality levels from the observed coefficients. Modern image processing begins with various image filters, which describe the edges, shades, and texture values of the receptive field. Different types of segmentation and edge detection mechanisms are applied to locate the tumor, lesion, and fibrous tissues in the bone image, and the fibrous region is extracted using a region-based convolutional neural network algorithm. The segmented results are compared on their accuracy metrics, with the segmentation loss reduced at each iteration. The overall loss is 0.24% with an accuracy of 99%; segmenting the masked region reaches 98% accuracy, and building the bounding boxes reaches 99% accuracy.
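For readers unfamiliar with the region-based CNN segmentation step, here is a minimal inference sketch using torchvision's off-the-shelf Mask R-CNN as a stand-in for the paper's network; the input tensor and confidence cutoff are illustrative.

```python
# Mask R-CNN inference sketch: detect regions and extract binary masks.
import torch
from torchvision.models.detection import (maskrcnn_resnet50_fpn,
                                          MaskRCNN_ResNet50_FPN_Weights)

model = maskrcnn_resnet50_fpn(weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT)
model.eval()

image = torch.rand(3, 512, 512)            # stand-in for a bone image
with torch.no_grad():
    out = model([image])[0]                # dict: boxes, labels, scores, masks

keep = out["scores"] > 0.5                 # illustrative confidence cutoff
masks = out["masks"][keep] > 0.5           # binary masks of detected regions
boxes = out["boxes"][keep]                 # matching bounding boxes
```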
Affiliation(s)
- A Saranya
- Department of Computational Intelligence, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
- Kottilingam Kottursamy
- Department of Computational Intelligence, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
- Ahmad Ali AlZubi
- Computer Science Department, Community College, King Saud University, P.O. Box 28095, Riyadh 11437, Saudi Arabia
- Ali Kashif Bashir
- Department of Computing and Mathematics, Manchester Metropolitan University, Manchester, UK; School of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China
80
Li J, Dou Q, Yang H, Liu J, Fu L, Zhang Y, Zheng L, Zhang D. Cervical cell multi-classification algorithm using global context information and attention mechanism. Tissue Cell 2021; 74:101677. [PMID: 34814053 DOI: 10.1016/j.tice.2021.101677]
Abstract
Cervical cancer is the second deadliest cancer among women, after breast cancer. The cure rate for precancerous lesions found early is relatively high, so cervical cell classification has very important clinical value in early screening for cervical cancer. This paper proposes a convolutional neural network (L-PCNN) that integrates global context information and an attention mechanism to classify cervical cells. The cell image is sent to an improved ResNet-50 backbone to extract deep learning features. To extract deep features better, each convolution block introduces a convolutional block attention mechanism that guides the network to focus on the cell area. The end of the backbone then adds a pyramid pooling layer and a long short-term memory (LSTM) module to aggregate image features across different regions. Low-level and high-level features are integrated so that the whole network can learn more regional detail features, and the problem of vanishing network gradients is mitigated. The experiment is conducted on the SIPaKMeD public dataset. The results show that the proposed L-PCNN reaches a cervical cell classification accuracy of 98.89%, a sensitivity of 99.9%, a specificity of 99.8%, and an F-measure of 99.89%, which is better than most cervical cell classification models and demonstrates the effectiveness of the model.
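A minimal sketch of a convolutional block attention mechanism of the kind the abstract describes (channel attention followed by spatial attention) is given below; the reduction ratio and kernel size are common defaults, assumed here rather than taken from the paper.

```python
import torch
import torch.nn as nn

class ConvBlockAttention(nn.Module):
    """Channel gate then spatial gate, applied to a (b, c, h, w) feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))            # pooled descriptors
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)   # channel gate
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))          # spatial gate
```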
Affiliation(s)
- Jun Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
- Qiyan Dou
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Haima Yang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Jin Liu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, China
- Le Fu
- Department of Radiology, Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine, Shanghai, China
- Yu Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Lulu Zheng
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Dawei Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
81
Huong AKC, Tay KG, Ngu XTI. Five-Class Classification of Cervical Pap Smear Images: A Study of CNN-Error-Correcting SVM Models. Healthc Inform Res 2021; 27:298-306. [PMID: 34788910 PMCID: PMC8654336 DOI: 10.4258/hir.2021.27.4.298]
Abstract
Objectives: Different complex strategies of fusing handcrafted descriptors and features from convolutional neural network (CNN) models have been studied, mainly for two-class Papanicolaou (Pap) smear image classification. This paper explores a simplified system using combined binary coding for a five-class version of this problem. Methods: The system extracted features by transfer learning from AlexNet, VGG19, and ResNet50 networks before reducing the problem into multiple binary sub-problems using error-correcting coding. The learners were trained using the support vector machine (SVM) method, and the outputs of these classifiers were combined and compared with the true class codes for the final prediction. Results: Despite the superior performance of VGG19-SVM, with mean ± standard deviation accuracy and sensitivity of 80.68% ± 2.00% and 80.86% ± 0.45%, respectively, this model required a long training time, and there were false-negative cases with both the VGGNet-SVM and ResNet-SVM models. AlexNet-SVM was more efficient in terms of running speed and prediction consistency. Our findings also showed good diagnostic ability, with an area under the curve of approximately 0.95, and good agreement between our outcomes and those of state-of-the-art methods, with specificity ranging from 93% to 100%. Conclusions: We believe that the AlexNet-SVM model can be conveniently applied for clinical use. Further research could include the implementation of an optimization algorithm for hyperparameter tuning, as well as an appropriate selection of experimental design, to improve the efficiency of Pap smear image classification.
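The error-correcting-code stage maps naturally onto scikit-learn's OutputCodeClassifier. The sketch below is illustrative rather than the authors' exact pipeline: random arrays stand in for the CNN features, and the code size is an assumption.

```python
# ECOC sketch: binary SVM learners on CNN-style features; predictions are
# matched to the nearest class codeword.
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))        # stand-in for pooled CNN features
y = rng.integers(0, 5, size=200)       # five Pap smear classes

ecoc = OutputCodeClassifier(SVC(kernel="linear"), code_size=2.0, random_state=0)
ecoc.fit(X, y)
print(ecoc.predict(X[:5]))
```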
Affiliation(s)
- Audrey K C Huong
- Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Malaysia
- Kim Gaik Tay
- Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Malaysia
- Xavier T I Ngu
- Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Malaysia
82
Jian J, Xia W, Zhang R, Zhao X, Zhang J, Wu X, Li Y, Qiang J, Gao X. Multiple instance convolutional neural network with modality-based attention and contextual multi-instance learning pooling layer for effective differentiation between borderline and malignant epithelial ovarian tumors. Artif Intell Med 2021; 121:102194. [PMID: 34763809 DOI: 10.1016/j.artmed.2021.102194]
Abstract
Malignant epithelial ovarian tumors (MEOTs) are the most lethal gynecologic malignancies, accounting for 90% of ovarian cancer cases. By contrast, borderline epithelial ovarian tumors (BEOTs) have low malignant potential and are generally associated with a good prognosis. Accurate preoperative differentiation between BEOTs and MEOTs is crucial for determining the appropriate surgical strategies and improving the postoperative quality of life. Multimodal magnetic resonance imaging (MRI) is an essential diagnostic tool. Although state-of-the-art artificial intelligence technologies such as convolutional neural networks can be used for automated diagnoses, their application has been limited owing to their high demand for graphics processing unit memory and hardware resources when dealing with large 3D volumetric data. In this study, we used multimodal MRI with a multiple instance learning (MIL) method to differentiate between BEOT and MEOT. We propose MAC-Net, a multiple instance convolutional neural network (MICNN) with modality-based attention (MA) and a contextual MIL pooling layer (C-MPL). The MA module can learn from the decision-making patterns of clinicians to automatically perceive the importance of different MRI modalities and achieve multimodal MRI feature fusion based on that importance. The C-MPL uses strong prior knowledge of tumor distribution as an important reference and assesses contextual information between adjacent images, thus achieving a more accurate prediction. The performance of MAC-Net is superior, with an area under the receiver operating characteristic curve of 0.878, surpassing several known MICNN approaches. Therefore, it can be used to assist clinical differentiation between BEOTs and MEOTs.
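A minimal sketch of modality-based attention in the spirit of the MA module follows: feature vectors from each MRI modality are weighted by learned softmax scores and summed into one fused representation. The dimensions and the scoring network are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ModalityAttention(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)   # one scalar score per modality

    def forward(self, feats):                 # feats: (batch, n_modalities, feat_dim)
        weights = torch.softmax(self.score(feats), dim=1)   # (batch, n_mod, 1)
        return (weights * feats).sum(dim=1)   # fused: (batch, feat_dim)

fuse = ModalityAttention(feat_dim=256)
x = torch.randn(2, 4, 256)                    # e.g. four MRI modalities
print(fuse(x).shape)                          # torch.Size([2, 256])
```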
Affiliation(s)
- Junming Jian
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China; Jinan Guoke Medical Engineering and Technology Development Co., Ltd., Jinan, Shandong 250109, China
- Wei Xia
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China
- Rui Zhang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China
- Xingyu Zhao
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China
- Jiayi Zhang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China
- Xiaodong Wu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China
- Yong'ai Li
- Department of Radiology, Jinshan Hospital, Fudan University, Shanghai 201508, China
- Jinwei Qiang
- Department of Radiology, Jinshan Hospital, Fudan University, Shanghai 201508, China
- Xin Gao
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China; Jinan Guoke Medical Engineering and Technology Development Co., Ltd., Jinan, Shandong 250109, China; Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Medical University, Taiyuan, Shanxi 030013, China
83
Liu W, Li C, Rahaman MM, Jiang T, Sun H, Wu X, Hu W, Chen H, Sun C, Yao Y, Grzegorzek M. Is the aspect ratio of cells important in deep learning? A robust comparison of deep learning methods for multi-scale cytopathology cell image classification: From convolutional neural networks to visual transformers. Comput Biol Med 2021; 141:105026. [PMID: 34801245 DOI: 10.1016/j.compbiomed.2021.105026]
Abstract
Cervical cancer is a very common and deadly cancer in women, and cytopathology images are often used to screen for it. Given that many errors can occur during manual screening, computer-aided diagnosis systems based on deep learning have been developed. Deep learning methods require a fixed input image size, but the dimensions of clinical medical images are inconsistent, and the aspect ratios of cells are distorted when images are resized directly. Clinically, the aspect ratios of cells in cytopathological images provide important information for doctors diagnosing cancer, so direct resizing is questionable. Nevertheless, many existing studies have resized images directly and obtained highly robust classification results. To find a reasonable interpretation, we conducted a series of comparative experiments. First, the raw data of the SIPaKMeD dataset are pre-processed to obtain standard and scaled datasets. Then, the datasets are resized to 224 × 224 pixels. Finally, 22 deep learning models are used to classify the standard and scaled datasets. The results indicate that deep learning models are robust to changes in the aspect ratio of cells in cervical cytopathological images. This conclusion is also validated on the Herlev dataset.
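For context, the two pre-processing choices contrasted here, direct resizing versus aspect-ratio-preserving padding, can be sketched with Pillow; the file name is hypothetical and 224 × 224 matches the experiment above.

```python
# Direct resize (distorts cell aspect ratio) vs. letterbox-style padding
# (preserves it).
from PIL import Image, ImageOps

img = Image.open("cell.png")                      # hypothetical input file

stretched = img.resize((224, 224))                # aspect ratio is lost
padded = ImageOps.pad(img, (224, 224), color=0)   # scaled then zero-padded
```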
Affiliation(s)
- Wanli Liu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
- Md Mamunur Rahaman
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
- Tao Jiang
- School of Control Engineering, Chengdu University of Information Technology, Chengdu, 610225, China
- Hongzan Sun
- Shengjing Hospital, China Medical University, Shenyang, 110001, China
- Xiangchen Wu
- Suzhou Ruiguan Technology Company Ltd., Suzhou, 215000, China
- Weiming Hu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
- Haoyuan Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
- Changhao Sun
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110169, China
- Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, 07030, USA
- Marcin Grzegorzek
- Institute of Medical Informatics, University of Luebeck, Luebeck, Germany
84
Pal A, Xue Z, Desai K, Aina F Banjo A, Adepiti CA, Long LR, Schiffman M, Antani S. Deep multiple-instance learning for abnormal cell detection in cervical histopathology images. Comput Biol Med 2021; 138:104890. [PMID: 34601391 PMCID: PMC11977668 DOI: 10.1016/j.compbiomed.2021.104890]
Abstract
Cervical cancer is a disease of significant concern affecting women's health worldwide. Early detection and treatment at the precancerous stage can help reduce mortality. High-grade cervical abnormalities and precancer are confirmed by microscopic analysis of cervical histopathology. However, manual analysis of cervical biopsy slides is time-consuming, needs expert pathologists, and suffers from reader-variability errors. Prior work in the literature has suggested using automated image analysis algorithms for analyzing cervical histopathology images captured with whole-slide digital scanners (e.g., Aperio, Hamamatsu, etc.). However, whole-slide digital tissue scanners with good optical magnification and acceptable imaging quality are cost-prohibitive and difficult to acquire in low- and middle-resource regions. Hence, the development of low-cost imaging systems and automated image analysis algorithms is of critical importance. Motivated by this, we conduct an experimental study to assess the feasibility of developing a low-cost diagnostic system with an H&E-stained cervical tissue image analysis algorithm. In our imaging system, image acquisition is performed by a smartphone affixed to the top of a commonly available light microscope, which magnifies the cervical tissues. The images are not captured at a constant optical magnification, and, unlike whole-slide scanners, our imaging system is unable to record the magnification. The images are megapixel images and are labeled based on the presence of abnormal cells. Our dataset holds a total of 1331 images (train: 846; validation: 116; test: 369). We formulate the classification task as a deep multiple instance learning problem and quantitatively evaluate the classification performance of four different types of multiple instance learning algorithms trained with five different architectures designed with varying instance sizes. Finally, we design a sparse attention-based multiple instance learning framework that achieves up to 84.55% classification accuracy on the test set.
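A minimal sketch of attention-based MIL pooling, the building block behind such frameworks, is shown below; the layer sizes are illustrative and do not reproduce the paper's sparse-attention design.

```python
# Patch (instance) embeddings from one slide image (a bag) are combined
# into a single bag embedding via learned attention weights.
import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):
    def __init__(self, in_dim, attn_dim=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(in_dim, attn_dim), nn.Tanh(),
                                  nn.Linear(attn_dim, 1))

    def forward(self, instances):             # (n_instances, in_dim)
        weights = torch.softmax(self.attn(instances), dim=0)   # (n, 1)
        return (weights * instances).sum(dim=0)                # bag embedding

pool = AttentionMILPool(in_dim=512)
bag = torch.randn(37, 512)                    # 37 patches from one image
print(pool(bag).shape)                        # torch.Size([512])
```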
Affiliation(s)
- Anabik Pal
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA.
- Zhiyun Xue
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Kanan Desai
- National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- L Rodney Long
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Mark Schiffman
- National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Sameer Antani
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
85
Abousamra S, Belinsky D, Van Arnam J, Allard F, Yee E, Gupta R, Kurc T, Samaras D, Saltz J, Chen C. Multi-Class Cell Detection Using Spatial Context Representation. Proc IEEE Int Conf Comput Vis 2021:3985-3994. [PMID: 38783989 PMCID: PMC11114143 DOI: 10.1109/iccv48922.2021.00397]
Abstract
In digital pathology, both detection and classification of cells are important for automatic diagnostic and prognostic tasks. Classifying cells into subtypes, such as tumor cells, lymphocytes, or stromal cells, is particularly challenging. Existing methods focus on the morphological appearance of individual cells, whereas in practice pathologists often infer cell classes through their spatial context. In this paper, we propose a novel method for both detection and classification that explicitly incorporates spatial contextual information. We use spatial statistical functions to describe local density in both a multi-class and a multi-scale manner. Through representation learning and deep clustering techniques, we learn advanced cell representations covering both appearance and spatial context. On various benchmarks, our method achieves better performance than state-of-the-art methods, especially on the classification task. We also create a new dataset for multi-class cell detection and classification in breast cancer, and we make both our code and data publicly available.
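A toy version of such a multi-class, multi-scale local-density descriptor follows. The plain neighbour count is a stand-in for the paper's spatial statistical function; the radii and class count are illustrative, and every class is assumed non-empty.

```python
# For every cell, count neighbours of each class within several radii.
import numpy as np
from scipy.spatial import cKDTree

def spatial_context(points, labels, radii=(25.0, 50.0, 100.0), n_classes=3):
    """points: (n, 2) cell centroids; labels: (n,) integer class per cell."""
    trees = [cKDTree(points[labels == c]) for c in range(n_classes)]
    feats = np.zeros((len(points), n_classes * len(radii)))
    for i, p in enumerate(points):
        for c, tree in enumerate(trees):
            for j, r in enumerate(radii):
                feats[i, c * len(radii) + j] = len(tree.query_ball_point(p, r))
    return feats
```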
Affiliation(s)
- Eric Yee
- Stony Brook University, Stony Brook, NY 11794, USA
- Tahsin Kurc
- Stony Brook University, Stony Brook, NY 11794, USA
- Joel Saltz
- Stony Brook University, Stony Brook, NY 11794, USA
- Chao Chen
- Stony Brook University, Stony Brook, NY 11794, USA
86
Li X, Xu Z, Shen X, Zhou Y, Xiao B, Li TQ. Detection of Cervical Cancer Cells in Whole Slide Images Using Deformable and Global Context Aware Faster RCNN-FPN. Curr Oncol 2021; 28:3585-3601. [PMID: 34590614 PMCID: PMC8482136 DOI: 10.3390/curroncol28050307]
Abstract
Cervical cancer is a worldwide public health problem with high rates of illness and mortality among women. In this study, we propose a novel framework based on the Faster RCNN-FPN architecture for the detection of abnormal cervical cells in cytology images from a cancer screening test. We extend the Faster RCNN-FPN model by infusing deformable convolution layers into the feature pyramid network (FPN) to improve scalability. Furthermore, we introduce a global context-aware module alongside the Region Proposal Network (RPN) to enhance the spatial correlation between the background and the foreground. Extensive experiments with the proposed deformable and global context aware (DGCA) RCNN were carried out using the cervical image dataset of the "Digital Human Body" Vision Challenge from the Alibaba Cloud TianChi Company. Performance evaluation based on the mean average precision (mAP) and the receiver operating characteristic (ROC) curve demonstrates considerable advantages of the proposed framework. In particular, when combined with tagging of the negative image samples using traditional computer-vision techniques, a 6-9% increase in mAP was achieved. The proposed DGCA-RCNN model has the potential to become a clinically useful AI tool for automated detection of cervical cancer cells in whole slide images of Pap smears.
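torchvision ships the deformable convolution operator used by such designs. A minimal sketch of a block that predicts its own sampling offsets (two per kernel tap) follows; the channel sizes are illustrative.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        # a plain conv predicts 2*k*k offset channels for the deformable conv
        self.offset = nn.Conv2d(c_in, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(c_in, c_out, kernel_size=k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))

block = DeformBlock(256, 256)
print(block(torch.randn(1, 256, 32, 32)).shape)   # torch.Size([1, 256, 32, 32])
```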
Affiliation(s)
- Xia Li
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; (X.L.); (Z.X.); (X.S.); (Y.Z.); (B.X.)
- Zhenhao Xu
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China
- Xi Shen
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China
- Yongxia Zhou
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China
- Binggang Xiao
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China
- Tie-Qiang Li
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China
- Department of Clinical Science, Intervention and Technology, Karolinska Institutet, S-17177 Stockholm, Sweden
- Department of Medical Radiation and Nuclear Medicine, Karolinska University Hospital, S-14186 Stockholm, Sweden
87
Meng Z, Zhao Z, Li B, Su F, Guo L, Wang H. Triple Up-Sampling Segmentation Network With Distribution Consistency Loss for Pathological Diagnosis of Cervical Precancerous Lesions. IEEE J Biomed Health Inform 2021; 25:2673-2685. [PMID: 33296318 DOI: 10.1109/jbhi.2020.3043589]
Abstract
OBJECTIVE Cervical cancer, as one of the most frequently diagnosed cancers in women, is curable when detected early. However, automated algorithms for cervical pathology precancerous diagnosis are limited. METHODS In this paper, instead of popular patch-wise classification, an end-to-end patch-wise segmentation algorithm is proposed to focus on the spatial structure changes of pathological tissues. Specifically, a triple up-sampling segmentation network (TriUpSegNet) is constructed to aggregate spatial information. Second, a distribution consistency loss (DC-loss) is designed to constrain the model to fit the inter-class relationship of the cervix. Third, the Gauss-like weighted post-processing is employed to reduce patch stitching deviation and noise. RESULTS The algorithm is evaluated on three challenging and public datasets: 1) MTCHI for cervical precancerous diagnosis, 2) DigestPath for colon cancer, and 3) PAIP for liver cancer. The Dice coefficient is 0.7413 on the MTCHI dataset, which is significantly higher than the published state-of-the-art results. CONCLUSION Experiments on the public dataset MTCHI indicate the superiority of the proposed algorithm on cervical pathology precancerous diagnosis. In addition, the experiments on two other pathological datasets, i.e., DigestPath and PAIP, demonstrate the effectiveness and generalization ability of the TriUpSegNet and weighted post-processing on colon and liver cancers. SIGNIFICANCE The end-to-end TriUpSegNet with DC-loss and weighted post-processing leads to improved segmentation in pathology of various cancers.
88
Rahaman MM, Li C, Yao Y, Kulwa F, Wu X, Li X, Wang Q. DeepCervix: A deep learning-based framework for the classification of cervical cells using hybrid deep feature fusion techniques. Comput Biol Med 2021; 136:104649. [PMID: 34332347 DOI: 10.1016/j.compbiomed.2021.104649]
Abstract
Cervical cancer, one of the most common fatal cancers in women, can be prevented by regular screening that detects precancerous lesions at an early stage so they can be treated. The Pap smear test is a widely performed screening technique for early detection of cervical cancer, but this manual screening method suffers from a high false-positive rate because of human error. To improve manual screening practice, machine learning (ML) and deep learning (DL) based computer-aided diagnostic (CAD) systems have been investigated widely to classify cervical Pap cells. Most of the existing studies require pre-segmented images to obtain good classification results, whereas accurate cervical cell segmentation is challenging because of cell clustering. Some studies rely on handcrafted features, which cannot guarantee that the classification stage is optimal. Moreover, DL performs poorly on multiclass classification tasks when the data distribution is uneven, which is prevalent in cervical cell datasets. This investigation addresses those limitations by proposing DeepCervix, a hybrid deep feature fusion (HDFF) technique based on DL, to classify cervical cells accurately. Our proposed method uses various DL models to capture more potential information and enhance classification performance. The proposed HDFF method is tested on the publicly available SIPaKMeD dataset and compared with base DL models and the late fusion (LF) method. For the SIPaKMeD dataset, we obtained state-of-the-art classification accuracies of 99.85%, 99.38%, and 99.14% for 2-class, 3-class, and 5-class classification. This method was also tested on the Herlev dataset and achieved accuracies of 98.32% for 2-class and 90.32% for 7-class classification. The source code of the DeepCervix model is available at: https://github.com/Mamunur-20/DeepCervix.
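The core of hybrid deep feature fusion, concatenating pooled features from several pre-trained backbones before a small classifier, can be sketched as follows. The backbone choice and head are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class FusedClassifier(nn.Module):
    """Concatenate pooled features from two frozen backbones, then classify."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.b1 = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.b2 = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.b2.fc = nn.Identity()               # expose 2048-d resnet features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(512 + 2048, n_classes)

    def forward(self, x):
        f1 = self.pool(self.b1(x)).flatten(1)    # (batch, 512)
        f2 = self.b2(x)                          # (batch, 2048)
        return self.head(torch.cat([f1, f2], dim=1))
```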
Affiliation(s)
- Md Mamunur Rahaman
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China.
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
- Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, 07030, USA
- Frank Kulwa
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
- Xiangchen Wu
- Suzhou Ruiguan Technology Company Ltd., Suzhou, 215000, China
- Xiaoyan Li
- Cancer Hospital of China Medical University, Liaoning Hospital and Institute, Shenyang, 110042, China
- Qian Wang
- Cancer Hospital of China Medical University, Liaoning Hospital and Institute, Shenyang, 110042, China
89
Manna A, Kundu R, Kaplun D, Sinitca A, Sarkar R. A fuzzy rank-based ensemble of CNN models for classification of cervical cytology. Sci Rep 2021; 11:14538. [PMID: 34267261 PMCID: PMC8282795 DOI: 10.1038/s41598-021-93783-8]
Abstract
Cervical cancer affects more than 0.5 million women annually, causing more than 0.3 million deaths. Detection of the cancer in its early stages is of prime importance for eradicating the disease from the patient's body. However, regular population-wide screening is limited by its expensive and labour-intensive detection process, in which clinicians must classify individual cells from a stained slide containing more than 100,000 cervical cells to detect malignancy. Computer-Aided Diagnosis (CAD) systems are therefore used as a viable alternative for easy and fast detection of cancer. In this paper, we develop such a method by forming an ensemble-based classification model from three Convolutional Neural Network (CNN) architectures, namely Inception v3, Xception, and DenseNet-169, pre-trained on the ImageNet dataset, for Pap-stained single-cell and whole-slide image classification. The proposed ensemble scheme uses a fuzzy rank-based fusion of classifiers obtained by applying two non-linear functions to the decision scores generated by the base learners. Unlike the simple fusion schemes in the literature, the proposed ensemble technique makes its final predictions on the test samples by taking into account the confidence of the base classifiers' predictions. The proposed model has been evaluated on two publicly available benchmark datasets, the SIPaKMeD Pap Smear dataset and the Mendeley Liquid Based Cytology (LBC) dataset, using a 5-fold cross-validation scheme. On the SIPaKMeD dataset, the proposed framework achieves a classification accuracy of 98.55% and a sensitivity of 98.52% in its 2-class setting, and 95.43% accuracy and 98.52% sensitivity in its 5-class setting. On the Mendeley LBC dataset, the accuracy achieved is 99.23% with a sensitivity of 99.23%. These results outperform many state-of-the-art models, demonstrating the effectiveness of the approach. The relevant code is publicly available on GitHub.
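An illustrative sketch of fuzzy-rank-style fusion is given below. The two non-linear reweighting functions are stand-ins for the paper's choices, chosen only so that confident softmax scores produce a near-zero fused penalty; the class with the lowest penalty wins.

```python
import numpy as np

def fuzzy_rank_fusion(score_list):
    """score_list: list of (n_samples, n_classes) softmax score arrays."""
    fused = np.zeros_like(score_list[0])
    for s in score_list:
        f1 = 1.0 - np.tanh(((s - 1.0) ** 2) / 2.0)   # ~1 for confident scores
        f2 = 1.0 - np.exp(-((s - 1.0) ** 2) / 2.0)   # ~0 for confident scores
        fused += f1 * f2                              # near 0 when confident
    return fused.argmin(axis=1)                       # pick lowest penalty

# toy usage with three mock classifiers, 4 samples, 5 classes
rng = np.random.default_rng(1)
scores = [rng.dirichlet(np.ones(5), size=4) for _ in range(3)]
print(fuzzy_rank_fusion(scores))
```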
Affiliation(s)
- Ankur Manna
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
- Rohit Kundu
- Department of Electrical Engineering, Jadavpur University, Kolkata, 700032, India
- Dmitrii Kaplun
- Department of Automation and Control Processes, Saint Petersburg Electrotechnical University "LETI", Saint Petersburg, 197376, Russian Federation
- Aleksandr Sinitca
- Department of Automation and Control Processes, Saint Petersburg Electrotechnical University "LETI", Saint Petersburg, 197376, Russian Federation
- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
90
Pirovano A, Almeida LG, Ladjal S, Bloch I, Berlemont S. Computer-aided diagnosis tool for cervical cancer screening with weakly supervised localization and detection of abnormalities using adaptable and explainable classifier. Med Image Anal 2021; 73:102167. [PMID: 34333217 DOI: 10.1016/j.media.2021.102167]
Abstract
While the Pap test is the most common diagnostic method for cervical cancer, its results depend heavily on the ability of cytotechnicians to detect abnormal cells on smears using brightfield microscopy. In this paper, we propose an explainable region classifier for whole slide images that cyto-pathologists could use to handle these big images (100,000 × 100,000 pixels) efficiently. We create a dataset that simulates Pap smear regions and use a loss we call classification under regression constraint to train an efficient region classifier (about 66.8% accuracy on severity classification, 95.2% accuracy on normal/abnormal classification, and a 0.870 kappa score). We explain how this loss yields a model focused on sensitivity, and we show that it can be used to perform weakly supervised localization (accuracy of 80.4%) of the cell most responsible for the malignancy of a region of a whole slide image. We extend our method to a more general detection of abnormal cells (66.1% accuracy) and ensure that at least one abnormal cell will be detected if malignancy is present. Finally, we test our solution on a small real clinical slide dataset, highlighting the relevance of the proposed solution, adapting it to be integrated as easily as possible into a pathology laboratory workflow, and extending it to make slide-level predictions.
Affiliation(s)
- Antoine Pirovano
- Keen Eye, 74 rue du Faubourg Saint-Antoine, Paris 75012, France; LTCI, Telecom Paris, Institut Polytechnique de Paris, 19 Place Marguerite Perey, Palaiseau 91120, France.
- Said Ladjal
- LTCI, Telecom Paris, Institut Polytechnique de Paris, 19 Place Marguerite Perey, Palaiseau 91120, France
- Isabelle Bloch
- LTCI, Telecom Paris, Institut Polytechnique de Paris, 19 Place Marguerite Perey, Palaiseau 91120, France; Sorbonne Université, CNRS, LIP6, Paris, France
91
Classification of immature white blood cells in acute lymphoblastic leukemia L1 using neural networks particle swarm optimization. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06245-7]
92
DeepPrognosis: Preoperative prediction of pancreatic cancer survival and surgical margin via comprehensive understanding of dynamic contrast-enhanced CT imaging and tumor-vascular contact parsing. Med Image Anal 2021; 73:102150. [PMID: 34303891 DOI: 10.1016/j.media.2021.102150]
Abstract
Pancreatic ductal adenocarcinoma (PDAC) is one of the most lethal cancers, carrying a dismal five-year survival rate of ∼10%. Surgery remains the best option for a potential cure for patients judged eligible for initial resection of PDAC. However, outcomes vary significantly even among resected patients of the same cancer stage who received similar treatments. Accurate quantitative preoperative prediction for primary resectable PDAC is thus highly desired for personalized cancer treatment. Nevertheless, very few automated methods yet fully exploit contrast-enhanced computed tomography (CE-CT) imaging, which plays a critical role in PDAC staging and resectability evaluation, for PDAC prognosis assessment. In this work, we propose a novel deep neural network for survival prediction of patients with primary resectable PDAC, named the 3D Contrast-Enhanced Convolutional Long Short-Term Memory network (CE-ConvLSTM), which derives tumor attenuation signatures or patterns from patient CE-CT imaging studies. Tumor-vascular relationships, which may indicate resection margin status, have also been shown to correlate strongly with the overall survival of PDAC patients. To capture such relationships, we propose a self-learning approach for automated pancreas and peripancreatic anatomy segmentation that requires no annotations on our PDAC datasets. We then employ a multi-task convolutional neural network (CNN) to accomplish both survival outcome and margin prediction, where the network benefits from learning resection-margin-related image features to improve survival prediction. Our framework improves overall survival prediction compared with existing state-of-the-art survival analysis approaches, and the new staging biomarker integrating both the proposed risk signature and margin prediction adds clear value to the current clinical staging system.
93
Mohammed MA, Abdurahman F, Ayalew YA. Single-cell conventional pap smear image classification using pre-trained deep neural network architectures. BMC Biomed Eng 2021; 3:11. [PMID: 34187589 PMCID: PMC8244198 DOI: 10.1186/s42490-021-00056-6]
Abstract
Background: Automating cytology-based cervical cancer screening could alleviate the shortage of skilled pathologists in developing countries. Until now, computer vision experts have attempted numerous semi- and fully automated approaches to address this need, and leveraging the accuracy and reproducibility of deep neural networks has become common. In this regard, the purpose of this study is to classify single-cell Pap smear (cytology) images using pre-trained deep convolutional neural network (DCNN) image classifiers. We fine-tuned the top ten pre-trained DCNN image classifiers and evaluated them using five-class single-cell Pap smear images from the SIPaKMeD dataset. The pre-trained DCNN image classifiers were selected from Keras Applications based on their top-1 accuracy. Results: Our experiments demonstrated that, of the selected top-ten pre-trained DCNN image classifiers, DenseNet169 performed best, with average accuracy, precision, recall, and F1-score of 0.990, 0.974, 0.974, and 0.974, respectively. Moreover, it surpassed the benchmark accuracy proposed by the creators of the dataset by 3.70%. Conclusions: Although DenseNet169 is small compared with the other pre-trained DCNN image classifiers we examined, it is still not suitable for mobile or edge devices. Further experimentation with mobile or small DCNN image classifiers is required to extend the applicability of the models to real-world demands. In addition, since all experiments used the SIPaKMeD dataset, additional experiments with new datasets will be needed to establish the generalizability of the models.
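Since the paper fine-tunes Keras Applications models, a minimal sketch of that setup follows: a frozen DenseNet169 base plus a new five-class head. The input size, dropout rate, and learning rate are illustrative assumptions, not the paper's values.

```python
import tensorflow as tf

base = tf.keras.applications.DenseNet169(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3),
                                         pooling="avg")
base.trainable = False                       # freeze for the first stage
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(5, activation="softmax"),  # five SIPaKMeD classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```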
Affiliation(s)
- Mohammed Aliy Mohammed
- School of Biomedical Engineering, Jimma Institute of Technology, Jimma University, Jimma, Ethiopia.
- Fetulhak Abdurahman
- Faculty of Electrical and Computer Engineering, Jimma Institute of Technology, Jimma University, Jimma, Ethiopia
- Yodit Abebe Ayalew
- Department of Biomedical Engineering, Hawassa Institute of Technology, Hawassa University, Hawassa, Ethiopia
94
Liang Y, Pan C, Sun W, Liu Q, Du Y. Global context-aware cervical cell detection with soft scale anchor matching. Comput Methods Programs Biomed 2021; 204:106061. [PMID: 33819821 DOI: 10.1016/j.cmpb.2021.106061]
Abstract
BACKGROUND AND OBJECTIVE: Computer-aided cervical cancer screening based on automated recognition of cervical cells has the potential to significantly reduce error rates and increase productivity compared with manual screening. Traditional methods often rely on accurate cell segmentation and discriminative handcrafted feature extraction, while recent detectors based on convolutional neural networks reduce the dependency on handcrafted features and eliminate the need for segmentation. However, these methods tend to yield too many false positive predictions. METHODS: This paper proposes a global context-aware framework to deal with this problem, which integrates global context information through an image-level classification branch and a weighted loss; the prediction of this branch is merged into cell detection to filter false positive predictions. Furthermore, a new ground-truth assignment strategy in the feature pyramid, called soft scale anchor matching, is proposed, which matches ground truths with anchors across scales softly. This strategy searches for the most appropriate representation of each ground truth in every layer and adds more positive samples at different scales, which facilitates feature learning. RESULTS: Our proposed methods yield a 5.7% increase in mean average precision and an 18.5% increase in specificity, at the cost of a 2.6% slowdown in inference time. CONCLUSIONS: Our proposed methods, which avoid any dependence on segmentation of cervical cells, show great potential for reducing the workload of pathologists in automation-assisted cervical cancer screening.
Affiliation(s)
- Yixiong Liang
- School of Computer Science and Engineering, Central South University, Changsha, China.
- Changli Pan
- School of Computer Science and Engineering, Central South University, Changsha, China
- Wanxin Sun
- School of Computer Science and Engineering, Central South University, Changsha, China
- Qing Liu
- School of Computer Science and Engineering, Central South University, Changsha, China
- Yun Du
- The Fourth Hospital of Hebei Medical University, Hebei Province China-Japan Friendship Center for Cancer Detection, China
95
Lu H, Tian S, Yu L, Xing Y, Cheng J, Liu L. Medical image segmentation using boundary-enhanced guided packet rotation dual attention decoder network. Technol Health Care 2021; 30:129-143. [PMID: 34057109 DOI: 10.3233/thc-202789]
Abstract
OBJECTIVE: The automatic segmentation of medical images is an important task in clinical applications. However, because of the complexity of organ backgrounds, unclear boundaries, and the variable sizes of different organs, some features are lost during network learning and segmentation accuracy is low. These issues prompted us to study whether it is possible to better preserve the deep feature information of the image and to solve the low segmentation accuracy caused by unclear image boundaries. METHODS: In this study, we (1) build a reliable deep learning network framework, named BGRANet, to improve segmentation performance for medical images; (2) propose a packet rotation convolutional fusion encoder network to extract features; (3) build a boundary-enhanced guided packet rotation dual attention decoder network, which enhances the boundary of the segmentation map and effectively fuses more prior information; and (4) propose a multi-resolution fusion module to generate high-resolution feature maps. We demonstrate the effectiveness of the proposed method on two publicly available datasets. RESULTS: BGRANet has been trained and tested on the prepared datasets, and the experimental results show that our proposed model achieves better segmentation performance. For 4-class classification (CHAOS dataset), the average Dice similarity coefficient reached 91.73%. For 2-class classification (Herlev dataset), the precision, sensitivity, specificity, accuracy, and Dice reached 93.75%, 94.30%, 98.19%, 97.43%, and 98.08%, respectively. CONCLUSION: We propose a boundary-enhanced guided packet rotation dual attention decoder network that achieves high segmentation accuracy with a reduced parameter count.
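For reference, the Dice similarity coefficient reported above is, for binary masks, the standard overlap measure sketched below.

```python
# Dice similarity coefficient for binary segmentation masks.
import torch

def dice_coefficient(pred, target, eps=1e-6):
    """pred, target: binary tensors of the same shape."""
    pred, target = pred.float(), target.float()
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)
```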
Affiliation(s)
- Hongchun Lu
- School of Software, Xinjiang University, Urumqi, Xinjiang, China; Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, Xinjiang, China
- Shengwei Tian
- School of Software, Xinjiang University, Urumqi, Xinjiang, China
- Long Yu
- Network Center, Xinjiang University, Urumqi, Xinjiang, China
- Yan Xing
- The First Affiliated Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China
- Junlong Cheng
- Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, Xinjiang, China; School of Information Science and Engineering, Xinjiang University, Urumqi, Xinjiang, China
- Lu Liu
- School of Educational Science, Xinjiang Normal University, Urumqi, Xinjiang, China
96
Victória Matias A, Atkinson Amorim JG, Buschetto Macarini LA, Cerentini A, Casimiro Onofre AS, De Miranda Onofre FB, Daltoé FP, Stemmer MR, von Wangenheim A. What is the state of the art of computer vision-assisted cytology? A Systematic Literature Review. Comput Med Imaging Graph 2021; 91:101934. [PMID: 34174544 DOI: 10.1016/j.compmedimag.2021.101934]
Abstract
Cytology is a low-cost and non-invasive diagnostic procedure employed to support the diagnosis of a broad range of pathologies. Cells are harvested from tissues by aspiration or scraping, and analysis is still predominantly performed manually by medical or laboratory professionals extensively trained for this purpose. It is a time-consuming and repetitive process in which many diagnostic criteria are subjective and vulnerable to human interpretation. Computer vision technologies, by automatically generating quantitative and objective descriptions of an examination's contents, can help minimize the chance of misdiagnosis and shorten the time required for analysis. To identify the state of the art of computer vision techniques currently applied to cytology, we conducted a Systematic Literature Review, searching for approaches for the segmentation, detection, quantification, and classification of cells and organelles using computer vision on cytology slides and analyzing papers published in the last 4 years. The initial search, executed in September 2020, returned 431 articles; after applying the inclusion/exclusion criteria, 157 papers remained, which we analyzed to build a picture of the tendencies and problems in this research area, highlighting the computer vision methods, staining techniques, evaluation metrics, and the availability of the datasets and computer code used. We found that methods based only on classic computer vision still appear in more of the analyzed works (101 papers) than deep learning-based methods (70 papers). The most recurrent metric for classification and object detection was accuracy (33 and 5 papers, respectively), while for segmentation it was the Dice Similarity Coefficient (38 papers). Regarding staining techniques, Papanicolaou was the most employed (130 papers), followed by H&E (20 papers) and Feulgen (5 papers). Twelve of the datasets used in the papers are publicly available, the DTU/Herlev dataset being the most used. We conclude that there is still a lack of high-quality datasets for many types of stains, and most of the works are not mature enough to be applied in a daily clinical diagnostic routine. We also identified a growing tendency towards adopting deep learning-based approaches as the methods of choice.
Affiliation(s)
- André Victória Matias
- Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis, Brazil.
- Allan Cerentini
- Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis, Brazil
- Felipe Perozzo Daltoé
- Department of Pathology, Federal University of Santa Catarina, Florianópolis, Brazil
- Marcelo Ricardo Stemmer
- Automation and Systems Department, Federal University of Santa Catarina, Florianópolis, Brazil
- Aldo von Wangenheim
- Brazilian Institute for Digital Convergence, Federal University of Santa Catarina, Florianópolis, Brazil
97
A CNN-based unified framework utilizing projection loss in unison with label noise handling for multiple Myeloma cancer diagnosis. Med Image Anal 2021; 72:102099. [PMID: 34098240 DOI: 10.1016/j.media.2021.102099]
Abstract
Multiple Myeloma (MM) is a malignancy of plasma cells. Like other forms of cancer, it demands prompt diagnosis to reduce the risk of mortality. Conventional diagnostic tools are resource-intensive, so they do not scale easily to reach the masses. Advancements in deep learning have led to rapid development of affordable, resource-optimized, easily deployable computer-assisted solutions. This work proposes a unified framework for MM diagnosis using microscopic blood cell imaging data that addresses two key challenges: the inter-class visual similarity of healthy versus cancer cells, and the label noise of the dataset. To extract class-distinctive features, we propose a projection loss that maximizes the projection of a sample's activation onto the respective class vector while imposing orthogonality constraints on the class vectors. This projection loss is used along with the cross-entropy loss to design a dual-branch architecture that improves performance and provides scope for targeting the label-noise problem. Based on this architecture, two methodologies are proposed to correct the noisy labels, and a coupling classifier is proposed to resolve conflicts between the dual branches' predictions. We utilize a large dataset of 72 subjects (26 healthy and 46 MM cancer) containing a total of 74,996 images (34,555 training and 40,441 test cell images), so far the most extensive dataset on Multiple Myeloma reported in the literature. An ablation study has also been carried out. The proposed architecture performs best, with a balanced accuracy of 94.17% on binary classification of healthy versus cancer cells, in comparison with ten state-of-the-art architectures. Extensive experiments on two additional publicly available datasets of different modalities analyze the label-noise handling capability of the proposed methodology. The code will be available under https://github.com/shivgahlout/CAD-MM.
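One plausible reading of such a projection loss is sketched below: embeddings should project strongly onto their own class vector, and the class vectors are penalized for deviating from mutual orthogonality. The cosine normalization and the 0.1 weighting are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionLoss(nn.Module):
    def __init__(self, n_classes, dim):
        super().__init__()
        self.class_vecs = nn.Parameter(torch.randn(n_classes, dim))

    def forward(self, embeddings, targets):
        v = F.normalize(self.class_vecs, dim=1)
        proj = F.normalize(embeddings, dim=1) @ v.t()     # cosine projections
        hit = proj[torch.arange(len(targets)), targets]   # own-class projection
        ortho = ((v @ v.t() - torch.eye(len(v))) ** 2).sum()  # orthogonality
        return (1.0 - hit).mean() + 0.1 * ortho

criterion = ProjectionLoss(n_classes=2, dim=64)
loss = criterion(torch.randn(8, 64), torch.randint(0, 2, (8,)))
loss.backward()
```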
|
98
|
Liang Y, Tang Z, Yan M, Chen J, Liu Q, Xiang Y. Comparison detector for cervical cell/clumps detection in the limited data scenario. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.01.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
|
99
|
Vogado L, Veras R, Aires K, Araújo F, Silva R, Ponti M, Tavares JMRS. Diagnosis of Leukaemia in Blood Slides Based on a Fine-Tuned and Highly Generalisable Deep Learning Model. SENSORS (BASEL, SWITZERLAND) 2021; 21:2989. [PMID: 33923209 PMCID: PMC8123151 DOI: 10.3390/s21092989] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Revised: 04/19/2021] [Accepted: 04/21/2021] [Indexed: 02/06/2023]
Abstract
Leukaemia is a disorder that affects the production of white blood cells in the bone marrow: immature cells are abnormally produced and replace normal blood cells, impairing the transport of oxygen and the ability to fight infections. This article proposes a convolutional neural network (CNN) named LeukNet, inspired by the convolutional blocks of VGG-16 but with smaller dense layers. To define the LeukNet parameters, we evaluated different CNN models and fine-tuning methods using 18 image datasets with different resolution, contrast, colour, and texture characteristics. We applied data augmentation operations to expand the training dataset, and 5-fold cross-validation led to an accuracy of 98.61%. To evaluate the CNN's generalisation ability, we applied a cross-dataset validation technique. The accuracies obtained in cross-dataset experiments on three datasets were 97.04%, 82.46%, and 70.24%, surpassing those obtained by current state-of-the-art methods. We conclude that using the most common and deepest CNNs may not be the best choice for applications where the images to be classified differ from those used in pre-training. Additionally, the adopted cross-dataset validation approach proved to be an excellent way to evaluate the generalisation capability of a model, as it considers the model's performance on unseen data, which is paramount for CAD systems.
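The two ingredients highlighted in this abstract, a VGG-16 backbone with smaller dense layers and cross-dataset validation, can be sketched as follows. This PyTorch/torchvision snippet is illustrative and is not the authors' LeukNet code: the head dimensions, function names, and the `external_loaders` placeholder are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_leuknet_like(num_classes: int = 2) -> nn.Module:
    """A VGG-16 feature extractor with a deliberately smaller dense head
    (layer sizes are assumed, not the published LeukNet configuration)."""
    backbone = models.vgg16(weights="IMAGENET1K_V1")   # ImageNet pre-training
    backbone.classifier = nn.Sequential(
        nn.Linear(512 * 7 * 7, 256), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(256, num_classes),
    )
    return backbone

@torch.no_grad()
def cross_dataset_accuracy(model: nn.Module, loader) -> float:
    """Accuracy on a dataset the model never saw during training."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# model = build_leuknet_like()
# ... fine-tune on the training dataset only ...
# for name, loader in external_loaders.items():   # hypothetical held-out datasets
#     print(name, cross_dataset_accuracy(model, loader))
```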
Affiliation(s)
- Luis Vogado, Departamento de Computação, Universidade Federal do Piauí, Teresina 64049-550, Brazil.
- Rodrigo Veras, Departamento de Computação, Universidade Federal do Piauí, Teresina 64049-550, Brazil.
- Kelson Aires, Departamento de Computação, Universidade Federal do Piauí, Teresina 64049-550, Brazil.
- Flávio Araújo, Curso de Bacharelado em Sistemas de Informação, Universidade Federal do Piauí, Picos 64607-670, Brazil.
- Romuere Silva, Curso de Bacharelado em Sistemas de Informação, Universidade Federal do Piauí, Picos 64607-670, Brazil.
- Moacir Ponti, Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo, São Carlos 13566-590, Brazil.
- João Manuel R. S. Tavares, Departamento de Engenharia Mecânica, Faculdade de Engenharia, Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Universidade do Porto, 4200-465 Porto, Portugal.
|
100
|
Albuquerque T, Cruz R, Cardoso JS. Ordinal losses for classification of cervical cancer risk. PeerJ Comput Sci 2021; 7:e457. [PMID: 33981833 PMCID: PMC8080423 DOI: 10.7717/peerj-cs.457] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Accepted: 03/04/2021] [Indexed: 06/12/2023]
Abstract
Cervical cancer is the fourth leading cause of cancer-related deaths in women, especially in low- to middle-income countries. Despite recent scientific advances, there is no fully effective treatment, especially when the disease is diagnosed at an advanced stage. Screening tests, such as cytology or colposcopy, have been responsible for a substantial decrease in cervical cancer deaths. Automatic cervical cancer screening via Pap smear is a highly valuable cell imaging-based detection tool, where cells must be classified into one of a multitude of ordinal classes, ranging from abnormal to normal. Current approaches to ordinal inference with neural networks are found either to take insufficient advantage of the ordinal structure of the problem or to be too inflexible. A non-parametric ordinal loss for neural networks is proposed that promotes the output probabilities to follow a unimodal distribution. This is done by imposing a set of constraints over all pairs of consecutive labels, which allows for a more flexible decision boundary relative to approaches from the literature. The proposed loss is contrasted against other methods from the literature across a range of deep architectures. A first conclusion is the benefit of using non-parametric ordinal losses over parametric losses in cervical cancer risk prediction. Additionally, the proposed loss is found to be the top performer in several cases. The best-performing model scores an accuracy of 75.6% for seven classes and 81.3% for four classes.
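The unimodality idea can be made concrete with a small sketch. The PyTorch function below is a hypothetical instance of a non-parametric ordinal loss built from hinge penalties over consecutive label pairs, not the exact loss proposed in the paper; the `margin` and `lam` knobs and the combination with cross-entropy are assumptions.

```python
import torch
import torch.nn.functional as F

def unimodal_ordinal_loss(logits: torch.Tensor, labels: torch.Tensor,
                          margin: float = 0.0, lam: float = 1.0) -> torch.Tensor:
    """Cross-entropy plus a hinge penalty whenever consecutive class
    probabilities fail to increase towards the true class and decrease
    away from it, encouraging a unimodal output distribution."""
    probs = F.softmax(logits, dim=1)                  # (batch, num_classes)
    num_classes = logits.size(1)
    penalty = torch.zeros((), device=logits.device)
    for k in range(num_classes - 1):
        # Left of the true class, p_k should not exceed p_{k+1}; right of
        # it, p_{k+1} should not exceed p_k. Violations incur a hinge cost.
        rising = F.relu(probs[:, k] - probs[:, k + 1] + margin)
        falling = F.relu(probs[:, k + 1] - probs[:, k] + margin)
        penalty = penalty + torch.where(labels > k, rising, falling).mean()
    return F.cross_entropy(logits, labels) + lam * penalty
```

Because only pairwise orderings of consecutive probabilities are constrained, no parametric shape (e.g. binomial or Poisson) is imposed on the output distribution, which is the flexibility the abstract contrasts with parametric ordinal losses.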
Affiliation(s)
- Tomé Albuquerque, Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; Faculty of Engineering of the University of Porto, Porto, Portugal.
- Ricardo Cruz, Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; Faculty of Engineering of the University of Porto, Porto, Portugal.
- Jaime S. Cardoso, Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; Faculty of Engineering of the University of Porto, Porto, Portugal.