1. Wu J, Wang H, Nie Y, Wang Y, He W, Wang G, Li Z, Chen J, Xu W. CTCNet: a fine-grained classification network for fluorescence images of circulating tumor cells. Med Biol Eng Comput 2025. PMID: 39841310. DOI: 10.1007/s11517-025-03297-y.
Abstract
The identification and categorization of circulating tumor cells (CTCs) in peripheral blood are imperative for advancing cancer diagnostics and prognostics. The intricacy of the various CTC subtypes, coupled with the difficulty of developing exhaustive datasets, has impeded progress in this specialized domain. To date, no methods have been dedicated exclusively to overcoming the classification challenges of CTCs. To address this deficit, we have developed CTCDet, a large-scale dataset meticulously annotated based on the distinctive pathological characteristics of CTCs, aimed at advancing the application of deep learning techniques in oncological research. Furthermore, we introduce CTCNet, a hybrid architecture that merges the capabilities of CNNs and Transformers to achieve precise classification of CTCs. This architecture features the Parallel Token mixer, which integrates local window self-attention with large-kernel depthwise convolution, enhancing the network's ability to model intricate channel and spatial relationships. Additionally, the Deformable Large Kernel Attention (DLKAttention) module leverages deformable convolution and large-kernel operations to delineate the nuanced features of CTCs, substantially boosting classification efficacy. Comprehensive evaluations on the CTCDet dataset confirm the superior performance of CTCNet, which outperforms general-purpose methods in accurate cell classification. Moreover, the generalizability of CTCNet has been verified across various datasets, establishing its robustness and applicability. The proposed method also has potential for clinical application, assisting in cancer diagnosis and treatment. Code and data are available at https://github.com/JasonWu404/CTCs_Classification.
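The abstract gives no implementation details, so the following is a minimal PyTorch sketch of what a parallel token mixer of this kind might look like: one branch applies windowed multi-head self-attention, the other a large-kernel depthwise convolution, and the two are fused by a 1x1 projection. The class name, window size, kernel size, and head count are illustrative assumptions, not values from the CTCNet paper.

```python
# Hedged sketch of a "parallel token mixer": a windowed self-attention branch
# and a large-kernel depthwise convolution branch, summed and projected.
import torch
import torch.nn as nn


class ParallelTokenMixer(nn.Module):
    def __init__(self, dim: int, window: int = 7, heads: int = 4, dw_kernel: int = 13):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Large-kernel depthwise convolution branch (groups=dim -> depthwise).
        self.dwconv = nn.Conv2d(dim, dim, dw_kernel, padding=dw_kernel // 2, groups=dim)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); H and W are assumed divisible by the window size.
        b, c, h, w = x.shape
        ws = self.window
        # Local window self-attention: partition into non-overlapping ws x ws windows.
        windows = (
            x.view(b, c, h // ws, ws, w // ws, ws)
            .permute(0, 2, 4, 3, 5, 1)
            .reshape(-1, ws * ws, c)
        )
        attn_out, _ = self.attn(windows, windows, windows)
        attn_out = (
            attn_out.reshape(b, h // ws, w // ws, ws, ws, c)
            .permute(0, 5, 1, 3, 2, 4)
            .reshape(b, c, h, w)
        )
        # Large-kernel depthwise convolution branch, then channel mixing.
        conv_out = self.dwconv(x)
        return self.proj(attn_out + conv_out)


if __name__ == "__main__":
    mixer = ParallelTokenMixer(dim=64)
    feats = torch.randn(2, 64, 28, 28)   # 28 is divisible by the 7x7 window
    print(mixer(feats).shape)            # torch.Size([2, 64, 28, 28])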
Affiliation(s)
- Juntao Wu: School of Electronic and Information Engineering, Anhui Jianzhu University, Hefei, 230601, Anhui, China; Hefei Institute of Physical Sciences, Chinese Academy of Sciences, Hefei, 230031, Anhui, China
- Han Wang: Hefei Institute of Physical Sciences, Chinese Academy of Sciences, Hefei, 230031, Anhui, China
- Yuman Nie: Hefei Institute of Physical Sciences, Chinese Academy of Sciences, Hefei, 230031, Anhui, China
- Yaoxiong Wang: Hefei Institute of Physical Sciences, Chinese Academy of Sciences, Hefei, 230031, Anhui, China
- Wei He: Anhui BioX-Vision Biological Technology Co., Ltd, Hefei, 230031, Anhui, China
- Guoxing Wang: Anhui BioX-Vision Biological Technology Co., Ltd, Hefei, 230031, Anhui, China
- Zeng Li: School of Pharmacy, Anhui Medical University, Hefei, 230032, Anhui, China
- Jiajun Chen: Anhui BioX-Vision Biological Technology Co., Ltd, Hefei, 230031, Anhui, China
- Wenliang Xu: Anhui BioX-Vision Biological Technology Co., Ltd, Hefei, 230031, Anhui, China
2. Çetin-Kaya Y. Equilibrium Optimization-Based Ensemble CNN Framework for Breast Cancer Multiclass Classification Using Histopathological Image. Diagnostics (Basel) 2024; 14:2253. PMID: 39410657. PMCID: PMC11475610. DOI: 10.3390/diagnostics14192253.
Abstract
Background: Breast cancer is one of the most lethal cancers among women. Early detection and proper treatment reduce mortality rates. Histopathological images provide detailed information for diagnosing and staging breast cancer. Methods: The BreakHis dataset, which includes histopathological images, is used in this study. Medical images are prone to problems such as varied textural backgrounds, overlapping cell structures, unbalanced class distribution, and insufficiently labeled data. In addition, the tendency of deep learning models toward overfitting and insufficient feature extraction makes it extremely difficult to obtain a high-performance model on this dataset. In this study, 20 state-of-the-art models are trained to diagnose eight types of breast cancer using fine-tuning. In addition, a comprehensive experimental study covering 20 different custom models was conducted to determine the most successful new model. As a result, we propose a novel model called MultiHisNet. Results: The most effective custom model, which includes a pointwise convolution layer, residual links, and channel and spatial attention modules, achieved 94.69% accuracy in multi-class breast cancer classification. An ensemble model was created from the best-performing transfer learning and custom models obtained in the study, and model weights were determined with an Equilibrium Optimizer. The proposed ensemble model achieved 96.71% accuracy in eight-class breast cancer detection. Conclusions: The results show that the proposed model will support pathologists in successfully diagnosing breast cancer.
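As a rough illustration of the ensemble step, the sketch below performs weighted soft voting over the class probabilities of several trained models. A plain random search stands in for the Equilibrium Optimizer purely to show where the optimized weights plug in; all function names, sizes, and the fake probabilities are hypothetical.

```python
# Hedged sketch of weighted soft-voting over per-model class probabilities,
# with a random search standing in for the Equilibrium Optimizer.
import numpy as np


def ensemble_predict(prob_list, weights):
    """prob_list: list of (n_samples, n_classes) probability arrays."""
    stacked = np.stack(prob_list)                     # (n_models, n, c)
    w = np.asarray(weights)[:, None, None]
    return np.argmax((w * stacked).sum(axis=0), axis=1)


def search_weights(prob_list, labels, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    best_w, best_acc = None, -1.0
    for _ in range(trials):
        w = rng.random(len(prob_list))
        w /= w.sum()                                  # convex combination of models
        acc = (ensemble_predict(prob_list, w) == labels).mean()
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    labels = rng.integers(0, 8, size=200)             # 8 BreakHis classes
    # Fake per-model probabilities standing in for real CNN validation outputs.
    probs = [rng.dirichlet(np.ones(8), size=200) for _ in range(3)]
    w, acc = search_weights(probs, labels)
    print("weights:", np.round(w, 3), "val accuracy:", round(acc, 3))
```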
Affiliation(s)
- Yasemin Çetin-Kaya: Department of Computer Engineering, Faculty of Engineering and Architecture, Tokat Gaziosmanpasa University, Tokat 60250, Turkey
3. Ma L, Gao Y, Huo Y, Tian T, Hong G, Li H. Integrated analysis of diverse cancer types reveals a breast cancer-specific serum miRNA biomarker through relative expression orderings analysis. Breast Cancer Res Treat 2024; 204:475-484. PMID: 38191685. PMCID: PMC10959809. DOI: 10.1007/s10549-023-07208-3.
Abstract
PURPOSE Serum microRNA (miRNA) holds great potential as a non-invasive biomarker for diagnosing breast cancer (BrC). However, most diagnostic models rely on the absolute expression levels of miRNAs, which are susceptible to batch effects and difficult to translate into clinical practice. Furthermore, current studies on liquid biopsy diagnostic biomarkers for BrC mainly focus on distinguishing BrC patients from healthy controls and lack assessment of specificity against other cancers. METHODS We collected miRNA expression data for 8465 samples from GEO, covering 13 different cancer types and non-cancer controls. Based on the relative expression orderings (REOs) of miRNAs within each sample, we applied greedy, LASSO multiple linear regression, and random forest algorithms to identify a qualitative biomarker specific to BrC by comparing BrC samples with samples of other cancers as controls. RESULTS We developed a BrC-specific biomarker called 7-miRPairs, consisting of seven miRNA pairs. It matched the classification performance of the machine learning algorithms evaluated while requiring fewer miRNA pairs, accurately distinguishing BrC from 12 other cancer types. The diagnostic performance of 7-miRPairs was favorable in the training set (accuracy = 98.47%, specificity = 98.14%, sensitivity = 99.25%), and similar results were obtained in the test set (accuracy = 97.22%, specificity = 96.87%, sensitivity = 98.02%). KEGG pathway enrichment analysis of the 11 miRNAs within the 7-miRPairs revealed significant enrichment of target mRNAs in pathways associated with BrC. CONCLUSION Our study provides evidence that serum miRNA pairs can offer significant advantages for BrC-specific diagnosis in clinical practice by directly comparing serum samples with BrC to those of other cancer types.
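The core idea of REO-based features is rank comparison within a single sample, which makes the biomarker insensitive to scaling and batch effects. The sketch below shows how such binary pair features could be derived; the miRNA names and pairs are placeholders, not the published 7-miRPairs signature.

```python
# Hedged sketch of turning within-sample relative expression orderings (REOs)
# into binary features: for each chosen pair (a, b), the feature is 1 if
# miRNA a is expressed higher than miRNA b in that sample.
import numpy as np
import pandas as pd


def reo_features(expr: pd.DataFrame, pairs: list[tuple[str, str]]) -> pd.DataFrame:
    """expr: samples x miRNAs expression matrix (any scale; only ranks matter)."""
    feats = {f"{a}>{b}": (expr[a] > expr[b]).astype(int) for a, b in pairs}
    return pd.DataFrame(feats, index=expr.index)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mirnas = ["miR-21", "miR-155", "miR-145", "miR-10b"]        # illustrative names
    expr = pd.DataFrame(rng.lognormal(size=(5, 4)), columns=mirnas)
    pairs = [("miR-21", "miR-145"), ("miR-155", "miR-10b")]     # illustrative pairs
    print(reo_features(expr, pairs))
```

Because each feature depends only on the ordering of two measurements inside one sample, the resulting classifier needs no cross-sample normalization, which is the property the abstract highlights for clinical use.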
Affiliation(s)
- Liyuan Ma: School of Public Health and Health Management, Gannan Medical University, Ganzhou, 341000, China
- Yaru Gao: School of Public Health and Health Management, Gannan Medical University, Ganzhou, 341000, China
- Yue Huo: School of Public Health and Health Management, Gannan Medical University, Ganzhou, 341000, China
- Tian Tian: School of Medical Information Engineering, Gannan Medical University, Ganzhou, 341000, China
- Guini Hong: School of Medical Information Engineering, Gannan Medical University, Ganzhou, 341000, China
- Hongdong Li: School of Medical Information Engineering, Gannan Medical University, Ganzhou, 341000, China
4. Ciobotaru A, Bota MA, Goța DI, Miclea LC. Multi-Instance Classification of Breast Tumor Ultrasound Images Using Convolutional Neural Networks and Transfer Learning. Bioengineering (Basel) 2023; 10:1419. PMID: 38136010. PMCID: PMC10740646. DOI: 10.3390/bioengineering10121419.
Abstract
BACKGROUND Breast cancer is one of the leading causes of death among women around the world. The automation of early detection and classification of breast masses has been a prominent focus for researchers in the past decade. Ultrasound imaging is widely used in the diagnostic evaluation of breast cancer, but its predictive accuracy depends on the expertise of the specialist. Therefore, there is an urgent need for fast and reliable ultrasound image classification algorithms. METHODS This paper compares the efficiency of six state-of-the-art, fine-tuned deep learning models that classify breast tissue from ultrasound images into three classes (benign, malignant, and normal) using transfer learning. Additionally, the architecture of a custom model is introduced and trained from the ground up on a public dataset containing 780 images, which was further augmented to 3900 and then 7800 images. The custom model is also validated on a private dataset containing 163 ultrasound images divided into two classes: benign and malignant. The pre-trained architectures used in this work are ResNet-50, Inception-V3, Inception-ResNet-V2, MobileNet-V2, VGG-16, and DenseNet-121. The performance evaluation metrics used in this study are precision, recall, F1-score, and specificity. RESULTS The experimental results show that the models trained on the augmented dataset with 7800 images obtained the best performance on the test set, achieving 94.95 ± 0.64%, 97.69 ± 0.52%, 97.69 ± 0.13%, 97.77 ± 0.29%, 95.07 ± 0.41%, 98.11 ± 0.10%, and 96.75 ± 0.26% accuracy for ResNet-50, MobileNet-V2, Inception-ResNet-V2, VGG-16, Inception-V3, DenseNet-121, and our model, respectively. CONCLUSION Our proposed model obtains competitive results, outperforming some state-of-the-art models in terms of accuracy and training time.
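The transfer-learning setup described above can be sketched as follows (not the authors' exact configuration): load an ImageNet-pretrained backbone, replace its classification head with a three-class layer for benign/malignant/normal, and fine-tune. The choice of ResNet-50, the frozen-backbone strategy, and the hyperparameters are illustrative assumptions.

```python
# Hedged sketch of fine-tuning a pretrained CNN for 3-class ultrasound images.
import torch
import torch.nn as nn
from torchvision import models


def build_finetune_model(num_classes: int = 3, freeze_backbone: bool = True) -> nn.Module:
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False              # train only the new head at first
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


if __name__ == "__main__":
    model = build_finetune_model()
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )
    dummy = torch.randn(4, 3, 224, 224)          # a batch of resized ultrasound images
    logits = model(dummy)
    print(logits.shape)                          # torch.Size([4, 3])
```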
Affiliation(s)
- Alexandru Ciobotaru: Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Maria Aurora Bota: Department of Advanced Computing Sciences, Faculty of Sciences and Engineering, Maastricht University, 6229 EN Maastricht, The Netherlands
- Dan Ioan Goța: Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Liviu Cristian Miclea: Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
5. Jiang L, Huang S, Luo C, Zhang J, Chen W, Liu Z. An improved multi-scale gradient generative adversarial network for enhancing classification of colorectal cancer histological images. Front Oncol 2023; 13:1240645. PMID: 38023227. PMCID: PMC10679330. DOI: 10.3389/fonc.2023.1240645.
Abstract
Introduction Deep learning-based solutions for histological image classification have gained attention in recent years due to their potential for objective evaluation of histological images. However, these methods often require a large number of expert annotations, which are time-consuming and labor-intensive to obtain. Several scholars have proposed generative models to augment labeled data, but these often result in label uncertainty due to incomplete learning of the data distribution. Methods To alleviate these issues, a method called InceptionV3-SMSG-GAN is proposed to enhance classification performance by generating high-quality images. Specifically, images synthesized by a Multi-Scale Gradients Generative Adversarial Network (MSG-GAN) are selectively added to the training set through a selection mechanism that uses a trained model to choose generated images with higher class probabilities. The selection mechanism filters out synthetic images that contain ambiguous category information, thus alleviating label uncertainty. Results Experimental results show that, compared with the InceptionV3 baseline, the proposed method significantly improves overall pathological image classification accuracy from 86.87% to 89.54%. Additionally, the quality of the generated images is evaluated quantitatively using several commonly used metrics. Discussion The proposed InceptionV3-SMSG-GAN method exhibited good classification ability, dividing histological images into nine categories. Future work could focus on further refining the image generation and selection processes to optimize classification performance.
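The selection mechanism amounts to confidence filtering of synthetic images with a trained classifier. The sketch below illustrates one plausible form of that filter; the threshold, the toy classifier, and the tensor shapes are assumptions rather than the paper's settings.

```python
# Hedged sketch of confidence-based selection of GAN-generated images:
# keep only images the trained classifier confidently assigns to the
# intended class, discarding label-ambiguous samples.
import torch
import torch.nn.functional as F


@torch.no_grad()
def select_synthetic(classifier, images, target_class: int, threshold: float = 0.9):
    """Return the subset of generated `images` whose predicted probability
    for `target_class` is at least `threshold`."""
    probs = F.softmax(classifier(images), dim=1)
    keep = probs[:, target_class] >= threshold
    return images[keep]


if __name__ == "__main__":
    # Toy stand-in for a trained InceptionV3: a random linear classifier over 9 classes.
    classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 9))
    fake_batch = torch.randn(16, 3, 64, 64)      # pretend MSG-GAN outputs
    kept = select_synthetic(classifier, fake_batch, target_class=2, threshold=0.2)
    print(f"kept {kept.shape[0]} of {fake_batch.shape[0]} synthetic images")
```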
Affiliation(s)
- Liwen Jiang: Department of Pathology, Affiliated Cancer Hospital and Institution of Guangzhou Medical University, Guangzhou, China
- Shuting Huang: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Chaofan Luo: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Jiangyu Zhang: Department of Pathology, Affiliated Cancer Hospital and Institution of Guangzhou Medical University, Guangzhou, China
- Wenjing Chen: Department of Pathology, Guangdong Women and Children Hospital, Guangzhou, China
- Zhenyu Liu: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
6. Inneci T, Badem H. Detection of Corneal Ulcer Using a Genetic Algorithm-Based Image Selection and Residual Neural Network. Bioengineering (Basel) 2023; 10:639. PMID: 37370570. DOI: 10.3390/bioengineering10060639.
Abstract
Corneal ulcer is one of the most devastating eye diseases, causing permanent damage. Few soft-computing techniques are available for detecting this disease. In recent years, deep neural networks (DNNs) have solved numerous classification problems. However, many samples are needed to obtain reasonable classification performance from a DNN with a large number of layers and weights. Since collecting a dataset with a large number of samples is usually difficult and time-consuming, very large-scale pre-trained DNNs, such as AlexNet, ResNet, and DenseNet, can be adapted to classify a dataset with a small number of samples through transfer learning. Although such pre-trained DNNs produce successful results in some cases, their classification performance can be low due to the large number of parameters and weights and the emergence of redundant features that repeat across many layers. The proposed technique removes these unnecessary features by systematically selecting images in the layers using a genetic algorithm (GA). The proposed method was tested with ResNet on a small-scale dataset for classifying corneal ulcers. According to the results, the proposed method significantly increased classification performance compared to the classical approaches.
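A generic sketch of GA-based selection is given below: each individual encodes a binary keep/drop mask over candidate features, and fitness is the validation accuracy of a lightweight classifier trained on the kept features. The paper selects images/feature maps within ResNet layers; this standalone version with a k-NN fitness function, and all population sizes and rates, are only illustrative stand-ins.

```python
# Hedged sketch of genetic-algorithm feature selection with an elitist GA.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def fitness(mask, Xtr, ytr, Xva, yva):
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3).fit(Xtr[:, mask == 1], ytr)
    return clf.score(Xva[:, mask == 1], yva)


def ga_select(Xtr, ytr, Xva, yva, pop=20, gens=15, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = Xtr.shape[1]
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(m, Xtr, ytr, Xva, yva) for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]        # keep the fittest half
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= (rng.random(n) < p_mut).astype(child.dtype)     # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    scores = np.array([fitness(m, Xtr, ytr, Xva, yva) for m in population])
    return population[scores.argmax()]


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 30))
    y = (X[:, :3].sum(1) > 0).astype(int)          # synthetic labels for the demo
    best = ga_select(X[:80], y[:80], X[80:], y[80:])
    print("selected features:", int(best.sum()), "of", X.shape[1])
```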
Affiliation(s)
- Tugba Inneci: Department of Informatics System, Kahramanmaras Sutcu Imam University, Kahramanmaras 46050, Türkiye
- Hasan Badem: Department of Computer Engineering, Kahramanmaras Sutcu Imam University, Kahramanmaras 46050, Türkiye
7. Al-Jabbar M, Alshahrani M, Senan EM, Ahmed IA. Analyzing Histological Images Using Hybrid Techniques for Early Detection of Multi-Class Breast Cancer Based on Fusion Features of CNN and Handcrafted. Diagnostics (Basel) 2023; 13:1753. PMID: 37238243. DOI: 10.3390/diagnostics13101753.
Abstract
Breast cancer is the second most common type of cancer among women, and it can threaten women's lives if it is not diagnosed early. There are many methods for detecting breast cancer, but they cannot distinguish between benign and malignant tumors. Therefore, a biopsy taken from the patient's abnormal tissue is an effective way to distinguish between malignant and benign breast cancer tumors. Pathologists and experts face many challenges in diagnosing breast cancer, including the addition of medical fluids of various colors, the orientation of the sample, the small number of doctors, and their differing opinions. Artificial intelligence techniques can address these challenges and help clinicians resolve their diagnostic differences. In this study, three techniques, each with three systems, were developed to diagnose multi-class and binary-class breast cancer datasets and distinguish between benign and malignant types at 40× and 400× magnification. The first technique uses an artificial neural network (ANN) with features selected from VGG-19 and ResNet-18. The second uses an ANN with combined VGG-19 and ResNet-18 features before and after principal component analysis (PCA). The third uses an ANN with hybrid features: VGG-19 features combined with handcrafted features, and ResNet-18 features combined with handcrafted features. The handcrafted features are mixed features extracted using fuzzy color histogram (FCH), local binary pattern (LBP), discrete wavelet transform (DWT), and gray-level co-occurrence matrix (GLCM) methods. On the multi-class dataset, the ANN with hybrid VGG-19 and handcrafted features reached a precision of 95.86%, an accuracy of 97.3%, a sensitivity of 96.75%, an AUC of 99.37%, and a specificity of 99.81% on images at 400× magnification. On the binary-class dataset, the ANN with hybrid VGG-19 and handcrafted features reached a precision of 99.74%, an accuracy of 99.7%, a sensitivity of 100%, an AUC of 99.85%, and a specificity of 100% on images at 400× magnification.
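To make the fusion idea concrete, the sketch below concatenates a deep feature vector with a handcrafted descriptor vector, reduces the result with PCA, and trains a small ANN. The random arrays stand in for real VGG-19 and handcrafted (LBP/GLCM/FCH/DWT) features, and all dimensions and hyperparameters are assumptions, not the authors' configuration.

```python
# Hedged sketch of CNN + handcrafted feature-level fusion feeding an ANN.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_images = 300
cnn_feats = rng.normal(size=(n_images, 512))     # stand-in for VGG-19 deep features
handcrafted = rng.normal(size=(n_images, 120))   # stand-in for LBP/GLCM/FCH/DWT descriptors
labels = rng.integers(0, 8, size=n_images)       # 8 tumour subtypes

fused = np.hstack([cnn_feats, handcrafted])      # feature-level fusion by concatenation
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=100),                       # optional dimensionality reduction
    MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0),
)
model.fit(fused[:240], labels[:240])
print("held-out accuracy:", round(model.score(fused[240:], labels[240:]), 3))
```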
Affiliation(s)
- Mohammed Al-Jabbar: Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Mohammed Alshahrani: Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Ebrahim Mohammed Senan: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
8. Yu D, Zhang X, Lin J, Cao T, Chen Y. SECS: An Effective CNN Joint Construction Strategy for Breast Cancer Histopathological Image Classification. Journal of King Saud University - Computer and Information Sciences 2023. DOI: 10.1016/j.jksuci.2023.01.017.