1. Alajrami E, Ng T, Jevsikov J, Naidoo P, Fernandes P, Azarmehr N, Dinmohammadi F, Shun-Shin MJ, Dadashi Serej N, Francis DP, Zolgharni M. Active learning for left ventricle segmentation in echocardiography. Comput Methods Programs Biomed 2024; 248:108111. PMID: 38479147. DOI: 10.1016/j.cmpb.2024.108111.
Abstract
BACKGROUND AND OBJECTIVE Training deep learning models for medical image segmentation requires large annotated datasets, which are expensive and time-consuming to create. Active learning is a promising approach to reducing this burden by strategically selecting the most informative samples for annotation. This study investigates the use of active learning for efficient left ventricle segmentation in echocardiography with sparse expert annotations. METHODS We adapt and evaluate various sampling techniques, demonstrating their effectiveness in judiciously selecting samples for annotation. Additionally, we introduce a novel strategy, Optimised Representativeness Sampling, which combines feature-based outliers with the most representative samples to enhance annotation efficiency. RESULTS Our findings demonstrate a substantial reduction in annotation costs: 99% of upper-bound performance is achieved while using only 20% of the labelled data, equating to 1680 fewer images requiring annotation in our dataset. Applied to a publicly available dataset, our approach yielded a 70% reduction in required annotation effort, compared with the 50% reduction achieved by baseline active learning strategies. Our experiments also highlight the nuanced performance of diverse sampling strategies across datasets within the same domain. CONCLUSIONS The study provides a cost-effective approach to the challenge of limited expert annotations in echocardiography. By introducing a distinct dataset, made publicly available for research purposes, our work contributes to the field's understanding of efficient annotation strategies in medical image segmentation.
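The general idea of mixing feature-space outliers with the most representative samples can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's Optimised Representativeness Sampling: the function name, the centroid-distance criterion, and the outlier fraction are all assumptions made for illustration only.

```python
import numpy as np

def representativeness_outlier_sampling(features, n_select, outlier_frac=0.25):
    """Toy active-learning selection: pick feature-space outliers (far from
    the dataset centroid) plus the most 'representative' samples (near the
    centroid). Returns indices of the selected samples."""
    centroid = features.mean(axis=0)
    dists = np.linalg.norm(features - centroid, axis=1)
    n_out = int(round(n_select * outlier_frac))
    order = np.argsort(dists)                   # ascending distance
    outliers = order[::-1][:n_out]              # farthest from centroid
    representative = order[:n_select - n_out]   # closest to centroid
    return np.concatenate([outliers, representative])
```

In a real loop, the selected indices would be sent to an annotator and the segmentation model retrained on the growing labelled pool.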
Affiliation(s)
- Eman Alajrami: Intelligent Sensing and Vision, University of West London, London, UK
- Tiffany Ng: National Heart and Lung Institute, Imperial College London, London, UK
- Jevgeni Jevsikov: Intelligent Sensing and Vision, University of West London, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Preshen Naidoo: Intelligent Sensing and Vision, University of West London, London, UK
- Neda Azarmehr: Intelligent Sensing and Vision, University of West London, London, UK
- Darrel P Francis: National Heart and Lung Institute, Imperial College London, London, UK
- Massoud Zolgharni: Intelligent Sensing and Vision, University of West London, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
2. Ma D, Li C, Du T, Qiao L, Tang D, Ma Z, Shi L, Lu G, Meng Q, Chen Z, Grzegorzek M, Sun H. PHE-SICH-CT-IDS: A benchmark CT image dataset for evaluation semantic segmentation, object detection and radiomic feature extraction of perihematomal edema in spontaneous intracerebral hemorrhage. Comput Biol Med 2024; 173:108342. PMID: 38522249. DOI: 10.1016/j.compbiomed.2024.108342.
Abstract
BACKGROUND AND OBJECTIVE Intracerebral hemorrhage is one of the diseases with the highest mortality and poorest prognosis worldwide. Spontaneous intracerebral hemorrhage (SICH) typically presents acutely, so prompt radiological examination is crucial for diagnosis, localization, and quantification of the hemorrhage. Early detection and accurate segmentation of perihematomal edema (PHE) play a critical role in guiding appropriate clinical intervention and improving patient prognosis. However, the development and assessment of computer-aided diagnostic methods for PHE segmentation and detection are hindered by the scarcity of publicly accessible brain CT image datasets. METHODS This study establishes a publicly available CT dataset, PHE-SICH-CT-IDS, for perihematomal edema in spontaneous intracerebral hemorrhage. The dataset comprises 120 brain CT scans and 7,022 CT images, along with the corresponding medical information of the patients. To demonstrate its usefulness, classical algorithms for semantic segmentation, object detection, and radiomic feature extraction are evaluated. RESULTS Numerous experiments using classical machine learning and deep learning methods demonstrate the differences among segmentation and detection methods on PHE-SICH-CT-IDS. The highest precision achieved in semantic segmentation is 76.31%, while object detection attains a maximum precision of 97.62%. Experiments on radiomic feature extraction and analysis confirm the suitability of PHE-SICH-CT-IDS for evaluating image features and highlight the predictive value of these features for the prognosis of SICH patients.
CONCLUSION To the best of our knowledge, this is the first publicly available dataset for PHE in SICH, comprising various data formats suitable for applications across diverse medical scenarios. We believe PHE-SICH-CT-IDS will encourage researchers to explore novel algorithms, providing valuable support for clinicians and patients in the clinical setting. PHE-SICH-CT-IDS is freely available for non-commercial purposes at https://figshare.com/articles/dataset/PHE-SICH-CT-IDS/23957937.
Affiliation(s)
- Deguo Ma: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Chen Li: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Tianming Du: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Lin Qiao: Shengjing Hospital, China Medical University, Shenyang, China
- Dechao Tang: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Zhiyu Ma: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Liyu Shi: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Guotao Lu: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Qingtao Meng: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Zhihao Chen: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Marcin Grzegorzek: Institute of Medical Informatics, University of Luebeck, Luebeck, Germany
- Hongzan Sun: Shengjing Hospital, China Medical University, Shenyang, China
3. P SK, Agastinose Ronickom JF. Optimal Electrodermal Activity Segment for Enhanced Emotion Recognition Using Spectrogram-Based Feature Extraction and Machine Learning. Int J Neural Syst 2024; 34:2450027. PMID: 38511233. DOI: 10.1142/s0129065724500278.
Abstract
In clinical and scientific research on emotion recognition using physiological signals, selecting the appropriate signal segment is of utmost importance for good results. In this study, we optimized the electrodermal activity (EDA) segment for an emotion recognition system. We first obtained EDA signals from two publicly available datasets, Continuously Annotated Signals of Emotion (CASE) and Wearable Stress and Affect Detection (WESAD), for four-class dimensional and three-class categorical emotion classification, respectively. These signals were pre-processed and decomposed into phasic signals using the 'convex optimization to EDA' method. The phasic signals were segmented into two equal parts, each subsequently divided into five non-overlapping windows. Spectrograms were then generated for each window using the short-time Fourier transform and the Mel-frequency cepstrum, from which we extracted 85 features. We built four machine learning models for the first part, the second part, and the whole phasic signal to investigate their performance in emotion recognition. On the CASE dataset, we achieved the highest multi-class accuracy of 62.54% using the whole phasic signal and 61.75% with the second-part phasic signal. The WESAD dataset showed superior performance in three-class emotion classification, attaining an accuracy of 96.44% for both the whole phasic and second-part phasic segments. As a result, the second part of the EDA signal is strongly recommended for optimal outcomes.
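The windowing scheme described above (phasic signal split into two equal parts, each into five non-overlapping windows) can be sketched directly. The function name and the handling of leftover samples are assumptions for illustration.

```python
import numpy as np

def split_phasic(signal, n_windows=5):
    """Split a phasic EDA signal into two equal parts, then each part
    into n_windows non-overlapping windows. Trailing samples that do
    not fill a whole window are dropped (an assumption)."""
    half = len(signal) // 2
    parts = [signal[:half], signal[half:2 * half]]
    windows = []
    for part in parts:
        w = len(part) // n_windows
        windows.append([part[i * w:(i + 1) * w] for i in range(n_windows)])
    return windows  # indexed as windows[part][window]
```

Each returned window would then be turned into a spectrogram (e.g. via a short-time Fourier transform) before feature extraction.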
Affiliation(s)
- Sriram Kumar P: School of Biomedical Engineering, Indian Institute of Technology (BHU) Varanasi, Uttar Pradesh 221005, India
4. Lin S, Yong J, Zhang L, Chen X, Qiao L, Pan W, Yang Y, Zhao H. Applying image features of proximal paracancerous tissues in predicting prognosis of patients with hepatocellular carcinoma. Comput Biol Med 2024; 173:108365. PMID: 38537563. DOI: 10.1016/j.compbiomed.2024.108365.
Abstract
BACKGROUND Most methods using digital pathological images to predict hepatocellular carcinoma (HCC) prognosis have not considered the paracancerous tissue microenvironment (PTME), which is potentially important for tumour initiation and metastasis. This study aimed to identify the roles of image features of the PTME in predicting the prognosis and tumour recurrence of HCC patients. METHODS We collected whole slide images (WSIs) of 146 HCC patients from Sun Yat-sen Memorial Hospital (SYSM dataset). For each WSI, five types of regions of interest (ROIs) in the PTME and tumours were manually annotated. These ROIs were used to construct a Lasso Cox survival model for predicting the prognosis of HCC patients. To make the model broadly applicable, we established a deep learning method to automatically segment WSIs and used it to construct a prognosis prediction model, which was tested on samples of 225 HCC patients from The Cancer Genome Atlas Liver Hepatocellular Carcinoma (TCGA-LIHC) cohort. RESULTS In predicting the prognosis of HCC patients, image features of manually annotated ROIs in the PTME achieved a C-index of 0.668 in the SYSM testing dataset, higher than the C-index of 0.648 reached by the model using only image features of tumours. Integrating ROIs of the PTME and tumours achieved a C-index of 0.693 in the SYSM testing dataset. The model using automatically segmented ROIs of the PTME and tumours achieved a C-index of 0.665 (95% CI: 0.556-0.774) on the TCGA-LIHC samples, better than the widely used methods WSISA (0.567), DeepGraphSurv (0.593), and SeTranSurv (0.642). Finally, we found the Texture SumAverage Skew HV feature on immune cell infiltration and texture-related features on desmoplastic reaction to be the most important PTME features for predicting HCC prognosis. We additionally used the model to predict HCC recurrence for patients from the SYSM-training, SYSM-testing, and TCGA-LIHC datasets, further indicating the important role of the PTME in the prediction.
CONCLUSIONS Our results indicate that image features of the PTME are critical for improving prognosis prediction in HCC. Moreover, the image features related to immune cell infiltration and desmoplastic reaction of the PTME are the most important factors associated with HCC prognosis.
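The C-index values reported above measure how often a model's risk scores rank patients in the same order as their survival times. A minimal, unoptimised sketch (with the conventional 0.5 credit for tied risk scores) is shown below; the function name is an assumption, and real analyses should use a library such as lifelines or scikit-survival.

```python
def concordance_index(times, events, risk_scores):
    """Fraction of comparable patient pairs in which the patient with the
    higher risk score experiences the event earlier. `events[i]` is 1 if
    patient i's event (e.g. death) was observed, 0 if censored."""
    concordant = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair is comparable only if i had an observed event before j's time.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable
```

A C-index of 1.0 means perfect ranking, 0.5 is no better than chance, which puts the reported 0.648 vs 0.693 gap in context.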
Affiliation(s)
- Siying Lin: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, 510006, China; Department of Pathology, Department of Medical Research Center, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, 510120, China
- Juanjuan Yong: Department of Pathology, Department of Medical Research Center, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, 510120, China
- Lei Zhang: Department of Pancreatic-Hepato-Biliary-Surgery, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510655, China
- Xiaolong Chen: Department of Hepatic Surgery, Liver Transplantation, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, 510630, China
- Liang Qiao: Storr Liver Centre, Westmead Institute for Medical Research, University of Sydney at Westmead Hospital, Westmead, NSW, 2145, Australia
- Weidong Pan: Department of Pancreatic-Hepato-Biliary-Surgery, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510655, China
- Yuedong Yang: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, 510006, China
- Huiying Zhao: Department of Pathology, Department of Medical Research Center, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, 510120, China
5. Mahmoud NM, Soliman AM. Early automated detection system for skin cancer diagnosis using artificial intelligent techniques. Sci Rep 2024; 14:9749. PMID: 38679633. PMCID: PMC11056372. DOI: 10.1038/s41598-024-59783-0.
Abstract
Skin cancer is one of the most widespread and dangerous cancers worldwide, and its early detection can reduce mortality. Traditional methods for skin cancer detection are painful, time-consuming, and expensive, and may cause the disease to spread. Dermoscopy is used for the noninvasive diagnosis of skin cancer. Artificial intelligence (AI) plays a vital role in disease diagnosis, especially in the biomedical engineering field, and automated detection systems based on AI reduce the complications of the traditional methods and can improve the skin cancer diagnosis rate. In this paper, an automated early detection system for skin cancer dermoscopic images using artificial intelligence is presented. Adaptive snake (AS) and region growing (RG) algorithms are used for automated segmentation and compared with each other; the results show that AS is more accurate and efficient (accuracy = 96%) than the RG algorithm (accuracy = 90%). Artificial neural network (ANN) and support vector machine (SVM) algorithms are used for automated classification and compared with each other. The proposed system with the ANN algorithm shows high accuracy (94%), precision (96%), specificity (95.83%), sensitivity (recall) (92.30%), and F1-score (0.94). The proposed system is easy to use and time-saving, enables early detection of skin cancer, and has high efficiency.
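Of the two segmentation algorithms compared in this abstract, region growing is the easier to illustrate. The snippet below is a generic 4-connected, intensity-tolerance version, not the authors' implementation; the tolerance parameter and growth criterion are assumptions.

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=10):
    """Toy region growing: starting from a seed pixel, absorb 4-connected
    neighbours whose intensity is within `tol` of the seed value.
    Returns a boolean mask of the grown region."""
    h, w = image.shape
    seed_val = image[seed]
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(image[ny, nx]) - int(seed_val)) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

Active-contour ("snake") methods, by contrast, deform a curve toward lesion boundaries, which is why they can outperform simple intensity-based growing on low-contrast lesions.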
Affiliation(s)
- Nourelhoda M Mahmoud: Biomedical Engineering Department, Faculty of Engineering, Minia University, Minya, Egypt
- Ahmed M Soliman: Biomedical Engineering Department, Faculty of Engineering, Helwan University, Cairo, Egypt
6. Prabhu H, Bhosale H, Sane A, Dhadwal R, Ramakrishnan V, Valadi J. Protein feature engineering framework for AMPylation site prediction. Sci Rep 2024; 14:8695. PMID: 38622194. DOI: 10.1038/s41598-024-58450-8.
Abstract
AMPylation is a biologically significant yet understudied post-translational modification in which an adenosine monophosphate (AMP) group is added, primarily to tyrosine and threonine residues. While recent work has illuminated the prevalence and functional impacts of AMPylation, experimental identification of AMPylation sites remains challenging, and computational prediction techniques provide a faster alternative. The predictive performance of machine learning models is highly dependent on the features used to represent the raw amino acid sequences. In this work, we introduce a novel feature extraction pipeline to encode the key properties relevant to AMPylation site prediction. We utilize a recently published dataset of curated AMPylation sites to develop our feature generation framework, and demonstrate the utility of the extracted features by training various machine learning classifiers on the numerical representations of the raw sequences produced by the framework. Tenfold cross-validation is used to evaluate each model's capability to distinguish between AMPylated and non-AMPylated sites. The top-performing feature set achieved an MCC of 0.58, an accuracy of 0.80, an AUC-ROC of 0.85, and an F1-score of 0.73. Further, we elucidate the behaviour of the model on the feature set consisting of monogram and bigram counts for various representations using SHapley Additive exPlanations (SHAP).
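The monogram and bigram count representation mentioned at the end of the abstract maps a residue window to a fixed-length count vector over the 20 standard amino acids. The sketch below shows one plausible encoding; the vocabulary ordering and function name are assumptions, not the paper's exact feature definition.

```python
from collections import Counter
from itertools import product

AMINO = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard amino acids

def ngram_features(seq, n=2):
    """Count n-gram occurrences in a residue window and return a
    fixed-length vector over all 20**n possible n-grams (20 monograms,
    400 bigrams), in lexicographic order of AMINO."""
    counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    vocab = ["".join(p) for p in product(AMINO, repeat=n)]
    return [counts.get(g, 0) for g in vocab]
```

Vectors like these can be fed directly to standard classifiers, and their fixed layout is what makes per-feature SHAP attribution straightforward.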
Affiliation(s)
- Hardik Prabhu: Computing and Data Sciences, FLAME University, Pune, 412115, India; Robert Bosch Centre for Cyber Physical Systems, Indian Institute of Science, Bengaluru, 560012, India
- Aamod Sane: Computing and Data Sciences, FLAME University, Pune, 412115, India
- Renu Dhadwal: Computing and Data Sciences, FLAME University, Pune, 412115, India
- Vigneshwar Ramakrishnan: Bioinformatics Center, School of Chemical and Biotechnology, SASTRA Deemed to be University, Thanjavur, 613401, India
- Jayaraman Valadi: Computing and Data Sciences, FLAME University, Pune, 412115, India
7. Silva AB, Martins AS, Tosta TAA, Loyola AM, Cardoso SV, Neves LA, de Faria PR, do Nascimento MZ. OralEpitheliumDB: A Dataset for Oral Epithelial Dysplasia Image Segmentation and Classification. J Imaging Inform Med 2024. PMID: 38409608. DOI: 10.1007/s10278-024-01041-w.
Abstract
Early diagnosis of potentially malignant disorders, such as oral epithelial dysplasia, is the most reliable way to prevent oral cancer. Computational algorithms have been used as auxiliary tools to aid specialists in this process, but experiments are usually performed on private data, making the results difficult to reproduce. Several public datasets of histological images exist, yet studies focused on oral dysplasia images rely on inaccessible datasets, which prevents the improvement of algorithms aimed at this lesion. This study introduces an annotated public dataset of oral epithelial dysplasia tissue images. The dataset includes 456 images acquired from 30 mouse tongues. The images were categorized by lesion grade, with nuclear structures manually marked by a trained specialist and validated by a pathologist. Experiments were also carried out to illustrate the potential of the proposed dataset in the classification and segmentation processes commonly explored in the literature. Convolutional neural network (CNN) models for semantic and instance segmentation were applied to the images, which were pre-processed with stain normalization methods; the segmented and non-segmented images were then classified with CNN architectures and machine learning algorithms. The data obtained through these processes are available in the dataset. The segmentation stage reached an F1-score of 0.83, obtained with the U-Net model using ResNet-50 as a backbone. At the classification stage, the best result was achieved with the random forest method, with an accuracy of 94.22%. The results show that segmentation contributed to the classification results, although further studies are needed to improve these stages of automated diagnosis. The original, gold-standard, normalized, and segmented images are publicly available and may be used to improve clinical applications of CAD methods on oral epithelial dysplasia tissue images.
Affiliation(s)
- Adriano Barbosa Silva: Faculty of Computer Science (FACOM), Federal University of Uberlândia (UFU), Av. João Naves de Ávila 2121, BLB, 38400-902, Uberlândia, MG, Brazil
- Alessandro Santana Martins: Federal Institute of Triângulo Mineiro (IFTM), R. Belarmino Vilela Junqueira, S/N, 38305-200, Ituiutaba, MG, Brazil
- Thaína Aparecida Azevedo Tosta: Science and Technology Institute, Federal University of São Paulo (UNIFESP), Av. Cesare Mansueto Giulio Lattes, 1201, 12247-014, São José dos Campos, SP, Brazil
- Adriano Mota Loyola: School of Dentistry, Federal University of Uberlândia (UFU), Av. Pará - 1720, 38405-320, Uberlândia, MG, Brazil
- Sérgio Vitorino Cardoso: School of Dentistry, Federal University of Uberlândia (UFU), Av. Pará - 1720, 38405-320, Uberlândia, MG, Brazil
- Leandro Alves Neves: Department of Computer Science and Statistics (DCCE), São Paulo State University (UNESP), R. Cristóvão Colombo, 2265, 38305-200, São José do Rio Preto, SP, Brazil
- Paulo Rogério de Faria: Department of Histology and Morphology, Institute of Biomedical Science, Federal University of Uberlândia (UFU), Av. Amazonas, S/N, 38405-320, Uberlândia, MG, Brazil
- Marcelo Zanchetta do Nascimento: Faculty of Computer Science (FACOM), Federal University of Uberlândia (UFU), Av. João Naves de Ávila 2121, BLB, 38400-902, Uberlândia, MG, Brazil
8. Guo H, Meng J, Zhao Y, Zhang H, Dai C. High-precision retinal blood vessel segmentation based on a multi-stage and dual-channel deep learning network. Phys Med Biol 2024; 69:045007. PMID: 38198716. DOI: 10.1088/1361-6560/ad1cf6.
Abstract
Objective. The high-precision segmentation of retinal vessels in fundus images is important for the early diagnosis of ophthalmic diseases. However, extracting microvessels is challenging due to their low contrast and high structural complexity. Although some works have been developed to improve segmentation of thin vessels, they have only succeeded in recognizing small vessels with relatively high contrast. Approach. We therefore develop a deep learning (DL) framework with a multi-stage and dual-channel network model (MSDC_NET) to further improve thin-vessel segmentation at low contrast. Specifically, an adaptive image enhancement strategy combining multiple preprocessing steps with the DL method is first proposed to raise the contrast of thin vessels; then, a two-channel model with multi-scale perception is developed to perform whole- and thin-vessel segmentation; finally, a series of post-processing operations is designed to extract more small vessels from the predicted maps of the thin-vessel channel. Main results. Experiments on DRIVE, STARE and CHASE_DB1 demonstrate the superiority of the proposed MSDC_NET in extracting thin vessels from fundus images, and quantitative evaluations of several parameters against the advanced ground truth further verify the advantages of the proposed DL model. Compared with a previous multi-branch method, specificity and F1-score are improved by about 2.18%, 0.68%, 1.73% and 2.91%, 0.24%, 8.38% on the three datasets, respectively. Significance. This work may provide richer information to ophthalmologists for the diagnosis and treatment of vascular-related ophthalmic diseases.
Affiliation(s)
- Hui Guo: School of Computer, Qufu Normal University, 276826 Rizhao, People's Republic of China
- Jing Meng: School of Computer, Qufu Normal University, 276826 Rizhao, People's Republic of China
- Yongfu Zhao: School of Computer, Qufu Normal University, 276826 Rizhao, People's Republic of China
- Hongdong Zhang: School of Computer, Qufu Normal University, 276826 Rizhao, People's Republic of China
- Cuixia Dai: College of Science, Shanghai Institute of Technology, 201418 Shanghai, People's Republic of China
9. Qureshi SA, Hussain L, Ibrar U, Alabdulkreem E, Nour MK, Alqahtani MS, Nafie FM, Mohamed A, Mohammed GP, Duong TQ. Radiogenomic classification for MGMT promoter methylation status using multi-omics fused feature space for least invasive diagnosis through mpMRI scans. Sci Rep 2023; 13:3291. PMID: 36841898. PMCID: PMC9961309. DOI: 10.1038/s41598-023-30309-4.
Abstract
Accurate radiogenomic classification of brain tumors is important to improve the standard of diagnosis, prognosis, and treatment planning for patients with glioblastoma. In this study, we propose a novel two-stage MGMT Promoter Methylation Prediction (MGMT-PMP) system that extracts latent features, fused with radiomic features, to predict the genetic subtype of glioblastoma. A novel fine-tuned deep learning architecture, the Deep Learning Radiomic Feature Extraction (DLRFE) module, is proposed for latent feature extraction; it fuses quantitative knowledge of the spatial distribution and size of the tumorous structure through radiomic features (GLCM, HOG, and LBP). The rejection algorithm was found to be significantly effective in selecting and isolating negative training instances from the original dataset. The fused feature vectors are then used for training and testing with k-NN and SVM classifiers. The 2021 RSNA Brain Tumor challenge dataset (BraTS-2021) consists of four structural mpMRI sequences: fluid-attenuated inversion recovery, T1-weighted, T1-weighted contrast-enhanced, and T2-weighted. We evaluated the classification performance, for the first time in published form, in terms of measures such as accuracy, F1-score, and the Matthews correlation coefficient, using jackknife tenfold cross-validation for training and testing on BraTS-2021. The highest classification performance for detecting MGMT methylation status in patients suffering from glioblastoma is (96.84 ± 0.09)% accuracy, (96.08 ± 0.10)% sensitivity, and (97.44 ± 0.14)% specificity. Deep learning feature extraction fused with radiomic features, combining imaging phenotypes with molecular structure and using the rejection algorithm, was thus found to outperform alternatives in detecting the MGMT methylation status of glioblastoma patients. The approach relates genomic variation to radiomic features, forming a bridge between two areas of research that may prove useful for clinical treatment planning, leading to better outcomes.
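Of the radiomic descriptors fused with deep features in this abstract, the grey-level co-occurrence matrix (GLCM) is simple to illustrate. The snippet below builds a minimal single-offset, normalised GLCM; the quantisation scheme, offset, and function name are assumptions rather than the paper's exact recipe.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Minimal grey-level co-occurrence matrix for a single pixel offset
    (dx, dy). Intensities are quantised to `levels` bins; the returned
    matrix is normalised so its entries sum to 1 (joint probabilities)."""
    img = (image * (levels / (image.max() + 1e-9))).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1  # co-occurring pair
    return m / m.sum()
```

Scalar texture features (contrast, homogeneity, energy, etc.) are then computed from this matrix; in practice libraries such as scikit-image's `graycomatrix` would be used instead of a hand-rolled loop.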
Affiliation(s)
- Shahzad Ahmad Qureshi: Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Islamabad, Pakistan
- Lal Hussain: Department of Computer Science and IT, Neelum Campus, The University of Azad Jammu and Kashmir, Muzaffarabad, Azad Kashmir, Pakistan; Department of Computer Science and IT, King Abdullah Campus, The University of Azad Jammu and Kashmir, Muzaffarabad, Azad Kashmir, Pakistan; Department of Radiology, Albert Einstein College of Medicine and Montefiore Medical Center, 111 East 210th Street, Bronx, NY, 10467, USA
- Usama Ibrar: Farooq Hospital, Lahore, Pakistan
- Eatedal Alabdulkreem: Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Mohamed K. Nour: Department of Computer Sciences, College of Computing and Information System, Umm Al-Qura University, Mecca, Saudi Arabia
- Mohammed S. Alqahtani: Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, Abha, 61421, Saudi Arabia
- Faisal Mohammed Nafie: Department of Computer Science, College of Science and Humanities at Alghat, Majmaah University, Al-Majmaah, 11952, Saudi Arabia
- Abdullah Mohamed: Research Centre, Future University in Egypt, New Cairo, 11845, Egypt
- Gouse Pasha Mohammed: Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Tim Q. Duong: Department of Radiology, Albert Einstein College of Medicine and Montefiore Medical Center, 111 East 210th Street, Bronx, NY, 10467, USA
10. Saxena P, Goyal A. Computer-assisted grading of follicular lymphoma: a classification based on SVM, machine learning, and transfer learning approaches. The Imaging Science Journal 2023. DOI: 10.1080/13682199.2022.2162663.
Affiliation(s)
- Pranshu Saxena: I.K. Gujral Punjab Technical University, Jalandhar, India
- Anjali Goyal: Department of Computer Applications, GNIMT, Ludhiana, India
11. Patnaik V, Mohanty M, Subudhi AK. Identification of healthy biological leafs using hybrid-feature classifier. The Imaging Science Journal 2022. DOI: 10.1080/13682199.2022.2157533.
Affiliation(s)
- Vijaya Patnaik
- Department of ECE, ITER, SOA Deemed to be University, Odisha, India
- Monalisa Mohanty
- Department of ECE, ITER, SOA Deemed to be University, Odisha, India
12
Setiadi IC, Hatta AM, Koentjoro S, Stendafity S, Azizah NN, Wijaya WY. Adulteration detection in minced beef using low-cost color imaging system coupled with deep neural network. Front Sustain Food Syst 2022. [DOI: 10.3389/fsufs.2022.1073969]
Abstract
Processed meat products such as minced beef are popular ingredients because they are high in protein, vitamins, and minerals. The high demand and high prices make processed meat products vulnerable to adulteration. In addition, mincing removes morphological attributes, making the authenticity of minced beef difficult to verify with the naked eye. This paper describes a feasibility study of adulteration detection in minced beef using a low-cost imaging system coupled with a deep neural network. A total of 500 images of minced beef samples were captured, and 24 color and textural features were retrieved from each image. The samples were then labeled and evaluated. A deep neural network (DNN) was developed and investigated to support classification, and was compared with six machine learning algorithms in terms of classification accuracy, precision, and sensitivity. Feature importance analysis was also performed to identify the features with the greatest impact on the classification results. The DNN model achieved a classification accuracy of 98.00% without feature selection and 99.33% with feature selection; it showed the best performance overall, with an individual accuracy of up to 99.33%, a precision of up to 98.68%, and a sensitivity of up to 98.67%. This work shows the strong potential of a low-cost imaging system coupled with a DNN to rapidly detect adulterants in minced beef with high performance.
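As an illustration of the kind of low-cost colour features described in this abstract (the paper's exact 24 descriptors are not reproduced here; this function is an assumed, minimal subset):

```python
import numpy as np

def color_features(img):
    """Per-channel mean and standard deviation - an illustrative
    subset of simple colour descriptors for an RGB image patch."""
    feats = []
    for c in range(img.shape[2]):
        channel = img[:, :, c].astype(float)
        feats.extend([channel.mean(), channel.std()])
    return np.array(feats)

# A stand-in RGB patch; real inputs would be camera images of minced beef.
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(color_features(patch).shape)  # (6,)
```

Feature vectors like this would then be fed to the classifier (the paper's DNN, or any of the six comparison models).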
13
Wu J, Zheng D, Wu Z, Song H, Zhang X. Prediction of Buckwheat Maturity in UAV-RGB Images Based on Recursive Feature Elimination Cross-Validation: A Case Study in Jinzhong, Northern China. Plants (Basel) 2022; 11:3257. [PMID: 36501299 PMCID: PMC9737888 DOI: 10.3390/plants11233257]
Abstract
Buckwheat is an important minor grain crop with medicinal and edible uses. Accurate judgment of buckwheat maturity helps reduce harvest losses and improve yield. With the rapid development of unmanned aerial vehicle (UAV) technology, UAVs have been widely used to predict the maturity of agricultural products. This paper proposes a method using recursive feature elimination cross-validation (RFECV) combined with multiple regression models to predict the maturity of buckwheat in UAV-RGB images. The images were captured in the buckwheat experimental field of Shanxi Agricultural University in Jinzhong, Northern China, from September to October 2021; the variety was the sweet buckwheat "Jinqiao No. 1". To mine feature vectors highly correlated with buckwheat maturity, 22 features (5 vegetation indices, 9 color features, and 8 texture features) were selected initially. RFECV was adopted to obtain the optimal feature dimensions and combinations with six regression models: decision tree regression, linear regression, random forest regression, AdaBoost regression, gradient lifting (gradient boosting) regression, and extreme random tree regression. The coefficient of determination (R2) and root mean square error (RMSE) were used to analyze the combinations of the six regression models with different feature spaces. The experimental results show that single vegetation indices performed poorly in predicting buckwheat maturity, while feature space "5" combined with the gradient boosting regression model performed best, with an R2 of 0.981 and an RMSE of 1.70. These results can provide an important theoretical basis for predicting the regional maturity of crops.
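The RFECV step described above can be sketched with scikit-learn. Synthetic data stands in for the 22 buckwheat features, and the model settings (tree count, fold count) are assumptions, not the paper's configuration:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import RFECV

# Synthetic stand-in for the 22-dimensional feature space
# (5 vegetation indices, 9 colour features, 8 texture features).
X, y = make_regression(n_samples=150, n_features=22, n_informative=5,
                       noise=0.1, random_state=0)

# Recursive feature elimination with cross-validation, paired with a
# gradient-boosting regressor as in the best-performing combination.
selector = RFECV(GradientBoostingRegressor(n_estimators=30, random_state=0),
                 step=1, cv=3)
selector.fit(X, y)
print("features kept:", selector.n_features_)
```

`selector.support_` then masks the retained feature columns for the final regression model.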
Affiliation(s)
- Jinlong Wu
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Decong Zheng
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Zhiming Wu
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Haiyan Song
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Xiaoxiang Zhang
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
14
Kim YJ. Machine Learning Model Based on Radiomic Features for Differentiation between COVID-19 and Pneumonia on Chest X-ray. Sensors (Basel) 2022; 22:6709. [PMID: 36081170 PMCID: PMC9460643 DOI: 10.3390/s22176709]
Abstract
Machine learning approaches are employed to analyze differences in real-time reverse transcription polymerase chain reaction scans to differentiate between COVID-19 and pneumonia. However, these methods suffer from large training data requirements, unreliable images, and uncertain clinical diagnosis. Thus, in this paper, we used a machine learning model to differentiate between COVID-19 and pneumonia via radiomic features using a bias-minimized dataset of chest X-ray scans. We used logistic regression (LR), naive Bayes (NB), support vector machine (SVM), k-nearest neighbor (KNN), bagging, random forest (RF), extreme gradient boosting (XGB), and light gradient boosting machine (LGBM) to differentiate between COVID-19 and pneumonia based on training data. Further, we used a grid search to determine optimal hyperparameters for each machine learning model and 5-fold cross-validation to prevent overfitting. The identification performances of COVID-19 and pneumonia were compared with separately constructed test data for four machine learning models trained using the maximum probability, contrast, and difference variance of the gray level co-occurrence matrix (GLCM), and the skewness as input variables. The LGBM and bagging model showed the highest and lowest performances; the GLCM difference variance showed a high overall effect in all models. Thus, we confirmed that the radiomic features in chest X-rays can be used as indicators to differentiate between COVID-19 and pneumonia using machine learning.
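One of the GLCM inputs named above, contrast, can be computed from scratch. A minimal sketch (the quantisation level and single horizontal offset are arbitrary choices, not the paper's radiomics settings):

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Contrast of the grey-level co-occurrence matrix for a
    horizontal pixel offset of 1 (minimal single-angle sketch)."""
    q = img.astype(int) * levels // 256            # quantise to `levels` bins
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                            # count co-occurring pairs
    glcm /= glcm.sum()                             # normalise to probabilities
    ii, jj = np.indices((levels, levels))
    return float(((ii - jj) ** 2 * glcm).sum())

flat = np.full((16, 16), 100, dtype=np.uint8)
print(glcm_contrast(flat))  # 0.0 - a uniform image has no contrast
```

Features such as this, together with skewness, would form the input variables for the LGBM, bagging, and other classifiers compared in the study.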
Affiliation(s)
- Young Jae Kim
- Department of Biomedical Engineering, Gachon University, 21, Namdong-daero 774 beon-gil, Namdong-gu, Inchon 21936, Korea
15
Jha N, Lee KS, Kim YJ. Diagnosis of temporomandibular disorders using artificial intelligence technologies: A systematic review and meta-analysis. PLoS One 2022; 17:e0272715. [PMID: 35980894 PMCID: PMC9387829 DOI: 10.1371/journal.pone.0272715]
Abstract
Background Artificial intelligence (AI) algorithms have been applied to diagnose temporomandibular disorders (TMDs). However, studies have used different patient selection criteria, disease subtypes, input data, and outcome measures. Resultantly, the performance of the AI models varies. Objective This study aimed to systematically summarize the current literature on the application of AI technologies for diagnosis of different TMD subtypes, evaluate the quality of these studies, and assess the diagnostic accuracy of existing AI models. Materials and methods The study protocol was carried out based on the preferred reporting items for systematic review and meta-analysis protocols (PRISMA). The PubMed, Embase, and Web of Science databases were searched to find relevant articles from database inception to June 2022. Studies that used AI algorithms to diagnose at least one subtype of TMD and those that assessed the performance of AI algorithms were included. We excluded studies on orofacial pain that were not directly related to the TMD, such as studies on atypical facial pain and neuropathic pain, editorials, book chapters, and excerpts without detailed empirical data. The risk of bias was assessed using the QUADAS-2 tool. We used Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) to provide certainty of evidence. Results A total of 17 articles for automated diagnosis of masticatory muscle disorders, TMJ osteoarthrosis, internal derangement, and disc perforation were included; they were retrospective studies, case-control studies, cohort studies, and a pilot study. Seven studies were subjected to a meta-analysis for diagnostic accuracy. According to the GRADE, the certainty of evidence was very low. The performance of the AI models had accuracy and specificity ranging from 84% to 99.9% and 73% to 100%, respectively. The pooled accuracy was 0.91 (95% CI 0.76–0.99), I2 = 97% (95% CI 0.96–0.98), p < 0.001. 
Conclusions Various AI algorithms developed for diagnosing TMDs may provide additional clinical expertise to increase diagnostic accuracy. However, it should be noted that a high risk of bias was present in the included studies. Also, certainty of evidence was very low. Future research of higher quality is strongly recommended.
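For intuition, pooling per-study accuracies can be sketched with a fixed-effect inverse-variance average. This is a simplification I am assuming for illustration: with the reported heterogeneity (I2 = 97%), a random-effects model is the appropriate choice in practice.

```python
import numpy as np

def pooled_proportion(props, ns):
    """Fixed-effect inverse-variance pooling of study-level proportions
    (e.g. diagnostic accuracies). Illustrative only."""
    p = np.asarray(props, dtype=float)
    n = np.asarray(ns, dtype=float)
    var = p * (1.0 - p) / n        # binomial variance of each estimate
    w = 1.0 / var                  # inverse-variance weights
    return float((w * p).sum() / w.sum())

# Three hypothetical studies with different sample sizes
print(pooled_proportion([0.84, 0.95, 0.99], [50, 120, 80]))
```

Larger studies receive larger weights, which is why the pooled estimate leans toward the more precise accuracies.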
Affiliation(s)
- Nayansi Jha
- University of Ulsan College of Medicine, Seoul, Korea
- Kwang-sig Lee
- AI Center, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
- Yoon-Ji Kim
- Department of Orthodontics, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
16
Nithiyaraj E, Selvaraj A. CTSC-Net: an effectual CT slice classification network to categorize organ and non-organ slices from a 3-D CT image. Neural Comput Appl 2022; 34:22141-22156. [PMID: 35990533 PMCID: PMC9376041 DOI: 10.1007/s00521-022-07701-8]
Abstract
Computed tomography (CT) is a non-invasive diagnostic imaging modality that reveals more insight into human organs than conventional X-rays. In general, the CT output is a 3-D image formed by combining multiple 2-D images, or slices. Not all of the slices provide significant information for detecting tumours: a 3-D CT image obtained from a scanner typically contains a significant number of unwanted non-organ slices. Radiologists devote considerable time to selecting the organ slices from a 3-D CT image, and because the presence of a tumour is only evident in organ slices, they must be careful not to skip any. This work is evaluated on the LITS, 3DIRCADb and COVID-19 CT datasets, which collectively contain 22,435 organ slices and 53,661 non-organ slices, a large imbalance. There is a need for automatic elimination of non-organ slices in 3-D CT volumes to assist physicians, and hence this work focuses on the automatic recognition of organ slices from 3-D CT volumes. In this paper, a new deep model called the computed tomography slice classification network (CTSC-Net) is proposed for classifying CT slices as organ or non-organ. The model is trained on 77,980 CT slices, validated on 9748 slices and tested on 12,571 slices. Nine CNN architectures with different layer settings were trained and tested to arrive at the final optimal model. The performance measures are computed in terms of true positive rate, true negative rate, sensitivity, specificity and accuracy. The 20-layer CTSC-Net achieves a validation accuracy of 95.04% and an overall testing accuracy of 99.96%. The proposed model is compared to eight different pre-trained CNN models, and CTSC-Net surpassed all of them. The activation feature maps of different layers of the CTSC-Net are visualized to verify the discriminative features learned by the network. Hence, the proposed CTSC-Net can be employed as a computer-aided diagnosis tool to help physicians discard unnecessary non-organ slices from the 3-D CT volume and speed up the CT diagnosis process.
Affiliation(s)
- Emerson Nithiyaraj
- Department of Electronics and Communication Engineering, Centre for Image Processing and Pattern Recognition, Mepco Schlenk Engineering College, Sivakasi, 626005 India
- Arivazhagan Selvaraj
- Department of Electronics and Communication Engineering, Centre for Image Processing and Pattern Recognition, Mepco Schlenk Engineering College, Sivakasi 626005, India
17
He Z, Yuan S, Zhao J, Du B, Yuan Z, Alhudhaif A, Alenezi F, Althubiti SA. A novel myocardial infarction localization method using multi-branch DenseNet and spatial matching-based active semi-supervised learning. Inf Sci (N Y) 2022; 606:649-68. [DOI: 10.1016/j.ins.2022.05.070]
18
Sidiropoulos GK, Ouzounis AG, Papakostas GA, Lampoglou A, Sarafis IT, Stamkos A, Solakis G. Hand-Crafted and Learned Feature Aggregation for Visual Marble Tiles Screening. J Imaging 2022; 8:191. [PMID: 35877635 PMCID: PMC9319017 DOI: 10.3390/jimaging8070191]
Abstract
An important factor in the successful marketing of natural ornamental rocks is providing sets of tiles with matching textures. The market price of the tiles is based on the aesthetics of the different quality classes and can change with the varying needs of the market. Classification of marble tiles is mainly performed manually by experienced workers, which can lead to misclassifications due to the subjectivity of the procedure, causing subsequent problems with the marketing of the product. To automate the classification of marble tiles, this paper evaluated 24 hand-crafted texture descriptors and 20 Convolutional Neural Networks towards creating aggregated descriptors, each combining one hand-crafted descriptor and one Convolutional Neural Network at a time. A marble tile dataset designed for this study was used for the evaluation and has been released publicly to enable further research on similar problems (both texture analysis and dolomitic ornamental marble tile analysis). The best-performing feature descriptors were aggregated in order to achieve an objective classification, and the resulting model was embodied in an automatic screening machine designed and constructed as part of this study. The experiments showed that the aggregation of VGG16 and SILTP provided the best results, with an AUC score of 0.9944.
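Aggregating a learned and a hand-crafted descriptor can be as simple as normalised concatenation. A sketch under assumed dimensions and normalisation (the paper evaluated many pairings before settling on VGG16 + SILTP; its exact fusion scheme is not reproduced here):

```python
import numpy as np

def aggregate(deep_feat, handcrafted_feat):
    """L2-normalise each descriptor before concatenating, so neither
    the CNN part nor the texture part dominates by scale alone."""
    d = deep_feat / (np.linalg.norm(deep_feat) + 1e-12)
    h = handcrafted_feat / (np.linalg.norm(handcrafted_feat) + 1e-12)
    return np.concatenate([d, h])

# e.g. a 512-d CNN embedding plus a 64-bin texture histogram
fv = aggregate(np.ones(512), np.ones(64))
print(fv.shape)  # (576,)
```

The aggregated vector then feeds a downstream classifier that assigns each tile to a quality class.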
Affiliation(s)
- George K. Sidiropoulos
- MLV Research Group, Department of Computer Science, International Hellenic University, 65404 Kavala, Greece
- Athanasios G. Ouzounis
- MLV Research Group, Department of Computer Science, International Hellenic University, 65404 Kavala, Greece
- George A. Papakostas
- MLV Research Group, Department of Computer Science, International Hellenic University, 65404 Kavala, Greece
- Anastasia Lampoglou
- MLV Research Group, Department of Computer Science, International Hellenic University, 65404 Kavala, Greece
- Ilias T. Sarafis
- Department of Chemistry, International Hellenic University, 65404 Kavala, Greece
19
He W, Liu T, Han Y, Ming W, Du J, Liu Y, Yang Y, Wang L, Jiang Z, Wang Y, Yuan J, Cao C. A review: The detection of cancer cells in histopathology based on machine vision. Comput Biol Med 2022; 146:105636. [PMID: 35751182 DOI: 10.1016/j.compbiomed.2022.105636]
Abstract
Machine vision is being employed in defect detection, size measurement, pattern recognition, image fusion, target tracking and 3D reconstruction. Traditional cancer detection methods are dominated by manual detection, which wastes time and manpower, and heavily relies on the pathologists' skill and work experience. Therefore, these manual detection approaches are not convenient for the inheritance of domain knowledge, and are not suitable for the rapid development of medical care in the future. The emergence of machine vision can iteratively update and learn the domain knowledge of cancer cell pathology detection to achieve automated, high-precision, and consistent detection. Consequently, this paper reviews the use of machine vision to detect cancer cells in histopathology images, as well as the benefits and drawbacks of various detection approaches. First, we review the application of image preprocessing and image segmentation in histopathology for the detection of cancer cells, and compare the benefits and drawbacks of different algorithms. Secondly, for the characteristics of histopathological cancer cell images, the research progress of shape, color and texture features and other methods is mainly reviewed. Furthermore, for the classification methods of histopathological cancer cell images, the benefits and drawbacks of traditional machine vision approaches and deep learning methods are compared and analyzed. Finally, the above research is discussed and forecasted, with the expected future development tendency serving as a guide for future research.
Affiliation(s)
- Wenbin He
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou 450002, China
- Ting Liu
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou 450002, China
- Yongjie Han
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou 450002, China
- Wuyi Ming
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou 450002, China; Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan 523808, China
- Jinguang Du
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou 450002, China
- Yinxia Liu
- Laboratory Medicine of Dongguan Kanghua Hospital, Dongguan 523808, China
- Yuan Yang
- Guangdong Provincial Hospital of Chinese Medicine, Guangzhou 510120, China
- Leijie Wang
- School of Mechanical Engineering, Dongguan University of Technology, Dongguan 523808, China
- Zhiwen Jiang
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou 450002, China
- Yongqiang Wang
- Zhengzhou Coal Mining Machinery Group Co., Ltd, Zhengzhou 450016, China
- Jie Yuan
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou 450002, China
- Chen Cao
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou 450002, China; Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan 523808, China
20
Gupta L, Klinkhammer BM, Seikrit C, Fan N, Bouteldja N, Gräbel P, Gadermayr M, Boor P, Merhof D. Large-scale extraction of interpretable features provides new insights into kidney histopathology – a proof-of-concept study. J Pathol Inform 2022; 13:100097. [PMID: 36268111 PMCID: PMC9576990 DOI: 10.1016/j.jpi.2022.100097]
Abstract
Whole slide images contain a wealth of quantitative information that may not be fully explored in qualitative visual assessments. We propose (1) a novel pipeline for extracting a comprehensive set of visual features, which are detectable by a pathologist, as well as sub-visual features, which are not discernible by human experts, and (2) detailed analyses of renal images from mice with experimental unilateral ureteral obstruction. An important criterion for these features is that they are easy to interpret, as opposed to features obtained from neural networks. We extract and compare features from pathological and healthy control kidneys to learn how the compartments (glomerulus, Bowman's capsule, tubule, interstitium, artery, and arterial lumen) are affected by the pathology. We define feature selection methods to extract the most informative and discriminative features. We perform statistical analyses to understand the relation of the extracted features, both individually and in combination, with tissue morphology and pathology. For the presented case study in particular, we highlight features that are affected in each compartment. With this, prior biological knowledge, such as the increase in interstitial nuclei, is confirmed and presented in a quantitative way, alongside novel findings such as color and intensity changes in glomeruli and Bowman's capsule. The proposed approach is therefore an important step towards quantitative, reproducible, and rater-independent analysis in histopathology.
Affiliation(s)
- Laxmi Gupta
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Claudia Seikrit
- Institute of Pathology, University Hospital Aachen, RWTH Aachen University, Aachen, Germany
- Division of Nephrology and Clinical Immunology, RWTH Aachen University, Aachen, Germany
- Nina Fan
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Nassim Bouteldja
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Institute of Pathology, University Hospital Aachen, RWTH Aachen University, Aachen, Germany
- Philipp Gräbel
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Michael Gadermayr
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Salzburg University of Applied Sciences, Puch/Salzburg, Austria
- Peter Boor
- Institute of Pathology, University Hospital Aachen, RWTH Aachen University, Aachen, Germany
- Dorit Merhof
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
21
Zhao W, Gurudu SR, Taheri S, Ghosh S, Mallaiyan Sathiaseelan MA, Asadizanjani N. PCB Component Detection Using Computer Vision for Hardware Assurance. BDCC 2022; 6:39. [DOI: 10.3390/bdcc6020039]
Abstract
Printed circuit board (PCB) assurance in the optical domain is a crucial field of study. Though there are many existing PCB assurance methods using image processing, computer vision (CV), and machine learning (ML), the PCB field is complex and increasingly evolving, so new techniques are required to overcome the emerging problems. Existing ML-based methods outperform traditional CV methods; however, they often require more data, have low explainability, and can be difficult to adapt when a new technology arises. To overcome these challenges, CV methods can be used in tandem with ML methods. In particular, human-interpretable CV algorithms such as those that extract color, shape, and texture features increase PCB assurance explainability. This allows for incorporation of prior knowledge, which effectively reduces the number of trainable ML parameters and, thus, the amount of data needed to achieve high accuracy when training or retraining an ML model. Hence, this study explores the benefits and limitations of a variety of common computer vision-based features for the task of PCB component detection. The study results indicate that color features demonstrate promising performance for PCB component detection. The purpose of this paper is to facilitate collaboration between the hardware assurance, computer vision, and machine learning communities.
22
Cai J, Liu M, Zhang Q, Shao Z, Zhou J, Guo Y, Liu J, Wang X, Zhang B, Li X, Cai Y. Renal Cancer Detection: Fusing Deep and Texture Features from Histopathology Images. BioMed Research International 2022; 2022:1-17. [PMID: 35386304 PMCID: PMC8979690 DOI: 10.1155/2022/9821773]
Abstract
Histopathological images contain morphological markers of disease progression that have diagnostic and predictive value, and many computer-aided diagnosis systems based on common deep learning methods have been proposed to save time and labour. Although deep learning methods are end-to-end, they perform exceptionally well only given a large dataset and often show relatively inferior results on small datasets. In contrast, traditional feature extraction methods are more robust and perform well with small or medium datasets. Moreover, a global, texture-representation-based approach is commonly used to classify histological tissue images without requiring explicit segmentation to extract structural properties. Considering the scarcity of medical datasets and the usefulness of texture representation, we integrate the advantages of deep learning and traditional machine learning, i.e., texture representation. To accomplish this task, we propose a classification model to detect renal cancer in a histopathology dataset by fusing features from a deep learning model with extracted texture feature descriptors. Five texture feature descriptors from three texture feature families were applied to complement AlexNet for extensive validation of the fusion between deep features and texture features. The texture features come from (1) the statistical family: histogram of gradients, gray-level co-occurrence matrix, and local binary pattern; (2) the transform-based family: Gabor filters; and (3) the model-based family: Markov random field. The final classification results outperformed both AlexNet and any single texture descriptor, showing the effectiveness of combining deep features and texture features in renal cancer detection.
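One of the texture descriptors fused in this work, the local binary pattern, is straightforward to sketch. This is the basic 8-neighbour, radius-1 variant; the paper's exact LBP settings are assumptions not given here:

```python
import numpy as np

def lbp_image(img):
    """8-neighbour, radius-1 local binary pattern codes (0-255)."""
    padded = np.pad(np.asarray(img, dtype=int), 1, mode='edge')
    center = padded[1:-1, 1:-1]
    code = np.zeros_like(center)
    # Clockwise neighbours starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = center.shape
    for bit, (dy, dx) in enumerate(offsets):
        neigh = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        code += (neigh >= center).astype(int) << bit
    return code

flat = np.full((4, 4), 7)
print(lbp_image(flat)[0, 0])  # 255 - every neighbour equals the centre
```

A histogram of these codes would form the LBP descriptor that gets concatenated with the deep features before classification.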
23
Ye Z, Zhang Y, Liang Y, Lang J, Zhang X, Zang G, Yuan D, Tian G, Xiao M, Yang J. Cervical Cancer Metastasis and Recurrence Risk Prediction Based on Deep Convolutional Neural Network. Curr Bioinform 2022. [DOI: 10.2174/1574893616666210708143556]
Abstract
Background:
Evaluating the risk of metastasis and recurrence of a cervical cancer patient is
critical for appropriate adjuvant therapy. However, current risk assessment models usually involve the
testing of tens to thousands of genes from patients’ tissue samples, which is expensive and timeconsuming.
Therefore, computer-aided diagnosis and prognosis prediction based on Hematoxylin and Eosin
(H&E) pathological images have received much attention recently.
Objective:
The prognosis of whether patients will have metastasis and recurrence can support accurate
treatment for patients in advance and help reduce patient loss. It is also important for guiding treatment
after surgery to be able to quickly and accurately predict the risk of metastasis and recurrence of a cervical
cancer patient.
Method:
To address this problem, we propose a hybrid method. Transfer learning is used to extract features,
and it is combined with traditional machine learning in order to analyze and determine whether
patients have the risks of metastasis and recurrence. First, the proposed model retrieved relevant patches
using a color-based method from H&E pathological images, which were then subjected to image preprocessing
steps such as image normalization and color homogenization. Based on the labeled patched
images, the Xception model with good classification performance was selected, and deep features of
patched pathological images were automatically extracted with transfer learning. After that, the extracted
features were combined to train a random forest model to predict the label of a new patched image.
Finally, a majority voting method was developed to predict the metastasis and recurrence risk of a patient
based on the predictions of patched images from the whole-slide H&E image.
Results:
In our experiment, the proposed model yielded an area under the receiver operating characteristic
curve of 0.82 for the whole-slide image. The experimental results showed that the high-level features
extracted by the deep convolutional neural network from the whole-slide image can be used to predict
the risk of recurrence and metastasis after surgical resection and help identify patients who might receive
additional benefit from adjuvant therapy.
Conclusion:
This paper explored the feasibility of predicting the risk of metastasis and recurrence from cervical cancer whole-slide H&E images using deep learning and random forest methods.
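The final aggregation step of this pipeline, combining patch-level predictions into a patient-level call by majority vote, can be sketched as follows (a minimal sketch; the function name and the tie-breaking rule are assumptions, since the abstract does not specify them):

```python
from collections import Counter

def patient_risk(patch_predictions):
    """Aggregate patch-level labels (0 = low risk, 1 = high risk) into a
    patient-level call by majority vote over all patches of the slide."""
    votes = Counter(patch_predictions)
    # Ties are broken toward the high-risk label to favour sensitivity
    # (an assumption; the abstract does not specify tie handling).
    return 1 if votes[1] >= votes[0] else 0
```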
Affiliation(s)
- Zixuan Ye
- School of Computer, Hunan University of Technology, Zhuzhou Hunan 412007, China
- Yuebin Liang
- Geneis (Beijing) Co. Ltd., Beijing 100102, China
- Jidong Lang
- Geneis (Beijing) Co. Ltd., Beijing 100102, China
- Xiaoli Zhang
- School of Computer, Hunan University of Technology, Zhuzhou Hunan 412007, China
- Dawei Yuan
- Geneis (Beijing) Co. Ltd., Beijing 100102, China
- Geng Tian
- School of Computer, Hunan University of Technology, Zhuzhou Hunan 412007, China
- Geneis (Beijing) Co. Ltd., Beijing 100102, China
- Mansheng Xiao
- School of Computer, Hunan University of Technology, Zhuzhou Hunan 412007, China
- Jialiang Yang
- School of Computer, Hunan University of Technology, Zhuzhou Hunan 412007, China
- Geneis (Beijing) Co. Ltd., Beijing 100102, China
- Academician Workstation, Changsha Medical University, Changsha Hunan 410219, China
24
Natarajan S, Govindaraj V, Venkata Rao Narayana R, Zhang YD, Murugan PR, Kandasamy K, Ejaz K. A novel triple-level combinational framework for brain anomaly segmentation to augment clinical diagnosis. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. [DOI: 10.1080/21681163.2021.1986858] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Senthilkumar Natarajan
- Department of ECE, Kalasalingam Academy of Research and Education (Kalasalingam University), Srivilliputtur, India
- Vishnuvarthanan Govindaraj
- Department of BME, Kalasalingam Academy of Research and Education (Kalasalingam University), Srivilliputtur, India
- Yu-Dong Zhang
- School of Informatics, University of Leicester, Leicester, UK
- Karunanithi Kandasamy
- Department of EEE, Vel Tech Rangarajan Dr. Sagunthala R and D Institute of Science and Technology, Avadi, India
- Khurram Ejaz
- Department of CS, Universiti Teknologi Malaysia, Johor Bahru, Malaysia
25
Ghosh S, Hassan SKK, Khan AH, Manna A, Bhowmik S, Sarkar R. Application of texture-based features for text non-text classification in printed document images with novel feature selection algorithm. Soft comput 2022; 26:891-909. [DOI: 10.1007/s00500-021-06260-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
26
Mewada H, Al-Asad JF, Patel A, Chaudhari J, Mahant K, Vala A. Multi-Channel Local Binary Pattern Guided Convolutional Neural Network for Breast Cancer Classification. Open Biomed Eng J 2021. [DOI: 10.2174/1874120702115010132] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Background:
The advancement in convolutional neural network (CNN) has reduced the burden of experts using the computer-aided diagnosis of human breast cancer. However, most CNN networks use spatial features only. The inherent texture structure present in histopathological images plays an important role in distinguishing malignant tissues. This paper proposes an alternate CNN network that integrates Local Binary Pattern (LBP) based texture information with CNN features.
Methods:
The study posits that LBP provides the most robust rotation- and translation-invariant features in comparison with other texture feature extractors. Therefore, a formulation of LBP in the context of the convolution operation is presented and used in the proposed CNN network. A non-trainable fixed set of binary convolutional filters representing LBP features is combined with trainable convolution filters to approximate the response of the convolution layer. A CNN architecture guided by LBP features is used to classify the histopathological images.
Result:
The network is trained using the BreakHis dataset. The use of a fixed set of LBP filters reduces the burden on the CNN by cutting the number of training parameters by a factor of 9, making it suitable for resource-constrained environments. The proposed network obtained a maximum accuracy of 96.46%, with 98.51% AUC and a 97% F1-score.
Conclusion:
LBP-based texture information plays a vital role in cancer image classification. A multi-channel LBP feature fusion is used in the CNN network. The experimental results show that the new structure of LBP-guided CNN requires fewer training parameters while preserving the CNN network’s classification accuracy.
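For reference, the basic 3x3 Local Binary Pattern that the fixed binary filters encode can be computed directly; this is a generic single-channel sketch, not the paper's multi-channel convolutional formulation:

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic 3x3 Local Binary Pattern: each pixel's 8 neighbours are
    thresholded against the centre pixel and packed into an 8-bit code."""
    padded = np.pad(img, 1, mode="edge")
    centre = padded[1:-1, 1:-1]
    # Offsets enumerate the 8 neighbours clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(img, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = padded[1 + dy:padded.shape[0] - 1 + dy,
                    1 + dx:padded.shape[1] - 1 + dx]
        code |= (nb >= centre).astype(np.uint8) << bit
    return code
```

In the paper's setting these thresholded-neighbour responses are realised as non-trainable binary convolution kernels instead of an explicit loop.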
27
Chandrasekhara SPR, Kabadi MG, Srivinay S. Wearable IoT based diagnosis of prostate cancer using GLCM-multiclass SVM and SIFT-multiclass SVM feature extraction strategies. IJPCC 2021. [DOI: 10.1108/ijpcc-07-2021-0167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Purpose
This study aims to compare and contrast two quite different image processing algorithms suited to detecting prostate cancer with wearable Internet of Things (IoT) devices. Cancer remains one of the most dreaded diseases and has afflicted mankind for decades. According to the Indian Council of Medical Research, India alone registers about 11.5 lakh cancer-related cases every year, and close to 8 lakh people die of cancer-related causes each year. Prostate cancer was formerly seen mainly in men aged above 60 years, but a recent study has revealed that it is on the rise even in men between 35 and 60 years of age. These findings make it all the more necessary to prioritise research on diagnosing prostate cancer at an early stage, so that patients can be cured and lead a normal life.
Design/methodology/approach
The research focuses on two feature extraction algorithms commonly used in medical image processing, namely the scale-invariant feature transform (SIFT) and the gray-level co-occurrence matrix (GLCM), in an attempt to close the gap in the detection of prostate cancer in medical IoT. The results obtained by these two strategies are then classified separately using a machine-learning classification model, the multi-class support vector machine (SVM). Owing to their better tissue discrimination and contrast resolution, magnetic resonance imaging images were chosen for this study. The classification results obtained with the SIFT and GLCM methods are then compared to determine which feature extraction strategy provides the most accurate results for diagnosing prostate cancer.
Findings
The potential of both models was evaluated in terms of three measures: accuracy, sensitivity, and specificity. Each model’s results were checked against varied splits of the training and test data. The SIFT-multiclass SVM model achieved the highest performance: 99.9451% accuracy, 100% sensitivity, and 99% specificity at a 40:60 training-to-testing split.
Originality/value
The SIFT-multiclass SVM versus GLCM-multiclass SVM comparison has been introduced for the first time to identify the best model for the accurate diagnosis of prostate cancer. The classification performance of each feature extraction strategy is reported in terms of accuracy, sensitivity, and specificity.
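Both pipelines above rest on hand-crafted features; a minimal sketch of GLCM construction and one Haralick-style statistic (contrast) illustrates the GLCM side (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for a single pixel offset (dx, dy),
    normalised so that all entries sum to 1. img holds integer grey levels."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(m):
    """Haralick contrast: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())
```

In practice several offsets and angles are accumulated and a vector of such statistics (contrast, energy, homogeneity, entropy, ...) is fed to the classifier.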
28
Kim YJ. Machine Learning Models for Sarcopenia Identification Based on Radiomic Features of Muscles in Computed Tomography. Int J Environ Res Public Health 2021; 18:ijerph18168710. [PMID: 34444459 PMCID: PMC8394435 DOI: 10.3390/ijerph18168710] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 08/13/2021] [Accepted: 08/16/2021] [Indexed: 12/12/2022]
Abstract
The diagnosis of sarcopenia requires accurate muscle quantification. As an alternative to manual muscle mass measurement through computed tomography (CT), artificial intelligence can be leveraged for the automation of these measurements. Although generally difficult to identify with the naked eye, the radiomic features in CT images are informative. In this study, the radiomic features were extracted from L3 CT images of the entire muscle area and partial areas of the erector spinae collected from non-small cell lung carcinoma (NSCLC) patients. The first-order statistics and gray-level co-occurrence, gray-level size zone, gray-level run length, neighboring gray-tone difference, and gray-level dependence matrices were the radiomic features analyzed. The identification performances of the following machine learning models were evaluated: logistic regression, support vector machine (SVM), random forest, and extreme gradient boosting (XGB). Sex, coarseness, skewness, and cluster prominence were selected as the relevant features effectively identifying sarcopenia. The XGB model demonstrated the best performance for the entire muscle, whereas the SVM was the worst-performing model. Overall, the models demonstrated improved performance for the entire muscle compared to the erector spinae. Although further validation is required, the radiomic features presented here could become reliable indicators for quantifying the phenomena observed in the muscles of NSCLC patients, thus facilitating the diagnosis of sarcopenia.
Affiliation(s)
- Young Jae Kim
- Department of Biomedical Engineering, Gachon University, Inchon 21936, Korea
29
Faust O, En Wei Koh J, Jahmunah V, Sabut S, Ciaccio EJ, Majid A, Ali A, Lip GYH, Acharya UR. Fusion of Higher Order Spectra and Texture Extraction Methods for Automated Stroke Severity Classification with MRI Images. Int J Environ Res Public Health 2021; 18:8059. [PMID: 34360349 PMCID: PMC8345794 DOI: 10.3390/ijerph18158059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Revised: 07/05/2021] [Accepted: 07/23/2021] [Indexed: 11/18/2022]
Abstract
This paper presents a scientific foundation for automated stroke severity classification. We have constructed and assessed a system which extracts diagnostically relevant information from Magnetic Resonance Imaging (MRI) images. The design was based on 267 images that show the brain from individual subjects after stroke. They were labeled as either Lacunar Syndrome (LACS), Partial Anterior Circulation Syndrome (PACS), or Total Anterior Circulation Stroke (TACS). The labels indicate different physiological processes which manifest themselves in distinct image texture. The processing system was tasked with extracting texture information that could be used to classify a brain MRI image from a stroke survivor into either LACS, PACS, or TACS. We analyzed 6475 features that were obtained with Gray-Level Run Length Matrix (GLRLM), Higher Order Spectra (HOS), as well as a combination of Discrete Wavelet Transform (DWT) and Gray-Level Co-occurrence Matrix (GLCM) methods. The resulting features were ranked based on the p-value extracted with the Analysis Of Variance (ANOVA) algorithm. The ranked features were used to train and test four types of Support Vector Machine (SVM) classification algorithms according to the rules of 10-fold cross-validation. We found that SVM with Radial Basis Function (RBF) kernel achieves: Accuracy (ACC) = 93.62%, Specificity (SPE) = 95.91%, Sensitivity (SEN) = 92.44%, and Dice-score = 0.95. These results indicate that computer aided stroke severity diagnosis support is possible. Such systems might lead to progress in stroke diagnosis by enabling healthcare professionals to improve diagnosis and management of stroke patients with the same resources.
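The ANOVA-based feature ranking used in this pipeline reduces, per feature, to the one-way F statistic (a self-contained sketch; converting F to the p-values the authors actually rank by would additionally require the F distribution's CDF, omitted here):

```python
def anova_f(groups):
    """One-way ANOVA F statistic for a single feature: between-group
    variance over within-group variance. Larger F = better separation
    of the classes (here: LACS / PACS / TACS) along that feature."""
    k = len(groups)                       # number of classes
    n = sum(len(g) for g in groups)       # total sample count
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```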
Affiliation(s)
- Oliver Faust
- Department of Engineering and Mathematics, Sheffield Hallam University, Sheffield S1 1WB, UK
- Joel En Wei Koh
- School of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
- Vicnesh Jahmunah
- School of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
- Sukant Sabut
- School of Electronics Engineering, Kalinga Institute of Industrial Technology, Bhubaneswar, Odisha 751024, India
- Edward J. Ciaccio
- Department of Medicine-Cardiology, Columbia University, New York, NY 10027, USA
- Arshad Majid
- Sheffield Institute for Translational Neuroscience, University of Sheffield, Sheffield S10 2HQ, UK
- Ali Ali
- Sheffield Teaching Hospitals NIHR Biomedical Research Centre, Sheffield S10 2JF, UK
- Gregory Y. H. Lip
- Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart & Chest Hospital, Liverpool L69 7TX, UK
- Aalborg Thrombosis Research Unit, Department of Clinical Medicine, Aalborg University, 9000 Aalborg, Denmark
- U. Rajendra Acharya
- School of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
- School of Science and Technology, Singapore University of Social Sciences, 463 Clementi Road, Singapore 599494, Singapore
- Department of Bioinformatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
- International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto 860-8555, Japan
30
Kundu R, Basak H, Singh PK, Ahmadian A, Ferrara M, Sarkar R. Fuzzy rank-based fusion of CNN models using Gompertz function for screening COVID-19 CT-scans. Sci Rep 2021; 11:14133. [PMID: 34238992 PMCID: PMC8266871 DOI: 10.1038/s41598-021-93658-y] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Accepted: 06/16/2021] [Indexed: 12/22/2022] Open
Abstract
COVID-19 has crippled the world's healthcare systems, set back the economy, and taken many lives. Although potential vaccines are being tested and supplied around the world, it will take a long time for them to reach every human being, more so with new variants of the virus emerging and enforcing lockdown-like situations in parts of the world. There is thus a dire need for early and accurate detection of COVID-19 to prevent further spread of the disease. The current gold-standard RT-PCR test is only 71% sensitive and laborious to perform, making population-wide screening infeasible. To this end, we propose an automated COVID-19 detection system that uses CT-scan images of the lungs to classify them into COVID and non-COVID cases. The proposed method applies an ensemble strategy that generates fuzzy ranks of the base classification models using the Gompertz function and fuses the decision scores of the base models adaptively to make the final predictions on the test cases. Three transfer learning-based convolutional neural network models are used, namely VGG-11, Wide ResNet-50-2, and Inception v3, to generate the decision scores fused by the proposed ensemble model. The framework has been evaluated on two publicly available chest CT scan datasets, achieving state-of-the-art performance and justifying the reliability of the model. The source code for the present work is available on GitHub.
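As a toy illustration of the fusion idea, decision scores can be passed through the Gompertz function G(x) = a·exp(-b·exp(-c·x)) before being combined (a simplified sketch with default parameters a = b = c = 1; the paper derives fuzzy ranks and a more elaborate adaptive fusion rule than the plain sum shown here):

```python
import math

def gompertz(x, a=1.0, b=1.0, c=1.0):
    """Gompertz growth curve G(x) = a * exp(-b * exp(-c * x));
    monotonically increasing in x for positive a, b, c."""
    return a * math.exp(-b * math.exp(-c * x))

def fuse(scores_per_model):
    """Toy fusion: map each base model's per-class confidence scores
    through the Gompertz function and pick the class with the highest
    summed value across models."""
    n_classes = len(scores_per_model[0])
    fused = [sum(gompertz(m[k]) for m in scores_per_model)
             for k in range(n_classes)]
    return fused.index(max(fused))
```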
Affiliation(s)
- Rohit Kundu
- Department of Electrical Engineering, Jadavpur University, Kolkata, 700032, India
- Hritam Basak
- Department of Electrical Engineering, Jadavpur University, Kolkata, 700032, India
- Pawan Kumar Singh
- Department of Information Technology, Jadavpur University, Kolkata, 700106, India
- Ali Ahmadian
- Institute of IR 4.0, The National University of Malaysia (UKM), 43600, Bangi, Selangor, Malaysia
- Department of Law, Economics and Human Sciences & Decisions Lab, Mediterranea University of Reggio Calabria, 89125, Reggio Calabria, Italy
- Massimiliano Ferrara
- Department of Law, Economics and Human Sciences & Decisions Lab, Mediterranea University of Reggio Calabria, 89125, Reggio Calabria, Italy
- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
31
Zhang C, Wen HL, Zhang R, Xie SY, Xie CM. Computed tomography radiomics to predict EBER positivity in Epstein-Barr virus-associated gastric adenocarcinomas: a retrospective study. Acta Radiol 2021; 63:1005-1013. [PMID: 34233501 DOI: 10.1177/02841851211029083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
BACKGROUND The relevance of Epstein-Barr virus (EBV) in gastric carcinoma is demonstrated by the presence of EBV-encoded small RNA (EBER) in the tumor cells, which has prognostic significance in gastric cancer; gastric adenocarcinoma is the most frequently occurring gastric malignancy. PURPOSE To assess the capacity of radiomic features extracted from contrast-enhanced computed tomography (CE-CT) images to differentiate EBER-positive gastric adenocarcinomas from EBER-negative ones. MATERIAL AND METHODS A total of 54 patients with gastric adenocarcinoma (EBER-positive: 27; EBER-negative: 27) were retrospectively examined. Radiomic features were extracted from all regions of interest (ROI) delineated by two experienced radiologists on late arterial phase CT images. We identified relevant radiomic features using the two-tailed t test and applied them to construct a decision tree model to predict EBER in situ hybridization positivity. RESULTS Nine radiomic features were significantly related to EBER in situ hybridization status (P < 0.05), four of which were used to build the decision tree through backward elimination: Correlation_AllDirection_offset7, Correlation_angle135_offset7, RunLengthNonuniformity_AllDirection_offset1_SD, and HighGreyLevelRunEmphasis_AllDirection_offset1_SD. The decision tree model consisted of seven decision nodes and six terminal nodes, three of which indicated positive EBER in situ hybridization. The specificity, sensitivity, and accuracy of the model were 84%, 80%, and 81.7%, respectively. The area under the curve of the decision tree model was 0.87. CONCLUSION Radiomics based on CE-CT could be applied to predict EBER in situ hybridization status preoperatively in patients with gastric adenocarcinoma.
Affiliation(s)
- Cheng Zhang
- Department of Radiology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in Southern China, Guangzhou, PR China
- Hai-lin Wen
- Department of Radiology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in Southern China, Guangzhou, PR China
- Rong Zhang
- Department of Radiology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in Southern China, Guangzhou, PR China
- Shu-yi Xie
- Department of Radiology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in Southern China, Guangzhou, PR China
- Chuan-miao Xie
- Department of Radiology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in Southern China, Guangzhou, PR China
32
Zhaoning Y, Yan L, Tiegang G. A lossless self-recovery watermarking scheme with JPEG-LS compression. Journal of Information Security and Applications 2021. [DOI: 10.1016/j.jisa.2020.102733] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
33
Khan TM, Xu S, Khan ZG, Uzair Chishti M. Implementing Multilabeling, ADASYN, and ReliefF Techniques for Classification of Breast Cancer Diagnostic through Machine Learning: Efficient Computer-Aided Diagnostic System. J Healthc Eng 2021; 2021:5577636. [PMID: 33859807 DOI: 10.1155/2021/5577636] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 02/19/2021] [Accepted: 02/27/2021] [Indexed: 11/17/2022]
Abstract
Multilabel recognition of morphological images and detection of cancerous areas are difficult when images are redundant and of low resolution. Cancerous tissues can be extremely small, so for automatic classification the characteristics of cancer patches in the X-ray image are of critical importance. Because of the slight variation between textures, using just one feature, or only a few, leads to inaccurate classification outcomes. The present study employs five feature extraction algorithms (GLCM, LBGLCM, LBP, GLRLM, and SFTA), applied to 8 image groups, and the extracted feature spaces are then combined. The dataset used for classification is imbalanced, so a further focus is to eliminate the imbalanced-data problem by creating more samples with the ADASYN algorithm, minimising the error rate and increasing accuracy. The ReliefF algorithm discards features that contribute little, reducing the burden on the process. Finally, a feedforward neural network is used to classify the data. The proposed method achieved 99.5% micro and 99.5% macro scores, a 0.5% misclassification rate, a 99.5% recall rate, 99.4% specificity, 99.5% precision, and 99.5% accuracy, demonstrating its robustness. The INbreast database was used to assess the feasibility of the new system.
34
Öztürk Ş, Özkaya U, Barstuğan M. Classification of Coronavirus (COVID-19) from X-ray and CT images using shrunken features. Int J Imaging Syst Technol 2021; 31:5-15. [PMID: 32904960 PMCID: PMC7461473 DOI: 10.1002/ima.22469] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Revised: 07/08/2020] [Accepted: 07/11/2020] [Indexed: 05/18/2023]
Abstract
Necessary screenings must be performed to control the spread of COVID-19 in daily life and to make a preliminary diagnosis of suspicious cases. The long duration of pathological laboratory tests and suspicious test results have led researchers to focus on alternative approaches. Fast and accurate diagnoses are essential for effective interventions against COVID-19. The information obtained from X-ray and Computed Tomography (CT) images is vital for clinical diagnosis. This study therefore aims to develop a machine learning method for detecting viral epidemics by analyzing X-ray and CT images. Images belonging to six conditions, including coronavirus, are classified using a two-stage data enhancement approach. Because the number of images in the dataset is deficient and imbalanced, a shallow image augmentation approach was used in the first phase. The newly created dataset is still insufficient to train a deep architecture, so it is more convenient to analyze these images with hand-crafted feature extraction methods. The synthetic minority over-sampling technique (SMOTE) algorithm therefore forms the second data enhancement step of this study. Finally, the feature vector is reduced in size using stacked auto-encoder and principal component analysis methods to remove interconnected features. The results show that the proposed method performs well, in particular for diagnosing COVID-19 quickly and effectively, and it may serve as a source of inspiration for future studies on deficient and imbalanced datasets.
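The SMOTE step named above synthesises minority-class samples by interpolation; a minimal sketch of that idea follows (illustrative only: true SMOTE interpolates toward one of the k nearest minority neighbours rather than an arbitrary minority pair, and the function name is an assumption):

```python
import random

def synth_samples(minority, n_new, seed=0):
    """Toy SMOTE-style oversampling: each synthetic point lies on the
    segment between two randomly chosen minority samples, so the new
    data stay inside the minority class's convex hull."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)
        t = rng.random()  # interpolation factor in [0, 1)
        out.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return out
```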
Affiliation(s)
- Şaban Öztürk
- Electrical and Electronics Engineering, Amasya University, Amasya, Turkey
- Umut Özkaya
- Electrical and Electronics Engineering, Konya Technical University, Konya, Turkey
- Mücahid Barstuğan
- Electrical and Electronics Engineering, Konya Technical University, Konya, Turkey
35
Huang CL, Lian MJ, Wu YH, Chen WM, Chiu WT. Identification of Human Ovarian Adenocarcinoma Cells with Cisplatin-resistance by Feature Extraction of Gray Level Co-occurrence Matrix Using Optical Images. Diagnostics (Basel) 2020; 10:E389. [PMID: 32527052 DOI: 10.3390/diagnostics10060389] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2020] [Revised: 06/05/2020] [Accepted: 06/07/2020] [Indexed: 12/13/2022] Open
Abstract
Ovarian cancer is the most malignant of all gynecological cancers. A persistent challenge in treating ovarian adenocarcinoma has been the chemoresistance of cancer cells. Cisplatin (CP) belongs to the first-line chemotherapeutic agents, so it would be beneficial to identify chemoresistance, especially CP-resistance, in ovarian adenocarcinoma cells. The gray-level co-occurrence matrix (GLCM) characterizes an image as a numeric matrix from which texture features are derived. Serous-type (OVCAR-4 and A2780) and clear-cell-type (IGROV1) ovarian carcinoma cell lines with CP-resistance were used to demonstrate GLCM texture feature extraction from images. Cells were cultured at a density of 6 × 10^5 in a glass-bottom dish to form a uniform coverage of the glass slide, and optical images were acquired with a microscope and DVC camera. The CP-resistant cells (OVCAR-4, A2780, and IGROV1) showed higher contrast and entropy and lower energy and homogeneity. The signal-to-noise ratio was used to evaluate the degree of chemoresistance of cell images based on GLCM texture feature extraction. The difference between wild-type and CP-resistant cells was statistically significant in every case (p < 0.001). This is a promising model for a rapid method with more reliable diagnostic performance for identifying ovarian adenocarcinoma cells with CP-resistance by GLCM feature extraction in vitro or ex vivo.
36
Lee S, Rahul, Ye H, Chittajallu D, Kruger U, Boyko T, Lukan JK, Enquobahrie A, Norfleet J, De S. Real-time Burn Classification using Ultrasound Imaging. Sci Rep 2020; 10:5829. [PMID: 32242131 DOI: 10.1038/s41598-020-62674-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Accepted: 03/12/2020] [Indexed: 02/01/2023] Open
Abstract
This article presents a real-time approach for classification of burn depth based on B-mode ultrasound imaging. A grey-level co-occurrence matrix (GLCM) computed from the ultrasound images of the tissue is employed to construct the textural feature set and the classification is performed using nonlinear support vector machine and kernel Fisher discriminant analysis. A leave-one-out cross-validation is used for the independent assessment of the classifiers. The model is tested for pair-wise binary classification of four burn conditions in ex vivo porcine skin tissue: (i) 200 °F for 10 s, (ii) 200 °F for 30 s, (iii) 450 °F for 10 s, and (iv) 450 °F for 30 s. The average classification accuracy for pairwise separation is 99% with just over 30 samples in each burn group and the average multiclass classification accuracy is 93%. The results highlight that the ultrasound imaging-based burn classification approach in conjunction with the GLCM texture features provide an accurate assessment of altered tissue characteristics with relatively moderate sample sizes, which is often the case with experimental and clinical datasets. The proposed method is shown to have the potential to assist with the real-time clinical assessment of burn degrees, particularly for discriminating between superficial and deep second degree burns, which is challenging in clinical practice.
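The leave-one-out cross-validation used for the independent assessment above is easy to state precisely; a minimal sketch:

```python
def leave_one_out(samples):
    """Yield (train, test) splits for leave-one-out cross-validation:
    each sample is held out once while all the others form the
    training set, so a dataset of n samples yields n splits."""
    for i in range(len(samples)):
        yield samples[:i] + samples[i + 1:], samples[i]
```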
37
38
Oh JE, Kim MJ, Lee J, Hur BY, Kim B, Kim DY, Baek JY, Chang HJ, Park SC, Oh JH, Cho SA, Sohn DK. Magnetic Resonance-Based Texture Analysis Differentiating KRAS Mutation Status in Rectal Cancer. Cancer Res Treat 2019; 52:51-59. [PMID: 31096736 PMCID: PMC6962487 DOI: 10.4143/crt.2019.050] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2019] [Accepted: 05/06/2019] [Indexed: 12/13/2022] Open
Abstract
Purpose Mutation of the Kirsten Ras (KRAS) oncogene is present in 30%-40% of colorectal cancers and has prognostic significance in rectal cancer. In this study, we examined the ability of radiomics features extracted from T2-weighted magnetic resonance (MR) images to differentiate between tumors with mutant KRAS and wild-type KRAS. Materials and Methods Sixty patients with primary rectal cancer (25 with mutant KRAS, 35 with wild-type KRAS) were retrospectively enrolled. Texture analysis was performed in all regions of interest on MR images, which were manually segmented by two independent radiologists. We identified potentially useful imaging features using the two-tailed t test and used them to build a discriminant model with a decision tree to estimate whether KRAS mutation had occurred. Results Three radiomic features were significantly associated with KRAS mutational status (p < 0.05). The mean (and standard deviation) skewness with gradient filter value was significantly higher in the mutant KRAS group than in the wild-type group (2.04±0.94 vs. 1.59±0.69). Higher standard deviations for medium texture (SSF3 and SSF4) were able to differentiate mutant KRAS (139.81±44.19 and 267.12±89.75, respectively) and wild-type KRAS (114.55±29.30 and 224.78±62.20). The final decision tree comprised three decision nodes and four terminal nodes, two of which designated KRAS mutation. The sensitivity, specificity, and accuracy of the decision tree were 84%, 80%, and 81.7%, respectively. Conclusion Using MR-based texture analysis, we identified three imaging features that could differentiate mutant from wild-type KRAS. T2-weighted images could be used to predict KRAS mutation status preoperatively in patients with rectal cancer.
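Skewness, one of the discriminative features identified above, is the third standardised moment of the intensity distribution; a minimal sketch (population form, computed on raw intensities without the gradient filtering used in the paper):

```python
def skewness(xs):
    """Population skewness: mean of the cubed standardised deviations.
    Positive values indicate a right-tailed intensity distribution."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = var ** 0.5
    return sum(((x - mean) / sd) ** 3 for x in xs) / n
```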
Affiliation(s)
- Ji Eun Oh, Innovative Medical Engineering & Technology, Research Institute and Hospital, National Cancer Center, Goyang, Korea
- Min Ju Kim, Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, Goyang, Korea
- Joohyung Lee, Innovative Medical Engineering & Technology, Research Institute and Hospital, National Cancer Center, Goyang, Korea
- Bo Yun Hur, Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, Goyang, Korea
- Bun Kim, Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, Goyang, Korea
- Dae Yong Kim, Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, Goyang, Korea
- Ji Yeon Baek, Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, Goyang, Korea
- Hee Jin Chang, Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, Goyang, Korea
- Sung Chan Park, Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, Goyang, Korea
- Jae Hwan Oh, Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, Goyang, Korea
- Sun Ah Cho, Innovative Medical Engineering & Technology, Research Institute and Hospital, National Cancer Center, Goyang, Korea
- Dae Kyung Sohn, Innovative Medical Engineering & Technology, and Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, Goyang, Korea
|
39
|
Novitasari DCR, Lubab A, Sawiji A, Asyhar AH. Application of Feature Extraction for Breast Cancer using One Order Statistic, GLCM, GLRLM, and GLDM. ACTA ACUST UNITED AC 2019. [DOI: 10.25046/aj040413] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|