1
Jiao J, Aljuaid H. Research on the prediction of English topic richness in the context of multimedia data. PeerJ Comput Sci 2024;10:e1967. PMID: 38660161; PMCID: PMC11042032; DOI: 10.7717/peerj-cs.1967.
Abstract
With the evolution of the Internet and multimedia technologies, delving deep into multimedia data for predicting topic richness holds significant practical implications in public opinion monitoring and data discourse power competition. This study introduces an algorithm for predicting English topic richness based on the Transformer model, applied specifically to the Twitter platform. Initially, relevant data is organized and extracted following an analysis of Twitter's characteristics. Subsequently, a feature fusion approach is employed to mine, extract, and construct features from Twitter blogs and users, encompassing blog features, topic features, and user features, which are amalgamated into multimodal features. Lastly, the combined features undergo training and learning using the Transformer model. Through experimentation on the Twitter topic richness dataset, our algorithm achieves an accuracy of 82.3%, affirming the efficacy and superior performance of the proposed approach.
Affiliation(s)
- Jie Jiao
- Jiaozuo Normal College, Jiaozuo, China
- Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), Riyadh, Saudi Arabia
2
Ahmad M, Irfan MA, Sadique U, Haq IU, Jan A, Khattak MI, Ghadi YY, Aljuaid H. Multi-Method Analysis of Histopathological Image for Early Diagnosis of Oral Squamous Cell Carcinoma Using Deep Learning and Hybrid Techniques. Cancers (Basel) 2023;15:5247. PMID: 37958422; PMCID: PMC10650156; DOI: 10.3390/cancers15215247.
Abstract
Oral cancer is a fatal disease and ranks seventh among the most common cancers worldwide; it usually affects the head and neck. The current gold standard for diagnosis is histopathological investigation; however, this conventional approach is time-consuming and requires professional interpretation. Early diagnosis of oral squamous cell carcinoma (OSCC) is therefore crucial for successful therapy, reducing the risk of mortality and morbidity while improving the patient's chances of survival. We employed several artificial intelligence techniques to aid clinicians and physicians, thereby significantly reducing the workload of pathologists. This study aimed to develop hybrid methodologies based on fused features to generate better results for early diagnosis of OSCC. It used three different strategies, each with five distinct models. The first strategy is transfer learning using the Xception, InceptionV3, InceptionResNetV2, NASNetLarge, and DenseNet201 models. The second strategy uses a pre-trained state-of-the-art CNN for feature extraction coupled with a support vector machine (SVM) for classification: features were extracted with each of the five pre-trained models listed above and then fed to the SVM to evaluate classification accuracy. The final strategy employs a hybrid feature-fusion technique, using a state-of-the-art CNN to extract deep features from the aforementioned models. These deep features undergo dimensionality reduction through principal component analysis (PCA) and are then combined with shape, color, and texture features extracted using the gray-level co-occurrence matrix (GLCM), histogram of oriented gradients (HOG), and local binary pattern (LBP) methods.
Hybrid feature fusion was incorporated into the SVM to enhance the classification performance. The proposed system achieved promising results for rapid diagnosis of OSCC using histological images. The accuracy, precision, sensitivity, specificity, F-1 score, and area under the curve (AUC) of the support vector machine (SVM) algorithm based on the hybrid feature fusion of DenseNet201 with GLCM, HOG, and LBP features were 97.00%, 96.77%, 90.90%, 98.92%, 93.74%, and 96.80%, respectively.
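One of the handcrafted texture descriptors named in this abstract, the gray-level co-occurrence matrix, can be computed directly. The following is a minimal sketch on a hypothetical 4-level toy image, not code or data from the study:

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix: m[i][j] counts how often a pixel
    with intensity i has a neighbour at offset (dx, dy) with intensity j."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
    return m

# Hypothetical 4-level toy "image" (not from the OSCC dataset):
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
m = glcm(img, levels=4)  # horizontal (dx=1) co-occurrences
```

Haralick texture statistics (contrast, homogeneity, etc.) are then derived from the normalized matrix and concatenated with the other descriptors before classification.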
Affiliation(s)
- Mehran Ahmad
- Department of Electrical Engineering, University of Engineering and Technology, Peshawar 25000, Pakistan
- AIH, Intelligent Information Processing Lab (NCAI), University of Engineering and Technology, Peshawar 25000, Pakistan
- Muhammad Abeer Irfan
- Department of Computer Systems Engineering, University of Engineering and Technology, Peshawar 25000, Pakistan
- Umar Sadique
- AIH, Intelligent Information Processing Lab (NCAI), University of Engineering and Technology, Peshawar 25000, Pakistan
- Department of Computer Systems Engineering, University of Engineering and Technology, Peshawar 25000, Pakistan
- Ihtisham ul Haq
- AIH, Intelligent Information Processing Lab (NCAI), University of Engineering and Technology, Peshawar 25000, Pakistan
- Department of Mechatronics Engineering, University of Engineering and Technology, Peshawar 25000, Pakistan
- Atif Jan
- Department of Electrical Engineering, University of Engineering and Technology, Peshawar 25000, Pakistan
- Muhammad Irfan Khattak
- Department of Electrical Engineering, University of Engineering and Technology, Peshawar 25000, Pakistan
- Yazeed Yasin Ghadi
- Department of Computer Science, Al Ain University, Al Ain 15551, United Arab Emirates
- Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University (PNU), Riyadh 11671, Saudi Arabia
3
Anjum S, Ahmed I, Asif M, Aljuaid H, Alturise F, Ghadi YY, Elhabob R. Lung Cancer Classification in Histopathology Images Using Multiresolution Efficient Nets. Comput Intell Neurosci 2023;2023:7282944. PMID: 37876944; PMCID: PMC10593544; DOI: 10.1155/2023/7282944.
Abstract
Histopathological images are very effective for investigating the status of various biological structures and diagnosing diseases such as cancer. In addition, digital histopathology increases diagnostic precision and provides better image quality and more detail for the pathologist, with multiple viewing options and team annotations. These benefits enable faster treatment, increasing therapy success rates and patients' chances of recovery and survival. However, manual examination of these images is tedious and time-consuming for pathologists, so reliable automated techniques are needed to effectively classify normal and malignant cancer images. This paper applied a deep learning approach, namely EfficientNet and its variants from B0 to B7, using a different image resolution for each model, from 224 × 224 to 600 × 600 pixels. We also applied transfer learning and parameter-tuning techniques to improve the results and overcome overfitting. We used the LC25000 Lung and Colon Cancer Histopathological Image dataset, which consists of 25,000 histopathology images in five classes (lung adenocarcinoma, lung squamous cell carcinoma, benign lung tissue, colon adenocarcinoma, and colon benign tissue). We preprocessed the dataset to remove noisy images and bring them into a standard format, and evaluated model performance in terms of classification accuracy and loss. All variants achieved good accuracy; EfficientNetB2 stood out with an accuracy of 97% on 260 × 260 pixel images.
Affiliation(s)
- Sunila Anjum
- Center of Excellence in Information Technology, Institute of Management Sciences, Hayatabad, Peshawar 25000, Pakistan
- Imran Ahmed
- School of Computing and Information Science, Anglia Ruskin University, Cambridge, UK
- Muhammad Asif
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
- Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Fahad Alturise
- Department of Computer, College of Science and Arts in Ar Rass, Qassim University, Ar Rass, Qassim, Saudi Arabia
- Yazeed Yasin Ghadi
- Department of Software Engineering/Computer Science, Al Ain University, Al Ain, UAE
- Rashad Elhabob
- College of Computer Science and Information Technology, Karary University, Omdurman, Sudan
4
Pathan RK, Uddin MA, Paul AM, Uddin MI, Hamd ZY, Aljuaid H, Khandaker MU. Monkeypox genome mutation analysis using a time-series model based on long short-term memory. PLoS One 2023;18:e0290045. PMID: 37611023; PMCID: PMC10446231; DOI: 10.1371/journal.pone.0290045.
Abstract
Monkeypox is an enveloped double-stranded DNA virus of the genus Orthopoxvirus in the family Poxviridae. The virus can be transmitted from human to human through direct contact with respiratory secretions, infected animals and humans, or contaminated objects, and it accumulates mutations in the human host. In May 2022, monkeypox cases were found in many countries, and because of its transmission characteristics the WHO proclaimed a public health emergency over the monkeypox virus on July 23, 2022. This study analyzed the gene mutation rate using the most recent NCBI monkeypox dataset. The collected data were prepared to identify nucleotide and codon mutations independently. Additionally, depending on the size and availability of the gene dataset, the computed mutation rate was split into three categories: Canada, Germany, and the rest of the world. The genome mutation rate of the monkeypox virus was predicted using a deep learning-based long short-term memory (LSTM) model and compared with a gated recurrent unit (GRU) model. The LSTM model shows root mean square error (RMSE) values of 0.09 and 0.08 for testing and training, respectively. Using this time-series analysis method, the prospective mutation rate of the 50th patient was predicted. Note that this is a new report on monkeypox gene mutation. The nucleotide mutation rates were found to be decreasing, and the balance between bi-directional rates is maintained.
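The LSTM itself is not reproduced here, but the supervised sliding-window framing on which such time-series models (and the reported RMSE metric) rely can be sketched as follows. The series values, window width, and the naive persistence baseline are all hypothetical illustrations, not the paper's data or model:

```python
import math

def make_windows(series, width):
    """Turn a series into (input window, next value) pairs -- the
    supervised framing used to train sequence models such as LSTMs."""
    return [(series[i:i + width], series[i + width])
            for i in range(len(series) - width)]

def persistence_forecast(window):
    """Naive baseline: predict that the next value equals the last one."""
    return window[-1]

def rmse(pairs, predict):
    """Root mean square error of a forecaster over (window, target) pairs."""
    errors = [(predict(w) - y) ** 2 for w, y in pairs]
    return math.sqrt(sum(errors) / len(errors))

# Hypothetical, synthetic mutation-rate series (NOT the paper's data):
rates = [0.30, 0.28, 0.27, 0.25, 0.24, 0.22, 0.21, 0.20]
pairs = make_windows(rates, width=3)
baseline_rmse = rmse(pairs, persistence_forecast)
```

An LSTM or GRU would replace `persistence_forecast`, being trained on the same (window, next value) pairs and scored with the same RMSE.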
Affiliation(s)
- Refat Khan Pathan
- Department of Computing and Information Systems, School of Engineering and Technology, Sunway University, Selangor, Malaysia
- Mohammad Amaz Uddin
- Department of Computer Science and Engineering, Chittagong University of Engineering & Technology, Chittagong, Bangladesh
- Ananda Mohan Paul
- Department of Computer Science and Engineering, BGC Trust University Bangladesh, Chittagong, Bangladesh
- Md. Imtiaz Uddin
- Department of Pharmacy, State University of Bangladesh, Dhaka, Bangladesh
- Zuhal Y. Hamd
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), Riyadh, Saudi Arabia
- Mayeen Uddin Khandaker
- Centre for Applied Physics and Radiation Technologies, School of Engineering and Technology, Sunway University, Selangor, Malaysia
- Department of General Educational Development, Faculty of Science and Information Technology, Daffodil International University, Dhaka, Bangladesh
5
Hamd Z, Elshami W, Al Kawas S, Aljuaid H, Abuzaid MM. A closer look at the current knowledge and prospects of artificial intelligence integration in dentistry practice: A cross-sectional study. Heliyon 2023;9:e17089. PMID: 37332919; PMCID: PMC10276225; DOI: 10.1016/j.heliyon.2023.e17089.
Abstract
Background: Some healthcare professionals have expressed worries about using AI, while others anticipate more work opportunities and better patient care in the future. Integrating AI into practice will directly impact dentistry. The purpose of this study is to evaluate organizational readiness, knowledge, attitude, and willingness to integrate AI into dentistry practice. Methods: A cross-sectional exploratory study of dentists, academic faculty, and students who practice and study dentistry in the UAE. Participants were invited to complete a previously validated survey collecting demographics, knowledge, perceptions, and organizational readiness. Results: One hundred thirty-four people responded to the survey, a response rate of 78% among the invited group. Results showed excitement about implementing AI in practice, accompanied by medium to high knowledge but a lack of education and training programs. Organizations were not well prepared and must ensure readiness for AI implementation. Conclusion: Efforts to ensure professional and student readiness will improve AI integration in practice. In addition, dental professional societies and educational institutions must collaborate to develop proper training programs for dentists to close the knowledge gap.
Affiliation(s)
- Zuhal Hamd
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Wiam Elshami
- Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
- Research Institute for Medical and Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
- Sausan Al Kawas
- College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Mohamed M. Abuzaid
- Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
- Research Institute for Medical and Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
6
Aljuaid H, Akhter I, Alsufyani N, Shorfuzzaman M, Alarfaj M, Alnowaiser K, Jalal A, Park J. Postures anomaly tracking and prediction learning model over crowd data analytics. PeerJ Comput Sci 2023;9:e1355. PMID: 37346503; PMCID: PMC10280427; DOI: 10.7717/peerj-cs.1355.
Abstract
Innovative technology and improvements in intelligent machinery, transportation facilities, emergency systems, and educational services define the modern era, yet comprehending a scene, analyzing crowds, and observing people remain difficult. This article proposes an organized e-learning-based multi-object tracking and prediction framework for crowd data via a multilayer perceptron, taking e-learning crowd data as input and covering both usual and abnormal actions and activities. After superpixel segmentation and fuzzy c-means clustering, fused dense optical flow and gradient patches were used for feature extraction, and a compressive tracking algorithm together with a Taylor-series predictive tracking approach was applied for multi-object tracking. The next step is to find the mean, variance, speed, and frame occupancy used for trajectory extraction. To reduce data complexity and support optimization, T-distributed stochastic neighbor embedding (t-SNE) was applied. To predict normal and abnormal actions in e-learning-based crowd data, a multilayer perceptron (MLP) was used to classify the numerous classes. Experiments were run on three crowd-activity datasets containing human and non-human videos: UCSD-Ped, ShanghaiTech, and the Indian Institute of Technology Bombay (IITB) corridor dataset. We achieve mean accuracies of 87.00% on UCSD-Ped, 85.75% on ShanghaiTech, and 88.00% on the IITB corridor dataset.
Affiliation(s)
- Hanan Aljuaid
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Israr Akhter
- Department of Computer Science, Bahria University, Islamabad, Pakistan
- Nawal Alsufyani
- Department of Computer Science, Taif University, Taif, Saudi Arabia
- Mohammed Alarfaj
- Department of Electrical Engineering, King Faisal University, Al-Ahsa, Saudi Arabia
- Khaled Alnowaiser
- Department of Computer Engineering, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Ahmad Jalal
- Department of Computer Science, Air University, Islamabad, Pakistan
- Jeongmin Park
- Department of Computer Engineering, Tech University of Korea, Sangidaehak-ro, Siheung-si, South Korea
7
Ghadi YY, AlShloul T, Nezami ZI, Ali H, Asif M, Aljuaid H, Ahmad S. An efficient optimizer for the 0/1 knapsack problem using group counseling. PeerJ Comput Sci 2023;9:e1315. PMID: 37346609; PMCID: PMC10280447; DOI: 10.7717/peerj-cs.1315.
Abstract
The field of optimization is concerned with determining the best solution to a problem, expressed as the mathematical minimization or maximization of a given objective function: optimization must reduce a problem's losses and disadvantages while maximizing its gains and benefits. We all want optimal or, at the very least, near-optimal answers. The group counseling optimizer (GCO) is an emerging evolutionary algorithm that simulates the human behavior of counseling within a group to solve problems, and it has been successfully applied to single- and multi-objective optimization problems. The 0/1 knapsack problem is a combinatorial problem in which each item is either selected entirely or dropped, so that the total weight of the selected items is at most the knapsack capacity and the total value of the selected items is as large as possible. Dynamic programming solves the 0/1 knapsack problem optimally, but the time complexity of dynamic programming is O(n³). In this article, we provide a feature analysis of GCO parameters and use GCO to solve the 0/1 knapsack problem (KP). The results show that the GCO-based approach solves the 0/1 knapsack problem efficiently and is therefore a viable alternative.
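For contrast with the GCO approach, the exact dynamic-programming solution mentioned in the abstract can be sketched as a standard table-filling routine; the toy instance below is hypothetical:

```python
def knapsack_01(values, weights, capacity):
    """Classic dynamic-programming solution to the 0/1 knapsack problem.
    dp[c] holds the best total value achievable with capacity c."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Hypothetical toy instance (not from the paper):
values = [60, 100, 120]
weights = [10, 20, 30]
best = knapsack_01(values, weights, 50)  # → 220 (items 2 and 3)
```

Metaheuristics such as GCO trade this exactness for scalability: they search the space of item subsets directly instead of filling a value table over every capacity.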
Affiliation(s)
- Yazeed Yasin Ghadi
- Department of Computer Science/Software Engineering, Al Ain University, Al Ain, UAE
- Tamara AlShloul
- College of General Education, Liwa College of Technology, Abu Dhabi, UAE
- Zahid Iqbal Nezami
- Department of Computer Science, The Superior University, Lahore, Pakistan
- Hamid Ali
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
- Muhammad Asif
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
- Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Shahbaz Ahmad
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
8
Hamd ZY, Almohammed HI, Lashin MM, Yousef M, Aljuaid H, Khawaji SM, Alhussain NI, Salami AH, Alsowayan RA, Alshaik FA, Alshehri TK, Aldossari DM, Albogami NF, Khandaker MU. Artificial intelligence-based fuzzy logic systems for predicting radiation protection awareness levels among university population. Radiat Phys Chem 2023. DOI: 10.1016/j.radphyschem.2023.110888.
9
Shafiq N, Hamid I, Asif M, Nawaz Q, Aljuaid H, Ali H. Abstractive text summarization of low-resourced languages using deep learning. PeerJ Comput Sci 2023;9:e1176. PMID: 37346684; PMCID: PMC10280265; DOI: 10.7717/peerj-cs.1176.
Abstract
Background: Humans must cope with the huge amounts of information produced by the information technology revolution, so automatic text summarization is being employed across industries to help individuals identify the most important information. Two approaches are mainly considered: extractive and abstractive summarization. The extractive approach selects chunks of sentences from the source documents, while the abstractive approach can generate a summary based on mined keywords. For low-resourced languages, e.g., Urdu, extractive summarization uses various models and algorithms, but abstractive summarization in Urdu is still a challenging task; because there are so many literary works in Urdu, producing abstractive summaries demands extensive research. Methodology: This article proposed a deep learning model for the Urdu language using the Urdu 1 Million news dataset and compared its performance with two widely used machine learning methods, support vector machine (SVM) and logistic regression (LR). The results show that the proposed deep learning model performs better than the other two approaches. The extractive summaries are then processed with the encoder-decoder paradigm to create an abstractive summary. Results: The system-generated summaries were validated with the help of Urdu language specialists, showing the proposed model's improvement and accuracy.
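As a minimal illustration of the extractive strategy described above (word-frequency sentence scoring; not the paper's Urdu model, and shown on hypothetical English text):

```python
import re
from collections import Counter

def extractive_summary(text, k=1):
    """Rank sentences by mean word frequency and keep the top k --
    a bare-bones version of the extractive approach described above."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    return sorted(sentences, key=score, reverse=True)[:k]

# Hypothetical English toy text (the paper works on Urdu news):
doc = "Cats sleep. Cats and dogs sleep a lot. Birds fly."
top = extractive_summary(doc, k=1)  # → ['Cats sleep.']
```

In the paper's pipeline, sentences selected this way would then be fed to an encoder-decoder model to produce the abstractive summary.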
Affiliation(s)
- Nida Shafiq
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
- Isma Hamid
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
- Muhammad Asif
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
- Qamar Nawaz
- Department of Computer Science, University of Agriculture Faisalabad, Faisalabad, Pakistan
- Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Hamid Ali
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
10
Nezami ZI, Ali H, Asif M, Aljuaid H, Hamid I, Ali Z. An efficient and secure technique for image steganography using a hash function. PeerJ Comput Sci 2022;8:e1157. PMID: 36532801; PMCID: PMC9748815; DOI: 10.7717/peerj-cs.1157.
Abstract
Steganography is a technique for hiding information in digital media: the message is so well concealed that others cannot even suspect the information's existence. This article develops a mechanism for communicating one-on-one by concealing information from the rest of the group. Given their availability, digital images are the most suitable carriers compared with other objects available on the internet, so the proposed technique encrypts a message within an image. Several steganographic techniques exist for hiding secret information in images, some more complex than others, each with its strengths and weaknesses; the encryption mechanism employed may have different requirements depending on the application. For example, certain applications may require complete invisibility of the key information, while others may require the concealment of a larger secret message. In this research, we proposed a technique that converts plain text to ciphertext and encodes it in an image using up to the four least significant bits (LSBs), based on a hash function: the LSBs of the image pixel values are substituted with pieces of the text. Since only the LSBs are modified, human eyes cannot perceive the difference between the initial image and the resulting image. The proposed technique is compared with state-of-the-art techniques; the results reveal that it outperforms them in security and efficiency, with adequate mean squared error (MSE) and peak signal-to-noise ratio (PSNR).
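A minimal sketch of hash-guided LSB embedding, assuming a flat list of grayscale pixel values: only one LSB per pixel is used here, and SHA-256 of a shared key merely picks the starting offset. This is a simplification of the paper's up-to-four-LSB, hash-based scheme, and all names and data below are illustrative:

```python
import hashlib

def _bits(data: bytes):
    """Yield the bits of a byte string, most significant first."""
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def _start_offset(pixels, payload_bits, key: bytes):
    """Derive the embedding offset from a hash of the shared key."""
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:2], "big") % (len(pixels) - payload_bits)

def embed(pixels, message: bytes, key: bytes):
    """Hide `message` in pixel LSBs, starting at a key-derived offset."""
    start = _start_offset(pixels, 8 * len(message), key)
    out = list(pixels)
    for i, bit in enumerate(_bits(message)):
        out[start + i] = (out[start + i] & ~1) | bit  # overwrite the LSB
    return out

def extract(pixels, length: int, key: bytes) -> bytes:
    """Recover `length` bytes using the same key-derived offset."""
    start = _start_offset(pixels, 8 * length, key)
    bits = [pixels[start + i] & 1 for i in range(8 * length)]
    return bytes(
        sum(b << (7 - j) for j, b in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8))

cover = list(range(256)) * 40           # stand-in for grayscale pixel data
stego = embed(cover, b"hi", b"secret")  # pixels change by at most 1 each
recovered = extract(stego, 2, b"secret")
```

Because each pixel value changes by at most 1, the MSE between cover and stego image stays small and the PSNR correspondingly high, which is what makes LSB substitution visually imperceptible.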
Affiliation(s)
- Zahid Iqbal Nezami
- Department of Computer Science, The Superior University Lahore, Lahore, Punjab, Pakistan
- Hamid Ali
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
- Muhammad Asif
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
- Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Isma Hamid
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
- Zulfiqar Ali
- Department of Computer Science, National University of Technology, Islamabad, Pakistan
11
Aljuaid H, Alturki N, Alsubaie N, Cavallaro L, Liotta A. Computer-aided diagnosis for breast cancer classification using deep neural networks and transfer learning. Comput Methods Programs Biomed 2022;223:106951. PMID: 35767911; DOI: 10.1016/j.cmpb.2022.106951.
Abstract
BACKGROUND AND OBJECTIVE: Many developed and developing countries worldwide suffer from fatal cancer-related diseases. In particular, the rate of breast cancer in females increases daily, partly because cases go unrecognized and undiagnosed at the early stages. Proper first-line breast cancer treatment can only be provided by adequately detecting and classifying cancer at the very early stages of its development. Medical image analysis techniques and computer-aided diagnosis can help accelerate and automate both cancer detection and classification, while also training and aiding less experienced physicians. For large datasets of medical images, convolutional neural networks play a significant role in detecting and classifying cancer effectively. METHODS: This article presents a novel computer-aided diagnosis method for breast cancer classification (both binary and multi-class), using a combination of deep neural networks (ResNet-18, ShuffleNet, and Inception-V3) and transfer learning on the publicly available BreakHis dataset. RESULTS AND CONCLUSIONS: The proposed method achieves the best average accuracies for binary classification of benign versus malignant cases of 99.7%, 97.66%, and 96.94% for ResNet, Inception-V3, and ShuffleNet, respectively. Average accuracies for multi-class classification were 97.81%, 96.07%, and 95.79% for ResNet, Inception-V3, and ShuffleNet, respectively.
Affiliation(s)
- Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Najah Alsubaie
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Lucia Cavallaro
- Faculty of Computer Science, Free University of Bozen-Bolzano, Piazza Domenicani 3, Bolzano 39100, Italy
- Antonio Liotta
- Faculty of Computer Science, Free University of Bozen-Bolzano, Piazza Domenicani 3, Bolzano 39100, Italy
12
Batool D, Shahbaz M, Shahzad Asif H, Shaukat K, Alam TM, Hameed IA, Ramzan Z, Waheed A, Aljuaid H, Luo S. A Hybrid Approach to Tea Crop Yield Prediction Using Simulation Models and Machine Learning. Plants (Basel) 2022;11:1925. PMID: 35893629; PMCID: PMC9332224; DOI: 10.3390/plants11151925.
Abstract
Tea (Camellia sinensis L.) is one of the most highly consumed beverages globally after water, and several countries import large quantities of tea to meet domestic needs, so accurate and timely prediction of tea yield is critical. Previous studies have used statistical, deep learning, and machine learning techniques for tea yield prediction, but crop simulation models have not yet been used; calibrating a simulation model for tea yield prediction and comparing these approaches across different data types is therefore needed. This study provides a comparative study of methods for tea yield prediction using the Food and Agriculture Organization (FAO) of the United Nations AquaCrop simulation model and machine learning techniques. We employed weather, soil, crop, and agro-management data from 2016 to 2019, acquired from tea fields of the National Tea and High-Value Crop Research Institute (NTHRI), Pakistan, to calibrate the AquaCrop simulation model and to train regression algorithms. Calibration of the AquaCrop model achieved a mean absolute error (MAE) of 0.45 t/ha, a mean squared error (MSE) of 0.23, and a root mean square error (RMSE) of 0.48 t/ha. Of the ten regression models, the XGBoost regressor achieved the lowest errors: MAE of 0.093 t/ha, MSE of 0.015, and RMSE of 0.120 t/ha with 10-fold cross-validation, and MAE of 0.123 t/ha, MSE of 0.024, and RMSE of 0.154 t/ha with a train-test split. We concluded that the machine learning regression algorithm performed better in yield prediction using fewer data than the simulation model. This study provides a technique to improve tea yield prediction by combining different data sources, a crop simulation model, and machine learning algorithms.
Affiliation(s)
- Dania Batool
- Department of Computer Engineering, University of Engineering and Technology, Lahore 58590, Pakistan; (D.B.); (M.S.)
| | - Muhammad Shahbaz
- Department of Computer Engineering, University of Engineering and Technology, Lahore 58590, Pakistan; (D.B.); (M.S.)
| | - Hafiz Shahzad Asif
- Department of Computer Science, New Campus, University of Engineering and Technology, Lahore 58590, Pakistan;
| | - Kamran Shaukat
- School of Information and Physical Sciences, The University of Newcastle, Newcastle 2308, Australia;
- Department of Data Science, University of the Punjab, Lahore 54890, Pakistan
| | - Talha Mahboob Alam
- Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore 58590, Pakistan
| | - Ibrahim A. Hameed
- Department of ICT and Natural Sciences, Norwegian University of Science and Technology, 7034 Trondheim, Norway
| | - Zeeshan Ramzan
- Department of Computer Science, New Campus, University of Engineering and Technology, Lahore 58590, Pakistan;
| | - Abdul Waheed
- National Tea and High-Value Crops Research Institute, Shinkiari, Mansehra 21300, Pakistan;
| | - Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), P.O. Box 84428, Riyadh 11671, Saudi Arabia;
| | - Suhuai Luo
- School of Information and Physical Sciences, The University of Newcastle, Newcastle 2308, Australia;
| |
|
13
|
Aljuaid H, Mahmoud HAH. Methodology for Exploring Patterns of Epigenetic Information in Cancer Cells Using Data Mining Technique. Healthcare (Basel) 2021; 9:1652. [PMID: 34946378 PMCID: PMC8700852 DOI: 10.3390/healthcare9121652] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 11/22/2021] [Accepted: 11/23/2021] [Indexed: 12/12/2022] Open
Abstract
Epigenetic changes are a characteristic of all cancer types, and tumor cells typically harbor both genetic changes and epigenetic alterations. Identifying epigenetic features shared among various cancer types is highly valuable for discovering appropriate treatments, and profiles of epigenetic alterations can aid in this goal. In this paper, we propose a new technique that applies data mining and clustering methodologies to the analysis of epigenetic changes in cancer. The proposed technique aims to detect common patterns of epigenetic changes across various cancer types. We validated the new technique by detecting epigenetic patterns across seven cancer types and by determining epigenetic similarities among them. The experimental results demonstrate that common epigenetic patterns do exist across these cancer types. Additionally, epigenetic analysis of the associated genes found a strong relationship with the development of various types of cancer and indicated high risk across the studied cancer types. We utilized the frequent-pattern data mining approach to represent cancer types compactly in the promoters for some epigenetic marks. From the constructed frequent-pattern itemset, the most frequent items are identified, yielding groups of bi-clusters of these patterns. Experimental results show that the proposed method detects cancer types according to specific epigenetic patterns with a success rate of 88%.
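The frequent-pattern step can be illustrated with a small Apriori-style support count; the mark names and samples below are hypothetical, not drawn from the paper's data:

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_size=2):
    """Enumerate itemsets of epigenetic marks whose support meets min_support."""
    counts = Counter()
    for marks in transactions:
        for size in range(1, max_size + 1):
            for itemset in combinations(sorted(marks), size):
                counts[itemset] += 1
    n = len(transactions)
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

# Hypothetical samples: each lists epigenetic marks observed at gene promoters.
samples = [
    {"H3K4me3", "H3K27me3"},
    {"H3K4me3", "H3K27me3", "DNAme"},
    {"H3K4me3", "DNAme"},
]
patterns = frequent_itemsets(samples, min_support=0.66)
```

The surviving high-support itemsets form the compact representation from which bi-clusters of cancer types and marks can then be grouped.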
Affiliation(s)
- Hanan Aljuaid
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11047, Saudi Arabia;
| | - Hanan A. Hosni Mahmoud
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11047, Saudi Arabia;
- Department of Computer and Systems Engineering, Faculty of Engineering, University of Alexandria, Alexandria 21544, Egypt
| |
|
14
|
Imdad U, Tahir Ahmed M, Asif M, Aljuaid H. 3D point cloud lossy compression using quadric surfaces. PeerJ Comput Sci 2021; 7:e675. [PMID: 34712788 PMCID: PMC8507489 DOI: 10.7717/peerj-cs.675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Accepted: 07/22/2021] [Indexed: 06/13/2023]
Abstract
The presence of 3D sensors in hand-held or head-mounted smart devices has motivated many researchers around the globe to devise algorithms to manage 3D point cloud data efficiently and economically. This paper presents a novel lossy compression technique to compress and decompress 3D point cloud data, saving storage space on smart devices and minimizing bandwidth use when the data are transferred over a network. The idea presented in this research exploits geometric information of the scene by using a quadric surface representation of the point cloud. A region of a point cloud can be represented by the coefficients of a quadric surface when the boundary conditions are known. Thus, a set of quadric surface coefficients and their associated boundary conditions are stored as the compressed point cloud and used for decompression. An added advantage of the proposed technique is its flexibility to decompress the cloud as either a dense or a coarse cloud. We compared our technique with state-of-the-art 3D lossless and lossy compression techniques on a number of standard publicly available datasets with varying structural complexities.
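The decompression side can be sketched as re-sampling a stored quadric over its boundary conditions. The coefficient layout below (z as a quadratic in x and y) is an assumption for illustration; the paper's exact quadric parameterization is not reproduced here:

```python
def decompress_region(coeffs, bounds, step):
    """Re-sample a point-cloud region from stored quadric coefficients.

    coeffs: (a, b, c, d, e, f) for z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    bounds: (xmin, xmax, ymin, ymax) -- the stored boundary conditions
    step:   sampling interval; smaller -> denser reconstructed cloud
    """
    a, b, c, d, e, f = coeffs
    xmin, xmax, ymin, ymax = bounds
    points = []
    x = xmin
    while x <= xmax + 1e-9:
        y = ymin
        while y <= ymax + 1e-9:
            z = a * x * x + b * y * y + c * x * y + d * x + e * y + f
            points.append((x, y, z))
            y += step
        x += step
    return points

# A shallow paraboloid patch stored as just 6 floats plus its bounds.
cloud = decompress_region((0.1, 0.1, 0.0, 0.0, 0.0, 1.0),
                          (0.0, 1.0, 0.0, 1.0), step=0.5)
```

Choosing a smaller `step` regenerates a denser cloud from the same stored coefficients, which is the dense-versus-coarse flexibility described above.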
Affiliation(s)
- Ulfat Imdad
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
| | - Mirza Tahir Ahmed
- Department of Electrical and Computer Engineering, Queen’s University, Kingston, Canada
| | - Muhammad Asif
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
| | - Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), Riyadh, Saudi Arabia
| |
|
15
|
Shahid A, Afzal MT, Alharbi A, Aljuaid H, Al-Otaibi S. In-text citation's frequencies-based recommendations of relevant research papers. PeerJ Comput Sci 2021; 7:e524. [PMID: 34150995 PMCID: PMC8189020 DOI: 10.7717/peerj-cs.524] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Accepted: 04/14/2021] [Indexed: 06/13/2023]
Abstract
For the past half-century, identification of relevant documents has been an active area of research due to the rapid increase of data on the web. Traditional models for retrieving relevant documents are based on bibliographic information such as bibliographic coupling, co-citations, and direct citations. In the recent past, however, the scientific community has started to employ textual features to improve the accuracy of existing models. In our previous study, we found that analyzing citations at a deep level (i.e., content level) can play a paramount role in finding more relevant documents than surface-level analysis (i.e., just bibliography details). We found that cited and citing papers have a high degree of relevancy when the in-text citation frequency of the cited paper is more than five times in the citing paper's text. This paper extends our previous study by evaluating it on a comprehensive dataset. Moreover, the results are compared with other state-of-the-art approaches, i.e., content-, metadata-, and bibliography-based approaches. For evaluation, a user study was conducted on papers selected from 1,200 documents (comprising about 16,000 references) of an online journal, the Journal of Universal Computer Science (J.UCS). The evaluation results indicate that in-text citation frequency attains higher precision in finding relevant papers than other state-of-the-art techniques such as content-, bibliographic-coupling-, and metadata-based techniques. The use of in-text citations may help enhance the quality of existing information systems and digital libraries. Furthermore, more sophisticated measures may be defined by considering the use of in-text citations.
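The core signal, in-text citation frequency, can be sketched with a simple marker count. The bracketed-number citation style and the sample text are hypothetical; only the more-than-five threshold comes from the description above:

```python
import re

def in_text_citation_frequency(body_text, marker):
    """Count how many times a reference marker (e.g. '[7]') is cited in the body."""
    return len(re.findall(re.escape(marker), body_text))

def is_highly_relevant(body_text, marker, threshold=5):
    """Flag the cited paper as highly relevant when its in-text frequency
    exceeds the threshold identified in the study."""
    return in_text_citation_frequency(body_text, marker) > threshold

paper = "As shown in [7] ... [7] further argues ... we extend [7] ... see [2]."
freq = in_text_citation_frequency(paper, "[7]")
relevant = is_highly_relevant(paper, "[7]")
```

A real system would first resolve each in-text marker to its bibliography entry; the count itself is this simple.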
Affiliation(s)
- Abdul Shahid
- Institute of Computing, Kohat University of Science & Technology, Kohat, Pakistan
| | | | - Abdullah Alharbi
- Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
| | - Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University (PNU), Riyadh, Saudi Arabia
| | - Shaha Al-Otaibi
- Information Systems Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
| |
|
16
|
Tahir N, Asif M, Ahmad S, Malik MSA, Aljuaid H, Butt MA, Rehman M. FNG-IE: an improved graph-based method for keyword extraction from scholarly big-data. PeerJ Comput Sci 2021; 7:e389. [PMID: 33817035 PMCID: PMC7959634 DOI: 10.7717/peerj-cs.389] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2020] [Accepted: 01/20/2021] [Indexed: 06/12/2023]
Abstract
Keyword extraction is essential for determining influential keywords in huge documents, as research repositories grow massively in volume day by day; the research community is drowning in data and starving for information. Keywords are the few words that precisely describe the theme of a whole document. Many state-of-the-art approaches are available for keyword extraction from large document collections, and they fall into three types: statistical approaches, machine learning approaches, and graph-based methods. Machine learning approaches require a large training dataset that must be developed manually by domain experts, which is sometimes difficult to produce when determining influential keywords. This research therefore focused on enhancing state-of-the-art graph-based methods to extract keywords when a training dataset is unavailable. We first converted a handcrafted dataset, collected from impact-factor journals, into n-gram combinations ranging from unigrams to pentagrams, and also enhanced traditional graph-based approaches. The experiment was conducted on the handcrafted dataset, to which all methods were applied. Domain experts performed a user study to evaluate the results. The results of every method were evaluated against the user study using precision, recall, and F-measure as evaluation metrics. The results showed that the proposed method (FNG-IE) performed well, scoring close to the machine learning approaches.
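The n-gram candidate generation (unigram to pentagram) described above can be sketched as follows; whitespace tokenization is a simplifying assumption:

```python
def ngrams(text, n):
    """Return all n-grams (as tuples) from a whitespace-tokenized text."""
    tokens = text.lower().split()
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def all_ngrams(text, max_n=5):
    """Unigram-to-pentagram keyword candidates, as used in the study."""
    out = []
    for n in range(1, max_n + 1):
        out.extend(ngrams(text, n))
    return out

candidates = all_ngrams("graph based keyword extraction method")
```

In a graph-based method, such candidates become graph nodes, with edges weighted by co-occurrence, before ranking.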
Affiliation(s)
- Noman Tahir
- Department of Computer Science, National Textile University, Faisalabad, Punjab, Pakistan
| | - Muhammad Asif
- Department of Computer Science, National Textile University, Faisalabad, Punjab, Pakistan
| | - Shahbaz Ahmad
- Department of Computer Science, National Textile University, Faisalabad, Punjab, Pakistan
| | | | - Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), Riyadh, Saudi Arabia
| | - Muhammad Arif Butt
- Punjab University College of Information Technology (PUCIT), University of the Punjab (PU), Lahore, Pakistan
| | - Mobashar Rehman
- Faculty of Information and Communication Technology, Universiti Tunku Abdul Rahman, Kampar, Perak, Malaysia
| |
|
17
|
Aljuaid H, Parah SA. Secure Patient Data Transfer Using Information Embedding and Hyperchaos. Sensors (Basel) 2021; 21:E282. [PMID: 33406623 PMCID: PMC7795495 DOI: 10.3390/s21010282] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Revised: 12/28/2020] [Accepted: 12/30/2020] [Indexed: 11/16/2022]
Abstract
Health 4.0 is an extension of the Industry 4.0 standard aimed at the virtualization of healthcare services. It employs core technologies and services for integrated management of electronic health records (EHRs) captured through various sensors. The EHR is processed and transmitted to distant experts for better diagnosis and improved healthcare delivery. However, many challenges remain for the successful implementation of Health 4.0. One critical issue that needs attention is the security of EHRs in smart health systems. In this work, we developed a new interpolation scheme capable of providing better-quality cover media and supporting reversible EHR embedding. The scheme provides a double layer of security for the EHR: hyperchaos is first used to encrypt the EHR, and the encrypted EHR is then reversibly embedded in cover images produced by the proposed interpolation scheme. The proposed interpolation module was found to provide better-quality interpolated images. The proposed system provides an average peak signal-to-noise ratio (PSNR) of 52.38 dB for a high payload of 0.75 bits per pixel. In addition to embedding the EHR, a fragile watermark (WM), also encrypted using hyperchaos, is embedded into the cover image for tamper detection and authentication of the received EHR. Experimental investigations reveal that our scheme provides improved performance for high-contrast medical images (MI) compared to various techniques on evaluation parameters such as imperceptibility, reversibility, payload, and computational complexity. Given these attributes, the scheme can be used to enhance the security of EHRs in Health 4.0.
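The reported imperceptibility metric, PSNR, can be computed as below; the pixel values are hypothetical and the embedding itself is not shown:

```python
import math

def psnr(original, stego, max_val=255):
    """Peak signal-to-noise ratio between a cover image and its stego version,
    given as flat sequences of 8-bit pixel values."""
    mse = sum((o - s) ** 2 for o, s in zip(original, stego)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Hypothetical flattened pixel rows before and after embedding.
cover = [120, 121, 119, 200]
stego = [121, 121, 118, 200]
value = psnr(cover, stego)
```

Values above roughly 40 dB are generally considered imperceptible distortion, which puts the study's 52.38 dB at a high payload in context.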
Affiliation(s)
- Hanan Aljuaid
- Department of Computer Science, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University (PNU), Riyadh 84428, Saudi Arabia;
| | - Shabir A. Parah
- Department of Electronics and IT, University of Kashmir, Srinagar 190006, India
| |
|
18
|
Khan FA, Butt AUR, Asif M, Aljuaid H, Adnan A, Shaheen S, Haq IU. Burnt Human Skin Segmentation and Depth Classification Using Deep Convolutional Neural Network (DCNN). j med imaging hlth inform 2020. [DOI: 10.1166/jmihi.2020.3258] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
The World Health Organization (WHO) manages health-related statistics around the world and takes the necessary measures, tracking what could improve health and what the leading causes of death may be. Burn injuries occur mostly in middle- and low-income countries due to a lack of resources, and serious burn injuries can result in death. Because specialists and burn surgeons are not accessible, simple and basic health care units situated in tribal areas as well as in small cities face difficulty in diagnosing burn depths accurately. The primary goals of this research are to segment the burnt region of skin from normal skin and to diagnose the burn depth according to the level of burn. The dataset contains 600 images of burn patients and was collected in a real-time environment from the Allied Burn and Reconstructive Surgery Unit (ABRSU), Faisalabad, Pakistan. Burnt human skin segmentation was carried out using Otsu's method, and the image feature vector was obtained using statistical calculations such as the mean and median. A deep-learning-based classifier, a deep convolutional neural network (DCNN), was used to classify the burnt human skin into different depths according to the level of burn. About 60 percent of the images were used to train the classifier, and the remaining 40 percent of the burnt-skin images were used to estimate its average accuracy. The average accuracy of the DCNN classifier was 83.4 percent, the best result yet. With these results, young physicians and practitioners may be able to diagnose burn depths and begin the proper medication.
|
19
|
Asif M, Ishtiaq A, Ahmad H, Aljuaid H, Shah J. Sentiment analysis of extremism in social media from textual information. Telematics and Informatics 2020. [DOI: 10.1016/j.tele.2020.101345] [Citation(s) in RCA: 46] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
20
|
Nazir S, Asif M, Ahmad S, Bukhari F, Afzal MT, Aljuaid H. Important citation identification by exploiting content and section-wise in-text citation count. PLoS One 2020; 15:e0228885. [PMID: 32134940 PMCID: PMC7058319 DOI: 10.1371/journal.pone.0228885] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2019] [Accepted: 01/24/2020] [Indexed: 12/01/2022] Open
Abstract
A citation is deemed a potential parameter for determining linkage between research articles. The parameter has been extensively employed for multifarious academic purposes, such as calculating the impact factor of journals and the h-index of researchers, allocating research grants, and finding the latest research trends. The current state of the art contends that not all citations are of equal importance. Based on this argument, the citation classification community currently categorizes citations as important or non-important. The community has proposed different approaches to extract important citations, such as citation-count-, context-, metadata-, and text-based approaches. However, the contemporary state of the art in citation classification ignores significant potential features that can play a vital role. This research presents a novel approach to binary citation classification that exploits section-wise in-text citation frequencies, a similarity score, and overall citation-count-based features. The study also introduces a novel machine-learning-based approach for assigning appropriate weights to the logical sections of research papers; the weights are allocated to citations with respect to their sections. To perform the classification, we used three techniques: Support Vector Machine, Kernel Linear Regression, and Random Forest. The experiment was performed on two annotated benchmark datasets that contain 465 and 311 citation pairs of research articles, respectively. The results revealed that the proposed approach attained improved precision (0.84 vs. 0.72) over the contemporary state-of-the-art approach.
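The section-wise weighting idea can be sketched as a weighted sum of per-section in-text citation counts. The section names and weight values below are hypothetical placeholders for the weights the paper learns with machine learning:

```python
# Hypothetical section weights; the study learns such weights algorithmically.
SECTION_WEIGHTS = {"introduction": 0.2, "related_work": 0.1,
                   "method": 0.4, "results": 0.3}

def weighted_citation_score(section_counts):
    """Combine section-wise in-text citation counts into one weighted feature.

    section_counts maps a logical section name to how many times the cited
    paper is referenced within that section of the citing paper.
    """
    return sum(SECTION_WEIGHTS.get(sec, 0.0) * n
               for sec, n in section_counts.items())

# A citation appearing twice in the method section and once in results.
score = weighted_citation_score({"method": 2, "results": 1})
```

Such a score, together with similarity and overall citation-count features, would then feed the SVM, regression, or Random Forest classifier.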
Affiliation(s)
- Shahzad Nazir
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
| | - Muhammad Asif
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
- * E-mail:
| | - Shahbaz Ahmad
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
| | - Faisal Bukhari
- Punjab University College of Information Technology (PUCIT), University of the Punjab (PU), Lahore, Pakistan
| | - Muhammad Tanvir Afzal
- Department of Computer Science, Capital University of Science and Technology, Islamabad, Pakistan
| | - Hanan Aljuaid
- Department of Computer Science, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), Riyadh, Saudi Arabia
| |
|