1
Girdhar N, Sharma D, Kumar R, Sahu M, Lin CC. Emerging trends in biomedical trait-based human identification: A bibliometric analysis. SLAS Technol 2024; 29:100136. [PMID: 38677477] [DOI: 10.1016/j.slast.2024.100136]
Abstract
Personal human identification is a crucial aspect of modern society, with applications spanning law enforcement, healthcare, and digital security. This bibliometric paper presents a comprehensive analysis of recent advances in personal human identification methodologies focusing on biomedical traits. The paper examines a diverse range of research articles, reviews, and patents published over the last decade to provide insights into the evolving landscape of biometric identification techniques. The study categorizes the identified literature into distinct biomedical trait categories, including, but not limited to, fingerprint and palmprint recognition, iris and retinal scanning, facial recognition, voice and speech analysis, gait recognition, and DNA-based identification. Through systematic analysis, the paper highlights key trends, emerging technologies, and collaborations in each category, revealing the interdisciplinary nature of research in this field. Furthermore, the bibliometric analysis examines the geographical distribution of research efforts, identifying prominent countries and institutions contributing to advancements in personal human identification. Collaboration networks among researchers and institutions are visualized to depict the knowledge flow and collaborative dynamics within the field. Overall, this study serves as a valuable reference for researchers, practitioners, and policymakers, shedding light on the current status and potential future directions of personal human identification leveraging biomedical traits.
Affiliation(s)
- Nancy Girdhar
- L3i, University of La Rochelle, La Rochelle, 17000, France.
- Deepak Sharma
- Department of Computer Science, Christian-Albrechts-University zu Kiel, Kiel, 24118, Germany.
- Rajeev Kumar
- Blockchain Technology Research Lab, Department of Computer Science and Engineering, Delhi Technological University, New Delhi, 110042, India.
- Monalisa Sahu
- Amrita School of Computing, Amrita Vishwa Vidyapeetham, Amaravati, Andhra Pradesh, 522503, India.
- Chia-Chen Lin
- Department of Computer Science and Information Engineering, National Chin-Yi University of Technology, Taichung, 411030, Taiwan.
2
El_Rahman SA, Alluhaidan AS. Enhanced multimodal biometric recognition systems based on deep learning and traditional methods in smart environments. PLoS One 2024; 19:e0291084. [PMID: 38358992] [PMCID: PMC10868857] [DOI: 10.1371/journal.pone.0291084]
Abstract
In the field of data security, biometric security is a significant emerging concern. A multimodal biometric system with enhanced accuracy and detection rate for smart environments is still a significant challenge. The fusion of an electrocardiogram (ECG) signal with a fingerprint is an effective multimodal recognition system. In this work, unimodal and multimodal biometric systems using a Convolutional Neural Network (CNN) are developed and compared with traditional methods using different levels of fusion of fingerprint and ECG signals. This study evaluates the effectiveness of the proposed parallel and sequential multimodal biometric systems with various feature extraction and classification methods. Additionally, the performance of unimodal ECG and fingerprint biometrics utilizing deep learning and traditional classification techniques is examined. The suggested biometric systems were evaluated using the ECG (MIT-BIH) and fingerprint (FVC2004) databases. Additional tests were conducted to examine the suggested models with: 1) the original dataset without augmentation (ODB) and 2) the virtual dataset with augmentation (VDB). The findings show that the optimum performance of the parallel multimodal system achieved 0.96 Area Under the ROC Curve (AUC) and the sequential multimodal system achieved 0.99 AUC, in comparison to the unimodal biometrics, which achieved 0.87 and 0.99 AUC for the fingerprint and ECG biometrics, respectively. The overall performance of the proposed multimodal biometrics outperformed unimodal biometrics using CNN. Moreover, the performance of the suggested CNN model for the ECG signal and the sequential multimodal system based on a neural network outperformed other systems. Lastly, the performance of the proposed systems is compared with previously existing works.
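The abstract does not give the fusion rule; a minimal score-level sketch (min-max normalization followed by a weighted sum, with the weight `w_fp` and the toy match scores being assumptions rather than the authors' actual method) looks like:

```python
def min_max_normalize(scores):
    """Scale raw matcher scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(fp_scores, ecg_scores, w_fp=0.5):
    """Weighted-sum score-level fusion of two normalized score lists."""
    fp_n = min_max_normalize(fp_scores)
    ecg_n = min_max_normalize(ecg_scores)
    return [w_fp * f + (1 - w_fp) * e for f, e in zip(fp_n, ecg_n)]

# Hypothetical raw match scores for four probe-gallery comparisons
fp = [20.0, 55.0, 80.0, 95.0]     # fingerprint matcher output
ecg = [0.10, 0.40, 0.70, 0.95]    # ECG matcher output (different scale)
fused = fuse_scores(fp, ecg, w_fp=0.6)
```

Normalization matters because the two matchers output scores on different scales; the weight would normally be tuned on a validation set.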
Affiliation(s)
- Sahar A. El_Rahman
- Department of Electrical Engineering, Faculty of Engineering-Shoubra, Benha University, Cairo, Egypt
- Ala Saleh Alluhaidan
- Information Systems Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
3
Haider SA, Ashraf S, Larik RM, Husain N, Muqeet HA, Humayun U, Yahya A, Arfeen ZA, Khan MF. An Improved Multimodal Biometric Identification System Employing Score-Level Fuzzification of Finger Texture and Finger Vein Biometrics. Sensors (Basel) 2023; 23:9706. [PMID: 38139551] [PMCID: PMC10748327] [DOI: 10.3390/s23249706]
Abstract
This research work focuses on a Near-Infra-Red (NIR) finger-images-based multimodal biometric system based on Finger Texture and Finger Vein biometrics. The individual results of the two biometric characteristics are fused using a fuzzy system, and the final identification result is achieved. Experiments are performed on three different databases, i.e., the Near-Infra-Red Hand Images (NIRHI), Hong Kong Polytechnic University (HKPU) and University of Twente Finger Vein Pattern (UTFVP) databases. First, the Finger Texture biometric employs an efficient texture feature extraction algorithm, i.e., the Local Binary Pattern (LBP). Then, classification is performed using the Support Vector Machine, a proven machine learning classification algorithm. Second, transfer learning of pre-trained convolutional neural networks (CNNs) is performed for the Finger Vein biometric, employing two approaches. The three selected CNNs are AlexNet, VGG16 and VGG19. In Approach 1, before feeding the images for the training of the CNN, the necessary preprocessing of NIR images is performed. In Approach 2, before the preprocessing step, image intensity optimization is also employed to regularize the image intensity. NIRHI outperforms HKPU and UTFVP for both modalities of focus, in a unimodal setup as well as in a multimodal one. The proposed multimodal biometric system demonstrates a better overall identification accuracy of 99.62% in comparison with the 99.51% and 99.50% reported by recent state-of-the-art systems.
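As a rough illustration of the texture descriptor named above, here is a minimal 8-neighbour Local Binary Pattern computation for a single pixel (the 3×3 neighbourhood, the clockwise bit order, and the toy image are assumptions; the paper's exact LBP variant may differ):

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code of pixel (r, c).

    Each neighbour is thresholded against the centre pixel; the resulting
    bits are read clockwise from the top-left corner to form an 8-bit code.
    """
    center = img[r][c]
    # clockwise offsets: TL, T, TR, R, BR, B, BL, L
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << (7 - bit)
    return code

# Toy 3x3 grayscale patch; the LBP code of its centre pixel is a single
# texture feature. A full descriptor would histogram these codes per block.
img = [
    [10, 20, 30],
    [40, 25, 15],
    [60, 50,  5],
]
code = lbp_code(img, 1, 1)
```

In practice the per-pixel codes are aggregated into block-wise histograms, which form the feature vector fed to the SVM.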
Affiliation(s)
- Syed Aqeel Haider
- Department of Computer & Information Systems Engineering, Faculty of Computer & Electrical Engineering, N.E.D. University of Engineering and Technology, Karachi 75270, Pakistan
- Shahzad Ashraf
- Department of Electrical Engineering, NFC Institute of Engineering and Technology, Multan 60000, Pakistan
- Raja Masood Larik
- Department of Electrical Engineering, N.E.D University of Engineering and Technology, Karachi 75270, Pakistan
- Nusrat Husain
- Department of Electronics & Power Engineering, Pakistan Navy Engineering College, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
- Hafiz Abdul Muqeet
- Electrical Engineering Technology Department, Punjab Tianjin University of Technology, Lahore 54770, Pakistan
- Usman Humayun
- Department of Computer Engineering, Faculty of Engineering, Bahauddin Zakariya University (BZU), Multan 60800, Pakistan
- Ashraf Yahya
- Department of Electronics & Power Engineering, Pakistan Navy Engineering College, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
- Zeeshan Ahmad Arfeen
- Department of Electrical Engineering, The Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
- Muhammad Farhan Khan
- Department of Electronics & Power Engineering, Pakistan Navy Engineering College, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
4
Pergolizzi J, LeQuang JAK, Vasiliu-Feltes I, Breve F, Varrassi G. Brave New Healthcare: A Narrative Review of Digital Healthcare in American Medicine. Cureus 2023; 15:e46489. [PMID: 37927734] [PMCID: PMC10623488] [DOI: 10.7759/cureus.46489]
Abstract
The digital revolution has had a profound effect on American and global healthcare, accelerated by the pandemic and telehealth applications. Digital health also includes popular and more esoteric forms of wearable monitoring systems, along with interscatter and other wireless technologies that facilitate their telemetry. The rise of artificial intelligence (AI) and machine learning (ML) may improve interpretation of imaging technologies and of electrocardiographic or electroencephalographic tracings, and new ML techniques may allow these systems to scan data to discern and contextualize patterns that may have evaded human physicians. The necessity of virtual care during the pandemic has morphed into new treatment paradigms, which have gained patient acceptance but still raise issues with respect to privacy laws and credentialing. Augmented and virtual reality tools can facilitate surgical planning and "hands-on" clinical training activities. Patients are working with new frontiers in digital health in the form of "Dr. Google" and patient support websites to learn or share medical information. Patient-facing digital health information is both a blessing and a curse, in that it can be a boon to health-literate patients who seek to be more active in their own care. On the other hand, digital health information can lead to false conclusions, catastrophizing, misunderstandings, and "cyberchondria." Blockchain, familiar from cryptocurrency, may play a role in future healthcare information management and would serve as a disruptive, decentralizing, and potentially beneficial change. These important changes are both exciting and perplexing as clinicians and their patients learn to navigate this new system and address the questions it raises, such as medical privacy in a digital age. The goal of this review is to explore the vast range of digital health and how it may impact the healthcare system.
Affiliation(s)
- Frank Breve
- Department of Pharmacy, Temple University, Philadelphia, USA
5
Safavipour MH, Doostari MA, Sadjedi H. Deep Hybrid Multimodal Biometric Recognition System Based on Features-Level Deep Fusion of Five Biometric Traits. Comput Intell Neurosci 2023; 2023:6443786. [PMID: 37469627] [PMCID: PMC10353898] [DOI: 10.1155/2023/6443786]
Abstract
The need for information security and the adoption of the relevant regulations is becoming an overwhelming demand worldwide. As an efficient solution, hybrid multimodal biometric systems utilize fusion to combine multiple biometric traits and sources in order to improve recognition accuracy, provide higher security assurance, and cope with the limitations of uni-biometric systems. In this paper, three strategies for the feature-level deep fusion of five biometric traits (face, both irises, and two fingerprints) derived from three sources of evidence are proposed and compared. In the first two proposed methodologies, each feature vector is mapped from the feature space into the reproducing kernel Hilbert space (RKHS) separately by selecting the appropriate reproducing kernel. In this higher-dimensional space, where nonlinear relations become linear, dimensionality reduction algorithms (KPCA, KLDA) and quaternion-based algorithms (KQPCA, KQLDA) are used for the fusion of the feature vectors. In the third methodology, the fusion of feature spaces based on deep learning is administered by combining feature vectors in deep, fully connected layers. The experimental results on six databases clearly show that the multimodal template obtained from the deep fusion of feature spaces is secure against spoofing attacks, makes the system robust, and exploits the low dimensionality of the fused vector to increase the accuracy of the hybrid multimodal biometric system to 100%, a significant improvement compared with uni-biometric and other multimodal systems.
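The kernel-space mapping described above starts from a Gram matrix; below is a minimal sketch of the RBF kernel and the double-centering step that precedes kernel PCA (the kernel choice, `gamma`, and the toy data are assumptions, and the eigendecomposition itself is omitted):

```python
import math

def rbf_kernel_matrix(X, gamma=0.5):
    """Gram matrix K[i][j] = exp(-gamma * ||x_i - x_j||^2)."""
    n = len(X)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
            K[i][j] = math.exp(-gamma * d2)
    return K

def center_kernel(K):
    """Double-centre the Gram matrix, as required before kernel PCA."""
    n = len(K)
    row = [sum(K[i]) / n for i in range(n)]
    total = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + total for j in range(n)]
            for i in range(n)]

# Three toy 2-D feature vectors standing in for extracted biometric features
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
Kc = center_kernel(rbf_kernel_matrix(X))
```

Kernel PCA would then take the leading eigenvectors of `Kc`; after centering, every row of `Kc` sums to zero, which is a quick sanity check on the implementation.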
Affiliation(s)
- Hamed Sadjedi
- Department of Electrical Engineering, Shahed University, Tehran, Iran
6
Assiri B, Hossain MA. Face emotion recognition based on infrared thermal imagery by applying machine learning and parallelism. Math Biosci Eng 2023; 20:913-929. [PMID: 36650795] [DOI: 10.3934/mbe.2023042]
Abstract
Over the past few years, facial expression identification has been a promising area. However, darkness, lighting conditions, and other factors make facial emotion identification challenging. As a result, thermal images are suggested as a solution to such problems and for a variety of other benefits. Furthermore, focusing on significant regions of a face rather than the entire face is sufficient for reducing processing while improving accuracy at the same time. This research introduces novel infrared thermal image-based approaches for facial emotion recognition. First, the entire image of the face is separated into four pieces. Then, we accepted only four active regions (ARs) to prepare the training and testing datasets; these ARs include the left eye, right eye, and lips areas. In addition, ten-fold cross-validation is proposed to improve recognition accuracy using a Convolutional Neural Network (CNN), a machine learning technique. Furthermore, we incorporated a parallelism technique to reduce processing time on the testing and training datasets. As a result, we have seen that the processing time is reduced to 50%. Finally, a decision-level fusion is applied to improve the recognition accuracy. As a result, the proposed technique achieves a recognition accuracy of 96.87%. The achieved accuracy ascertains the robustness of our proposed scheme.
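The ten-fold cross-validation mentioned above can be sketched as a plain index partition (the CNN itself is out of scope here; assigning sample `i` to fold `i % k` is an assumption, as the paper does not describe its fold assignment):

```python
def k_fold_indices(n_samples, k=10):
    """Partition sample indices into k near-equal, disjoint folds."""
    folds = [[] for _ in range(k)]
    for i in range(n_samples):
        folds[i % k].append(i)
    return folds

def cross_validation_splits(n_samples, k=10):
    """Yield (train_indices, test_indices) pairs, one per held-out fold."""
    folds = k_fold_indices(n_samples, k)
    for held_out in range(k):
        test = folds[held_out]
        train = [i for f_idx, fold in enumerate(folds) if f_idx != held_out
                 for i in fold]
        yield train, test

# 100 hypothetical thermal-image samples, 10 folds
splits = list(cross_validation_splits(100, k=10))
```

Each sample appears in exactly one test fold, so the reported accuracy is an average over ten models trained on 90% of the data each.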
Affiliation(s)
- Basem Assiri
- Department of Computer Science, College of CS & IT, Jazan University, Kingdom of Saudi Arabia
7
Alhameed M, Jeribi F, Elnaim BME, Hossain MA, Abdelhag ME. Pandemic disease detection through wireless communication using infrared image based on deep learning. Math Biosci Eng 2023; 20:1083-1105. [PMID: 36650803] [DOI: 10.3934/mbe.2023050]
Abstract
Rapid diagnosis of diseases such as COVID-19 is a significant issue. The routine virus test, reverse transcriptase-polymerase chain reaction (RT-PCR), takes long to complete because it follows a serial testing method, and there is a high chance of a false-negative ratio (FNR). Moreover, there is a shortage of RT-PCR test kits. Therefore, alternative procedures for a quick and accurate diagnosis of patients are urgently needed to deal with these pandemics. Infrared imaging is self-sufficient for detecting these diseases by measuring the temperature at the initial stage. CT scans and other pathological tests are valuable aspects of evaluating a patient with a suspected pandemic infection. However, a patient's radiological findings may not be identified initially. Therefore, we include an Artificial Intelligence (AI) algorithm-based Machine Intelligence (MI) system in this proposal to combine CT scan findings with all other tests, symptoms, and history to quickly diagnose a patient with a positive symptom of current and future pandemic diseases. Initially, the system collects information with an infrared camera of the patient's facial regions to measure temperature, keeps it as a record, and completes further actions. We divided the face into eight classes and twelve regions for temperature measurement. A database named patient-info-mask is maintained. While collecting sample data, we incorporate a wireless network using a cloudlet server to make processing more accessible with minimal infrastructure. The system uses deep learning approaches. We propose a convolutional neural network (CNN) to cross-verify the collected data. For better results, we incorporated ten-fold cross-validation into the synthesis method. As a result, our new way of estimating became more accurate and efficient. We achieved 3.29% greater accuracy by incorporating the decision-tree-level synthesis method and the ten-fold validation method, which proves the robustness of our proposed method.
Affiliation(s)
- Fathe Jeribi
- College of CS & IT, Jazan University, Jazan, Saudi Arabia
8
Elshazly EA, Hashad FG, Sedik A, El-samie FEA, Abdel-salam N. Compression-Based Cancelable Multi-Biometric System. [DOI: 10.21203/rs.3.rs-2241969/v1]
Abstract
Cybersecurity is an important field involved in several research trends. Biometric security is one of these trends, with applications in access control systems and online identity verification. The protection of human biometrics can be performed using both bi-directional and unidirectional encryption. Unidirectional encryption is carried out using cancelable biometric techniques. This paper proposes a cancelable biometric system based on image composition, deep dream, and hashing techniques. The objective of the proposed system is to generate visual and text cancelable biometrics. The visual cancelable templates are generated using image composition and deep dream, while the text templates are generated using SHA hashing techniques. The proposed system is validated with multi-biometric inputs including iris, palm, face, and fingerprint biometrics. In addition, it is evaluated in both visual and text forms. The simulation results reveal that the proposed system shows superior performance compared with existing works that address this problem.
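As a toy illustration of the text-template idea, a salted SHA-256 hash of a quantized feature vector yields a revocable template (the quantization step, salt format, and feature values are assumptions; real biometric features are noisy, so a deployed system would need error-tolerant encoding before exact hashing):

```python
import hashlib

def cancelable_text_template(features, user_salt):
    """Hash a quantized feature vector together with a user-specific salt.

    Re-issuing (revoking) the template only requires changing the salt,
    which is the revocability property cancelable biometrics aims for.
    """
    quantized = ",".join(f"{v:.2f}" for v in features)
    payload = (user_salt + "|" + quantized).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# Hypothetical quantized feature vector from any of the four modalities
feat = [0.12, 0.87, 0.44]
t1 = cancelable_text_template(feat, "salt-A")
t2 = cancelable_text_template(feat, "salt-A")   # same user, same salt
t3 = cancelable_text_template(feat, "salt-B")   # template revoked/re-issued
```

Because SHA-256 is one-way, the stored text template does not reveal the underlying features, matching the "unidirectional encryption" framing above.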
9
Deep Learning-Based System for Preoperative Safety Management in Cataract Surgery. J Clin Med 2022; 11:5397. [PMID: 36143048] [PMCID: PMC9503842] [DOI: 10.3390/jcm11185397]
Abstract
An artificial intelligence-based system was implemented for preoperative safety management in cataract surgery, including facial recognition, laterality (right and left eye) confirmation, and intraocular lens (IOL) parameter verification. A deep-learning model was constructed with a face identification development kit for facial recognition, the You Only Look Once Version 3 (YOLOv3) algorithm for laterality confirmation, and the Visual Geometry Group-16 (VGG-16) for IOL parameter verification. In 171 patients who were undergoing phacoemulsification and IOL implantation, a mobile device (iPad mini, Apple Inc.) camera was used to capture patients’ faces, location of surgical drape aperture, and IOL parameter descriptions on the packages, which were then checked with the information stored in the referral database. The authentication rates on the first attempt and after repeated attempts were 92.0% and 96.3% for facial recognition, 82.5% and 98.2% for laterality confirmation, and 67.4% and 88.9% for IOL parameter verification, respectively. After authentication, both the false rejection rate and the false acceptance rate were 0% for all three parameters. An artificial intelligence-based system for preoperative safety management was implemented in real cataract surgery with a passable authentication rate and very high accuracy.
10
Wang Y, Shi D, Zhou W. Convolutional Neural Network Approach Based on Multimodal Biometric System with Fusion of Face and Finger Vein Features. Sensors (Basel) 2022; 22:6039. [PMID: 36015799] [PMCID: PMC9412820] [DOI: 10.3390/s22166039]
Abstract
In today's information age, how to accurately verify a person's identity and protect information security has become a hot topic across all walks of life. At present, the most convenient and secure solution to identity verification is undoubtedly biometric identification, but a single biometric trait cannot support increasingly complex and diversified authentication scenarios. Multimodal biometric technology can improve the accuracy and safety of identification. This paper proposes a biometric method based on finger vein and face bimodal feature-layer fusion using a convolutional neural network (CNN), with the fusion occurring at the feature layer. A self-attention mechanism is used to obtain the weights of the two biometrics, and, combined with the ResNet residual structure, the self-attention-weighted features are concatenated channel-wise with the bimodal fusion features. To prove the high efficiency of bimodal feature-layer fusion, the AlexNet and VGG-19 network models were selected in the experimental part for extracting finger vein and face image features as inputs to the feature fusion module. Extensive experiments show that the recognition accuracy of both models exceeds 98.4%, demonstrating the high efficiency of the bimodal feature fusion.
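A toy analogue of the attention-weighted fusion described above: each modality receives a softmax weight derived from a scalar score (here simply the mean activation, an assumption; the paper's self-attention operates on CNN feature maps, not raw vectors), and the weighted vectors are then concatenated:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fuse(face_feat, vein_feat):
    """Weight each modality by a softmax over one scalar score per
    modality (mean activation), then concatenate channel-wise."""
    scores = [sum(face_feat) / len(face_feat),
              sum(vein_feat) / len(vein_feat)]
    w_face, w_vein = softmax(scores)
    return ([w_face * v for v in face_feat] +
            [w_vein * v for v in vein_feat])

# Hypothetical pooled feature vectors from the two CNN branches
face = [0.2, 0.4, 0.6]
vein = [0.8, 1.0, 1.2]
fused = attention_fuse(face, vein)
```

The fused vector keeps both modalities' channels, so a downstream classifier can still exploit modality-specific information while the weights reflect relative confidence.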
11
Wang F, Liang H, Zhang Y, Xu Q, Zong R. Recognition and Classification of Ship Images Based on SMS-PCNN Model. Front Neurorobot 2022; 16:889308. [PMID: 35770274] [PMCID: PMC9234967] [DOI: 10.3389/fnbot.2022.889308]
Abstract
In the field of ship image recognition and classification, traditional algorithms pay little attention to differences in the grain of ship images. The differences in the hull structure of different categories of ships are reflected at the coarse grain, whereas the differences in the equipment and superstructures of different ships of the same category are reflected at the fine grain. To extract ship features at different scales, the multi-scale parallel CNN oriented on ship images (SMS-PCNN) model is proposed in this paper. This model has three characteristics. (1) Image features of different sizes are extracted by parallelizing convolutional branches with different receptive fields. (2) The number of channels of the model is adjusted twice to extract features and eliminate redundant information. (3) A residual connection network is used to extend the network depth and mitigate gradient vanishing. We collected open-source images from the Internet to form an experimental dataset and conducted performance tests. The results show that the SMS-PCNN model proposed in this paper achieves 84.79% accuracy on the dataset, which is better than the four existing state-of-the-art approaches. Ablation experiments verify the effectiveness of the optimization tricks used in the model.
Affiliation(s)
- Fengxiang Wang
- College of Electronic Engineering, Naval University of Engineering, Wuhan, China
- Huang Liang
- College of Electronic Engineering, Naval University of Engineering, Wuhan, China
- *Correspondence: Huang Liang
- Yalun Zhang
- Institute of Noise & Vibration, Naval University of Engineering, Wuhan, China
- Qingxia Xu
- College of International Studies, National University of Defense Technology, Changsha, China
- Ruirui Zong
- College of Electronic Engineering, Naval University of Engineering, Wuhan, China
12
Agarwal D, Bansal A. An alignment-free non-invertible transformation-based method for generating the cancellable fingerprint template. Pattern Anal Appl 2022. [DOI: 10.1007/s10044-022-01080-5]
13
Moroń T, Bernacki K, Fiołka J, Peng J, Popowicz A. Recognition of the finger vascular system using multi-wavelength imaging. IET Biometrics 2022. [DOI: 10.1049/bme2.12068]
Affiliation(s)
- Tomasz Moroń
- Department of Cybernetics, Nanotechnology and Data Processing, Silesian University of Technology, Gliwice, Poland
- Krzysztof Bernacki
- Department of Electronics, Electrical Engineering and Microelectronics, Silesian University of Technology, Gliwice, Poland
- Jerzy Fiołka
- Department of Electronics, Electrical Engineering and Microelectronics, Silesian University of Technology, Gliwice, Poland
- Jia Peng
- Department of Physics, Durham University, Durham, UK
- Adam Popowicz
- Department of Electronics, Electrical Engineering and Microelectronics, Silesian University of Technology, Gliwice, Poland
14
Hossain MA, Assiri B. Facial expression recognition based on active region of interest using deep learning and parallelism. PeerJ Comput Sci 2022; 8:e894. [PMID: 35494822] [PMCID: PMC9044208] [DOI: 10.7717/peerj-cs.894]
Abstract
Automatic facial expression tracking has become an emergent topic during the last few decades. It is a challenging problem that impacts many fields such as virtual reality, security surveillance, driver safety, homeland security, human-computer interaction, and medical applications. A remarkable cost-efficiency can be achieved by considering only some areas of a face. These areas are termed Active Regions of Interest (AROIs). This work proposes a facial expression recognition framework that investigates five types of facial expressions: neutral, happiness, fear, surprise, and disgust. Firstly, a pose estimation method is incorporated, along with an approach that rotates the face to achieve a normalized pose. Secondly, the whole face image is segmented into four classes and eight regions. Thirdly, only four AROIs are identified from the segmented regions: the nose-tip, right eye, left eye, and lips. Fourthly, an info-image-data-mask database is maintained for classification and used to store records of images. This database is the mixture of all the images gained after introducing a ten-fold cross-validation technique using a Convolutional Neural Network. Correlations of variances and standard deviations are computed based on the identified images. To minimize the required processing time in both training and testing, a parallelism technique is introduced in which each AROI is classified individually and all of them run in parallel. Fifthly, a decision-tree-level synthesis-based framework is proposed to coordinate the results of the parallel classification, which helps to improve recognition accuracy. Finally, experimentation on both independent and synthesized databases is used to evaluate the performance of the proposed technique. By incorporating the proposed synthesis method, we gain 94.499%, 95.439%, and 98.26% accuracy with the CK+ image sets and 92.463%, 93.318%, and 94.423% with the JAFFE image sets. The overall recognition accuracy is 95.27%. We gain 2.8% higher accuracy by introducing the decision-level synthesis method. Moreover, with the incorporation of parallelism, processing is three times faster. This accuracy proves the robustness of the proposed scheme.
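A minimal sketch of the parallel per-region classification with decision-level fusion described above (the per-region classifiers here are hypothetical stand-ins for the paper's CNNs, and a plain majority vote replaces the decision-tree-level synthesis):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def majority_vote(labels):
    """Decision-level fusion: the most common region-level label wins."""
    return Counter(labels).most_common(1)[0][0]

def classify_region(region_name):
    # Stand-in for a per-region CNN classifier (hypothetical outputs).
    fake_outputs = {"nose_tip": "happiness", "right_eye": "happiness",
                    "left_eye": "surprise", "lips": "happiness"}
    return fake_outputs[region_name]

regions = ["nose_tip", "right_eye", "left_eye", "lips"]
# The four AROI classifiers run in parallel, mirroring the paper's setup
with ThreadPoolExecutor() as pool:
    votes = list(pool.map(classify_region, regions))
decision = majority_vote(votes)
```

`pool.map` preserves input order, so each vote can still be traced back to its region when diagnosing disagreements between AROIs.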
Affiliation(s)
- Mohammad Alamgir Hossain
- Department of Computer Science, College of Computer Science & Information Technology, Jazan University, Jazan, Kingdom of Saudi Arabia
- Basem Assiri
- Department of Computer Science, College of Computer Science & Information Technology, Jazan University, Jazan, Kingdom of Saudi Arabia
15
Muthusamy D, Rakkimuthu P. Steepest deep bipolar Cascade correlation for finger-vein verification. Appl Intell 2022. [DOI: 10.1007/s10489-021-02619-5]
16
Multimodal Biometric Template Protection Based on a Cancelable SoftmaxOut Fusion Network. Appl Sci (Basel) 2022; 12:2023. [DOI: 10.3390/app12042023]
Abstract
Authentication systems that employ biometrics are commonplace, as they offer a convenient means of authenticating an individual’s identity. However, these systems give rise to concerns about security and privacy due to insecure template management. As a remedy, biometric template protection (BTP) has been developed. Cancelable biometrics is a non-invertible form of BTP in which the templates are changeable. This paper proposes a deep-learning-based end-to-end multimodal cancelable biometrics scheme called cancelable SoftmaxOut fusion network (CSMoFN). By end-to-end, we mean a model that receives raw biometric data as input and produces a protected template as output. CSMoFN combines two biometric traits, the face and the periocular region, and is composed of three modules: a feature extraction and fusion module, a permutation SoftmaxOut transformation module, and a multiplication-diagonal compression module. The first module carries out feature extraction and fusion, while the second and third are responsible for the hashing of fused features and compression. In addition, our network is equipped with dual template-changeability mechanisms with user-specific seeded permutation and binary random projection. CSMoFN is trained by minimizing the ArcFace loss and the pairwise angular loss. We evaluate the network, using six face–periocular multimodal datasets, in terms of its verification performance, unlinkability, revocability, and non-invertibility.
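The two template-changeability mechanisms named above, user-seeded permutation and binary random projection, can be sketched in toy form (the learned CSMoFN modules are omitted; `out_dim`, the seed values, and the feature vector are assumptions):

```python
import random

def protect_template(features, user_seed, out_dim=8):
    """User-seeded permutation followed by a signed random projection,
    binarized by sign. Both steps are re-issuable by changing the seed."""
    rng = random.Random(user_seed)
    # 1) user-specific permutation of the fused feature vector
    perm = list(range(len(features)))
    rng.shuffle(perm)
    permuted = [features[i] for i in perm]
    # 2) binary random projection: sign of dot products with +/-1 vectors
    bits = []
    for _ in range(out_dim):
        proj = [rng.choice((-1.0, 1.0)) for _ in permuted]
        dot = sum(p * f for p, f in zip(proj, permuted))
        bits.append(1 if dot >= 0 else 0)
    return bits

# Hypothetical fused face-periocular feature vector
feat = [0.3, -0.1, 0.7, 0.5, -0.4, 0.2]
t_a = protect_template(feat, user_seed=41)
t_b = protect_template(feat, user_seed=99)  # re-issued (revoked) template
```

Because the protected template is a sign pattern of random projections, the original features cannot be recovered from it, which is the non-invertibility property the paper evaluates.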
17. Iris Liveness Detection for Biometric Authentication: A Systematic Literature Review and Future Directions. Inventions 2021. [DOI: 10.3390/inventions6040065] [Citation(s) in RCA: 4]
Abstract
Biometrics is progressively becoming vital as vulnerabilities in traditional security systems lead to frequent breaches. Biometric systems automatically analyze human beings’ physiological and behavioral traits for unique identification. Iris-based authentication offers strong, unique, and contactless identification of the user. Iris liveness detection (ILD) confronts challenges such as spoofing attacks with contact lenses, replayed video, and print attacks. Many researchers focus on ILD to guard biometric systems from attack. Hence, it is vital to study the existing research explicitly associated with ILD to understand how developing technologies can offer solutions that lessen evolving threats. An exhaustive survey of papers on biometric ILD was performed by searching the most relevant digital libraries. Papers were filtered based on predefined inclusion and exclusion criteria, and thematic analysis was performed to scrutinize the data extracted from the selected papers. The review outlines the different feature extraction techniques, classifiers, and datasets, and presents their critical evaluation. Importantly, the study also discusses projects and research efforts aimed at detecting iris spoofing attacks. The work then identifies the research gaps and challenges in the field of ILD. Many works were restricted to handcrafted feature extraction methods, which suffer from large feature sizes. The study discloses that deep-learning-based automated ILD techniques show higher potential than machine-learning techniques. Acquiring an ILD dataset that covers all common iris spoofing attacks is also a pressing need. The survey thus exposes practical challenges in the field of ILD, from data collection to liveness detection, and encourages future research.
18. Oh S, Bae C, Cho J, Lee S, Jung Y. Command Recognition Using Binarized Convolutional Neural Network with Voice and Radar Sensors for Human-Vehicle Interaction. Sensors 2021; 21:3906. [PMID: 34198830] [PMCID: PMC8201086] [DOI: 10.3390/s21113906]
Abstract
Recently, as technology has advanced, the use of in-vehicle infotainment systems has increased, providing many functions. However, if the driver’s attention is diverted to control these systems, it can cause a fatal accident, and thus human–vehicle interaction is becoming more important. Therefore, in this paper, we propose a human–vehicle interaction system to reduce driver distraction during driving. We used voice and continuous-wave radar sensors that require low complexity for application to vehicle environments as resource-constrained platforms. The proposed system applies sensor fusion techniques to improve the limit of single-sensor monitoring. In addition, we used a binarized convolutional neural network algorithm, which significantly reduces the computational workload of the convolutional neural network in command classification. As a result of performance evaluation in noisy and cluttered environments, the proposed system showed a recognition accuracy of 96.4%, an improvement of 7.6% compared to a single voice sensor-based system, and 9.0% compared to a single radar sensor-based system.
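The computational saving of a binarized CNN comes from constraining weights and activations to {-1, +1}, so that each multiply-accumulate collapses to an XNOR-and-popcount on hardware. A minimal 1-D sketch of the idea, purely illustrative and not taken from the paper:

```python
import numpy as np

def binarize(x):
    # sign binarization to {-1, +1}; zeros map to +1
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bconv1d(signal, kernel):
    # 1-D binarized convolution (cross-correlation form, as in CNNs): both
    # operands are constrained to {-1, +1}, so each multiply-accumulate could
    # be realized as XNOR + popcount on a resource-constrained platform.
    s, k = binarize(signal), binarize(kernel)
    n = len(s) - len(k) + 1
    return np.array([int(np.dot(s[i:i + len(k)], k)) for i in range(n)])

out = bconv1d(np.array([0.5, -1.2, 3.0, -0.1, 2.2]),
              np.array([1.0, -2.0, 0.5]))  # -> [3, -3, 3]
```

Each output value is bounded by the kernel length, so narrow integer accumulators suffice, which is what makes the approach attractive for in-vehicle platforms.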
Affiliation(s)
- Seunghyun Oh: Department of Smart Drone Convergence, Korea Aerospace University, Goyang-si 10540, Korea
- Chanhee Bae: Department of Smart Drone Convergence, Korea Aerospace University, Goyang-si 10540, Korea
- Jaechan Cho: School of Electronics and Information Engineering, Korea Aerospace University, Goyang-si 10540, Korea
- Seongjoo Lee: Department of Information and Communication Engineering and Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Korea
- Yunho Jung: Department of Smart Drone Convergence and School of Electronics and Information Engineering, Korea Aerospace University, Goyang-si 10540, Korea (Correspondence; Tel.: +82-2-300-0133)
19. Adjabi I, Ouahabi A, Benzaoui A, Jacques S. Multi-Block Color-Binarized Statistical Images for Single-Sample Face Recognition. Sensors (Basel) 2021; 21:728. [PMID: 33494516] [PMCID: PMC7865363] [DOI: 10.3390/s21030728] [Citation(s) in RCA: 35]
Abstract
Single-Sample Face Recognition (SSFR) is a computer-vision challenge in which only one example per individual is available for training, making it difficult to identify persons in unconstrained environments, particularly under changes in facial expression, posture, lighting, and occlusion. This paper presents an original method for SSFR, called Multi-Block Color-Binarized Statistical Image Features (MB-C-BSIF), which exploits several kinds of features: local, regional, global, and textured-color characteristics. First, MB-C-BSIF decomposes a facial image into its three color channels (i.e., red, green, and blue); it then divides each channel into equal non-overlapping blocks to select the local facial characteristics used in the classification phase. Finally, identity is determined by calculating the similarities among the feature vectors using a distance-based K-nearest-neighbors (K-NN) classifier. Extensive experiments on several subsets of the unconstrained Alex and Robert (AR) and Labeled Faces in the Wild (LFW) databases show that MB-C-BSIF achieves superior or competitive results in unconstrained situations, especially under changes in facial expression, lighting, and occlusion. The average classification accuracies are 96.17% and 99% on the AR database under two protocols (Protocols I and II, respectively), and 38.01% on the challenging LFW database, clearly exceeding state-of-the-art methods. Furthermore, the proposed method relies only on simple, elementary image-processing operations, avoiding the higher computational costs of holistic, sparse, or deep-learning methods and making it suitable for real-time identification.
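The multi-block decomposition and K-NN matching pipeline can be sketched roughly as below. This toy version substitutes plain intensity histograms for the learned BSIF filter responses, and the function names, grid size, and synthetic images are all assumptions for illustration.

```python
import numpy as np

def block_features(img, grid=2, bins=8):
    # Split each color channel into a grid of non-overlapping blocks and
    # concatenate normalized per-block histograms. (The published method
    # applies learned BSIF filters first; raw intensities are used here.)
    h, w, _ = img.shape
    bh, bw = h // grid, w // grid
    feats = []
    for c in range(img.shape[2]):              # per channel: R, G, B
        for i in range(grid):
            for j in range(grid):
                block = img[i*bh:(i+1)*bh, j*bw:(j+1)*bw, c]
                hist, _ = np.histogram(block, bins=bins, range=(0, 256))
                feats.append(hist / block.size)
    return np.concatenate(feats)

def nearest_identity(probe, gallery):
    # 1-NN over the single-sample gallery using L1 (city-block) distance
    dists = {who: np.abs(probe - f).sum() for who, f in gallery.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(0)
enrolled = {"alice": rng.integers(0, 256, (32, 32, 3)),   # one sample per person
            "bob": rng.integers(0, 256, (32, 32, 3))}
gallery = {who: block_features(img) for who, img in enrolled.items()}
match = nearest_identity(block_features(enrolled["alice"]), gallery)
```

With a 2x2 grid, 3 channels, and 8 bins, each template is only 96 numbers, which reflects why such block-histogram schemes stay cheap compared with deep models.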
Affiliation(s)
- Insaf Adjabi: Department of Computer Science, LIMPAF, University of Bouira, Bouira 10000, Algeria
- Abdeldjalil Ouahabi: Department of Computer Science, LIMPAF, University of Bouira, Bouira 10000, Algeria; Polytech Tours, Imaging and Brain, INSERM U930, University of Tours, 37200 Tours, France
- Amir Benzaoui: Department of Electrical Engineering, University of Bouira, Bouira 10000, Algeria
- Sébastien Jacques: GREMAN UMR 7347, University of Tours, CNRS, INSA Centre Val-de-Loire, 37200 Tours, France
20. Kamlaskar C, Abhyankar A. Iris-Fingerprint multimodal biometric system based on optimal feature level fusion model. AIMS Electronics and Electrical Engineering 2021. [DOI: 10.3934/electreng.2021013] [Open Access]
Abstract
Reliable and accurate multimodal biometric person verification demands an effective discriminant feature representation and the fusion of relevant information extracted across multiple biometric modalities. In this paper, we propose feature-level fusion that adopts canonical correlation analysis (CCA) to fuse iris and fingerprint feature sets of the same person. The uniqueness of this approach is that it extracts maximally correlated features from the feature sets of both modalities as effective discriminant information. CCA is therefore suitable for analyzing the underlying relationship between two feature spaces and generates more powerful feature vectors by removing redundant information. We demonstrate that efficient multimodal recognition can be achieved with a significant reduction in feature dimensions, lower computational complexity, and recognition time under one second by exploiting CCA-based joint feature fusion and optimization. To evaluate the proposed system, left and right irises and thumb fingerprints from both hands of the SDUMLA-HMT multimodal dataset are considered in this experiment. We show that our approach achieves a significantly lower equal error rate (EER) than unimodal recognition, and that CCA-based feature fusion outperforms match-score-level fusion. Further, an exploration of the correlation between right-iris and left-fingerprint images (EER of 0.1050%), and left-iris and right-fingerprint images (EER of 1.4286%), is presented to consider the effect of feature dominance and laterality of the selected modalities for a robust multimodal biometric system.
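Feature-level fusion by CCA can be sketched in plain NumPy. This is generic textbook CCA run on synthetic stand-ins for iris and fingerprint features that share a common identity signal; it is not the authors' optimized model, and the fusion-by-summation step is one common convention rather than the paper's exact choice.

```python
import numpy as np

def cca_fuse(X, Y, dim=2, eps=1e-6):
    # Textbook CCA: whiten each view, take the SVD of the whitened
    # cross-covariance, then fuse at feature level by summing the two
    # projected views. eps regularizes the covariance inverses.
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Sxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx)).T   # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Syy)).T
    U, s, Vt = np.linalg.svd(Wx.T @ Sxy @ Wy)
    A, B = Wx @ U[:, :dim], Wy @ Vt.T[:, :dim]      # canonical projections
    return Xc @ A + Yc @ B, s[:dim]                 # fused features, correlations

# Synthetic stand-ins for iris (X) and fingerprint (Y) feature sets sharing
# a common identity signal Z -- purely illustrative data.
rng = np.random.default_rng(1)
Z = rng.standard_normal((200, 2))
X = np.hstack([Z, rng.standard_normal((200, 2))])
Y = np.hstack([Z + 0.05 * rng.standard_normal((200, 2)),
               rng.standard_normal((200, 2))])
fused, corrs = cca_fuse(X, Y)
```

The leading canonical correlations approach 1 exactly when the two views carry a shared signal, which is the redundancy-removing property the abstract attributes to CCA.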
21.
Abstract
In the proposed study, we examine a multimodal biometric system designed for maximal resistance to spoof attacks. Enhanced anti-spoofing capability is demonstrated by choosing hand-related intrinsic modalities: pulse response, hand geometry, and finger-vein biometrics. The three modalities are combined using a fuzzy rule-based system that provides an accuracy of 92% on near-infrared (NIR) images. In addition, we introduce a new NIR hand-image dataset containing a total of 111,000 images. Hand geometry is treated as an intrinsic biometric modality by employing near-infrared imaging of human hands to locate the interphalangeal joints of the fingers. The L2 norm is calculated using the centroids of four pixel clusters obtained from the finger-joint locations; this method produced an accuracy of 86% on the new NIR dataset. We also propose finger-vein identification using convolutional neural networks (CNNs), which achieved 90% accuracy on the new dataset. Moreover, the pulse-response biometric is robust against spoof attacks involving fake or artificial hands: it identifies a live human body by applying a pulse of a specific frequency to the hand, and about 99% of the frequency-response samples obtained from human and non-human subjects were correctly classified. Finally, combining all three modalities with the fuzzy inference system at the confidence-score level yields 92% accuracy on the new NIR hand-image dataset.
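A confidence-score-level fuzzy fusion of the three modalities might look like the sketch below. The membership functions and the two-rule base are invented for illustration and are not the paper's actual fuzzy inference system; the key idea shown is that the liveness (pulse-response) score can veto an otherwise strong match.

```python
def fuzzify(score):
    # triangular memberships for "low" / "medium" / "high" confidence on [0, 1]
    return {"low": max(0.0, 1.0 - 2.0 * score),
            "med": max(0.0, 1.0 - 2.0 * abs(score - 0.5)),
            "high": max(0.0, 2.0 * score - 1.0)}

def fuse_scores(pulse, geometry, vein):
    # Toy rule base (hypothetical): any confidently "high" modality supports
    # acceptance, but a "low" pulse-response (liveness) score gates the result.
    m = [fuzzify(s) for s in (pulse, geometry, vein)]
    accept = max(mod["high"] for mod in m)  # rule 1: strongest modality support
    reject = m[0]["low"]                    # rule 2: liveness veto strength
    return min(accept, 1.0 - reject)        # liveness gates the match confidence

live = fuse_scores(0.9, 0.8, 0.85)   # genuine live hand
spoof = fuse_scores(0.1, 0.9, 0.9)   # artificial hand: good match, no pulse
```

Even though the spoof presents strong geometry and vein matches, the absent pulse response caps its fused confidence, mirroring the anti-spoofing role the abstract assigns to the pulse-response modality.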