1
Fan D, Liang X, Zhang C, Chen J, Wu B, Jia W, Zhang D. Smart touchless palm sensing via palm adjustment and dynamic registration. Nat Commun 2025; 16:2912. PMID: 40133292; PMCID: PMC11937390; DOI: 10.1038/s41467-025-58213-7. Received 04/17/2024; accepted 03/13/2025. Open access.
Abstract
Touchless palm recognition is increasingly popular for its effectiveness, privacy, and hygiene benefits in biometric systems. However, several challenges remain, including significant performance degradation caused by variations in palm positioning and capture distance. To address these issues, this paper introduces a comprehensive sensing system that integrates dynamic registration with robust palm adjustment. Specifically, we conduct a thorough investigation of distance variations to establish optimal registration settings. In addition, we propose an edge-aware, rotation-invariant region of interest alignment method, which ensures spatial alignment for any given palm across its different samples, even under challenging conditions. By embedding it into a palm registration framework based on video sequences, we improve the system's ability to adapt to varying conditions automatically. Extensive experiments on various datasets demonstrate that the proposed method significantly enhances the performance of touchless palm recognition systems.
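The rotation-invariant ROI alignment above builds on the standard keypoint-based step of deriving both the rotation and the crop size from the finger valleys. A minimal geometric sketch of that step only (the valley names and the `side_ratio` parameter are illustrative assumptions, not the authors' method):

```python
import numpy as np

def palm_roi(valley_index_middle, valley_ring_little, side_ratio=1.2):
    """Sketch of keypoint-based palm ROI placement: given two finger-valley
    points, return the rotation angle that makes the inter-valley axis
    horizontal, plus the centre and side of a square ROI on the palm side."""
    v1 = np.asarray(valley_index_middle, dtype=float)
    v2 = np.asarray(valley_ring_little, dtype=float)
    axis = v2 - v1
    angle = np.arctan2(axis[1], axis[0])        # rotate image by -angle to align
    side = side_ratio * np.linalg.norm(axis)    # ROI size scales with hand size
    normal = np.array([-np.sin(angle), np.cos(angle)])  # unit normal to the axis
    center = (v1 + v2) / 2.0 + normal * side / 2.0      # drop onto the palm
    return angle, center, side
```

Because rotation and crop size are both derived from the same two keypoints, two samples of the same palm land on spatially aligned crops regardless of in-plane rotation or capture distance, which is the property the paper's method enforces under harder conditions.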
Affiliation(s)
- Dandan Fan
- School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, China
- Xu Liang
- School of Software, Northwestern Polytechnical University, Xi'an, China
- Chunsheng Zhang
- School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, China
- Junan Chen
- School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, China
- Baoyuan Wu
- School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, China
- Wei Jia
- School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China
- David Zhang
- School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, China
2
Zhao S, Fei L, Wen J, Zhang B, Zhao P, Li S. Structure suture learning-based robust multiview palmprint recognition. IEEE Trans Neural Netw Learn Syst 2024; 35:8401-8413. PMID: 37015591; DOI: 10.1109/tnnls.2022.3227473.
Abstract
Low-quality palmprint images degrade recognition performance when they are captured under open, unconstrained, and low-illumination conditions. Moreover, traditional single-view palmprint representation methods struggle to strongly express the characteristics of each palm, so the palmprint features they produce are weak. To tackle these issues, this article proposes a structure suture learning-based robust multiview palmprint recognition method (SSL_RMPR), which comprehensively presents salient palmprint features from multiple views. Unlike existing multiview palmprint representation methods, SSL_RMPR introduces a structure suture learning strategy that produces an elastic nearest neighbor graph (ENNG) on the reconstruction errors, simultaneously exploiting the label information and the latent consensus structure of the multiview data, so that the discriminant palmprint representation can be adaptively enhanced. Meanwhile, a low-rank reconstruction term integrated with projection matrix learning is proposed to improve the robustness of the projection matrix. In particular, since no extra structure-capture term is imposed on the proposed model, its complexity is greatly reduced. Experimental results prove the superiority of SSL_RMPR, which achieves the best recognition performance on a number of real-world palmprint databases.
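The elastic nearest neighbor graph in SSL_RMPR is learned jointly with the reconstruction; as a much simpler fixed stand-in that shows what a label-aware neighbor graph looks like (the Gaussian weighting and `k` are illustrative choices, not the paper's formulation):

```python
import numpy as np

def label_aware_knn_graph(X, labels, k=2):
    """Affinity matrix connecting each sample only to its k nearest
    neighbours of the same class (zero weight across classes)."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        # candidates: same-class samples other than i itself
        same = np.flatnonzero((labels == labels[i]) & (np.arange(n) != i))
        nearest = same[np.argsort(dist[i, same])[:k]]
        W[i, nearest] = np.exp(-dist[i, nearest] ** 2)
    return W
```

A graph built this way encodes the label information that supervised graph-based projections exploit; the "elastic" part of ENNG, adapting the graph to reconstruction errors during learning, is what the paper adds on top.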
3
Zhao S, Fei L, Zhang B, Wen J, Zhao P. Tensorized multi-view low-rank approximation based robust hand-print recognition. IEEE Trans Image Process 2024; 33:3328-3340. PMID: 38709602; DOI: 10.1109/tip.2024.3393291.
Abstract
Since hand-print recognition (i.e., palmprint, finger-knuckle-print (FKP), and hand-vein) has significant advantages in user convenience and hygiene, it has attracted great enthusiasm from researchers. To handle the long-standing interference factors in hand-print images (i.e., noise, rotation, and shadow), multi-view hand-print representation has been proposed to enhance feature expression by exploiting multiple characteristics from diverse views. However, existing methods usually ignore the high-order correlations between different views or fuse very limited types of features. To tackle these issues, in this paper we present a novel tensorized multi-view low-rank approximation based robust hand-print recognition method (TMLA_RHR), which can dexterously manipulate multi-view hand-print features to produce a highly compact feature representation. To achieve this goal, we formulate TMLA_RHR with two key components, an aligned structure regression loss and a tensorized low-rank approximation, in a joint learning model. Specifically, we treat the low-rank representation matrices of the different views as a tensor, which is regularized with a low-rank constraint. This models the cross-view information and reduces the redundancy of the learned subspace representations. Experimental results on eight real-world hand-print databases prove the superiority of the proposed method in comparison with other state-of-the-art related works.
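The tensorized low-rank idea can be pictured with a plain mode unfolding plus truncated SVD. A minimal sketch under that simplification (TMLA_RHR optimizes the constraint jointly with its regression loss, which is omitted here; the toy data are arbitrary):

```python
import numpy as np

def mode_unfold(T, mode):
    """Matricize a 3-way tensor along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_svd(M, rank):
    """Best rank-r approximation of a matrix (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Stack per-view representation matrices into one tensor, then suppress
# cross-view redundancy via a low-rank approximation of the view-mode
# unfolding (a crude surrogate for a tensor low-rank constraint).
views = [np.outer(np.arange(4.0), np.arange(5.0)) + v for v in range(3)]
T = np.stack(views)                         # shape (n_views, n, d)
M = mode_unfold(T, 0)
low = truncated_svd(M, rank=2)
```

Because highly correlated views produce nearly linearly dependent slices, the unfolding has low effective rank, and truncation discards exactly the redundant part.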
4
Fei L, Zhao S, Jia W, Zhang B, Wen J, Xu Y. Toward efficient palmprint feature extraction by learning a single-layer convolution network. IEEE Trans Neural Netw Learn Syst 2023; 34:9783-9794. PMID: 35349454; DOI: 10.1109/tnnls.2022.3160597.
Abstract
In this article, we propose a collaborative palmprint-specific binary feature learning method and a compact network, consisting of a single convolution layer, for efficient palmprint feature extraction. Unlike most existing palmprint feature learning methods, such as deep learning, which usually ignore the inherent characteristics of palmprints and learn features from the raw pixels of a massive number of labeled samples, our method characterizes palmprint-specific information, such as the direction and edges of patterns, by forming two kinds of ordinal measure vectors (OMVs). Collaborative binary feature codes are then jointly learned by projecting the double OMVs into complementary feature spaces in an unsupervised manner. Furthermore, the elements of the feature projection functions are integrated into the OMV extraction filters to obtain a collection of cascaded convolution templates that form a single-layer convolution network (SLCN), which efficiently obtains the binary feature codes of a new palmprint image within a single-stage convolution operation. In particular, the proposed method can easily be extended to a general version that efficiently performs feature extraction with more than two types of OMVs. Experimental results on five benchmark databases show that the proposed method achieves very promising feature-extraction efficiency for palmprint recognition.
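The end product of the learned SLCN is a bank of convolution templates whose signed responses give the binary code. A naive sketch of that single-stage encoding, with random filters standing in for the learned templates (the loop-based convolution is for clarity, not speed):

```python
import numpy as np

def binary_codes(img, filters):
    """One convolution pass (valid mode), then binarise responses by sign.
    Each filter contributes one bit plane of the palmprint code."""
    fh, fw = filters[0].shape
    oh, ow = img.shape[0] - fh + 1, img.shape[1] - fw + 1
    planes = []
    for f in filters:
        resp = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                resp[i, j] = np.sum(img[i:i + fh, j:j + fw] * f)
        planes.append((resp > 0).astype(np.uint8))  # sign -> one bit plane
    return np.stack(planes)

rng = np.random.default_rng(0)
codes = binary_codes(rng.standard_normal((16, 16)),
                     [rng.standard_normal((5, 5)) for _ in range(4)])
```

Matching two palmprints then reduces to the Hamming distance between their bit-plane stacks, which is what makes single-stage binary encoding attractive for efficiency.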
5
Zhang J, Jiao L, Ma W, Liu F, Liu X, Li L, Zhu H. RDLNet: a regularized descriptor learning network. IEEE Trans Neural Netw Learn Syst 2023; 34:5669-5681. PMID: 34878982; DOI: 10.1109/tnnls.2021.3130655.
Abstract
Local image descriptor learning has been instrumental in various computer vision tasks. Recent innovations lie in measuring the similarity of descriptor vectors with metric learning over randomly selected Siamese or triplet patches. Descriptor learning focuses more on hard samples, since easy samples contribute little to optimization; however, few studies approach hard image patches from the perspective of loss functions and design appropriate learning algorithms to obtain a more compact descriptor representation. This article proposes a regularized descriptor learning network (RDLNet) that makes the network focus on learning hard samples and compact descriptors with triplet networks. A novel hard-sample mining strategy is designed to select the hardest negative samples in each mini-batch. A batch margin loss concerned with hard samples is then adopted to optimize the distance in extreme cases. Finally, to stabilize the network and prevent collapse, orthogonal regularization is designed to constrain the convolutional kernels and obtain rich deep features. RDLNet provides a compact, discriminative, low-dimensional representation and can easily be embedded in other pipelines. Extensive experimental results on large benchmarks in multiple scenarios, as well as generalization experiments in matching applications, show significant improvements.
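In-batch hardest-negative mining with a margin loss can be written down compactly. A numpy sketch of the mining and loss only (RDLNet mines within CNN embeddings and backpropagates through them, which is omitted here):

```python
import numpy as np

def batch_hardest_triplet_loss(anchors, positives, margin=1.0):
    """anchors[i] and positives[i] are embeddings of the same patch; the
    hardest negative for i is the closest positives[j] with j != i."""
    d_pos = np.linalg.norm(anchors - positives, axis=1)
    d_all = np.linalg.norm(anchors[:, None, :] - positives[None, :, :], axis=2)
    np.fill_diagonal(d_all, np.inf)          # exclude the true positive
    d_neg = d_all.min(axis=1)                # hardest (closest) negative
    return float(np.maximum(0.0, d_pos - d_neg + margin).mean())
```

Easy batches (negatives far beyond the margin) contribute zero loss, so gradients come only from the extreme cases the abstract highlights.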
6
Lv W, Zhang C, Li H, Wang B, Chen C. A robust mixed error coding method based on nonconvex sparse representation. Inf Sci (N Y) 2023. DOI: 10.1016/j.ins.2023.03.129.
7
Xiong Q, Zhang X, Wang X, Qiao N, Shen J. Robust iris-localization algorithm in non-cooperative environments based on the improved YOLO v4 model. Sensors (Basel) 2022; 22:9913. PMID: 36560280; PMCID: PMC9785435; DOI: 10.3390/s22249913. Received 11/30/2022; revised 12/13/2022; accepted 12/13/2022.
Abstract
Iris localization in non-cooperative environments is challenging and essential for accurate iris recognition. Motivated by traditional iris-localization algorithms and the robustness of the YOLO model, we propose a novel iris-localization algorithm. First, we design a novel iris detector with a modified you-only-look-once v4 (YOLO v4) model, which approximates the position of the pupil center. Then, we use a modified integro-differential operator to precisely locate the inner and outer boundaries of the iris. Experimental results show that iris-detection accuracy reaches 99.83% with the modified YOLO v4 model, higher than that of the original YOLO v4 model. Without glasses, the accuracy in locating the inner and outer boundaries of the iris reaches 97.72% at a short distance and 98.32% at a long distance; with glasses, it reaches 93.91% and 84%, respectively, which is much higher than that of the traditional Daugman algorithm. Extensive experiments conducted on multiple datasets demonstrate the effectiveness and robustness of our method for iris localization in non-cooperative environments.
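The second stage pairs the detector's pupil-center estimate with a Daugman-style integro-differential search. A bare-bones version that scans radii for the strongest jump in circular mean intensity (no Gaussian smoothing or center refinement, unlike practical implementations):

```python
import numpy as np

def circular_mean(img, cx, cy, r, n=180):
    """Mean intensity along a circle of radius r centred at (cx, cy)."""
    th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xs = np.clip(np.rint(cx + r * np.cos(th)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.rint(cy + r * np.sin(th)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def boundary_radius(img, cx, cy, r_min, r_max):
    """Radius with the largest jump in circular mean intensity, i.e. the
    discrete analogue of the integro-differential operator's maximum."""
    radii = np.arange(r_min, r_max)
    means = np.array([circular_mean(img, cx, cy, r) for r in radii])
    return int(radii[np.argmax(np.abs(np.diff(means))) + 1])

# Synthetic eye: dark disk of radius 20 on a bright background.
yy, xx = np.mgrid[:100, :100]
img = ((xx - 50) ** 2 + (yy - 50) ** 2 > 20 ** 2).astype(float)
r_found = boundary_radius(img, 50, 50, 10, 30)
```

Giving the operator a good center from the detector is what makes this stage robust: the radial scan is only one-dimensional once the center is known.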
Affiliation(s)
- Qi Xiong
- MOE Key Lab for Intelligent Networks and Network Security, School of Automation Science and Engineering, Faculty of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- International College, Hunan University of Arts and Sciences, Changde 415000, China
- Xinman Zhang
- MOE Key Lab for Intelligent Networks and Network Security, School of Automation Science and Engineering, Faculty of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Xingzhu Wang
- International College, Hunan University of Arts and Sciences, Changde 415000, China
- Naosheng Qiao
- International College, Hunan University of Arts and Sciences, Changde 415000, China
- Jun Shen
- School of Computing and Information Technology, University of Wollongong, Wollongong, NSW 2522, Australia
8
Li M, Wang H, Liu H, Meng Q. Palmprint recognition based on the line feature local tri-directional patterns. IET Biometrics 2022. DOI: 10.1049/bme2.12085. Open access.
Affiliation(s)
- Mengwen Li
- Anhui Engineering Research Center for Intelligent Computing and Application on Cognitive Behavior, Huaibei Normal University, Huaibei, China
- Huabin Wang
- Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, Anhui University, Hefei, China
- Huaiyu Liu
- Anhui Engineering Research Center for Intelligent Computing and Application on Cognitive Behavior, Huaibei Normal University, Huaibei, China
- Qianqian Meng
- Anhui Engineering Research Center for Intelligent Computing and Application on Cognitive Behavior, Huaibei Normal University, Huaibei, China
9
Taouche C, Belhadef H, Laboudi Z. Palmprint recognition system based on multi-block local line directional pattern and feature selection. Int J Inf Technol Syst Approach 2022. DOI: 10.4018/ijitsa.292042.
Abstract
In this paper, we deal with multimodal biometric systems based on palmprint recognition. Several palmprint-based approaches have already been proposed; although they show interesting results, they have limitations in terms of recognition rate, running time, and storage space. To fill this gap, we propose a novel multimodal biometric system combining left and right palmprints. To build this multimodal system, two compact local descriptors for feature extraction are proposed, fusion of the left and right palmprints is performed at the feature level, and feature selection using evolutionary algorithms is introduced. To validate our proposal, we conduct intensive experiments on performance and running time. The obtained results show significant improvements in recognition rate, running time, and storage space. A comparison with other works shows that the proposed system outperforms some approaches from the literature and is comparable with others.
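A multi-block local pattern descriptor follows a standard recipe: code each pixel, tile the code map into blocks, histogram each tile, and concatenate. A generic sketch of the blocking stage only (the paper's specific local line directional coding is not reproduced; block count and code count here are arbitrary):

```python
import numpy as np

def multiblock_histogram(code_map, blocks=4, n_codes=8):
    """Tile a 2-D map of discrete pattern codes into blocks x blocks cells
    and concatenate the per-cell normalised histograms."""
    feats = []
    for rows in np.array_split(np.arange(code_map.shape[0]), blocks):
        for cols in np.array_split(np.arange(code_map.shape[1]), blocks):
            tile = code_map[np.ix_(rows, cols)]
            hist = np.bincount(tile.ravel(), minlength=n_codes)[:n_codes]
            feats.append(hist / tile.size)   # normalise per tile
    return np.concatenate(feats)

rng = np.random.default_rng(1)
feat = multiblock_histogram(rng.integers(0, 8, size=(32, 32)), blocks=4)
```

The blocking preserves coarse spatial layout that a single global histogram would lose; the evolutionary feature selection the paper adds then prunes dimensions of this concatenated vector.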
Affiliation(s)
- Cherif Taouche
- RELA(CS)2 Laboratory, University of Oum El-Bouaghi, Algeria
- Hacene Belhadef
- SD2A Team, LISIA Laboratory, NTIC Faculty, University Abdelhamid Mehri of Constantine 2, Algeria
10
Zhao S, Zhang B. Learning complete and discriminative direction pattern for robust palmprint recognition. IEEE Trans Image Process 2020; 30:1001-1014. PMID: 33270561; DOI: 10.1109/tip.2020.3039895.
Abstract
Palmprint direction patterns have been widely and successfully used in palmprint recognition methods. Most existing direction-based methods utilize pre-defined filters to extract the genuine line responses in the palmprint image, which requires rich prior knowledge and usually ignores vital direction information. In addition, line responses influenced by noise degrade the recognition accuracy. Furthermore, how to extract discriminative features that make palmprints more separable remains a dilemma for improving recognition performance. To solve these problems, we propose to learn complete and discriminative direction patterns in this study. We first extract complete and salient local direction patterns, which contain a complete local direction feature (CLDF) and a salient convolution difference feature (SCDF) extracted from the palmprint image. Afterwards, two learning models are proposed to learn sparse and discriminative directions from the CLDF and to capture the underlying structure of the SCDFs in the training samples, respectively. Lastly, the projected CLDF and the projected SCDF are concatenated to form the complete and discriminative direction feature for palmprint recognition. Experimental results on seven palmprint databases, as well as three noisy datasets, clearly demonstrate the effectiveness of the proposed method.
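Direction-based palmprint coding rests on oriented line filters whose strongest (most negative, since palm lines are dark) response gives a pixel's direction code. A toy version with simple rotated line masks standing in for the Gabor/MFRAT filters such methods usually use:

```python
import numpy as np

def line_kernels(size=9, n_orient=6):
    """Zero-mean oriented line masks: ones along a line through the centre
    at each of n_orient angles, then mean-subtracted to ignore DC."""
    c = size // 2
    kernels = []
    for k in range(n_orient):
        th = k * np.pi / n_orient
        kern = np.zeros((size, size))
        for t in range(-c, c + 1):
            x = int(round(c + t * np.cos(th)))
            y = int(round(c + t * np.sin(th)))
            kern[y, x] = 1.0
        kernels.append(kern - kern.mean())
    return kernels

def direction_code(patch, kernels):
    """Dominant direction at the patch centre: a dark line gives the most
    negative response, so take the argmin over orientations."""
    return int(np.argmin([float(np.sum(patch * k)) for k in kernels]))
```

Keeping only the argmin is what classic competitive coding does; the "complete" feature in this paper retains more of the response vector precisely because the winning index alone discards direction information.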
11
Correntropy-induced discriminative nonnegative sparse coding for robust palmprint recognition. Sensors (Basel) 2020; 20:4250. PMID: 32751620; PMCID: PMC7436014; DOI: 10.3390/s20154250. Received 06/18/2020; revised 07/27/2020; accepted 07/27/2020.
Abstract
Palmprint recognition has been widely studied for security applications. However, there is a lack of in-depth investigation into robust palmprint recognition. Since regression analysis is intuitively interpretable with respect to robustness design, we propose a correntropy-induced discriminative nonnegative sparse coding method for robust palmprint recognition. Specifically, we combine the correntropy metric and the l1-norm to obtain a powerful error estimator that gains flexibility and robustness to various contaminations by cooperatively detecting and correcting errors. Furthermore, we equip the error estimator with a tailored discriminative nonnegative sparse regularizer to extract significant nonnegative features. We derive an analytical optimization approach for this unified scheme and develop a novel, efficient method to address the challenging nonnegativity constraint. Finally, the proposed coding method is extended to robust multispectral palmprint recognition: we develop a constrained particle swarm optimizer to search for feasible parameters for fusing the extracted robust features of different spectra. Extensive experimental results on both contactless and contact-based multispectral palmprint databases verify the flexibility and robustness of our methods.
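The appeal of a correntropy-induced error over squared error is boundedness: gross outliers saturate instead of dominating the fit. A self-contained sketch of the metric and its implicit reweighting interpretation (sigma is a kernel bandwidth that such methods tune; the value here is arbitrary, and the paper's full coding model is not reproduced):

```python
import numpy as np

def correntropy_loss(residual, sigma=1.0):
    """Correntropy-induced metric: behaves like residual**2 / (2 sigma**2)
    for small errors but saturates at 1 for outliers, unlike squared error."""
    return 1.0 - np.exp(-residual ** 2 / (2.0 * sigma ** 2))

def irls_weights(residual, sigma=1.0):
    """Half-quadratic view: each sample's effective least-squares weight
    decays with its residual, automatically downweighting outliers."""
    return np.exp(-residual ** 2 / (2.0 * sigma ** 2))
```

In a coding model, replacing the squared reconstruction error with this metric means a corrupted pixel contributes at most a constant to the objective, which is the "detecting and correcting errors" behavior the abstract describes.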