1. Huang B, Wang Z, Jiang K, Zou Q, Tian X, Lu T, Han Z. Joint Segmentation and Identification Feature Learning for Occlusion Face Recognition. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:10875-10888. PMID: 35560076. DOI: 10.1109/tnnls.2022.3171604.
Abstract
Existing occlusion face recognition algorithms tend to pay more attention to the visible facial components. However, these models are limited because they rely heavily on external face segmentation approaches to locate occlusions, making them extremely sensitive to the quality of the learned masks. To tackle this issue, we propose a joint segmentation and identification feature learning framework for end-to-end occlusion face recognition. In particular, rather than employing an external face segmentation model to locate the occlusion, we design an occlusion prediction module, supervised by known mask labels, that makes the network aware of the mask. It shares the underlying convolutional feature maps with the identification network, and the two can be optimized collaboratively. Furthermore, we propose a novel channel refinement network that casts the predicted single-channel occlusion mask into a multi-channel mask matrix, with each channel owning a distinct mask map. Occlusion-free feature maps are then generated by projecting the multi-channel mask probability maps onto the original feature maps, suppressing the representation of occluded elements in both the spatial and channel dimensions under the guidance of the mask matrix. Moreover, to avoid being misled by aggressively predicted mask maps while still exploiting usable occlusion-robust features, we aggregate the original and occlusion-free feature maps and distill the final candidate embeddings with our proposed feature purification module. Lastly, to alleviate the scarcity of real-world occlusion face recognition datasets, we build large-scale synthetic occlusion face datasets, totaling 980193 face images of 10574 subjects for training and 36721 face images of 6817 subjects for testing.
Extensive experimental results on the synthetic and real-world occlusion face datasets show that our approach significantly outperforms the state-of-the-art in both 1:1 face verification and 1:N face identification.
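The mask-projection step described above can be sketched in a few lines. This is a minimal illustration, not the authors' network: the per-channel refinement weights `w` are random stand-ins for the learned channel refinement network, and the occlusion probability map is hand-set rather than predicted.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W = 4, 6, 6
feats = rng.standard_normal((C, H, W))      # backbone feature maps
occ_prob = np.zeros((H, W))                 # predicted occlusion probability map
occ_prob[3:, :] = 0.9                       # assume the lower half is occluded

# Stand-in for the channel refinement network: per-channel weights cast the
# single-channel occlusion map into a C-channel mask matrix.
w = rng.uniform(0.5, 1.0, size=(C, 1, 1))   # hypothetical learned weights
mask_matrix = 1.0 - w * occ_prob            # per-channel keep probability

# Project the multi-channel masks onto the original feature maps.
occ_free = feats * mask_matrix

# Occluded positions are suppressed; clean positions pass through unchanged.
suppression = np.abs(occ_free[:, 3:, :]).mean() / np.abs(feats[:, 3:, :]).mean()
```

Because each channel owns its own mask map, the suppression acts in both the spatial and the channel dimensions, as the abstract describes.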
2. Zhao C, Qin Y, Zhang B. Adversarially Learning Occlusions by Backpropagation for Face Recognition. Sensors (Basel, Switzerland) 2023; 23:8559. PMID: 37896653. PMCID: PMC10610773. DOI: 10.3390/s23208559.
Abstract
With the advancement of deep neural networks, face recognition methods have achieved great success in research and are now being applied at a human level. However, existing face recognition models fail to achieve state-of-the-art performance on occluded face images, which are common in scenes captured in the real world. One potential reason is the lack of large-scale training datasets, which require labour-intensive and costly labelling of occlusions. To resolve these issues, we propose the Adversarially Learning Occlusions by Backpropagation (ALOB) model, a simple yet powerful double-network framework that mitigates manual labelling by contrastively learning corrupted features against personal identity labels, thereby maximizing the loss. To investigate the performance of the proposed method, we compared our model to existing state-of-the-art methods, which operate under the supervision of occlusion learning, in various experiments. Extensive experimentation on LFW, AR, MFR2, and other synthetic masked or occluded datasets confirmed the effectiveness of the proposed model in occluded face recognition, sustaining better results in both masked face recognition and general face recognition. On the AR datasets, the ALOB model outperformed other advanced methods, obtaining a 100% recognition rate for images with sunglasses (protocols 1 and 2). We also achieved the highest accuracies of 94.87%, 92.05%, 78.93%, and 71.57% TAR@FAR = 1 × 10⁻³ on LFW-OCC-2.0 and LFW-OCC-3.0, respectively. Furthermore, the proposed method generalizes well in terms of FR and MFR, yielding superior results on three datasets, LFW, LFW-Masked, and MFR2, with accuracies of 98.77%, 97.62%, and 93.76%, respectively.
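The adversarial idea, one branch updated by backpropagation to maximize the identity loss that the other branch minimizes, can be caricatured with plain gradient arithmetic. Everything below (the toy regression target, the per-dimension gate, the learning rates) is an assumption for illustration only, not the ALOB architecture:

```python
import numpy as np

# Toy setup: an identity score is regressed from gated features. The
# recognizer weights w descend the loss gradient, while the occlusion gate g
# ascends it (reversed sign), mimicking adversarial learning by backprop.
rng = np.random.default_rng(1)
x = rng.standard_normal(8)          # a feature vector (toy)
w = 0.1 * rng.standard_normal(8)    # recognizer weights
g = np.ones(8)                      # per-dimension occlusion gate
y = 1.0                             # target identity score

lr = 0.02
losses = []
for _ in range(300):
    z = np.dot(w, g * x)            # gated feature -> identity score
    loss = (z - y) ** 2
    losses.append(loss)
    dz = 2.0 * (z - y)
    w -= lr * dz * (g * x)          # descent: recognizer minimizes the loss
    g += 0.2 * lr * dz * (w * x)    # ascent: the gate tries to maximize it
```

The recognizer still wins overall (the loss falls), but the gate continually pressures it toward features that survive the adversarial corruption.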
Affiliation(s)
- Caijie Zhao
- PAMI Research Group, Department of Computer and Information Science, University of Macau, Taipa 999078, Macau SAR, China
- Ying Qin
- PAMI Research Group, Department of Computer and Information Science, University of Macau, Taipa 999078, Macau SAR, China
- Bob Zhang
- PAMI Research Group, Department of Computer and Information Science, University of Macau, Taipa 999078, Macau SAR, China
- Centre for Artificial Intelligence and Robotics, Institute of Collaborative Innovation, University of Macau, Taipa 999078, Macau SAR, China
3. Qiu H, Gong D, Li Z, Liu W, Tao D. End2End Occluded Face Recognition by Masking Corrupted Features. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022; 44:6939-6952. PMID: 34310287. DOI: 10.1109/tpami.2021.3098962.
Abstract
With the recent advancement of deep convolutional neural networks, significant progress has been made in general face recognition. However, state-of-the-art general face recognition models do not generalize well to occluded face images, which are precisely the common cases in real-world scenarios. The potential reasons are the absence of large-scale occluded face data for training and of specific designs for tackling the corrupted features brought by occlusions. This article presents a novel face recognition method that is robust to occlusions, based on a single end-to-end deep neural network. Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features in deep convolutional feature maps and cleans them with dynamically learned masks. In addition, we construct massive occluded face images to train FROM effectively and efficiently. FROM is simple yet powerful compared to existing methods that either rely on external detectors to discover occlusions or employ shallow, less discriminative models. Experimental results on the LFW, MegaFace Challenge 1, RMF2, and AR datasets, as well as other simulated occluded/masked datasets, confirm that FROM dramatically improves accuracy under occlusions and generalizes well to general face recognition.
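The feature-cleaning idea, masking corrupted features with a mask predicted dynamically from the features themselves, can be sketched as below. The 1×1 "decoder" here is a random matrix standing in for FROM's learned mask decoder, so this only illustrates the mechanism, not the trained model:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(2)
C, H, W = 8, 4, 4
feats = rng.standard_normal((C, H, W))      # deep convolutional feature maps

# Hypothetical mask decoder: a 1x1 convolution (here, a channel-mixing matrix)
# predicts, from the features themselves, which positions are corrupted.
decoder = 0.1 * rng.standard_normal((C, C))
scores = np.einsum('oc,chw->ohw', decoder, feats)
mask = sigmoid(scores)                      # soft mask in (0, 1), learned end to end

cleaned = feats * mask                      # corrupted features scaled toward zero
```

Because the mask is produced inside the network, no external occlusion detector is needed, which is the key contrast the abstract draws.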
4. Wei G, Tian Y, Kaneko S, Jiang Z. Robust Template Matching Using Multiple-Layered Absent Color Indexing. Sensors (Basel, Switzerland) 2022; 22:6661. PMID: 36081120. PMCID: PMC9460572. DOI: 10.3390/s22176661.
Abstract
Color is an essential feature in histogram-based matching and can be extracted as statistical data during the comparison process. Although the applicability of color features in histogram-based techniques has been proven, position information is lost during the matching process. We present a conceptually simple and effective method called multiple-layered absent color indexing (ABC-ML) for template matching. Apparent and absent color histograms are obtained from the original color histogram, where the absent colors belong to low-frequency or vacant bins. To determine the color range of the compared images, we propose a total color space (TCS) that determines the operating range of the histogram bins. Furthermore, we invert the absent colors to obtain their properties using a threshold hT, and then compute similarity using histogram intersection. A multiple-layered structure, with each layer constructed using the isotonic principle, is proposed to counter the shift issue of histogram-based approaches. Absent color indexing and the multiple-layered structure are thus combined to solve the precision problem. Our experiments on real-world images and open data demonstrate state-of-the-art results while retaining the histogram's robustness under deformation and scaling.
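The apparent/absent split and the intersection-based similarity can be sketched as follows. The threshold value and the equal weighting of the two parts are assumptions of this toy example; the TCS and the multiple-layered structure are omitted:

```python
import numpy as np

def absent_color_similarity(h1, h2, h_t=0.05):
    """Sketch of absent color indexing: bins at or below threshold h_t are
    'absent'; their counts are inverted (h_t - h) so that colors a region
    does NOT contain also contribute evidence."""
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()

    def split(h):
        apparent = np.where(h > h_t, h, 0.0)
        absent = np.where(h <= h_t, h_t - h, 0.0)   # inverted absent bins
        return apparent, absent

    a1, b1 = split(h1)
    a2, b2 = split(h2)
    # Histogram intersection on both parts, combined equally (an assumption).
    return np.minimum(a1, a2).sum() + np.minimum(b1, b2).sum()

h = np.array([10., 0., 0., 5., 1., 0., 0., 4.])
s_same = absent_color_similarity(h, h)
s_diff = absent_color_similarity(h, h[::-1])
```

A template compared with itself scores higher than against a permuted histogram, because both its apparent and its absent bins must line up.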
Affiliation(s)
- Guodong Wei
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China
- Ying Tian
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo 060-0814, Japan
- Shun’ichi Kaneko
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo 060-0814, Japan
- Zhengang Jiang
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China
5. Lu H, Zhuang Z. ULN: An efficient face recognition method for person wearing a mask. Multimedia Tools and Applications 2022; 81:42393-42411. PMID: 35974893. PMCID: PMC9371960. DOI: 10.1007/s11042-022-13495-7.
Abstract
Although face recognition has advanced by leaps and bounds in recent years, recognizing faces with large occlusions, e.g., masks, is still a challenging problem. In the context of the COVID-19 outbreak, wearing masks became mandatory, which defeats numerous face attendance and surveillance systems. Therefore, a robust face recognition algorithm that can deal with facial masks is urgently needed. To build a mask-robust face recognition algorithm, we first generate numerous facial images with masks based on public face datasets, which substantially alleviates the shortage of training data. Second, we propose a novel network architecture called the Upper-Lower Network (ULN) to recognize masked faces efficiently. The upper branch of ULN, which takes mask-free images as input, is pretrained and provides supervisory information for training the lower branch. Considering that mask occlusions usually cover the lower part of the face, we further divide the high-order semantic features into upper and lower parts. The designed loss function forces the features learned by the lower branch to match those of the upper branch when both receive the same mask-free image, but only the upper part of the features to match when the input is the masked counterpart. Extensive experiments demonstrate that the proposed method is effective for recognizing people wearing masks and outperforms other state-of-the-art face recognition methods.
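The asymmetric supervision described above can be sketched as a distillation loss over split feature vectors. Treating the first half of the embedding as the "upper face" features is an assumption of this toy example, as is the plain squared-error form:

```python
import numpy as np

def uln_distill_loss(f_teacher, f_student, masked):
    """Sketch of the ULN objective: the pretrained upper branch (teacher)
    supervises the lower branch (student). For mask-free inputs all features
    must match; for masked inputs only the upper-face half is matched."""
    upper_t, lower_t = np.split(f_teacher, 2)
    upper_s, lower_s = np.split(f_student, 2)
    loss = np.sum((upper_t - upper_s) ** 2)
    if not masked:
        loss += np.sum((lower_t - lower_s) ** 2)
    return loss

rng = np.random.default_rng(3)
f_t = rng.standard_normal(16)        # teacher features for a mask-free face
f_s_masked = f_t.copy()
f_s_masked[8:] += 1.0                # the mask corrupts the lower-face half

loss_masked = uln_distill_loss(f_t, f_s_masked, masked=True)
loss_full = uln_distill_loss(f_t, f_s_masked, masked=False)
```

With the masked flag set, the corrupted lower half is ignored, so the student is never penalized for what the mask hides.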
Affiliation(s)
- Hongtao Lu
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Zijun Zhuang
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
6. Zhu C, Zhang H, Chen W, Tan M, Liu Q. An Occlusion Compensation Learning Framework for Improving the Rendering Quality of Light Field. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:5738-5752. PMID: 33108291. DOI: 10.1109/tnnls.2020.3027468.
Abstract
Occlusions are common phenomena in light field rendering (LFR) applications. The 3-D spatial structures of some features may be missing or incorrect in captured samples due to occlusion discontinuities. Most prior works on LFR, however, have neglected occlusions from objects in the 3-D scene that do not participate in capturing and rendering the light field. To improve rendering quality, this work proposes an occlusion probability learning framework (OPLF) based on a deep Boltzmann machine (DBM) to compensate for the occluded information. In the OPLF, an occlusion probability density model is applied to calculate visibility scores, which are modeled as hidden variables. The probability of occlusion is related to the visibility, the camera configuration (i.e., position and direction), and the relationship between the occluding and occluded objects. Furthermore, a deep probability model based on the OPLF is used to learn the occlusion relationship between the camera and objects across multiple layers. The proposed OPLF can optimize LFR quality. Finally, to verify the claimed performance, we compare the OPLF with the most advanced occlusion-theory and light field reconstruction algorithms. The experimental results show that the proposed OPLF outperforms other known occlusion quantization schemes.
7. Hariri W. Efficient masked face recognition method during the COVID-19 pandemic. Signal, Image and Video Processing 2021; 16:605-612. PMID: 34804243. PMCID: PMC8591434. DOI: 10.1007/s11760-021-02050-w.
Abstract
The coronavirus disease (COVID-19) is an unparalleled crisis leading to a huge number of casualties and security problems. To reduce the spread of coronavirus, people often wear masks to protect themselves, which makes face recognition very difficult since certain parts of the face are hidden. A primary focus of researchers during the ongoing pandemic has been to propose rapid and efficient solutions to this problem. In this paper, we propose a reliable method based on occlusion removal and deep-learning-based features to address the masked face recognition problem. The first step is to remove the masked face region. Next, we apply three pre-trained deep Convolutional Neural Networks (CNNs), namely VGG-16, AlexNet, and ResNet-50, to extract deep features from the obtained regions (mostly the eyes and forehead). The bag-of-features paradigm is then applied to the feature maps of the last convolutional layer in order to quantize them and obtain a compact representation compared to the fully connected layer of a classical CNN. Finally, a Multilayer Perceptron (MLP) is applied for classification. Experimental results on the Real-World-Masked-Face-Dataset show high recognition performance compared to other state-of-the-art methods.
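The bag-of-features quantization step applied to a last-layer feature map can be sketched as follows. The codebook here is random rather than learned by clustering, and the feature map is a random stand-in for a CNN's output, so only the quantization mechanics are illustrated:

```python
import numpy as np

def bag_of_features(feat_map, codebook):
    """Quantize each spatial position of a conv feature map to its nearest
    codeword and return the normalized codeword histogram (the BoF vector)."""
    C, H, W = feat_map.shape
    vecs = feat_map.reshape(C, H * W).T                  # one C-dim vector per pixel
    d = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(1)                                    # nearest codeword per pixel
    hist = np.bincount(idx, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(4)
feat = rng.standard_normal((32, 7, 7))    # stand-in for a last-conv-layer map
codebook = rng.standard_normal((16, 32))  # hypothetical learned codewords
bof = bag_of_features(feat, codebook)     # compact 16-dim representation
```

The resulting 16-dimensional histogram is what would be fed to the MLP classifier, much smaller than a flattened feature map or a fully connected layer's output.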
Affiliation(s)
- Walid Hariri
- Labged Laboratory, Computer Science Department, Badji Mokhtar Annaba University, Annaba, Algeria
8. Hu B, Zheng Z, Liu P, Yang W, Ren M. Unsupervised Eyeglasses Removal in the Wild. IEEE Transactions on Cybernetics 2021; 51:4373-4385. PMID: 32511098. DOI: 10.1109/tcyb.2020.2995496.
Abstract
Eyeglasses removal is challenging: it must remove different kinds of eyeglasses, e.g., rimless glasses, full-rim glasses, and sunglasses, while recovering plausible eyes. Due to the significant visual variation, conventional methods lack scalability. Most existing works focus on frontal face images in controlled environments, such as the laboratory, and need to design specific systems for different eyeglass types. To address this limitation, we propose a unified eyeglass removal model called the eyeglasses removal generative adversarial network (ERGAN), which can handle different types of glasses in the wild. The proposed method does not depend on dense annotation of the eyeglasses' location but benefits from large-scale face images with weak annotations. Specifically, we study two relevant tasks simultaneously: removing eyeglasses and wearing eyeglasses. Given two face images, one with and one without eyeglasses, the proposed model learns to swap the eye areas of the two faces. The generation mechanism focuses on the eye area and avoids the difficulty of generating an entire new face. In our experiments, the proposed method achieves competitive removal quality in terms of realism and diversity. Furthermore, we evaluate ERGAN on several downstream tasks, such as face verification and facial expression recognition, and show that our method can serve as a preprocessing step for them.
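The eye-area swap at the heart of ERGAN can be illustrated at pixel level as below. Note that the actual model swaps learned eye-area representations through encoders and decoders rather than raw pixels, and the fixed row band used here is an assumed stand-in for a detected eye region:

```python
import numpy as np

def swap_eye_area(face_a, face_b, top=20, bottom=45):
    """Sketch of ERGAN's swap supervision: exchange the eye region of two
    aligned faces (rows top:bottom are an assumed eye band)."""
    out_a, out_b = face_a.copy(), face_b.copy()
    out_a[top:bottom] = face_b[top:bottom]
    out_b[top:bottom] = face_a[top:bottom]
    return out_a, out_b

a = np.zeros((112, 112, 3))          # toy face without eyeglasses
b = np.ones((112, 112, 3))           # toy face with eyeglasses
swapped_a, swapped_b = swap_eye_area(a, b)
```

Training the two tasks jointly (removing and wearing glasses) gives the generator supervision in both directions without any dense eyeglass annotation.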
9. Zeng D, Veldhuis R, Spreeuwers L, Arendsen R. Occlusion-invariant face recognition using simultaneous segmentation. IET Biometrics 2021. DOI: 10.1049/bme2.12036.
Affiliation(s)
- Dan Zeng
- Southern University of Science and Technology, China
- University of Twente, The Netherlands
10. Zeng D, Veldhuis R, Spreeuwers L. A survey of face recognition techniques under occlusion. IET Biometrics 2021. DOI: 10.1049/bme2.12029.
Affiliation(s)
- Dan Zeng
- Faculty of EEMCS, University of Twente, Enschede, The Netherlands
- Southern University of Science and Technology, Shenzhen, China
- Luuk Spreeuwers
- Faculty of EEMCS, University of Twente, Enschede, The Netherlands
11. Zhu C, Zhang H, Liu Q, Zhuang Z, Yu L. A signal-processing framework for occlusion of 3D scene to improve the rendering quality of views. IEEE Transactions on Image Processing 2020; PP:8944-8959. PMID: 32931432. DOI: 10.1109/tip.2020.3020650.
Abstract
Occlusions reduce the performance of systems in many computer vision applications involving 3D scenes with discontinuous surfaces. We explore a signal-processing framework for occlusions, based on light ray visibility, to improve the rendering quality of views. An occlusion field (OCF) theory is derived by relating the occluded light rays to the non-occluded light rays in order to quantify the occlusion degree (OCD). The OCF framework can describe the in-scene information captured under changes in the camera configuration (i.e., position and direction) through a quantitative description of the occlusion information. From a spectral analysis of the OCF, we mathematically derive analytical functions that determine the changing relationship between the scene and the camera configuration. A reconstruction filter can then be designed to achieve interference cancellation and compensate for the missing information caused by occlusions. We measured different occlusions with the OCF framework in both synthetic and real scenes. The experimental results show that the proposed OCF framework improves the rendering quality of views and outperforms other known occlusion quantization schemes in complex scenes.
12. A Novel Approach of Face Recognition Using Optimized Adaptive Illumination–Normalization and KELM. Arabian Journal for Science and Engineering 2020. DOI: 10.1007/s13369-020-04566-8.
13. Chen Z, Gao T, Sheng B, Li P, Chen CLP. Outdoor Shadow Estimating Using Multiclass Geometric Decomposition Based on BLS. IEEE Transactions on Cybernetics 2020; 50:2152-2165. PMID: 30403645. DOI: 10.1109/tcyb.2018.2875983.
Abstract
Illumination is a significant component of an image, and illumination estimation of an outdoor scene from given images remains challenging yet has wide applications. Most traditional illumination estimation methods require prior knowledge or fixed objects within the scene, which often limits them to the scene of a given image. We propose an optimization approach that integrates multiclass cues from the image(s) (a main input image and optional auxiliary input images). First, Sun visibility is estimated by an efficient broad learning system. Then, for scenes with a visible Sun, we classify the information in the image with the proposed classification algorithm, which combines geometric and shadow information to make the most of the available cues, and we apply a dedicated algorithm to each class to estimate the illumination parameters. Finally, our approach integrates all the estimates with a Markov random field. We make full use of the cues in the given image instead of imposing extra requirements on the scene, and the qualitative results show that our approach outperforms other methods under similar conditions.
14. Haghighat M, Mathew R, Naman A, Taubman D. Illumination Estimation and Compensation of Low Frame Rate Video Sequences for Wavelet-Based Video Compression. IEEE Transactions on Image Processing 2019; 28:4313-4327. PMID: 30908217. DOI: 10.1109/tip.2019.2905756.
Abstract
In this paper, we are interested in the compression of image sets or video with considerable changes in illumination. We develop a framework that decomposes frames into illumination fields and texture in order to achieve sparser frame representations, which benefits compression. Illumination variations, or contrast ratio factors among frames, are described by a full-resolution multiplicative field. First, we propose a Lifting-based Illumination Adaptive Transform (LIAT) framework that incorporates illumination compensation into temporal wavelet transforms. We estimate a full-resolution illumination field, accounting for its spatial sparsity with a rate-distortion (R-D) driven framework. An affine mesh model is also developed as a point of comparison. We obtain the operational coding cost of the subband frames by modeling a typical t + 2D wavelet video coding system. While our general findings on R-D optimization apply to a range of coding frameworks, in this paper we report results based on JPEG 2000 coding tools. The experimental results highlight the benefits of the proposed R-D driven illumination estimation and compensation in comparison with alternative scalable coding methods and with the non-scalable AVC and HEVC schemes employing weighted prediction.
15. Gaston J, Ming J, Crookes D. Matching Larger Image Areas for Unconstrained Face Identification. IEEE Transactions on Cybernetics 2019; 49:3191-3202. PMID: 29994697. DOI: 10.1109/tcyb.2018.2846579.
Abstract
Many approaches to unconstrained face identification exploit small patches which are unaffected by distortions outside their locality. A larger area usually contains more discriminative information, but may be unidentifiable due to local appearance changes across its area, given limited training data. We propose a novel block-based approach, as a complement to existing patch-based approaches, to exploit the greater discriminative information in larger areas, while maintaining robustness to limited training data. A testing block contains several neighboring patches, each of a small size. We identify the matching training block by jointly estimating all of the matching patches, as a means of reducing the uncertainty of each small matching patch with the addition of the neighboring patch information, without assuming additional training data. We further propose a multiscale extension in which we carry out block-based matching at several block sizes, to combine complementary information across scales for further robustness. We have conducted face identification experiments using three datasets, the constrained Georgia Tech dataset to validate the new approach, and two unconstrained datasets, LFW and UFI, to evaluate its potential for improving robustness. The results show that the new approach is able to significantly improve over existing patch-based face identification approaches, in the presence of uncontrolled pose, expression, and lighting variations, using small training datasets. It is also shown that the new block-based scheme can be combined with existing approaches to further improve performance.
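The joint matching idea, choosing the training block that minimizes the summed distance over all of its neighboring patches so that each small patch is disambiguated by its neighbors, can be sketched as below. The patch sizes, block layout, and squared-error distance are assumptions of this toy example:

```python
import numpy as np

def best_block(test_patches, train_blocks):
    """Jointly match all patches of a testing block: the winning training
    block minimizes the SUM of patchwise distances, so each small patch
    benefits from its neighbors' evidence."""
    costs = [sum(((tp - rp) ** 2).sum() for tp, rp in zip(test_patches, blk))
             for blk in train_blocks]
    return int(np.argmin(costs))

rng = np.random.default_rng(5)
block = [rng.standard_normal((8, 8)) for _ in range(4)]    # 4 neighboring patches
noisy = [p + 0.1 * rng.standard_normal((8, 8)) for p in block]  # distorted test block
decoys = [[rng.standard_normal((8, 8)) for _ in range(4)] for _ in range(3)]
train_blocks = decoys[:1] + [block] + decoys[1:]           # correct block at index 1
match = best_block(noisy, train_blocks)
```

A single noisy 8×8 patch could easily match a decoy by chance; summing over the four neighbors makes the correct block the clear minimum, which is the paper's motivation for block-level matching.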
16. Guo Y, Jiao L, Wang S, Wang S, Liu F. Fuzzy Sparse Autoencoder Framework for Single Image Per Person Face Recognition. IEEE Transactions on Cybernetics 2018; 48:2402-2415. PMID: 28858822. DOI: 10.1109/tcyb.2017.2739338.
Abstract
The issue of single sample per person (SSPP) face recognition has attracted increasing attention in recent years. Patch/local-based algorithms are among the most popular approaches to the issue, as patch/local features are robust to face image variations. However, patch/local-based algorithms ignore global discriminative information, which is crucial for recognizing the nondiscriminative regions of face images. To leverage both local and global information, a novel two-layer local-to-global feature learning framework is proposed for SSPP face recognition. In the first layer, objective-oriented local features are learned by a patch-based fuzzy rough set feature selection strategy. The obtained local features are not only robust to image variations but also preserve the discriminative ability of the original patches. In the second layer, global structural information is extracted from the local features by a sparse autoencoder, which reduces the negative effect of nondiscriminative regions. Moreover, the proposed framework is a shallow network, which avoids the overfitting caused by using a multilayer network for the SSPP problem. Experimental results show that the proposed local-to-global feature learning framework achieves superior performance over other state-of-the-art feature learning algorithms for SSPP face recognition.
17. Ahmad F, Khan A, Islam IU, Uzair M, Ullah H. Illumination normalization using independent component analysis and filtering. The Imaging Science Journal 2017. DOI: 10.1080/13682199.2017.1338815.
Affiliation(s)
- Fawad Ahmad
- Department of Electronics Engineering, City University of Hong Kong, Kowloon, Hong Kong
- Asif Khan
- Faculty of Computer Science and Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Swabi, Pakistan
- Ihtesham Ul Islam
- Department of Computer Science, Sarhad University of Science & IT, Peshawar, Pakistan
- Muhammad Uzair
- Department of Electrical Engineering, COMSATS Institute of Information Technology – Wah Campus, Wah, Pakistan
- Habib Ullah
- Department of Electrical Engineering, COMSATS Institute of Information Technology – Wah Campus, Wah, Pakistan