1
Zhou Z, Liu Y, Zhu X, Liu S, Zhang S, Li Y. Supervised Contrastive Learning and Intra-Dataset Adversarial Adaptation for Iris Segmentation. Entropy (Basel). 2022;24:1276. PMID: 36141162; PMCID: PMC9497583; DOI: 10.3390/e24091276.
Abstract
Precise iris segmentation is a very important part of accurate iris recognition. Traditional iris segmentation methods require complex prior knowledge and pre- and post-processing, and their accuracy is limited under non-ideal conditions. Deep learning approaches outperform traditional methods, but because iris images are difficult to collect and label, the small size of labeled datasets drastically degrades their performance. Furthermore, previous approaches ignore the large distribution gap within a non-ideal iris dataset caused by illumination, motion blur, squinting eyes, etc. To address these issues, we propose a three-stage training strategy. First, supervised contrastive pretraining increases intra-class compactness and inter-class separability, yielding a good pixel classifier from a limited amount of data. Second, the entire network is fine-tuned using cross-entropy loss. Third, an intra-dataset adversarial adaptation reduces the intra-dataset gap in the non-ideal situation by aligning the distributions of hard and easy samples at the pixel-class level. Our experiments show that our method improves segmentation performance, achieving encouraging results: Nice1 errors of 0.44%, 1.03%, 0.66%, 0.41%, and 0.37% and F1 scores of 96.66%, 98.72%, 93.21%, 94.28%, and 97.41% on UBIRIS.V2, IITD, MICHE-I, CASIA-D, and CASIA-T, respectively.
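The listing does not reproduce the paper's pixel-level loss; the following is a minimal NumPy sketch of the standard supervised contrastive loss (Khosla et al.) that such pretraining builds on. The temperature value and batch layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over L2-normalized embeddings:
    same-class pairs are pulled together (intra-class compactness),
    different classes pushed apart (inter-class separability)."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                  # pairwise cosine similarities
    n = len(labels)
    mask_self = np.eye(n, dtype=bool)
    sim = np.where(mask_self, -np.inf, sim)      # anchors never match themselves
    # row-wise log-softmax over all non-self candidates
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    losses = []
    for i in range(n):
        positives = (labels == labels[i]) & ~mask_self[i]
        if positives.any():                      # average over same-class pairs
            losses.append(-log_prob[i, positives].mean())
    return float(np.mean(losses))

emb = np.random.default_rng(1).normal(size=(8, 16))
lab = np.array([0, 0, 1, 1, 0, 1, 0, 1])
print(supervised_contrastive_loss(emb, lab))
```

Tightly clustered same-class embeddings drive the loss toward zero, which is the property the pretraining stage exploits to obtain a good pixel classifier from little data.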
Affiliation(s)
- Zhiyong Zhou
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Yuanning Liu
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Xiaodong Zhu
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Shuai Liu
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Shaoqiang Zhang
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Yuanfeng Li
- College of Biological and Agricultural Engineering, Jilin University, Changchun 130012, China
2
Zhou W, Chen T, Huang H, Sheng C, Wang Y, Wang Y, Zhang D. An improved low-complexity DenseUnet for high-accuracy iris segmentation network. Journal of Intelligent & Fuzzy Systems. 2022. DOI: 10.3233/jifs-211396.
Abstract
Iris segmentation is one of the most important steps in iris recognition. Current iris segmentation networks are based on convolutional neural networks (CNNs), but problems such as high complexity and insufficient accuracy remain. To solve these problems, this paper proposes an improved low-complexity DenseUnet, based on U-Net, for high-accuracy iris segmentation. The improvements are as follows: (1) a dense block module containing five convolutional layers, all of them dilated convolutions, is designed to enhance feature extraction; (2) except for the last convolutional layer, the number of output feature maps of every convolutional layer is set to 64, which reduces the number of parameters without affecting segmentation accuracy; (3) the proposed solution has low complexity, making deployment on portable mobile devices possible. In the experiments, DenseUnet is evaluated on the IITD, CASIA V4.0, and UBIRIS V2.0 datasets. The results show that the proposed iris segmentation network performs better than existing algorithms.
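Point (2) can be made concrete with a back-of-the-envelope parameter count. The 3×3 kernel size, the 64-channel block input, and the channel-doubling baseline below are illustrative assumptions, not figures from the paper:

```python
def conv_params(in_ch, out_ch, k=3):
    """Parameter count of a k x k convolution, including bias terms."""
    return in_ch * out_ch * k * k + out_ch

def dense_block_params(in_ch, layer_out_channels):
    """Total parameters of a dense block in which each layer consumes the
    concatenation of the block input and all previous layer outputs."""
    total, channels = 0, in_ch
    for out_ch in layer_out_channels:
        total += conv_params(channels, out_ch)
        channels += out_ch  # dense connectivity: outputs are concatenated
    return total

# Five layers all capped at 64 output feature maps ...
capped = dense_block_params(64, [64] * 5)
# ... versus a hypothetical channel-doubling design of the same depth.
doubling = dense_block_params(64, [64, 128, 256, 512, 1024])
print(capped, doubling)  # roughly 0.55M vs 12.6M parameters
```

Capping every layer at 64 output channels keeps the per-layer input width, and hence the parameter count, growing only linearly with depth, which is the source of the claimed complexity reduction.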
Affiliation(s)
- Weibin Zhou
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
- Tao Chen
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
- Huafang Huang
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
- Assets Management Section, Tianjin University of Science and Technology, Tianjin, China
- Chang Sheng
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
- Yangfeng Wang
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
- Yang Wang
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
- Daqiang Zhang
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
3
Tann H, Zhao H, Reda S. A Resource-Efficient Embedded Iris Recognition System Using Fully Convolutional Networks. ACM Journal on Emerging Technologies in Computing Systems. 2020;16:1-23. DOI: 10.1145/3357796.
Abstract
Applications of fully convolutional networks (FCN) in iris segmentation have shown promising advances. For mobile and embedded systems, a significant challenge is that the proposed FCN architectures are extremely computationally demanding. In this article, we propose a resource-efficient, end-to-end iris recognition flow, which consists of FCN-based segmentation and a contour fitting module, followed by Daugman normalization and encoding. To attain accurate and efficient FCN models, we propose a three-step SW/HW co-design methodology consisting of FCN architectural exploration, precision quantization, and hardware acceleration. In our exploration, we propose multiple FCN models, and in comparison to previous works, our best-performing model requires 50× fewer floating-point operations per inference while achieving a new state-of-the-art segmentation accuracy. Next, we select the most efficient set of models and further reduce their computational complexity through weights and activations quantization using an 8-bit dynamic fixed-point format. Each model is then incorporated into an end-to-end flow for true recognition performance evaluation. A few of our end-to-end pipelines outperform the previous state of the art on two datasets evaluated. Finally, we propose a novel dynamic fixed-point accelerator and fully demonstrate the SW/HW co-design realization of our flow on an embedded FPGA platform. In comparison with the embedded CPU, our hardware acceleration achieves up to 8.3× speedup for the overall pipeline while using less than 15% of the available FPGA resources. We also provide comparisons between the FPGA system and an embedded GPU showing different benefits and drawbacks for the two platforms.
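The article's exact quantization scheme is not reproduced in this listing; the following is a minimal NumPy sketch of the general idea behind 8-bit dynamic fixed point, where one fractional length per tensor is chosen from the observed value range:

```python
import numpy as np

def quantize_dynamic_fixed_point(x, word_bits=8):
    """Quantize a tensor to dynamic fixed point: a shared per-tensor
    fractional length is chosen so the largest magnitude fits in the
    signed word; values are rounded and saturated."""
    max_abs = float(np.abs(x).max())
    # bits left of the binary point needed for the largest magnitude
    int_bits = int(np.floor(np.log2(max_abs))) + 1 if max_abs > 0 else 0
    frac_bits = word_bits - 1 - int_bits   # sign bit + int bits + fraction
    scale = 2.0 ** frac_bits
    q = np.clip(np.round(x * scale),
                -(2 ** (word_bits - 1)), 2 ** (word_bits - 1) - 1)
    return q / scale, frac_bits

weights = np.array([0.5, -0.25, 0.9, -1.0])
wq, frac = quantize_dynamic_fixed_point(weights)
print(frac, wq)  # 6 fractional bits for this range
```

Because the fractional length adapts per tensor (e.g. per layer's weights or activations), small-range tensors keep more fractional precision than a single global fixed-point format would allow, which is what makes 8-bit words viable here.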
4
Larregui JI, Cazzato D, Castro SM. An image processing pipeline to segment iris for unconstrained cow identification system. Open Computer Science. 2019. DOI: 10.1515/comp-2019-0010.
Abstract
One of the most evident costs in cow farming is the identification of the animals. Classic identification processes are labour-intensive, prone to human error, and invasive for the animal. An automated alternative is animal identification based on unique biometric patterns such as iris recognition; in this context, correct segmentation of the region of interest becomes critically important. This work introduces a bovine iris segmentation pipeline that processes images taken in the wild, extracting the iris region. The solution deals with images taken with a regular visible-light camera in real scenarios, where reflections in the iris and camera flash introduce a high level of noise that makes the segmentation procedure challenging. Traditional segmentation techniques for the human iris are not applicable given the nature of the bovine eye; to this end, a dataset of catalogued Aberdeen-Angus images with manually labelled ground truth has been used for the experiments and made publicly available. A unique ID number for each animal in the dataset is provided, making it suitable for recognition tasks. Segmentation results have been validated on our dataset with high reliability: under the most pessimistic metric (intersection over union), a mean score of 0.8957 was obtained.
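Intersection over union, the metric quoted above, can be computed from boolean segmentation masks as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection over union of two segmentation masks.
    Any nonzero pixel counts as foreground."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:          # both masks empty: define IoU as perfect
        return 1.0
    return np.logical_and(pred, gt).sum() / union

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
print(iou(pred, gt))  # 1 overlapping pixel / 2 in the union -> 0.5
```

IoU is "pessimistic" in the sense the abstract uses: it penalizes both false positives and false negatives in a single ratio, so it is never higher than pixel accuracy or F1 on the same masks.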
Affiliation(s)
- Juan I. Larregui
- Departamento de Ciencias e Ingeniería de la Computación, Universidad Nacional del Sur (UNS), Instituto de Ciencias e Ingeniería de la Computación (ICIC UNS - CONICET), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Buenos Aires, Argentina
- Dario Cazzato
- Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, Luxembourg, Luxembourg
- Silvia M. Castro
- Departamento de Ciencias e Ingeniería de la Computación, Universidad Nacional del Sur (UNS), Instituto de Ciencias e Ingeniería de la Computación (ICIC UNS - CONICET), Buenos Aires, Argentina
5
Varkarakis V, Bazrafkan S, Corcoran P. Deep neural network and data augmentation methodology for off-axis iris segmentation in wearable headsets. Neural Netw. 2019;121:101-121. PMID: 31541879; DOI: 10.1016/j.neunet.2019.07.020.
Abstract
A data augmentation methodology is presented and applied to generate a large dataset of off-axis iris regions and train a low-complexity deep neural network. Although of low complexity, the resulting network achieves high accuracy in iris region segmentation for challenging off-axis eye patches. Interestingly, this network is also shown to achieve high performance on regular, frontal segmentation of iris regions, comparing favourably with state-of-the-art techniques of significantly higher complexity. Due to its lower complexity, this network is well suited for deployment in embedded applications such as augmented and mixed reality headsets.
Affiliation(s)
- Viktor Varkarakis
- Department of Electronic Engineering, College of Engineering, National University of Ireland Galway, University Road, Galway, Ireland
- Shabab Bazrafkan
- imec-Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Peter Corcoran
- Department of Electronic Engineering, College of Engineering, National University of Ireland Galway, University Road, Galway, Ireland
6
Hofbauer H, Jalilian E, Uhl A. Exploiting superior CNN-based iris segmentation for better recognition accuracy. Pattern Recognit Lett. 2019. DOI: 10.1016/j.patrec.2018.12.021.
7
Bazrafkan S, Thavalengal S, Corcoran P. An end to end Deep Neural Network for iris segmentation in unconstrained scenarios. Neural Netw. 2018;106:79-95. DOI: 10.1016/j.neunet.2018.06.011.
8
Arsalan M, Naqvi RA, Kim DS, Nguyen PH, Owais M, Park KR. IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors. Sensors. 2018;18(5):1501. PMID: 29748495; PMCID: PMC5981870; DOI: 10.3390/s18051501.
Abstract
Recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Accurate iris recognition is likewise now much needed in unconstrained scenarios. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effects, and off-angle views, and the prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris recognition in visible-light environments makes segmentation challenging because of visible-light noise. Deep learning with convolutional neural networks (CNNs) has brought considerable breakthroughs in various applications. To address iris segmentation in challenging situations with visible-light and near-infrared camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even in inferior-quality images by using better information gradient flow between the dense blocks. In the experiments, five datasets from visible-light and NIR environments were used. For visible light, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used; for NIR, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed optimal segmentation by the proposed IrisDenseNet and its excellent performance over existing algorithms on all five datasets.
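Dense connectivity, the mechanism credited above with better information gradient flow, can be sketched as follows. The toy sizes and the 1×1 convolutions (plain channel-mixing matrices) are illustrative stand-ins for the network's actual layers:

```python
import numpy as np

def dense_block_forward(x, weights):
    """Forward pass of a dense block: layer i consumes the channel-wise
    concatenation of the block input and every previous layer's output,
    giving each layer a direct path to all earlier features.
    Each 'layer' here is a 1x1 convolution (a matrix applied along the
    channel axis) followed by ReLU."""
    features = [x]                        # x: (channels, height, width)
    for w in weights:                     # w: (out_ch, in_ch_so_far)
        inp = np.concatenate(features, axis=0)
        out = np.maximum(0.0, np.einsum('oc,chw->ohw', w, inp))
        features.append(out)
    return np.concatenate(features, axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4, 4))            # toy input feature map
growth = 8                                # channels added per layer
weights = [rng.normal(size=(growth, 8 + i * growth)) for i in range(3)]
y = dense_block_forward(x, weights)
print(y.shape)                            # (8 + 3 * growth, 4, 4)
```

The concatenations mean gradients reach every layer both through the chain of convolutions and through the direct identity paths, which is why dense blocks train well even on inferior-quality inputs.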
Affiliation(s)
- Muhammad Arsalan
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea
- Rizwan Ali Naqvi
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea
- Dong Seop Kim
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea
- Phong Ha Nguyen
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea
- Muhammad Owais
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea