1. The eye in forensic practice: In the living. Med Leg J 2024:258172241228812. PMID: 38619162; DOI: 10.1177/00258172241228812.
Abstract
Eye examination plays an important role when living individuals are forensically investigated. The iris colour, retinal scans and other biometric features may be used for identification purposes while visual impairments may have legal implications in employment, driving and accidents. Ocular manifestations provide clues regarding substance abuse, poisoning and toxicity, and evidence of trauma, abuse or disease can be revealed along with psychological traits and lifestyle. Thus, the eye is a valuable tool in forensic investigations of living subjects, providing identifying characteristics along with health information. This review focuses on the medico-legal aspects of the eye's contribution when the living are subjected to forensic examination.
2. Synthetic Iris Images: A Comparative Analysis between Cartesian and Polar Representation. Sensors (Basel) 2024; 24:2269. PMID: 38610479; PMCID: PMC11014044; DOI: 10.3390/s24072269.
Abstract
In recent years, the advancement of generative techniques, particularly generative adversarial networks (GANs), has opened new possibilities for generating synthetic biometric data across modalities, including, among others, images of irises, fingerprints, or faces in different representations. This study presents the process of generating synthetic images of human irises using the recent StyleGAN3 model. The novelty of this work consists in producing generated content in both Cartesian and polar coordinate representations, the latter typically used in iris recognition pipelines such as the foundational one proposed by John Daugman, but hitherto not used in generative AI experiments. The main objective of this study was to conduct a qualitative analysis of the synthetic samples and to evaluate their iris texture density and suitability for meaningful feature extraction. In total, 1327 unique irises were generated, and experiments carried out with the well-known open-source OSIRIS iris recognition software and with the equivalent Worldcoin open-iris software, newly published at the end of 2023, verified that (1) no "identity leak" from the training set was observed, and (2) the generated irises carried enough unique textural information to be successfully differentiated both from each other and from real, authentic iris samples. The results of our research demonstrate the promising potential of synthetic iris data generation as a valuable tool for augmenting training datasets and improving the overall performance of iris recognition systems. By exploring the synthetic data in both Cartesian and polar representations, we aim to understand the benefits and limitations of each approach and their implications for biometric applications. The findings suggest that synthetic iris data can significantly contribute to the advancement of iris recognition technology, enhancing its accuracy and robustness in real-world scenarios by greatly expanding the possibilities for gathering large and diversified training datasets.
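The Cartesian-to-polar conversion this entry builds on can be sketched in a few lines. Below is an illustrative Python version of Daugman-style rubber-sheet unwrapping, under simplifying assumptions of our own (circular, concentric pupil and iris boundaries; nearest-neighbour sampling; function name and grid sizes are not the authors'):

```python
import math

def rubber_sheet(image, cx, cy, r_pupil, r_iris, n_radial=8, n_angular=16):
    """Daugman-style rubber-sheet unwrapping (simplified sketch).

    Maps the annulus between the pupil and iris boundaries (assumed
    circular and concentric here) onto an n_radial x n_angular polar
    grid using nearest-neighbour sampling.
    """
    h, w = len(image), len(image[0])
    polar = [[0] * n_angular for _ in range(n_radial)]
    for i in range(n_radial):
        # r sweeps linearly from the pupil boundary to the iris boundary
        r = r_pupil + (r_iris - r_pupil) * i / (n_radial - 1)
        for j in range(n_angular):
            theta = 2 * math.pi * j / n_angular
            x = min(max(int(round(cx + r * math.cos(theta))), 0), w - 1)
            y = min(max(int(round(cy + r * math.sin(theta))), 0), h - 1)
            polar[i][j] = image[y][x]
    return polar
```

The polar grid is what classic iris-code pipelines filter and binarize; generating images directly in this representation, as the paper does, skips this unwrapping step.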
3. Effect of pupil dilation on biometric iris recognition systems for personal authentication. Indian J Ophthalmol 2023; 71:57-61. PMID: 36588207; PMCID: PMC10155559; DOI: 10.4103/ijo.ijo_1417_22.
Abstract
Purpose: To study the effect of pupil dilation on a biometric iris recognition (BIR) system for personal authentication and identification. Methods: A prospective, non-randomized, single-center cohort study was conducted on patients who reported for a routine eye check-up from November 2017 to November 2019 (2 years). An iris scanning device, "IRITECH-MK2120U", was used to initially enroll the undilated eyes. Baseline scans were taken after matching with the enrolled database. All eyes were then topically dilated and matched again with the enrolled database. The Hamming distance (a measure of disagreement between two iris codes) and recognition status were recorded from the device output, and eyes were evaluated by slit-lamp ophthalmoscopy with special emphasis on pupil shape, size, and texture. Results: All 321 enrolled eyes matched after topical dilation. Pupil size had a significant effect on the Hamming distance (P < 0.05). There were no false matches, and a correct recognition rate of 100% was obtained after dilation. No loss of iris texture or change in pupil shape was observed after dilation. Conclusion: A BIR system is a reliable method for identification and personal authentication after pupil dilation; topically dilated pupils are not a cause for non-recognition of iris scans.
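The fractional Hamming distance the device reports is, in Daugman-style systems, the disagreement rate between two binary iris codes over bits that both occlusion masks mark as valid. The device's internal computation is not published here; this is a minimal illustrative sketch with assumed names and masking scheme:

```python
def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes.

    Bits are compared only where both masks mark the bit as valid
    (e.g. not occluded by eyelids or specular reflections).
    """
    valid = [ma and mb for ma, mb in zip(mask_a, mask_b)]
    n_valid = sum(valid)
    if n_valid == 0:
        raise ValueError("no overlapping valid bits")
    disagreements = sum(
        1 for a, b, v in zip(code_a, code_b, valid) if v and a != b
    )
    return disagreements / n_valid
```

Identical codes give 0.0 and statistically independent codes cluster near 0.5, which is why a dilated pupil only causes a non-match if it pushes the distance past the system's decision threshold.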
4. Robust Iris-Localization Algorithm in Non-Cooperative Environments Based on the Improved YOLO v4 Model. Sensors (Basel) 2022; 22:9913. PMID: 36560280; PMCID: PMC9785435; DOI: 10.3390/s22249913.
Abstract
Iris localization in non-cooperative environments is challenging yet essential for accurate iris recognition. Motivated by traditional iris-localization algorithms and the robustness of the YOLO model, we propose a novel iris-localization algorithm. First, we design a novel iris detector based on a modified you-only-look-once v4 (YOLO v4) model, with which we can approximate the position of the pupil center. Then, we use a modified integro-differential operator to precisely locate the inner and outer iris boundaries. Experimental results show that iris-detection accuracy reaches 99.83% with the modified YOLO v4 model, higher than that of a standard YOLO v4 model. The accuracy in locating the inner and outer boundaries of the iris without glasses reaches 97.72% at a short distance and 98.32% at a long distance; with glasses, the accuracy is 93.91% and 84%, respectively, still much higher than that of the traditional Daugman algorithm. Extensive experiments conducted on multiple datasets demonstrate the effectiveness and robustness of our method for iris localization in non-cooperative environments.
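The integro-differential operator used here for precise boundary localization can be illustrated in simplified form: find the radius at which the mean intensity along a circle changes most sharply. This sketch fixes the center, uses nearest-neighbour sampling, and omits the Gaussian smoothing of the radial derivative that Daugman's full operator applies; it is our own simplification, not the paper's implementation:

```python
import math

def circle_mean(image, cx, cy, r, n_samples=64):
    """Mean intensity along a circle (nearest-neighbour sampling)."""
    h, w = len(image), len(image[0])
    total = 0.0
    for k in range(n_samples):
        t = 2 * math.pi * k / n_samples
        x = min(max(int(round(cx + r * math.cos(t))), 0), w - 1)
        y = min(max(int(round(cy + r * math.sin(t))), 0), h - 1)
        total += image[y][x]
    return total / n_samples

def best_radius(image, cx, cy, r_min, r_max):
    """Radius with the largest jump in circular mean intensity: a
    discrete form of Daugman's integro-differential operator (the
    blur of the radial derivative is omitted for brevity)."""
    best_r, best_jump = r_min, -1.0
    prev = circle_mean(image, cx, cy, r_min)
    for r in range(r_min + 1, r_max + 1):
        cur = circle_mean(image, cx, cy, r)
        jump = abs(cur - prev)
        if jump > best_jump:
            best_jump, best_r = jump, r
        prev = cur
    return best_r
```

In the paper's pipeline, the YOLO detector supplies the approximate center so this radial search only has to run over a small neighbourhood.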
5. Iris Recognition Method Based on Parallel Iris Localization Algorithm and Deep Learning Iris Verification. Sensors (Basel) 2022; 22:7723. PMID: 36298074; PMCID: PMC9611168; DOI: 10.3390/s22207723.
Abstract
Biometric recognition technology has been widely used in many fields of society, and iris recognition, as a stable and convenient biometric modality, is widely used in security applications. However, iris images collected in real, non-cooperative environments contain various kinds of noise. Although mainstream deep-learning-based iris recognition methods achieve good recognition accuracy, they tend to do so at the cost of increased model complexity. Moreover, what an actual optical system collects is the original, non-normalized iris image, and mainstream deep-learning-based schemes do not consider the iris localization stage. To solve these problems, this paper proposes an effective iris recognition scheme consisting of an iris localization stage and an iris verification stage. For localization, we use a parallel Hough circle transform to extract the inner circle of the iris and the Daugman algorithm to extract the outer circle; for verification, we develop a new lightweight convolutional neural network whose architecture combines a deep residual network module with a residual pooling layer introduced to effectively improve verification accuracy. Iris localization experiments were conducted on 400 iris images collected in a non-cooperative environment. Compared with its processing time on a central processing unit, the parallel localization algorithm ran 26, 32, 36, and 21 times faster on a graphics processing unit across the 4 iris datasets, respectively, while achieving effective localization accuracy. Furthermore, we chose four representative iris datasets collected in non-cooperative environments for the iris verification experiments. The experimental results demonstrate that the network achieves high-precision iris verification with fewer parameters, with equal error rates of 1.08%, 1.01%, 1.71%, and 1.11% on the 4 test databases, respectively.
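The Hough-circle voting used for the inner (pupil) boundary can be sketched for a single known radius: every edge pixel votes for each centre that would place it on such a circle, and the accumulator peak wins. This serial, single-radius version is illustrative only; the paper's contribution is precisely a parallelized variant of this voting, which is not reproduced here:

```python
import math
from collections import Counter

def hough_circle_center(edge_points, radius, n_angles=36):
    """Circular Hough transform for one known radius (sketch).

    Each edge point votes for all centre positions that would place
    it on a circle of the given radius; the accumulator peak is the
    most likely centre.
    """
    votes = Counter()
    for (x, y) in edge_points:
        for k in range(n_angles):
            t = 2 * math.pi * k / n_angles
            cx = int(round(x - radius * math.cos(t)))
            cy = int(round(y - radius * math.sin(t)))
            votes[(cx, cy)] += 1
    return votes.most_common(1)[0][0]
```

In practice the radius is unknown, so the accumulator gains a third dimension over a radius range, which is what makes parallelization on a GPU attractive.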
6. Iris Image Compression Using Deep Convolutional Neural Networks. Sensors (Basel) 2022; 22:2698. PMID: 35408311; PMCID: PMC9002923; DOI: 10.3390/s22072698.
Abstract
Compression encodes digital data so that it occupies less storage and requires less network bandwidth to transmit, which is now an imperative need for iris recognition systems because of the large amounts of data involved. Deep neural networks trained as image auto-encoders have recently emerged as a promising direction for advancing the state of the art in image compression, yet the ability of these schemes to preserve unique biometric traits has been questioned when they are used within the corresponding recognition systems. For the first time, we thoroughly investigate the compression effectiveness of DSSLIC, a deep-learning-based image compression model particularly well suited to iris data, along with an additional deep-learning-based lossy image compression technique. In particular, we relate full-reference image quality, measured by the Multi-Scale Structural Similarity Index (MS-SSIM) and Local Feature Based Visual Security (LFBVS), as well as no-reference image quality, measured by the Blind Referenceless Image Spatial Quality Evaluator (BRISQUE), to the recognition scores obtained with a set of concrete recognition systems. We further compare the DSSLIC model against several state-of-the-art (non-learning-based) lossy image compression techniques, including the ISO standards JPEG and JPEG2000, the H.265-derived BPG, HEVC, VVC, and AV1, to determine the compression algorithm best suited for this purpose. The experimental results show superior compression and promising recognition performance of the model over all other techniques on different iris databases.
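For orientation, the structural-similarity formula at the core of MS-SSIM can be sketched as a single computation over whole flattened images; the real metric applies it locally with a sliding Gaussian window and combines several scales. The constants follow the common choice for 8-bit data; this is an illustrative sketch, not the evaluation code used in the study:

```python
def global_ssim(img_a, img_b, L=255):
    """Single-window SSIM over two flattened images (sketch).

    Real MS-SSIM evaluates this locally with a sliding Gaussian
    window and combines several scales; the core formula is the same.
    """
    n = len(img_a)
    mu_a = sum(img_a) / n
    mu_b = sum(img_b) / n
    var_a = sum((p - mu_a) ** 2 for p in img_a) / n
    var_b = sum((p - mu_b) ** 2 for p in img_b) / n
    cov = sum((a - mu_a) * (b - mu_b) for a, b in zip(img_a, img_b)) / n
    # Standard stabilizing constants for dynamic range L
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    )
```

A value of 1.0 means the compressed image is structurally identical to the original; the paper's point is that high SSIM-type scores do not automatically guarantee preserved recognition scores.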
7. Image Acquisition Device for Smart-City Access Control Applications Based on Iris Recognition. Sensors (Basel) 2021; 21:6185. PMID: 34577390; PMCID: PMC8468749; DOI: 10.3390/s21186185.
Abstract
In this work, we present an eye-image acquisition device that can serve as the image acquisition front end in compact, low-cost, easy-to-integrate products for smart-city access control applications based on iris recognition. We discuss the advantages and disadvantages of iris recognition compared with fingerprint and face recognition, outline the main drawbacks of existing commercial solutions, and propose a concept device design for door-mounted access control systems based on iris recognition technology. Our eye-image acquisition device was built around a low-cost camera module. Integrated infrared distance measurement was used for active image focusing, and FPGA image processing was used for raw-RGB to grayscale demosaicing and passive image focusing. The integrated visible-light illumination meets the IEC 62471 photobiological safety standard. We present the operation of the distance-measurement and image-focusing subsystems, examples of images of an artificial toy eye acquired under different illumination conditions, and the calculation of illumination exposure hazards. We managed to acquire a sharp image of an artificial toy eye 22 mm in diameter from an approximate distance of 10 cm, with 400 pixels across the iris diameter, an average acquisition time of 1 s, and illumination below hazardous exposure levels.
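Passive image focusing of the kind described typically maximizes a sharpness score over focus positions; a common choice is the variance of a Laplacian filter response. The paper's FPGA implementation details are not given here, so the following is a generic software sketch of that idea:

```python
def laplacian_variance(image):
    """Sharpness score for passive focusing (sketch).

    Applies a 4-neighbour Laplacian at every interior pixel and
    returns the variance of the responses; sharp, well-focused images
    give high values and defocused ones give low values.
    """
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (image[y - 1][x] + image[y + 1][x] + image[y][x - 1]
                   + image[y][x + 1] - 4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```

A focusing loop would evaluate this score at several lens positions and keep the position with the maximum value.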
8. Use of Iris Scanning for Biometric Recognition of Healthy Adults Participating in an Ebola Vaccine Trial in the Democratic Republic of the Congo: Mixed Methods Study. J Med Internet Res 2021; 23:e28573. PMID: 34378545; PMCID: PMC8386356; DOI: 10.2196/28573.
Abstract
Background: A partnership between the University of Antwerp and the University of Kinshasa implemented the EBOVAC3 clinical trial with an Ebola vaccine regimen administered to health care provider participants in Tshuapa Province, Democratic Republic of the Congo. This randomized controlled trial was part of an Ebola outbreak preparedness initiative financed through the Innovative Medicines Initiative-European Union. The trial used iris scan technology to identify all health care provider participants enrolled in the vaccine trial, to ensure that the right participant received the right vaccine at the right visit. Objective: We aimed to assess the acceptability, accuracy, and feasibility of iris scan technology as an identification method within a population of health care provider participants in a vaccine trial in a remote setting. Methods: We used a mixed methods study. Acceptability was assessed prior to the trial through 12 focus group discussions (FGDs) and again at enrollment. Feasibility and accuracy were studied using a longitudinal trial design in which iris scanning was compared with a unique study ID card for identifying health care provider participants at enrollment and at follow-up visits. Results: During the FGDs, participants were mainly concerned that the iris scan technology might cause physical problems to their eyes or expose them to spiritual problems through sorcery. Nevertheless, 99% (85/86; 95% CI 97.1-100.0) of participants in the FGDs and 99.0% (692/699; 95% CI 98.2-99.7) of participants at enrollment agreed to be identified by iris scan. Iris scan technology correctly identified 93.1% (636/683; 95% CI 91.2-95.0) of the participants returning for scheduled follow-up visits. The iris scanning operation lasted 2 minutes or less for 96.0% (656/683; 95% CI 94.6-97.5) of participants, and 1 attempt was enough to identify the majority of study participants (475/683, 69.5%; 95% CI 66.1-73.0). Conclusions: Iris scans are highly acceptable as an identification tool for health care provider participants in a clinical trial in a remote setting, and their operationalization during the trial demonstrated a level of accuracy that can reliably identify individuals. Iris scanning is feasible in clinical trials but requires a trained operator to reduce the duration and the number of attempts needed to identify a participant. Trial Registration: ClinicalTrials.gov NCT04186000; https://clinicaltrials.gov/ct2/show/NCT04186000
9. An Efficient and Accurate Iris Recognition Algorithm Based on a Novel Condensed 2-ch Deep Convolutional Neural Network. Sensors (Basel) 2021; 21:3721. PMID: 34071850; PMCID: PMC8197830; DOI: 10.3390/s21113721.
Abstract
Recently, deep learning approaches, especially convolutional neural networks (CNNs), have attracted extensive attention in iris recognition. Though CNN-based approaches realize automatic feature extraction and achieve outstanding performance, they usually require more training samples and higher computational complexity than classic methods. This work focuses on training a novel condensed 2-channel (2-ch) CNN with few training samples for efficient and accurate iris identification and verification. A multi-branch CNN with three well-designed online augmentation schemes and radial attention layers is first proposed as a high-performance basic iris classifier. Then, both branch pruning and channel pruning are achieved by analyzing the weight distribution of the model. Finally, fast fine-tuning is optionally applied, which can significantly improve the performance of the pruned CNN while alleviating the computational burden. In addition, we further investigate the encoding ability of the 2-ch CNN and propose an efficient iris recognition scheme suitable for large-database application scenarios. Gradient-based analysis indicates that the proposed algorithm is robust to various image contaminations. We comprehensively evaluated our algorithm on three publicly available iris databases, and the results proved satisfactory for real-time iris recognition.
10. ScatT-LOOP: scattering tetrolet-LOOP descriptor and optimized NN for iris recognition at-a-distance. Biomed Tech (Berl) 2021; 66:167-180. PMID: 33606929; DOI: 10.1515/bmt-2019-0241.
Abstract
Iris recognition at-a-distance (IAAD) is a major challenge for researchers because of the defects associated with visual imaging and the poor image quality in dynamic environments, which degrade recognition accuracy. To enable effective IAAD, this paper proposes a new method: a neural network (NN) enabled by Chronological Monarch Butterfly Optimization (Chronological MBO). The NN used for iris recognition is trained with the proposed Chronological MBO, developed by incorporating chronological theory into Monarch Butterfly Optimization (MBO). Recognition is made effective by automatic segmentation and normalization of the iris image based on the Hough transform (HT) and Daugman's rubber sheet model, followed by feature extraction with the developed ScatT-LOOP descriptor, an integration of the scattering transform (ST), the Local Optimal Oriented Pattern (LOOP) descriptor, and the Tetrolet transform (TT). The ScatT-LOOP descriptor extracts both the texture and the orientation details of the image for effective recognition. The method is evaluated on the CASIA Iris dataset with respect to accuracy, False Acceptance Rate (FAR), and False Rejection Rate (FRR), achieving 0.97, 0.005, and 0.005, respectively. The experimental results show that the proposed method is more effective than existing iris recognition methods.
11. Robust Iris Segmentation Algorithm in Non-Cooperative Environments Using Interleaved Residual U-Net. Sensors (Basel) 2021; 21:1434. PMID: 33670827; PMCID: PMC7922029; DOI: 10.3390/s21041434.
Abstract
Iris segmentation plays an important role in the iris recognition system; correct segmentation is the prerequisite for accurate recognition. However, the efficiency and robustness of traditional iris segmentation methods are severely challenged in non-cooperative environments by unfavorable factors such as occlusion, blur, low resolution, off-axis gaze, motion, and specular reflections, all of which seriously reduce segmentation accuracy. In this paper, we present a novel iris segmentation algorithm that localizes the outer and inner boundaries of the iris image. We propose a neural network model called "Interleaved Residual U-Net" (IRUNet) for semantic segmentation and iris mask synthesis. K-means clustering is applied to select a set of saliency points to recover the outer boundary of the iris, while the inner boundary is recovered from another set of saliency points on the inner side of the mask. Experimental results demonstrate that the proposed algorithm achieves mean IoU values of 98.9% and 97.7% for inner and outer boundary estimation, respectively, outperforming existing approaches on the challenging CASIA-Iris-Thousand database.
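The mean IoU figures reported here rest on the basic intersection-over-union computation between a predicted and a ground-truth binary mask, sketched below for flattened masks (illustrative only; the paper's evaluation code is not reproduced):

```python
def iou(pred, truth):
    """Intersection-over-Union between two binary masks (flat lists).

    IoU = |pred AND truth| / |pred OR truth|; segmentation papers
    typically report the mean of this value over all test images.
    """
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    # Two empty masks agree perfectly by convention
    return inter / union if union else 1.0
```

An IoU of 98.9% therefore means the predicted and true regions overlap almost everywhere relative to their combined area.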
12. Deep Learning Approach for Multimodal Biometric Recognition System Based on Fusion of Iris, Face, and Finger Vein Traits. Sensors (Basel) 2020; 20:5523. PMID: 32992524; PMCID: PMC7582987; DOI: 10.3390/s20195523.
Abstract
With the increasing demand for information security and security regulations worldwide, biometric recognition technology has become part of everyday life. Multimodal biometrics has gained interest and popularity due to its ability to overcome a number of significant limitations of unimodal biometric systems. In this paper, a new multimodal biometric human identification system is proposed, based on a deep learning algorithm that recognizes humans using the biometric modalities of iris, face, and finger vein. The system is built on convolutional neural networks (CNNs) that extract features and classify images with a softmax classifier. Three CNN models were combined: one for iris, one for face, and one for finger vein. Each CNN model was built from the well-known pretrained VGG-16 model, trained with the Adam optimizer and categorical cross-entropy loss, with image augmentation and dropout applied to avoid overfitting. To fuse the CNN models, both feature-level and score-level fusion approaches were employed to explore the influence of the fusion approach on recognition performance. The performance of the proposed system was empirically evaluated through several experiments on the SDUMLA-HMT multimodal biometrics dataset. The obtained results demonstrate that using three biometric traits yields better identification results than using two or one, and that our approach comfortably outperforms other state-of-the-art methods, achieving an accuracy of 99.39% with feature-level fusion and 100% with different methods of score-level fusion.
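Score-level fusion of the kind compared in this entry can be as simple as a weighted sum of per-modality match scores. The sketch below is illustrative; the paper's exact fusion rules and weights are assumptions of ours, not taken from the study:

```python
def fuse_scores(scores, weights=None):
    """Weighted-sum score-level fusion (sketch).

    `scores` maps each modality (e.g. iris, face, finger vein) to a
    match score already normalised to [0, 1]; the fused score is
    their weighted average. Unweighted by default.
    """
    if weights is None:
        weights = {m: 1.0 for m in scores}
    total_w = sum(weights[m] for m in scores)
    return sum(weights[m] * s for m, s in scores.items()) / total_w
```

Feature-level fusion, by contrast, concatenates the CNN feature vectors before classification, which is why the two strategies can yield different accuracies on the same data.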
13. Post-mortem Iris Decomposition and its Dynamics in Morgue Conditions. J Forensic Sci 2020; 65:1530-1538. PMID: 32569420; DOI: 10.1111/1556-4029.14488.
Abstract
With increasing interest in employing iris biometrics as a forensic tool for identification by investigative authorities, there is a need for a thorough examination and understanding of the postmortem decomposition processes that take place within the human eyeball, especially the iris. This can prove useful for fast and accurate matching of antemortem with postmortem data acquired at crime scenes or mass casualties, as well as for ensuring correct dispatching of bodies from the incident scene to a mortuary or funeral home. Following these needs of the forensic community, this paper offers an analysis of the coarse effects of eyeball decay from the perspective of automatic iris recognition. We analyze postmortem iris images acquired from a subject over a very long postmortem observation horizon (34 days), in both visible light and near-infrared light (860 nm), the latter being the wavelength used in commercial iris recognition systems. Conclusions and suggestions are provided that may aid forensic examiners in successfully utilizing iris patterns in postmortem identification of deceased subjects. Initial guidelines regarding the imaging process, types of illumination, and resolution are also given, together with expectations regarding iris feature decomposition rates. Visible iris features suitable for human, expert-based matching persist even up to 407 h postmortem, and near-infrared illumination is suggested to better mitigate corneal opacity when imaging cadaver eyes (Post-mortem iris decomposition and its dynamics in morgue conditions. arXiv preprint, 2019).
14. Deep Learning-Based Enhanced Presentation Attack Detection for Iris Recognition by Combining Features from Local and Global Regions Based on NIR Camera Sensor. Sensors (Basel) 2018; 18:2601. PMID: 30096832; PMCID: PMC6111611; DOI: 10.3390/s18082601.
Abstract
Iris recognition systems have been used in high-security applications because of their high recognition rate and the distinctiveness of iris patterns. However, as recent studies report, an iris recognition system can be fooled by artificial iris patterns, reducing its security level. The accuracy of previous presentation attack detection research is limited because it used only features extracted from the global iris region image. To overcome this problem, we propose a new presentation attack detection method for iris recognition that combines features extracted from both local and global iris regions, using convolutional neural networks and support vector machines based on a near-infrared (NIR) light camera sensor. The detection results from each kind of image feature are fused using two methods, feature-level and score-level fusion, to enhance the detection ability of each kind of image feature. Through extensive experiments using two popular public datasets (LivDet-Iris-2017 Warsaw and Notre Dame Contact Lens Detection 2015) and their fusion, we validate the efficiency of our proposed method, achieving smaller detection errors than those reported in previous studies.
15. IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors. Sensors (Basel) 2018; 18:1501. PMID: 29748495; PMCID: PMC5981870; DOI: 10.3390/s18051501.
Abstract
The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices, and accurate iris recognition is now much needed in unconstrained scenarios. Such environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effects, and off-angle views, which prevailing segmentation algorithms cannot cope with. In addition, when near-infrared (NIR) illumination is unavailable, iris segmentation in visible-light environments is made challenging by visible-light noise. Deep learning with convolutional neural networks (CNNs) has brought considerable breakthroughs in various applications. To address iris segmentation in these challenging situations with visible-light and near-infrared-light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet) that can determine the true iris boundary even in inferior-quality images, thanks to better information gradient flow between the dense blocks. In the experiments, five datasets from visible-light and NIR environments were used: for visible light, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets; for NIR, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms on all five datasets.
16. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor. Sensors (Basel) 2018; 18:1315. PMID: 29695113; PMCID: PMC5981581; DOI: 10.3390/s18051315.
Abstract
Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven effective at achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by presentation attack images recaptured from high-quality printed images or by contact lenses with printed iris patterns. This potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near-infrared (NIR) light camera image. To detect presentation attack images, we first localize the iris region of the input image using circular edge detection (CED). Based on the localization result, we extract image features using deep-learning-based and handcrafted methods. The input iris images are then classified into real and presentation attack categories using support vector machines (SVMs). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris presentation attack detection problem with detection accuracy superior to that of previous studies.
|
17
|
A Novel Anti-Spoofing Solution for Iris Recognition Toward Cosmetic Contact Lens Attack Using Spectral ICA Analysis. SENSORS 2018; 18:s18030795. [PMID: 29509692 PMCID: PMC5876747 DOI: 10.3390/s18030795] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/30/2017] [Revised: 02/26/2018] [Accepted: 03/02/2018] [Indexed: 11/29/2022]
Abstract
In this study, we used a dual-band spectral imaging system to capture iris images from subjects wearing cosmetic contact lenses. By using independent component analysis (ICA) to separate the individual spectral primitives, we successfully distinguished the natural iris texture from the cosmetic contact lens (CCL) pattern and restored the genuine iris patterns from the CCL-polluted image. Based on a database containing 200 test image pairs from 20 CCL-wearing subjects as a proof of concept, the false rejection rate (FRR) was improved from 10.52% to 0.57% with the proposed ICA anti-spoofing scheme.
|
18
|
The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications. SENSORS 2018; 18:s18020669. [PMID: 29495273 PMCID: PMC5855011 DOI: 10.3390/s18020669] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/10/2018] [Revised: 02/22/2018] [Accepted: 02/23/2018] [Indexed: 11/17/2022]
Abstract
This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. To recognize the iris image, an image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. In this case, however, the frame rate is reduced by the time required to convert multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. To reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive-OR (XOR) logic gate that obtains single-bit, edge-detected image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² in a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bits (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.
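The XOR edge detection idea is simple to state in software: in a single-bit image, a pixel lies on an edge exactly when it differs (XOR) from a neighbour. The sketch below models that logic in NumPy; it illustrates the principle only, not the sensor's on-chip circuit, and `xor_edges` is a hypothetical name.

```python
import numpy as np

def xor_edges(bits):
    """Edge map of a single-bit image: a pixel is marked as an edge if it
    differs (XOR) from its right or lower neighbour."""
    edges = np.zeros_like(bits)
    edges[:, :-1] |= bits[:, :-1] ^ bits[:, 1:]   # horizontal transitions
    edges[:-1, :] |= bits[:-1, :] ^ bits[1:, :]   # vertical transitions
    return edges
```

Because each comparison is a single XOR on 1-bit data, this produces an edge map without ever materialising multi-bit intensities, which is the source of the sensor's speed and power savings.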
|
19
|
Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics. SENSORS 2016; 16:s16121994. [PMID: 27897976 PMCID: PMC5190975 DOI: 10.3390/s16121994] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/28/2016] [Revised: 11/17/2016] [Accepted: 11/18/2016] [Indexed: 11/16/2022]
Abstract
For many practical applications of image sensors, how to extend the depth of field (DoF) is an important research topic; if successfully implemented, it could benefit various applications, from photography to biometrics. In this work, we examine the feasibility and practicality of a well-known extended-DoF (EDoF) technique, "wavefront coding," by building real-time long-range iris recognition and performing large-scale iris recognition. The keys to successful long-range iris recognition include a long DoF and image-quality invariance across various object distances, requirements strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With a database of 512 iris images from 32 Asian subjects, a 400 mm focal length, and f/6.3 optics over a 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, based on 3328 iris images in total, the EDoF factor achieved a result 3.71 times better than the original system without loss of recognition accuracy.
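To see why extending the DoF matters at these optics, the native thin-lens depth of field for the reported setup (400 mm focal length, f/6.3, 3 m working distance) can be computed directly. The circle-of-confusion value below is an assumed sensor-dependent parameter, not a figure from the paper, and `depth_of_field` is a hypothetical name.

```python
def depth_of_field(f_mm, n_stop, coc_mm, s_mm):
    """Thin-lens depth of field (mm) around subject distance s_mm."""
    h = f_mm * f_mm / (n_stop * coc_mm) + f_mm            # hyperfocal distance
    near = s_mm * (h - f_mm) / (h + s_mm - 2.0 * f_mm)    # near limit of focus
    far = s_mm * (h - f_mm) / (h - s_mm)                  # far limit of focus
    return far - near

# Reported optics; coc_mm = 0.015 mm is an assumed circle of confusion.
dof = depth_of_field(400.0, 6.3, 0.015, 3000.0)
```

Under these assumptions the native DoF is only on the order of 1 cm, so a 3.71× EDoF factor is the difference between an impractical and a usable standoff capture volume.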
|
20
|
VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies. JOURNAL OF RESEARCH OF THE NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY 2013; 118:218-259. [PMID: 26401431 PMCID: PMC4487315 DOI: 10.6028/jres.118.011] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 04/04/2013] [Indexed: 06/05/2023]
Abstract
The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions of illumination, environment, and subject characteristics (e.g., distance, movement, face/body visibility, blinking). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized through rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (near-infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-image dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates in the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate algorithm performance, and it thus serves as a valuable research platform.
|
21
|
Shape adaptive, robust iris feature extraction from noisy iris images. JOURNAL OF MEDICAL SIGNALS AND SENSORS 2013; 3:244-55. [PMID: 24696801 PMCID: PMC3967427] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/19/2013] [Accepted: 08/10/2013] [Indexed: 10/25/2022]
Abstract
In current iris recognition systems, the noise-removal step is used only to detect noisy parts of the iris region; features extracted from those parts are then excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask-code generation, which marks the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that the shape-adaptive Gabor-wavelet technique improves the recognition rate.
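The role of the mask code in matching, excluding noisy bits from the comparison, follows the standard Daugman-style fractional Hamming distance over mutually valid bits. A minimal sketch (the function name and the no-valid-bits fallback are illustrative choices, not from the paper):

```python
import numpy as np

def masked_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance counted only over bits that both
    mask codes mark as valid (1 = valid, 0 = noisy/occluded)."""
    valid = mask_a & mask_b
    n_valid = int(valid.sum())
    if n_valid == 0:
        return 1.0          # nothing comparable: treat as a non-match
    return int(((code_a ^ code_b) & valid).sum()) / n_valid
```

Bits corrupted by eyelashes, eyelids, or reflections are zeroed in the mask, so they neither raise nor lower the distance; a genuine pair compared under an honest mask still scores near 0.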
|