1
Camilo ENR, Junior AP, Pinheiro HM, da Costa RM. A pupillary image dataset: 10,000 annotated and 258,790 non-annotated images of patients with glaucoma, diabetes, and subjects influenced by alcohol, coupled with a segmentation performance evaluation. Comput Biol Med 2025; 186:109594. PMID: 39753022. DOI: 10.1016/j.compbiomed.2024.109594.
Abstract
The Pupillary Light Reflex (PLR) is the involuntary adjustment of the pupil to lighting conditions. Measuring and quantifying this response has broad impact across fields. Thanks to technological advances and improved algorithms, accurate, non-invasive recording of pupillary movements is now possible, expanding practical applications. Visual attention tracking enables solutions for eye-tracking (or eye-gaze) marketing, optimized gaming interactions, driver drowsiness detection and, more recently, diagnostic support. These advances have been driven by algorithms and by the publicly available datasets used to improve them. However, most of these datasets only provide information from healthy individuals. This article introduces and publicly shares a diverse dataset with three distinct subsets: recordings of individuals who underwent supervised alcohol consumption, individuals diagnosed with type II diabetes mellitus, and individuals diagnosed with glaucoma in early, moderate, and severe stages. In addition to the data, to assist researchers studying pupillary behavior, the study evaluates pupillary segmentation and eye-tracking algorithms, highlighting the superior accuracy of YOLOv7 in calculating pupillary diameter compared with classical approaches. By providing this dataset, the research advances pupillometry-based diagnostics, promoting reliability and effectiveness and indicating the most precise methods for pupillary segmentation.
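For readers reproducing the diameter measurements above: once a pupil mask is obtained (from YOLOv7 or any segmenter), a diameter estimate can be derived from the mask area. The equivalent-diameter convention below is an illustrative sketch, not necessarily the authors' exact formula:

```python
import numpy as np

def pupil_diameter_px(mask: np.ndarray) -> float:
    """Equivalent pupil diameter (pixels) from a binary segmentation mask.

    Treats the segmented region as a disk of equal area: d = 2 * sqrt(area / pi).
    """
    area = float(np.count_nonzero(mask))
    return 2.0 * np.sqrt(area / np.pi)

# Synthetic example: a disk of radius 40 px rasterised on a 200x200 grid.
yy, xx = np.mgrid[:200, :200]
mask = (xx - 100) ** 2 + (yy - 100) ** 2 <= 40 ** 2
print(round(pupil_diameter_px(mask), 1))  # close to 80.0
```

Converting pixels to millimetres would additionally require the camera's spatial calibration, which the sketch leaves out.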
Affiliation(s)
- Eduardo Nery Rossi Camilo
- Fundação Banco de Olhos de Goiás, GO, Brazil; Escola Paulista de Medicina, Federal University of Sao Paulo, SP, Brazil
2
Sumi MR, Das P, Hossain A, Dey S, Schuckers S. A Comprehensive Evaluation of Iris Segmentation on Benchmarking Datasets. Sensors (Basel) 2024; 24:7079. PMID: 39517976. PMCID: PMC11548766. DOI: 10.3390/s24217079.
Abstract
Iris is one of the most widely used biometric modalities because of its uniqueness, high matching performance, and inherently secure nature. Iris segmentation is an essential preliminary step for iris-based biometric authentication, and authentication accuracy is directly tied to segmentation accuracy. In recent years, deep-learning-based iris segmentation methodologies have increasingly been adopted because of their ability to handle challenging segmentation tasks and their advantages over traditional techniques. However, the biggest challenge facing the biometric community is the scarcity of open-source resources available for application development and reproducibility. This review provides a comprehensive examination of available open-source iris segmentation resources, including datasets, algorithms, and tools. In the process, we designed three U-Net- and U-Net++-influenced segmentation algorithms as standard benchmarks, trained them on a large composite dataset (>45K samples), and created 1K manually segmented ground-truth masks. Overall, eleven state-of-the-art algorithms were benchmarked against five datasets encompassing multiple sensors, environmental conditions, demographics, and illumination. This assessment highlights the strengths, limitations, and practical implications of each method and identifies gaps that future studies should address to improve segmentation accuracy and robustness. To foster future research, all resources developed during this work will be made publicly available.
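The benchmarking above rests on standard overlap metrics; a minimal sketch of per-image IoU (Jaccard) and F1 (Dice) for binary masks, independent of the paper's code:

```python
import numpy as np

def iou_f1(pred: np.ndarray, gt: np.ndarray):
    """Per-image IoU (Jaccard) and F1 (Dice) between binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0   # empty-vs-empty counts as perfect
    f1 = 2 * inter / total if total else 1.0
    return float(iou), float(f1)

pred = np.zeros((4, 4), bool); pred[:, :2] = True   # predicted: left half
gt   = np.zeros((4, 4), bool); gt[:2, :] = True     # ground truth: top half
print(iou_f1(pred, gt))  # (0.333..., 0.5)
```

Dataset-level scores are then just the mean of these per-image values over the benchmark split.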
Affiliation(s)
- Mst Rumana Sumi
- Department of ECE, Clarkson University, Potsdam, NY 13699, USA
- Priyanka Das
- Department of ECE, Clarkson University, Potsdam, NY 13699, USA
- Afzal Hossain
- Department of ECE, Clarkson University, Potsdam, NY 13699, USA
- Soumyabrata Dey
- Department of Computer Science, Clarkson University, Potsdam, NY 13699, USA
- Stephanie Schuckers
- Department of ECE, Clarkson University, Potsdam, NY 13699, USA
3
Piperigkos N, Gkillas A, Arvanitis G, Nousias S, Lalos A, Fournaris A, Radoglou-Grammatikis P, Sarigiannidis P, Moustakas K. Distributed intelligence in industrial and automotive cyber-physical systems: a review. Front Robot AI 2024; 11:1430740. PMID: 39529821. PMCID: PMC11551047. DOI: 10.3389/frobt.2024.1430740.
Abstract
Cyber-physical systems (CPSs) are evolving from individual systems to collectives of systems that collaborate to achieve highly complex goals, realizing a cyber-physical system of systems (CPSoS) approach. These are heterogeneous systems comprising various autonomous CPSs, each with unique performance capabilities, priorities, and pursued goals. In practice, there are significant challenges to the applicability and usability of CPSoSs that need to be addressed. The decentralization of CPSoSs assigns tasks to individual CPSs within the system of systems. All CPSs should harmoniously pursue system-based achievements and collaborate to make system-of-systems-based decisions and implement the CPSoS functionality. The automotive domain is transitioning to the system-of-systems approach, aiming to provide emergent functionalities such as traffic management, collaborative car fleet management, or large-scale automotive adaptation to the physical environment, thus providing significant environmental benefits and achieving significant societal impact. Similarly, large infrastructure domains are evolving into global, highly integrated cyber-physical systems of systems covering all parts of the value chain. This survey provides a comprehensive review of current best practices in connected cyber-physical systems and investigates a dual-layer architecture comprising perception and behavioral components. The perception layer covers object detection, cooperative scene analysis, cooperative localization and path planning, and human-centric perception. The behavioral layer focuses on human-in-the-loop (HITL)-centric decision making and control, where the output of the perception layer assists the human operator in making decisions while monitoring the operator's state. Finally, an extended overview of digital twin (DT) paradigms is provided so as to simulate, realize, and optimize large-scale CPSoS ecosystems.
Affiliation(s)
- Nikos Piperigkos
- Industrial Systems Institute, Athena Research Center, Patras, Greece
- Gerasimos Arvanitis
- Department of Electrical and Computer Engineering, University of Patras, Patras, Greece
- Stavros Nousias
- Chair of Computational Modeling and Simulation, School of Engineering and Design, Technical University of Munich, Munich, Germany
- Aris Lalos
- Industrial Systems Institute, Athena Research Center, Patras, Greece
- Panagiotis Sarigiannidis
- Department of Electrical and Computer Engineering, University of Western Macedonia, Kozani, Greece
4
Buric M, Grozdanic S, Ivasic-Kos M. Diagnosis of ophthalmologic diseases in canines based on images using neural networks for image segmentation. Heliyon 2024; 10:e38287. PMID: 39397908. PMCID: PMC11467576. DOI: 10.1016/j.heliyon.2024.e38287.
Abstract
The primary challenge in diagnosing ocular diseases in canines from images lies in developing an accurate and reliable machine learning method capable of segmenting and diagnosing these conditions through image analysis. Addressing this challenge, the study develops and rigorously evaluates a machine learning model for diagnosing ocular diseases in canines, employing the U-Net neural network architecture as its foundation. Through this extensive evaluation, the authors identified a model with good reliability, achieving an Intersection over Union (IoU, the Jaccard index) exceeding 80%. The methodology encompassed a systematic exploration of various neural network backbones (VGG, ResNet, Inception, EfficientNet) for the U-Net model, combined with an extensive model selection process and an in-depth analysis of a custom training dataset consisting of historical images of different medical symptoms and diseases in dog eyes. The results indicate a fairly high degree of accuracy in the segmentation and diagnosis of ocular diseases in canines, demonstrating the model's effectiveness in real-world applications. In conclusion, this work potentially makes a significant contribution to the field by utilizing advanced machine-learning techniques to develop image-based diagnostic routines in veterinary ophthalmology. The model's successful development and validation offer a promising new tool for veterinarians and pet owners, enhancing early disease detection and improving health outcomes for canine patients.
Affiliation(s)
- Matija Buric
- Faculty of Informatics and Digital Technologies, University of Rijeka, Centre for Artificial Intelligence, Ul. Radmile Matejcic 2, 51000 Rijeka, Croatia
- Sinisa Grozdanic
- Animal Eye Consultants of Iowa, 698 Boyson Rd A, Hiawatha, IA 52233, USA
- Marina Ivasic-Kos
- Faculty of Informatics and Digital Technologies, University of Rijeka, Centre for Artificial Intelligence, Ul. Radmile Matejcic 2, 51000 Rijeka, Croatia
5
Du M, Zhang J, Zhi Y, Zhang J, Liu R, Zhang G, Wang J. A method for extracting corneal reflection images from multiple eye images. Comput Biol Med 2024; 177:108631. PMID: 38824787. DOI: 10.1016/j.compbiomed.2024.108631.
Abstract
Light reflected from the cornea is rich in information about a person's surroundings; when imaged by a camera, these reflections can support research on human attention and gaze analysis and assist fields such as psychology, human-computer interaction, and disease diagnosis. However, because the cornea reflects only weakly, corneal reflection images captured by a high-definition camera are heavily contaminated by color and texture interference from the iris, which limits their usability and ubiquity. In this paper, we propose a corneal reflection image extraction method that takes multiple eye images as input. We align the iris regions of multiple eye images with the help of an iris localization method and, by comparing the aligned iris regions, obtain complementary iris regions, so that the iris interference in the corneal reflection region can be stripped away completely. Extensive experiments demonstrate that our method effectively mitigates iris interference and improves the quality of corneal reflection images.
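The paper's complementary-region comparison is not specified in the abstract; as a hedged illustration of the general idea (iris texture varies across aligned frames while the reflected scene stays roughly fixed), one simple fusion is a pixel-wise median over iris-aligned frames:

```python
import numpy as np

def fuse_aligned_eyes(frames: np.ndarray) -> np.ndarray:
    """Pixel-wise median over iris-aligned frames (frames: N x H x W).

    Scene reflections stay roughly fixed once the iris regions are aligned,
    while iris texture varies frame to frame, so a robust per-pixel statistic
    attenuates the iris interference. A simplified stand-in for the paper's
    complementary-region comparison, not the authors' algorithm.
    """
    return np.median(frames, axis=0)

# Three 2x2 toy "frames": a constant reflection (value 10) corrupted by
# an iris-texture spike (value 90) in a different pixel each frame.
frames = np.array([[[10, 90], [10, 10]],
                   [[10, 10], [90, 10]],
                   [[90, 10], [10, 10]]], float)
print(fuse_aligned_eyes(frames))  # every pixel recovers 10.0
```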
Affiliation(s)
- Mengqi Du
- College of Computer Science and Technology, Zhejiang University of Technology, Liuhe 288, 310023 Hangzhou, China
- Jiayu Zhang
- Department of Ophthalmology, The Third Affiliated Hospital of Wenzhou Medical University, Ruifeng 168, 325200 Ruian, China
- Yuyi Zhi
- Jianxing Honors College, Zhejiang University of Technology, Liuhe 288, 310023 Hangzhou, China
- Jianhua Zhang
- School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, China
- Ruyu Liu
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
- Guodao Zhang
- Institute of Intelligent Media Computing, Hangzhou Dianzi University, Hangzhou 310018, China
- Jing Wang
- Department of Ophthalmology, Zhongshan Hospital, Fudan University, Shanghai 200032, China
Collapse
|
6
|
Lei S, Shan A, Liu B, Zhao Y, Xiang W. Lightweight and efficient dual-path fusion network for iris segmentation. Sci Rep 2023; 13:14034. PMID: 37640750. PMCID: PMC10462620. DOI: 10.1038/s41598-023-39743-w.
Abstract
To tackle the limitations of current deep-learning iris segmentation methods, such as an enormous number of parameters, intensive computation and excessive storage requirements, a lightweight and efficient iris segmentation network is proposed in this article. Building on the classical semantic segmentation network U-net, the proposed approach designs a dual-path fusion network model that integrates deep semantic information and rich shallow context information at multiple levels. The model uses depth-wise separable convolution for feature extraction and introduces a novel attention mechanism, which strengthens both the extraction of significant features and the segmentation capability of the network. Experiments on four public datasets show that the proposed approach raises MIoU and F1 scores by 15% and 9% on average over traditional methods, respectively, and by 1.5% and 2.5% on average over U-net and other relevant methods. Compared with U-net, the proposed approach reduces computation, parameters and storage by about 80%, 90% and 99%, respectively, with an average run time of 0.02 s. The approach not only performs well, but is also simpler in computation, parameters and storage than existing classical semantic segmentation methods.
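The parameter savings from depth-wise separable convolution cited above follow from simple arithmetic (bias terms ignored; the layer sizes below are illustrative, not taken from the paper):

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weight count of a standard k x k convolution layer."""
    return k * k * c_in * c_out

def dw_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depth-wise (one k x k filter per input channel) plus point-wise (1 x 1)."""
    return k * k * c_in + c_in * c_out

std = conv_params(128, 128, 3)          # 147456 weights
sep = dw_separable_params(128, 128, 3)  # 1152 + 16384 = 17536 weights
print(std, sep, round(1 - sep / std, 2))  # ~88% fewer parameters
```

For 3 x 3 kernels the saving approaches the well-known factor of roughly 1/9 + 1/c_out, which is why ~90% parameter reductions are typical.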
Affiliation(s)
- Songze Lei
- School of Computer Science and Engineering, Xi'an Technological University, Xi'an, Shaanxi, China
- Aokui Shan
- School of Computer Science and Engineering, Xi'an Technological University, Xi'an, Shaanxi, China
- Bo Liu
- School of Computer Science and Engineering, Xi'an Technological University, Xi'an, Shaanxi, China
- Yanxiao Zhao
- School of Information Communication Engineering, Beijing Information Science and Technology University, Beijing, China
- Wei Xiang
- School of Computing Engineering and Mathematical Sciences, La Trobe University, Melbourne, VIC 3086, Australia
- College of Science and Engineering, James Cook University, Cairns, QLD 4878, Australia
7
Dimauro G, Camporeale MG, Dipalma A, Guarini A, Maglietta R. Anaemia detection based on sclera and blood vessel colour estimation. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104489.
8
Li J, Feng X. Double-Center-Based Iris Localization and Segmentation in Cooperative Environment with Visible Illumination. Sensors (Basel) 2023; 23:2238. PMID: 36850834. PMCID: PMC9965830. DOI: 10.3390/s23042238.
Abstract
Iris recognition is considered one of the most accurate and reliable biometric technologies and is widely used in security applications. Iris segmentation and iris localization, as important preprocessing tasks for iris biometrics, jointly determine the valid iris part of the input eye image; however, iris images captured in non-cooperative, visible-illumination environments often suffer from adverse noise (e.g., light reflection, blurring, and glasses occlusion), which challenges many existing segmentation-based parameter-fitting localization methods. To address this problem, we propose a novel double-center-based end-to-end iris localization and segmentation network. Unlike many previous iris localization methods, which rely on heavy post-processing (e.g., integro-differential operators or circular Hough transforms) applied to iris or contour masks to fit the inner and outer circles, our method directly predicts the inner and outer circles of the iris on the feature map. An anchor-free, center-based double-circle iris-localization network and an iris mask segmentation module are designed to directly detect the circular boundaries of the pupil and iris and segment the iris region in an end-to-end framework. To facilitate efficient training, we propose a concentric sampling strategy based on the center distribution of the inner and outer iris circles. Extensive experiments on four challenging iris datasets show that our method achieves excellent iris-localization performance; in particular, it achieves 84.02% box IoU and 89.15% mask IoU on NICE-II. On the three sub-datasets of MICHE, our method achieves 74.06% average box IoU, surpassing existing methods by 4.64%.
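A direct circle prediction like the one described can be scored with the same mask-IoU metric the paper reports, by rasterising the predicted (cx, cy, r) parameters; a small sketch (grid size and circle values are arbitrary, not the paper's evaluation code):

```python
import numpy as np

def circle_mask(cx: float, cy: float, r: float, h: int, w: int) -> np.ndarray:
    """Rasterise a circle given as (centre, radius) into a binary mask."""
    yy, xx = np.mgrid[:h, :w]
    return (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2

def circle_iou(a, b, h: int = 64, w: int = 64) -> float:
    """Mask IoU between two (cx, cy, r) circle predictions."""
    ma, mb = circle_mask(*a, h, w), circle_mask(*b, h, w)
    union = np.logical_or(ma, mb).sum()
    return float(np.logical_and(ma, mb).sum() / union) if union else 1.0

print(circle_iou((32, 32, 10), (32, 32, 10)))  # identical circles -> 1.0
print(circle_iou((10, 10, 5), (50, 50, 5)))    # disjoint circles  -> 0.0
```

The inner (pupil) and outer (iris) circles would each be scored this way against the annotated boundaries.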
9
Kumar G, Bakshi S, Sangaiah AK, Sa PK. Experimental Evaluation of Covariates Effects on Periocular Biometrics: A Robust Security Assessment Framework. ACM J Data Inf Qual 2023. DOI: 10.1145/3579029.
Abstract
The growing integration of technology into our lives has resulted in unprecedented amounts of data being exchanged among devices in an Internet of Things (IoT) environment. Authentication, identification, and device heterogeneity are major security and privacy concerns in IoT. One of the most effective solutions to avoid unauthorized access to sensitive information is biometrics. Deep-learning-based biometric systems have been shown to outperform traditional image processing and machine learning techniques. However, image quality covariates associated with blur, resolution, illumination, and noise predominantly affect recognition performance, so assessing the robustness of a developed solution is another important concern that still needs investigation. This paper proposes a periocular-region-based biometric system and explores the effect of image quality covariates (artifacts) on periocular recognition performance. To simulate real-world scenarios and understand the consequences of blur, resolution, and bit depth on recognition accuracy, we modeled out-of-focus blur, camera-shake blur, low resolution, and low bit-depth acquisition using a Gaussian function, linear motion, interpolation, and bit-plane slicing, respectively. All images of the UBIRIS.v1 database were degraded at varying covariate strengths to obtain degraded versions of the database, and deep models were trained on each degraded version. Model performance is evaluated through statistical parameters calculated from a confusion matrix. Experimental results show that, among all covariates, camera-shake blur has the least effect on recognition performance, while out-of-focus blur impacts it significantly. Irrespective of image quality, the convolutional neural network produces excellent results, which demonstrates the robustness of the developed model.
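Two of the degradation models named above are easy to reproduce; a sketch of a horizontal linear-motion PSF and bit-plane-style bit-depth reduction (kernel length and pixel values are illustrative, not the paper's settings):

```python
import numpy as np

def motion_blur_kernel(length: int) -> np.ndarray:
    """Horizontal linear-motion PSF: a normalised 1 x length averaging row.
    Convolving an image with it simulates camera-shake along one axis."""
    k = np.zeros((1, length))
    k[0, :] = 1.0 / length
    return k

def reduce_bit_depth(img: np.ndarray, bits: int) -> np.ndarray:
    """Keep only the top `bits` bit planes of an 8-bit image (bit-plane slicing)."""
    shift = 8 - bits
    return (img.astype(np.uint8) >> shift) << shift

k = motion_blur_kernel(9)               # sums to 1, so brightness is preserved
img = np.array([[200, 37], [129, 255]], np.uint8)
print(reduce_bit_depth(img, 2))         # [[192   0] [128 192]]
```

Out-of-focus blur would analogously use a 2-D Gaussian kernel, and low resolution a downscale-then-interpolate round trip.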
Affiliation(s)
- Gautam Kumar
- National Institute of Technology Rourkela, India
- Arun Kumar Sangaiah
- International Graduate Institute of AI, National Yunlin University of Science and Technology, Taiwan
10
Nguyen K, Fookes C, Sridharan S, Ross A. Complex-Valued Iris Recognition Network. IEEE Trans Pattern Anal Mach Intell 2023; 45:182-196. PMID: 35201979. DOI: 10.1109/tpami.2022.3152857.
Abstract
In this work, we design a fully complex-valued neural network for the task of iris recognition. Unlike the problem of general object recognition, where real-valued neural networks can be used to extract pertinent features, iris recognition depends on the extraction of both phase and magnitude information from the input iris texture in order to better represent its biometric content. This necessitates the extraction and processing of phase information that cannot be effectively handled by a real-valued neural network. In this regard, we design a fully complex-valued neural network that can better capture the multi-scale, multi-resolution, and multi-orientation phase and amplitude features of the iris texture. We show a strong correspondence of the proposed complex-valued iris recognition network with Gabor wavelets that are used to generate the classical IrisCode; however, the proposed method enables a new capability of automatic complex-valued feature learning that is tailored for iris recognition. We conduct experiments on three benchmark datasets - ND-CrossSensor-2013, CASIA-Iris-Thousand and UBIRIS.v2 - and show the benefit of the proposed network for the task of iris recognition. We exploit visualization schemes to convey how the complex-valued network, when compared to standard real-valued networks, extracts fundamentally different features from the iris texture.
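The correspondence with Gabor wavelets can be illustrated with the classical building blocks: a complex Gabor kernel and the 2-bit phase quantisation behind IrisCode (the parameters below are arbitrary, not the network's learned filters):

```python
import numpy as np

def gabor_kernel(size: int = 15, sigma: float = 3.0,
                 freq: float = 0.25, theta: float = 0.0) -> np.ndarray:
    """Complex 2-D Gabor filter: Gaussian envelope times a complex carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotated coordinate
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))  # Gaussian envelope
    return env * np.exp(2j * np.pi * freq * xr)          # complex sinusoid

def phase_bits(response: complex):
    """Classical 2-bit IrisCode quantisation: signs of Re and Im parts."""
    return int(response.real >= 0), int(response.imag >= 0)

g = gabor_kernel()
print(g.shape, phase_bits(3 - 1j))  # (15, 15) (1, 0)
```

A complex-valued network generalises this: instead of fixed Gabor parameters, the complex filters (and hence the phase/magnitude features) are learned end to end.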
11
A new periocular dataset collected by mobile devices in unconstrained scenarios. Sci Rep 2022; 12:17989. PMID: 36289312. PMCID: PMC9605955. DOI: 10.1038/s41598-022-22811-y.
Abstract
Recently, ocular biometrics in unconstrained environments using images obtained at visible wavelength have gained the researchers' attention, especially with images captured by mobile devices. Periocular recognition has been demonstrated to be an alternative when the iris trait is not available due to occlusions or low image resolution. However, the periocular trait does not have the high uniqueness presented in the iris trait. Thus, the use of datasets containing many subjects is essential to assess biometric systems' capacity to extract discriminating information from the periocular region. Also, to address the within-class variability caused by lighting and attributes in the periocular region, it is of paramount importance to use datasets with images of the same subject captured in distinct sessions. As the datasets available in the literature do not present all these factors, in this work, we present a new periocular dataset containing samples from 1122 subjects, acquired in 3 sessions by 196 different mobile devices. The images were captured under unconstrained environments with just a single instruction to the participants: to place their eyes on a region of interest. We also performed an extensive benchmark with several Convolutional Neural Network (CNN) architectures and models that have been employed in state-of-the-art approaches based on Multi-class Classification, Multi-task Learning, Pairwise Filters Network, and Siamese Network. The results achieved in the closed- and open-world protocol, considering the identification and verification tasks, show that this area still needs research and development.
12
Zhou Z, Liu Y, Zhu X, Liu S, Zhang S, Li Y. Supervised Contrastive Learning and Intra-Dataset Adversarial Adaptation for Iris Segmentation. Entropy (Basel) 2022; 24:1276. PMID: 36141162. PMCID: PMC9497583. DOI: 10.3390/e24091276.
Abstract
Precise iris segmentation is a very important part of accurate iris recognition. Traditional iris segmentation methods require complex prior knowledge and pre- and post-processing and have limited accuracy under non-ideal conditions. Deep learning approaches outperform traditional methods, but the limited availability of labeled datasets degrades their performance drastically because of the difficulty of collecting and labeling iris images. Furthermore, previous approaches ignore the large distribution gap within non-ideal iris datasets caused by illumination, motion blur, squinting eyes, etc. To address these issues, we propose a three-stage training strategy. First, supervised contrastive pretraining is proposed to increase intra-class compactness and inter-class separability, yielding a good pixel classifier from a limited amount of data. Second, the entire network is fine-tuned using cross-entropy loss. Third, an intra-dataset adversarial adaptation is proposed, which reduces the intra-dataset gap in the non-ideal setting by aligning the distributions of hard and easy samples at the pixel-class level. Our experiments show that the method improves segmentation performance, achieving the following encouraging results: 0.44%, 1.03%, 0.66%, 0.41%, and 0.37% Nice1 error and 96.66%, 98.72%, 93.21%, 94.28%, and 97.41% F1 on UBIRIS.V2, IITD, MICHE-I, CASIA-D, and CASIA-T, respectively.
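The supervised contrastive pretraining stage can be sketched with the standard SupCon loss on normalised embeddings (a toy batch-level version; the paper applies the idea at the pixel-class level, and the embeddings below are made up for illustration):

```python
import numpy as np

def supcon_loss(z: np.ndarray, labels: np.ndarray, tau: float = 0.1) -> float:
    """Supervised contrastive loss on L2-normalised embeddings z (n x d):
    for each anchor, same-label samples are positives, all others negatives."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau                                  # cosine similarities
    eye = np.eye(len(z), dtype=bool)
    logits = sim - sim.max(axis=1, keepdims=True)        # numerical stability
    denom = (np.exp(logits) * ~eye).sum(axis=1, keepdims=True)
    log_prob = logits - np.log(denom)
    pos = (labels[:, None] == labels[None, :]) & ~eye    # positive-pair mask
    return float(-(log_prob * pos).sum() / pos.sum())

# Two tight clusters in 2-D; correct labels should score lower than scrambled ones.
z = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [-0.1, 1.0]])
good = supcon_loss(z, np.array([0, 0, 1, 1]))   # labels match the clusters
bad = supcon_loss(z, np.array([0, 1, 0, 1]))    # labels scrambled
print(good < bad)  # True
```

Minimising this loss pulls same-class (e.g., iris-pixel) embeddings together and pushes other classes apart, which is exactly the intra-class compactness / inter-class separability goal stated above.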
Affiliation(s)
- Zhiyong Zhou
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Yuanning Liu
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Xiaodong Zhu
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Shuai Liu
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Shaoqiang Zhang
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Yuanfeng Li
- College of Biological and Agricultural Engineering, Jilin University, Changchun 130012, China
13
Towards More Accurate and Complete Heterogeneous Iris Segmentation Using a Hybrid Deep Learning Approach. J Imaging 2022; 8:246. PMID: 36135411. PMCID: PMC9501181. DOI: 10.3390/jimaging8090246.
Abstract
Accurate iris segmentation is a crucial preprocessing stage for computer-aided ophthalmic disease diagnosis. The quality of iris images taken under different camera sensors varies greatly, so accurate segmentation of heterogeneous iris databases is a major challenge. At present, network architectures based on convolutional neural networks (CNNs) are widely applied to iris segmentation tasks. However, due to the limited kernel size of convolution layers, CNN-based iris segmentation networks cannot learn global, long-range semantic interactions well, which makes it challenging to accurately segment the iris region. Inspired by the success of the vision transformer (ViT) and the Swin Transformer (Swin-T), a hybrid deep learning approach is proposed to segment heterogeneous iris images. Specifically, we first propose a bilateral segmentation backbone network that combines the benefits of Swin-T with CNNs. Then, a multiscale feature information extraction module (MFIEM) is proposed to extract multiscale spatial information at a more granular level. Finally, a channel attention mechanism module (CAMM) is used to enhance the discriminability of the iris region. Experimental results on a multisource heterogeneous iris database show that our network has a significant performance advantage over several state-of-the-art (SOTA) iris segmentation networks.
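The abstract does not detail CAMM; channel-attention modules of this kind typically follow the generic squeeze-and-excitation pattern, sketched here with random placeholder weights (an assumption, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """SE-style channel attention on a C x H x W feature map:
    global average pool -> 2-layer bottleneck MLP -> sigmoid gate -> rescale."""
    s = x.mean(axis=(1, 2))                  # squeeze: per-channel statistic (C,)
    h = np.maximum(w1 @ s, 0)                # excitation, ReLU hidden layer (r,)
    gate = 1 / (1 + np.exp(-(w2 @ h)))       # per-channel weight in (0, 1)
    return x * gate[:, None, None]           # reweight each channel

c, r = 8, 2                                  # channels, bottleneck width
x = rng.standard_normal((c, 16, 16))
w1 = rng.standard_normal((r, c))
w2 = rng.standard_normal((c, r))
y = channel_attention(x, w1, w2)
print(y.shape)  # (8, 16, 16)
```

Because the gate lies in (0, 1), the module can only suppress or pass channels through, letting the network emphasise iris-discriminative feature channels.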
14
Improving the Deeplabv3+ Model with Attention Mechanisms Applied to Eye Detection and Segmentation. Mathematics 2022. DOI: 10.3390/math10152597.
Abstract
Research on eye detection and segmentation has become even more important with the mask-wearing measures implemented during the COVID-19 pandemic, making it necessary to build an eye image detection and segmentation dataset (EIMDSD) with labels for both detection and segmentation. In this study, we established such a dataset to reduce the labor of cropping eye images and annotating labels. An improved DeepLabv3+ network architecture (IDLN) was also proposed and applied to the benchmark segmentation datasets. The IDLN was built by cascading convolutional block attention modules (CBAM) with MobileNetV2. Experiments were carried out to verify the effectiveness of the EIMDSD dataset for human eye image detection and segmentation with different deep learning models. The results show that the IDLN model achieves appropriate segmentation accuracy for both eye images, while the UNet and ISANet models give the best results for the left-eye and right-eye data among the tested models.
15
Innovation of Human Body Positioning System and Basketball Training System. Comput Intell Neurosci 2022; 2022:2369925. PMID: 35707185. PMCID: PMC9192224. DOI: 10.1155/2022/2369925.
Abstract
With the growing information management of sports events, basketball teams have higher requirements for the management of training data, and data visualization technology has allowed information management to evolve from a digital model to an efficient graphical model. This article mainly designs a human body positioning system based on wireless sensor networks. The weak signal from each sensor is conditioned by a tuning circuit, and a cylindrical Fresnel lens array is selected to modulate the field of view, ensuring an effective response to the infrared signal of a moving human body. The wireless sensor network integrates the detections of the individual pyroelectric sensor nodes, and the infrared signals are sent to a host computer for analysis and processing. Through the host-computer interface, the relationship between the detection signal of a single pyroelectric sensor and the position, speed, and movement of the moving human body is observed and analyzed, and the operating process of the wireless pyroelectric infrared sensor network system is explored in depth. Chinese basketball training and teaching methods still have drawbacks that affect the quality and effect of college basketball training and restrict its development, so the study also focuses on solving these problems to achieve more effective college basketball training and education. Based on 5G wireless-sensing human body positioning, the article shows the progress made in wireless sensing technology, systematically analyzes basketball training, and proposes better training methods.
|
16
|
Periocular biometrics: A survey. JOURNAL OF KING SAUD UNIVERSITY - COMPUTER AND INFORMATION SCIENCES 2022. [DOI: 10.1016/j.jksuci.2019.06.003]
|
17
|
Improved Security of E-Healthcare Images Using Hybridized Robust Zero-Watermarking and Hyper-Chaotic System along with RSA. MATHEMATICS 2022. [DOI: 10.3390/math10071071]
Abstract
With the rapid advancement of the internet of things (IoT), several applications have evolved with completely dissimilar structures and requirements that the fifth generation of mobile cellular networks (5G) is unable to support successfully. The sixth generation of mobile cellular networks (6G) is expected to enable new and as-yet-unidentified applications with varying requirements: 6G not only provides 10 to 100 times the speed of 5G but can also offer dynamic services for advanced IoT applications. However, securing 6G networks remains a significant problem. This paper therefore proposes a hybrid image encryption technique to secure multimedia data communication over 6G networks: multimedia data are first encrypted using the proposed model and then transferred over the 6G network. Extensive experiments are conducted using various attacks and security measures, and a comparative analysis reveals that the proposed model performs remarkably well compared with existing encryption techniques.
|
18
|
Zhou W, Chen T, Huang H, Sheng C, Wang Y, Wang Y, Zhang D. An improved low-complexity DenseUnet for high-accuracy iris segmentation network. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-211396]
Abstract
Iris segmentation is one of the most important steps in iris recognition. Current iris segmentation networks are based on convolutional neural networks (CNNs), and problems such as high complexity and insufficient accuracy remain. To solve these problems, an improved low-complexity DenseUnet based on U-net is proposed in this paper to obtain a high-accuracy iris segmentation network. The improvements are as follows: (1) a dense block module is designed that contains five convolutional layers, all of which use dilated convolutions to enhance feature extraction; (2) except for the last convolutional layer, the number of output feature maps of every convolutional layer is set to 64, which reduces the number of parameters without affecting segmentation accuracy; (3) the resulting solution has low complexity, making deployment on portable mobile devices feasible. DenseUnet was evaluated on the IITD, CASIA V4.0, and UBIRIS V2.0 datasets, and the experiments show that the proposed iris segmentation network performs better than existing algorithms.
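The dilated convolutions in the dense block described above are a standard building block. The sketch below is not the paper's implementation, only a minimal NumPy illustration (all function and variable names are ours) of how a dilated 3x3 kernel enlarges the receptive field without adding parameters:

```python
import numpy as np

def dilated_conv2d(x, k, dilation=1):
    """'Valid' 2-D cross-correlation with a dilated kernel.

    A 3x3 kernel with dilation d covers a (2d+1)x(2d+1) window,
    so the receptive field grows while the parameter count stays 3x3.
    """
    d = dilation
    kh, kw = k.shape
    eff_h = (kh - 1) * d + 1  # effective kernel height
    eff_w = (kw - 1) * d + 1  # effective kernel width
    oh, ow = x.shape[0] - eff_h + 1, x.shape[1] - eff_w + 1
    out = np.zeros((oh, ow))
    for i in range(kh):  # accumulate shifted, weighted input slices
        for j in range(kw):
            out += k[i, j] * x[i * d:i * d + oh, j * d:j * d + ow]
    return out

rng = np.random.default_rng(0)
x = rng.random((8, 8))
k = rng.random((3, 3))

# A 3x3 kernel with dilation 2 is equivalent to a 5x5 kernel
# with zeros inserted between the weights.
k5 = np.zeros((5, 5))
k5[::2, ::2] = k
assert np.allclose(dilated_conv2d(x, k, dilation=2),
                   dilated_conv2d(x, k5, dilation=1))
print(dilated_conv2d(x, k, dilation=2).shape)  # (4, 4)
```

This is why the stacked dilated layers in such a dense block can see a wide context at the same parameter cost as ordinary 3x3 convolutions.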
Affiliation(s)
- Weibin Zhou
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
- Tao Chen
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
- Huafang Huang
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
- Assets Management Section, Tianjin University of Science and Technology, Tianjin, China
- Chang Sheng
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
- Yangfeng Wang
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
- Yang Wang
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
- Daqiang Zhang
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, China
|
19
|
Iris R-CNN: Accurate iris segmentation and localization in non-cooperative environment with visible illumination. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2021.10.031]
|
20
|
Singh A, Vashist C, Gaurav P, Nigam A. A generic framework for deep incremental cancelable template generation. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.09.055]
|
21
|
Chen Y, Gan H, Zeng Z, Chen H. DADCNet: Dual attention densely connected network for more accurate real iris region segmentation. INT J INTELL SYST 2021. [DOI: 10.1002/int.22649]
Affiliation(s)
- Ying Chen
- Department of Internet of Things Engineering, School of Software, Nanchang Hangkong University, Nanchang, China
- Huimin Gan
- Department of Internet of Things Engineering, School of Software, Nanchang Hangkong University, Nanchang, China
- Zhuang Zeng
- Department of Internet of Things Engineering, School of Software, Nanchang Hangkong University, Nanchang, China
- Huiling Chen
- Department of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, China
|
22
|
PFSegIris: Precise and Fast Segmentation Algorithm for Multi-Source Heterogeneous Iris. ALGORITHMS 2021. [DOI: 10.3390/a14090261]
Abstract
Current segmentation methods have limitations for multi-source heterogeneous iris segmentation, since differences in acquisition devices and acquisition environments lead to images of greatly varying quality across iris datasets; thus, different segmentation algorithms are generally applied to distinct datasets. Meanwhile, deep-learning-based iris segmentation models occupy a lot of space and are slow. Therefore, we propose PFSegIris, a lightweight, precise, and fast segmentation network model aimed at multi-source heterogeneous iris images. First, the designed iris feature extraction modules fully extract heterogeneous iris feature information while reducing the number of parameters, the computation, and the loss of information. Then, an efficient parallel attention mechanism is introduced only once, between the encoder and the decoder, to capture semantic information, suppress noise interference, and enhance the discriminability of iris-region pixels. Finally, a skip connection from low-level features is added to capture more detailed information. Experiments on four near-infrared datasets and three visible-light datasets show that the segmentation precision is better than that of existing algorithms, while the parameter count and storage space are only 1.86 M and 0.007 GB, respectively, and the average prediction time is less than 0.10 s. The proposed algorithm segments multi-source heterogeneous iris images more precisely and more quickly than other algorithms.
|
23
|
Ocular recognition databases and competitions: a survey. Artif Intell Rev 2021. [DOI: 10.1007/s10462-021-10028-w]
|
24
|
Zhu D, Li J, Li H, Peng J, Wang X, Zhang X. A Less-constrained Sclera Recognition Method based on Stem-and-leaf Branches Network. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2021.01.025]
|
25
|
Conti V, Rundo L, Militello C, Salerno VM, Vitabile S, Siniscalchi SM. A multimodal retina‐iris biometric system using the Levenshtein distance for spatial feature comparison. IET BIOMETRICS 2020. [DOI: 10.1049/bme2.12001]
Affiliation(s)
- Vincenzo Conti
- Faculty of Engineering and Architecture, University of Enna KORE, Enna, Italy
- Leonardo Rundo
- Department of Radiology, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Institute, Cambridge, UK
- Carmelo Militello
- Institute of Molecular Bioimaging and Physiology, Italian National Research Council (IBFM‐CNR), Cefalù, Italy
- Salvatore Vitabile
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
|
26
|
A comprehensive investigation into sclera biometrics: a novel dataset and performance study. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-04782-1]
|
27
|
John B, Jorg S, Koppal S, Jain E. The Security-Utility Trade-off for Iris Authentication and Eye Animation for Social Virtual Avatars. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:1880-1890. [PMID: 32070963 DOI: 10.1109/tvcg.2020.2973052]
Abstract
The gaze behavior of virtual avatars is critical to social presence and perceived eye contact during social interactions in Virtual Reality. Virtual Reality headsets are being designed with integrated eye tracking to enable compelling virtual social interactions. This paper shows that the near infra-red cameras used in eye tracking capture eye images that contain iris patterns of the user. Because iris patterns are a gold standard biometric, the current technology places the user's biometric identity at risk. Our first contribution is an optical defocus based hardware solution to remove the iris biometric from the stream of eye tracking images. We characterize the performance of this solution with different internal parameters. Our second contribution is a psychophysical experiment with a same-different task that investigates the sensitivity of users to a virtual avatar's eye movements when this solution is applied. By deriving detection threshold values, our findings provide a range of defocus parameters where the change in eye movements would go unnoticed in a conversational setting. Our third contribution is a perceptual study to determine the impact of defocus parameters on the perceived eye contact, attentiveness, naturalness, and truthfulness of the avatar. Thus, if a user wishes to protect their iris biometric, our approach provides a solution that balances biometric protection while preventing their conversation partner from perceiving a difference in the user's virtual avatar. This work is the first to develop secure eye tracking configurations for VR/AR/XR applications and motivates future work in the area.
|
28
|
Misra T, Arora A, Marwaha S, Chinnusamy V, Rao AR, Jain R, Sahoo RN, Ray M, Kumar S, Raju D, Jha RR, Nigam A, Goel S. SpikeSegNet-a deep learning approach utilizing encoder-decoder network with hourglass for spike segmentation and counting in wheat plant from visual imaging. PLANT METHODS 2020; 16:40. [PMID: 32206080 PMCID: PMC7079463 DOI: 10.1186/s13007-020-00582-9]
Abstract
BACKGROUND High-throughput non-destructive phenotyping is emerging as a significant approach for phenotyping germplasm and breeding populations to identify superior donors, elite lines, and QTLs. Detection and counting of spikes, the grain-bearing organs of wheat, is critical for the phenomics of large sets of germplasm and breeding lines in controlled and field conditions. It is also required for precision agriculture, where the application of nitrogen, water, and other inputs at this critical stage is necessary, and spike counts are an important measure for determining yield. Digital image analysis and machine learning techniques play an essential role in non-destructive plant phenotyping. RESULTS In this study, an approach based on computer vision, particularly object detection, is proposed to recognize and count wheat spikes from digital images. For spike identification, a novel deep-learning network, SpikeSegNet, was developed by combining two proposed feature networks: a Local Patch extraction Network (LPNet) and a Global Mask refinement Network (GMRNet). In LPNet, contextual and spatial features are learned at the local patch level; the output of LPNet is a segmented mask image, which is further refined at the global level by GMRNet. Visual (RGB) images of 200 wheat plants were captured using the LemnaTec imaging system installed at the Nanaji Deshmukh Plant Phenomics Centre, ICAR-IARI, New Delhi. The precision, accuracy, and robustness (F1 score) of the proposed approach for spike segmentation are 99.93%, 99.91%, and 99.91%, respectively. For counting the number of spikes, the "analyse particles" function of ImageJ was applied to the output image of the SpikeSegNet model, giving average precision, accuracy, and robustness of 99%, 95%, and 97%, respectively. SpikeSegNet was also tested for robustness on an illuminated image dataset, and no significant difference in segmentation performance was observed. CONCLUSION A new approach called SpikeSegNet, combining digital image analysis and deep learning, has been proposed to identify and count spikes in wheat plants. Its performance demonstrates that SpikeSegNet is an effective and robust approach for spike detection and counting. As the detection and counting of wheat spikes are closely related to crop yield and the proposed approach is non-destructive, it is a significant step forward for non-destructive, high-throughput phenotyping of wheat.
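The ImageJ "analyse particles" step used for counting amounts to labelling connected components in the binary segmentation mask. As a rough illustration only (not the authors' code; the function name and size filter are our own), a minimal pure-NumPy component counter:

```python
import numpy as np

def count_particles(mask, min_size=1):
    """Count 4-connected components (particles) in a binary mask,
    ignoring components smaller than min_size pixels (like the
    particle-size filter in ImageJ's Analyze Particles)."""
    mask = np.asarray(mask, dtype=bool)
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                # flood-fill this component with an explicit stack
                stack = [(i, j)]
                visited[i, j] = True
                size = 0
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if size >= min_size:
                    count += 1
    return count

mask = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
])
print(count_particles(mask))  # 3 separate blobs -> prints 3
```

In the pipeline described above, the input would be the binary spike mask produced by the segmentation network rather than this toy array.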
Affiliation(s)
- Tanuj Misra
- ICAR-Indian Agricultural Statistics Research Institute (IASRI), Library Avenue, Pusa, New Delhi 110012, India
- Alka Arora
- ICAR-Indian Agricultural Statistics Research Institute (IASRI), Library Avenue, Pusa, New Delhi 110012, India
- Sudeep Marwaha
- ICAR-Indian Agricultural Statistics Research Institute (IASRI), Library Avenue, Pusa, New Delhi 110012, India
- Atmakuri Ramakrishna Rao
- ICAR-Indian Agricultural Statistics Research Institute (IASRI), Library Avenue, Pusa, New Delhi 110012, India
- Rajni Jain
- ICAR-National Institute of Agricultural Economics and Policy Research, New Delhi, India
- Mrinmoy Ray
- ICAR-Indian Agricultural Statistics Research Institute (IASRI), Library Avenue, Pusa, New Delhi 110012, India
- Sudhir Kumar
- ICAR-Indian Agricultural Research Institute, New Delhi, India
- Dhandapani Raju
- ICAR-Indian Agricultural Research Institute, New Delhi, India
- Aditya Nigam
- Indian Institute of Technology Mandi, Himachal Pradesh, India
- Swati Goel
- ICAR-Indian Agricultural Research Institute, New Delhi, India
|
29
|
Blind Quality Assessment of Iris Images Acquired in Visible Light for Biometric Recognition. SENSORS 2020; 20:s20051308. [PMID: 32121182 PMCID: PMC7085724 DOI: 10.3390/s20051308]
Abstract
Image quality is a key issue affecting the performance of biometric systems. Ensuring the quality of iris images acquired in unconstrained imaging conditions in visible light poses many challenges to iris recognition systems. Poor-quality iris images increase the false rejection rate and decrease the performance of the systems by quality filtering. Methods that can accurately predict iris image quality can improve the efficiency of quality-control protocols in iris recognition systems. We propose a fast blind/no-reference metric for predicting iris image quality. The proposed metric is based on statistical features of the sign and the magnitude of local image intensities. The experiments, conducted with a reference iris recognition system and three datasets of iris images acquired in visible light, showed that the quality of iris images strongly affects the recognition performance and is highly correlated with the iris matching scores. Rejecting poor-quality iris images improved the performance of the iris recognition system. In addition, we analyzed the effect of iris image quality on the accuracy of the iris segmentation module in the iris recognition system.
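The metric above is built on statistics of the sign and the magnitude of local image intensities; the exact features are not given here, so the following is only a schematic NumPy sketch of that idea (local mean subtraction, then separate sign and magnitude statistics), with all names and choices our own:

```python
import numpy as np

def sign_magnitude_features(img):
    """Toy no-reference features: statistics of the sign and the
    magnitude of locally mean-subtracted pixel intensities."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    # 3x3 local mean via reflect padding and shifted accumulation
    p = np.pad(img, 1, mode="reflect")
    local_mean = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            local_mean += p[dy:dy + h, dx:dx + w]
    local_mean /= 9.0
    resid = img - local_mean          # zero-mean local residual
    sign = np.sign(resid)             # sign component
    mag = np.abs(resid)               # magnitude component
    return {
        "positive_fraction": float((sign > 0).mean()),
        "magnitude_mean": float(mag.mean()),
        "magnitude_var": float(mag.var()),
    }

rng = np.random.default_rng(0)
noisy = rng.random((32, 32))                      # high local contrast
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))  # low local contrast
f_noisy = sign_magnitude_features(noisy)
f_smooth = sign_magnitude_features(smooth)
# A low-detail (e.g. blurred) image yields a much smaller magnitude term.
assert f_noisy["magnitude_mean"] > f_smooth["magnitude_mean"]
```

A real quality predictor would feed such statistics, pooled over scales, into a learned regressor; this sketch only shows why the magnitude statistics respond to blur-induced loss of local detail.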
|
30
|
Zanlorensi LA, Lucio DR, Britto Junior ADS, Proença H, Menotti D. Deep representations for cross‐spectral ocular biometrics. IET BIOMETRICS 2020. [DOI: 10.1049/iet-bmt.2019.0116]
Affiliation(s)
- Luiz A. Zanlorensi
- Department of Informatics, Federal University of Paraná (UFPR), Curitiba, Paraná, Brazil
- Diego Rafael Lucio
- Department of Informatics, Federal University of Paraná (UFPR), Curitiba, Paraná, Brazil
- Hugo Proença
- IT: Instituto de Telecomunicações, University of Beira Interior, Covilhã, Portugal
- David Menotti
- Department of Informatics, Federal University of Paraná (UFPR), Curitiba, Paraná, Brazil
|
31
|
Jha RR, Jaswal G, Gupta D, Saini S, Nigam A. PixISegNet: pixel‐level iris segmentation network using convolutional encoder–decoder with stacked hourglass bottleneck. IET BIOMETRICS 2019. [DOI: 10.1049/iet-bmt.2019.0025]
Affiliation(s)
- Ranjeet Ranjan Jha
- School of Computing and Electrical Engineering, Indian Institute of Technology Mandi, Himachal Pradesh, India
- Gaurav Jaswal
- School of Computing and Electrical Engineering, Indian Institute of Technology Mandi, Himachal Pradesh, India
- Divij Gupta
- Department of Electrical Engineering, Indian Institute of Technology Jodhpur, Jodhpur, India
- Shreshth Saini
- Department of Electrical Engineering, Indian Institute of Technology Jodhpur, Jodhpur, India
- Aditya Nigam
- School of Computing and Electrical Engineering, Indian Institute of Technology Mandi, Himachal Pradesh, India
|
32
|
Varkarakis V, Bazrafkan S, Corcoran P. Deep neural network and data augmentation methodology for off-axis iris segmentation in wearable headsets. Neural Netw 2019; 121:101-121. [PMID: 31541879 DOI: 10.1016/j.neunet.2019.07.020]
Abstract
A data augmentation methodology is presented and applied to generate a large dataset of off-axis iris regions and to train a low-complexity deep neural network. Although of low complexity, the resulting network achieves a high level of accuracy in iris region segmentation for challenging off-axis eye patches. Interestingly, this network also achieves high performance on regular, frontal segmentation of iris regions, comparing favourably with state-of-the-art techniques of significantly higher complexity. Owing to its lower complexity, the network is well suited for deployment in embedded applications such as augmented and mixed reality headsets.
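Off-axis augmentation of this kind is typically done by geometrically warping frontal eye patches. Purely as an illustration (none of this is the authors' pipeline; the jitter ranges and function names are hypothetical), a nearest-neighbour affine warp plus a random rotation-and-shear jitter in NumPy:

```python
import numpy as np

def warp_affine(img, A, fill=0):
    """Nearest-neighbour affine warp: output pixel (y, x) samples
    img at A @ (y, x, 1). A is a 2x3 inverse-mapping matrix."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel(), np.ones(h * w)])
    src = A @ coords                       # source coordinates, (2, h*w)
    sy = np.rint(src[0]).astype(int)
    sx = np.rint(src[1]).astype(int)
    ok = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out = np.full(h * w, fill, dtype=img.dtype)
    out[ok] = img[sy[ok], sx[ok]]          # sample inside-bounds pixels
    return out.reshape(h, w)

def random_offaxis_like(img, rng):
    """Hypothetical augmentation: small random rotation plus shear,
    about the image centre, to mimic an off-axis viewpoint."""
    h, w = img.shape
    ang = rng.uniform(-0.3, 0.3)    # radians
    shear = rng.uniform(-0.2, 0.2)
    c, s = np.cos(ang), np.sin(ang)
    M = np.array([[c, -s], [s + shear, c]])
    centre = np.array([(h - 1) / 2, (w - 1) / 2])
    t = centre - M @ centre          # keep the centre fixed
    return warp_affine(img, np.hstack([M, t[:, None]]))

rng = np.random.default_rng(0)
eye = rng.random((24, 24))
aug = random_offaxis_like(eye, rng)
assert aug.shape == eye.shape
```

A production pipeline would use interpolated (not nearest-neighbour) sampling and apply the identical warp to the ground-truth mask so the labels stay aligned.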
Affiliation(s)
- Viktor Varkarakis
- Department of Electronic Engineering, College of Engineering, National University of Ireland Galway, University Road, Galway, Ireland.
- Shabab Bazrafkan
- imec-Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium.
- Peter Corcoran
- Department of Electronic Engineering, College of Engineering, National University of Ireland Galway, University Road, Galway, Ireland.
|
33
|
FMnet: Iris Segmentation and Recognition by Using Fully and Multi-Scale CNN for Biometric Security. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9102042]
Abstract
Recent work in deep learning shows that neural networks have high potential in the field of biometric security. The advantage of this type of architecture, besides robustness, is that the network learns feature vectors automatically by creating intelligent filters, thanks to its convolution layers. In this paper, we propose an algorithm, "FMnet", for iris recognition using a Fully Convolutional Network (FCN) and a Multi-scale Convolutional Neural Network (MCNN). By exploiting the ability of convolutional neural networks to learn and operate at different resolutions, the proposed iris recognition method overcomes the limitations of classical methods, which rely solely on handcrafted feature extraction, by performing feature extraction and classification jointly. The proposed algorithm shows better classification results than other state-of-the-art iris recognition approaches.
|
34
|
Affiliation(s)
- Alejandra Consejo
- Department of Ophthalmology, Antwerp University Hospital, Edegem, Belgium
- Department of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Department of Biomedical Engineering, Wroclaw University of Science and Technology, Wroclaw, Poland
- Institute of Physical Chemistry, Polish Academy of Sciences, Warsaw, Poland
- Tomasz Melcer
- Department of Biomedical Engineering, Wroclaw University of Science and Technology, Wroclaw, Poland
- Jos J. Rozema
- Department of Ophthalmology, Antwerp University Hospital, Edegem, Belgium
- Department of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
|
35
|
Luz E, Moreira G, Zanlorensi Junior LA, Menotti D. Deep periocular representation aiming video surveillance. Pattern Recognit Lett 2018. [DOI: 10.1016/j.patrec.2017.12.009]
|
36
|
Bazrafkan S, Thavalengal S, Corcoran P. An end to end Deep Neural Network for iris segmentation in unconstrained scenarios. Neural Netw 2018; 106:79-95. [DOI: 10.1016/j.neunet.2018.06.011]
|
37
|
|
38
|
Sun Y, Zhang M, Sun Z, Tan T. Demographic Analysis from Biometric Data: Achievements, Challenges, and New Frontiers. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2018; 40:332-351. [PMID: 28212078 DOI: 10.1109/tpami.2017.2669035]
Abstract
Biometrics is the technique of automatically recognizing individuals based on their biological or behavioral characteristics. Various biometric traits have been introduced and widely investigated, including fingerprint, iris, face, voice, palmprint, and gait. Apart from identity, biometric data may convey various other personal information, covering affect, age, gender, race, accent, handedness, height, weight, etc. Among these, the analysis of demographics (age, gender, and race) has received tremendous attention owing to its wide real-world applications, with significant effort devoted and great progress achieved. This survey first presents biometric demographic analysis from the standpoint of human perception, then provides a comprehensive overview of state-of-the-art advances in automated estimation from both academia and industry. Despite these advances, a number of challenging issues continue to inhibit the field's full potential. We then discuss these open problems, and finally provide an outlook on the future of this very active field of research by sharing some promising opportunities.
|
39
|
Kaur B, Singh S, Kumar J. Iris Recognition Using Zernike Moments and Polar Harmonic Transforms. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2018. [DOI: 10.1007/s13369-017-3057-2]
|
40
|
Neves J, Moreno J, Proença H. QUIS‐CAMPI: an annotated multi‐biometrics data feed from surveillance scenarios. IET BIOMETRICS 2017. [DOI: 10.1049/iet-bmt.2016.0178]
Affiliation(s)
- João Neves
- IT – Instituto de Telecomunicações, Department of Computer Science, University of Beira Interior, Rua Marquês d'Ávila e Bolama, Covilhã, Portugal
- Juan Moreno
- Department of Computer Science, University of Beira Interior, Rua Marquês d'Ávila e Bolama, Covilhã, Portugal
- Hugo Proença
- IT – Instituto de Telecomunicações, Department of Computer Science, University of Beira Interior, Rua Marquês d'Ávila e Bolama, Covilhã, Portugal
|
41
|
Noisy Ocular Recognition Based on Three Convolutional Neural Networks. SENSORS 2017; 17:s17122933. [PMID: 29258217 PMCID: PMC5751551 DOI: 10.3390/s17122933]
Abstract
In recent years, iris recognition systems have been gaining increasing acceptance in applications such as access control and smartphone security. When iris images are obtained under unconstrained conditions, their quality is undermined by optical and motion blur, off-angle view (the user's eyes looking somewhere other than directly at the camera), specular reflection (SR), and other factors. Such noisy iris images increase intra-individual variation and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment; hence, many studies have proposed methods that use iris images captured by a visible-light camera without an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images that uses one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the University of Beira iris (UBIRIS).v2 database), the mobile iris challenge evaluation (MICHE) database, and the Institute of Automation, Chinese Academy of Sciences (CASIA)-Iris-Distance database. The method proposed in this study outperformed previous methods.
|
42
|
Deep Learning-Based Iris Segmentation for Iris Recognition in Visible Light Environment. Symmetry (Basel) 2017. [DOI: 10.3390/sym9110263]
Abstract
Existing iris recognition systems are heavily dependent on specific conditions, such as the distance of image acquisition and the stop-and-stare environment, which require significant user cooperation. In environments where user cooperation is not guaranteed, prevailing segmentation schemes of the iris region are confronted with many problems, such as heavy occlusion of eyelashes, invalid off-axis rotations, motion blurs, and non-regular reflections in the eye area. In addition, iris recognition based on visible light environment has been investigated to avoid the use of additional near-infrared (NIR) light camera and NIR illuminator, which increased the difficulty of segmenting the iris region accurately owing to the environmental noise of visible light. To address these issues; this study proposes a two-stage iris segmentation scheme based on convolutional neural network (CNN); which is capable of accurate iris segmentation in severely noisy environments of iris recognition by visible light camera sensor. In the experiment; the noisy iris challenge evaluation part-II (NICE-II) training database (selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE) dataset were used. Experimental results showed that our method outperformed the existing segmentation methods.
|
43
|
Fairhurst M, Li C, Da Costa‐Abreu M. Predictive biometrics: a review and analysis of predicting personal characteristics from biometric data. IET BIOMETRICS 2017. [DOI: 10.1049/iet-bmt.2016.0169] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Affiliation(s)
- Michael Fairhurst
- School of Engineering and Digital Arts, University of Kent, Canterbury, Kent CT2 7NT, UK
- Cheng Li
- School of Engineering and Digital Arts, University of Kent, Canterbury, Kent CT2 7NT, UK
|
44
|
Raja KB, Raghavendra R, Venkatesh S, Busch C. Multi-patch deep sparse histograms for iris recognition in visible spectrum using collaborative subspace for robust verification. Pattern Recognit Lett 2017. [DOI: 10.1016/j.patrec.2016.12.025] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
45
|
|
46
|
Nalla PR, Kumar A. Toward More Accurate Iris Recognition Using Cross-Spectral Matching. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2017; 26:208-221. [PMID: 27740482 DOI: 10.1109/tip.2016.2616281] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Iris recognition systems are increasingly deployed for large-scale applications such as national ID programs, which continue to acquire millions of iris images to establish identity among billions of people. However, with the availability of a variety of iris sensors deployed for iris imaging under different illumination and environments, significant performance degradation is expected when matching iris images acquired in two different domains (either sensor-specific or wavelength-specific). This paper develops a domain adaptation framework to address this problem and introduces a new algorithm using a Markov random field model to significantly improve cross-domain iris recognition. The proposed domain adaptation framework, based on naive Bayes nearest-neighbor classification, uses a real-valued feature representation that is capable of learning domain knowledge. Our approach, which estimates corresponding visible iris patterns from the synthesis of iris patches in near-infrared iris images, achieves superior results for cross-spectral iris recognition. A new class of bi-spectral iris recognition system that can simultaneously acquire visible and near-infrared images with pixel-to-pixel correspondence is also proposed and evaluated. The paper presents experimental results on three publicly available databases (the PolyU cross-spectral iris image database, IIITD CLI, and the UND database) and achieves superior results for cross-sensor and cross-spectral iris matching.
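The naive Bayes nearest-neighbor (NBNN) decision rule mentioned above can be sketched in a few lines: sum, over the query's local features, each feature's distance to its nearest neighbor in a class's gallery, and pick the class with the smallest total. This is only the bare NBNN rule; the paper couples it with a learned domain-adaptation transform, which is omitted here.

```python
import numpy as np

def nbnn_classify(query_patches, class_patch_sets):
    """Naive-Bayes nearest-neighbour classification.

    query_patches: iterable of feature vectors from the probe image.
    class_patch_sets: dict mapping class label -> list of gallery
    feature vectors. Returns the label with the smallest summed
    nearest-neighbour squared distance.
    """
    totals = {}
    for label, gallery in class_patch_sets.items():
        g = np.asarray(gallery, dtype=float)
        total = 0.0
        for q in np.asarray(query_patches, dtype=float):
            # Distance from this query patch to its nearest gallery patch.
            total += float(np.min(np.sum((g - q) ** 2, axis=1)))
        totals[label] = total
    return min(totals, key=totals.get)

# Toy 2-D "patches": the query is clearly closer to class "A".
classes = {"A": [[0.0, 0.0], [0.1, 0.1]], "B": [[5.0, 5.0]]}
label = nbnn_classify([[0.05, 0.0], [0.1, 0.0]], classes)
```

Because the decision sums per-patch distances rather than quantizing features, NBNN avoids the information loss of codebook-based classifiers, which is why it pairs naturally with patch-level domain adaptation.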
|
47
|
Alkassar S, Woo W, Dlay S, Chambers J. Sclera recognition: on the quality measure and segmentation of degraded images captured under relaxed imaging conditions. IET BIOMETRICS 2016. [DOI: 10.1049/iet-bmt.2016.0114] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Affiliation(s)
- Sinan Alkassar
- School of Electrical and Electronic Engineering, Newcastle University, Newcastle upon Tyne, UK
- Wai‐Lok Woo
- School of Electrical and Electronic Engineering, Newcastle University, Newcastle upon Tyne, UK
- Satnam Dlay
- School of Electrical and Electronic Engineering, Newcastle University, Newcastle upon Tyne, UK
- Jonathon Chambers
- School of Electrical and Electronic Engineering, Newcastle University, Newcastle upon Tyne, UK
|
48
|
Abdullah MAM, Dlay SS, Woo WL, Chambers JA. A novel framework for cross-spectral iris matching. ACTA ACUST UNITED AC 2016. [DOI: 10.1186/s41074-016-0009-9] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Previous work on iris recognition has focused on visible-light (VL) imaging, near-infrared (NIR) imaging, or their fusion. However, only a limited number of works have investigated cross-spectral matching or compared iris biometric performance under both the VL and NIR spectra using unregistered iris images taken from the same subject. To the best of our knowledge, this is the first work to propose a framework for cross-spectral iris matching using unregistered iris images. To this end, three descriptors are proposed, namely Gabor-difference of Gaussians (G-DoG), Gabor-binarized statistical image features (G-BSIF), and Gabor-multi-scale Weberface (G-MSW), to achieve robust cross-spectral iris matching. In addition, we explore the differences in iris recognition performance across the VL and NIR spectra. The experiments are carried out on the UTIRIS database, which contains iris images acquired in both the VL and NIR spectra for the same subjects. Experimental and comparative results demonstrate that the proposed framework achieves state-of-the-art cross-spectral matching. The results also indicate that VL and NIR images provide complementary features of the iris pattern, and that their fusion notably improves recognition performance.
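The G-DoG descriptor above layers a difference-of-Gaussians (DoG) band-pass step on top of a Gabor response. A minimal sketch of the DoG stage, assuming the Gabor response has already been computed; the sigma values and the function names are illustrative, not taken from the paper:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=3):
    """Normalized 1-D Gaussian kernel (truncated at the given radius)."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur; edges are zero-padded by np.convolve."""
    k = gaussian_kernel1d(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def dog_descriptor(gabor_response, s1=1.0, s2=2.0):
    """Difference of two Gaussian blurs of a (pre-computed) Gabor
    response, flattened into a feature vector. Sigmas are illustrative."""
    return (blur(gabor_response, s1) - blur(gabor_response, s2)).ravel()
```

The band-pass behaviour of the DoG is what suppresses the smooth illumination differences between VL and NIR captures while keeping the mid-frequency iris texture that the spectra share.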
|
49
|
|
50
|
|