1
Ambika K, Radhika KR. Model-free supervised learning-based gait authentication scheme based on optimized Gabor features. Soft Comput 2023. [DOI: 10.1007/s00500-023-08029-8]
2
Gao Z, Wu J, Wu T, Huang R, Zhang A, Zhao J. Robust clothing-independent gait recognition using hybrid part-based gait features. PeerJ Comput Sci 2022; 8:e996. [PMID: 35721406 PMCID: PMC9202625 DOI: 10.7717/peerj-cs.996]
Abstract
Gait has recently attracted extensive interest because of its unique role in biometric applications. Although various methods have been proposed for gait recognition, most of them attain excellent recognition performance only when the probe and gallery gaits are captured under similar conditions. Once external factors (e.g., clothing variations) influence people's gaits and change human appearance, a significant performance degradation occurs. Hence, in our article, a robust hybrid part-based spatio-temporal feature learning method is proposed for gait recognition to handle this cloth-changing problem. First, human bodies are segmented into affected and unaffected/less-affected parts based on anatomical studies. Then, a well-designed network is proposed in our method to formulate the required hybrid features from the unaffected/less-affected body parts. This network contains three sub-networks that generate features independently; each sub-network emphasizes an individual aspect of gait, so an effective hybrid gait feature can be created through their concatenation. Because temporal information can complement appearance cues and enhance recognition performance, one sub-network is specifically designed to establish the temporal relationship between consecutive short-range frames. Also, since local features are more discriminative than global features in gait recognition, another sub-network is specifically designed to generate features of locally refined differences. The effectiveness of our proposed method has been evaluated by experiments on the CASIA Gait Dataset B and the OU-ISIR Treadmill Gait Dataset B. These experiments illustrate that, compared with other gait recognition methods, our proposed method achieves prominent results when handling the cloth-changing gait recognition problem.
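The hybrid-feature idea in this abstract — three independent branches whose outputs are concatenated — can be sketched as follows; the branch functions and feature sizes are illustrative placeholders, not the paper's actual sub-networks:

```python
import numpy as np

# Toy stand-ins for the three sub-networks described in the abstract
# (names and feature definitions are illustrative, not from the paper).
def spatial_branch(frames):
    # average silhouette ("GEI-like") flattened to a vector
    return frames.mean(axis=0).ravel()

def temporal_branch(frames):
    # absolute differences between consecutive frames, averaged over time
    return np.abs(np.diff(frames, axis=0)).mean(axis=0).ravel()

def local_branch(frames):
    # per-row means of the averaged silhouette as crude "local" statistics
    return frames.mean(axis=0).mean(axis=1)

def hybrid_feature(frames):
    # the hybrid descriptor is simply the concatenation of the branch outputs
    return np.concatenate([spatial_branch(frames),
                           temporal_branch(frames),
                           local_branch(frames)])

rng = np.random.default_rng(0)
seq = (rng.random((20, 8, 6)) > 0.5).astype(float)  # 20 binary 8x6 silhouettes
f = hybrid_feature(seq)
```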
Affiliation(s)
- Zhipeng Gao
- Xiamen Meiya Pico Information Co., Ltd., Xiamen, Fujian, China
- Junyi Wu
- Xiamen Meiya Pico Information Co., Ltd., Xiamen, Fujian, China
- Tingting Wu
- Xiamen Meiya Pico Information Co., Ltd., Xiamen, Fujian, China
- Renyu Huang
- Xiamen Meiya Pico Information Co., Ltd., Xiamen, Fujian, China
- Anguo Zhang
- College of Mathematics and Data Science, Minjiang University, Fuzhou, China
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Jianqiang Zhao
- Xiamen Meiya Pico Information Co., Ltd., Xiamen, Fujian, China
3
Iwashita Y, Sakano H, Kurazume R, Stoica A. Speed invariant gait recognition: the enhanced mutual subspace method. PLoS One 2021; 16:e0255927. [PMID: 34379692 PMCID: PMC8357177 DOI: 10.1371/journal.pone.0255927]
Abstract
This paper introduces an enhanced MSM (Mutual Subspace Method) methodology for gait recognition that provides robustness to variations in walking speed. The enhanced MSM (eMSM) methodology expands and adapts the MSM, commonly used for face recognition (a static/physiological biometric), to gait recognition (a dynamic/behavioral biometric). To address the loss of accuracy during calculation of the covariance matrix in the PCA step of MSM, we use a 2D PCA-based mutual subspace. Furthermore, to enhance the discrimination capability, we rotate images over a number of angles, which enables us to extract richer gait features that are then fused by a boosting method. The eMSM methodology is evaluated on existing datasets that provide variable walking speed, i.e., the CASIA-C and OU-ISIR gait databases, and is shown to outperform state-of-the-art methods. While the enhancement to MSM discussed in this paper uses a combination of 2D PCA, rotation, and boosting, other combinations of operations may also be advantageous.
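A minimal sketch of the mutual-subspace comparison underlying MSM-style methods (plain PCA subspaces compared via canonical angles; the paper's 2D PCA, rotation, and boosting steps are omitted):

```python
import numpy as np

def subspace(X, k):
    """Orthonormal basis of the k-dim principal subspace of row-samples X."""
    Xc = X - X.mean(axis=0)
    # right singular vectors of the centered data = principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                      # shape (d, k)

def msm_similarity(X1, X2, k=3):
    """Largest squared canonical correlation between two gait subspaces,
    i.e. cos^2 of the smallest principal angle."""
    U1, U2 = subspace(X1, k), subspace(X2, k)
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return float(s[0] ** 2)

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))
# same sequence with small perturbation vs. an unrelated sequence
sim_self = msm_similarity(A, A + 0.01 * rng.normal(size=A.shape))
sim_other = msm_similarity(A, rng.normal(size=(50, 10)))
```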
Affiliation(s)
- Yumi Iwashita
- Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, United States of America
- Kyushu University, Fukuoka, Japan
- Adrian Stoica
- Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, United States of America
4
Kusakunniran W. Review of gait recognition approaches and their challenges on view changes. IET Biometrics 2020. [DOI: 10.1049/iet-bmt.2020.0103]
Affiliation(s)
- Worapan Kusakunniran
- Faculty of Information and Communication Technology, Mahidol University, 999 Phuttamonthon 4 Road, Salaya, Nakhon Pathom 73170, Thailand
5
6
Multi-level features fusion and selection for human gait recognition: an optimized framework of Bayesian model and binomial distribution. Int J Mach Learn Cybern 2019. [DOI: 10.1007/s13042-019-00947-0]
7
Ben X, Gong C, Zhang P, Jia X, Wu Q, Meng W. Coupled patch alignment for matching cross-view gaits. IEEE Trans Image Process 2019; 28:3142-3157. [PMID: 30676959 DOI: 10.1109/tip.2019.2894362]
Abstract
Gait recognition has attracted growing attention in recent years, as human gait has strong discriminative ability even at low resolution and at a distance. Unfortunately, the performance of gait recognition can be largely affected by view change. To address this problem, we propose a Coupled Patch Alignment (CPA) algorithm that effectively matches a pair of gaits across different views. To realize CPA, we first build a set of patches, each made up of a sample together with its intra-class and inter-class nearest neighbors. Then we design an objective function for each patch to balance the cross-view intra-class compactness and the cross-view inter-class separability. Finally, all the local independent patches are combined to render a unified objective function. Theoretically, we show that the proposed CPA has a close relationship with Canonical Correlation Analysis (CCA). Algorithmically, we extend CPA to "Multi-dimensional Patch Alignment" (MPA), which can handle an arbitrary number of views. Comprehensive experiments on the CASIA(B), USF, and OU-ISIR gait databases demonstrate the effectiveness of our methods over existing popular methods for cross-view gait recognition.
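The patch construction this abstract describes — a sample plus its intra-class and inter-class nearest neighbors — can be sketched like this (patch sizes and data are illustrative):

```python
import numpy as np

def build_patch(X, y, i, k_intra=2, k_inter=2):
    """Indices forming the local patch of sample i: the sample itself, its
    k nearest same-class neighbors, and its k nearest different-class
    neighbors (Euclidean distance)."""
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf                       # exclude the sample itself
    same = np.where(y == y[i])[0]
    diff = np.where(y != y[i])[0]
    intra = same[np.argsort(d[same])][:k_intra]
    inter = diff[np.argsort(d[diff])][:k_inter]
    return np.concatenate(([i], intra, inter))

rng = np.random.default_rng(2)
X = rng.normal(size=(12, 4))            # 12 gait features, 2 classes
y = np.array([0] * 6 + [1] * 6)
patch = build_patch(X, y, 0)
```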
8
Affiliation(s)
- Imad Rida
- Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Noor Almaadeed
- Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Somaya Almaadeed
- Department of Computer Science and Engineering, Qatar University, Doha, Qatar
9
Koo JH, Cho SW, Baek NR, Kim MC, Park KR. CNN-based multimodal human recognition in surveillance environments. Sensors (Basel) 2018; 18:3040. [PMID: 30208648 PMCID: PMC6164664 DOI: 10.3390/s18093040]
Abstract
In the current field of human recognition, most research focuses on re-identification of body images taken by several cameras in an outdoor environment; there is almost no research on indoor human recognition. Previous research on indoor recognition has mainly focused on face recognition, because the camera is usually closer to a person indoors than outdoors. However, indoor surveillance cameras are typically installed near the ceiling and capture images from above in a downward direction, so in most cases people do not look directly at the cameras. Thus, it is often difficult to capture frontal face images, and when this is the case, facial recognition accuracy is greatly reduced. To overcome this problem, we can consider using both the face and body for human recognition. However, with indoor cameras, in many cases only part of the target body falls within the camera viewing angle, which reduces the accuracy of human recognition. To address these problems, this paper proposes a multimodal human recognition method that uses both the face and body and is based on deep convolutional neural networks (CNNs). Specifically, to handle partially captured bodies, the results of recognizing the face and body through separate CNNs (VGG Face-16 and ResNet-50) are combined by score-level fusion with the weighted-sum rule to improve recognition performance. Experiments conducted using the custom-made Dongguk face and body database (DFB-DB1) and the open ChokePoint database demonstrate that the proposed method achieves high recognition accuracy (equal error rates of 1.52% and 0.58%, respectively) compared with single-modality face or body recognition and other methods from previous studies.
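The score-level fusion by the weighted-sum rule can be sketched in a few lines; the weight below is an assumed value, not the one tuned in the paper:

```python
import numpy as np

def weighted_sum_fusion(face_score, body_score, w_face=0.6):
    """Score-level fusion by the weighted-sum rule. The weight here is
    illustrative; in practice it is tuned on training data."""
    return w_face * face_score + (1.0 - w_face) * body_score

# scores are similarities in [0, 1]; higher means "same person"
fused = weighted_sum_fusion(np.array([0.9, 0.2]), np.array([0.7, 0.4]))
```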
Affiliation(s)
- Ja Hyung Koo
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pil-dong-ro, 1-gil, Jung-gu, Seoul 100-715, Korea.
- Se Woon Cho
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pil-dong-ro, 1-gil, Jung-gu, Seoul 100-715, Korea.
- Na Rae Baek
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pil-dong-ro, 1-gil, Jung-gu, Seoul 100-715, Korea.
- Min Cheol Kim
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pil-dong-ro, 1-gil, Jung-gu, Seoul 100-715, Korea.
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pil-dong-ro, 1-gil, Jung-gu, Seoul 100-715, Korea.
10
Gait energy response functions for gait recognition against various clothing and carrying status. Appl Sci (Basel) 2018; 8:1380. [DOI: 10.3390/app8081380]
Abstract
Silhouette-based gait representations are widely used in the gait recognition community because of their effectiveness and efficiency, but they are sensitive to changes in covariate conditions such as clothing and carrying status. We therefore propose a gait energy response function (GERF) that transforms a gait energy (i.e., an intensity value) of a silhouette-based gait feature into a value better suited to handling these covariate conditions. Additionally, since the discrimination capability of gait energies, as well as the degree to which they are affected by the covariate conditions, differs among body parts, we extend the GERF framework to a spatially dependent GERF (SD-GERF) that accounts for spatial dependence. The proposed GERFs are represented as vectors in a transformation lookup table and are optimized in closed form via an efficient generalized eigenvalue problem. Finally, two post-processing techniques, Gabor filtering and spatial metric learning, are applied to the transformed gait features to boost accuracy. Experimental results on three publicly available datasets with clothing and carrying status variations show that the proposed method outperforms other state-of-the-art methods.
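A GERF is essentially a learned lookup table over gait-energy values; applying such a table can be sketched as follows (the toy LUT below is illustrative, whereas the paper learns it by solving a generalized eigenvalue problem):

```python
import numpy as np

def apply_gerf(gei, lut):
    """Remap each gait-energy value of a GEI-like feature through a
    256-entry lookup table (the learned response function)."""
    idx = np.clip((gei * 255).astype(int), 0, 255)
    return lut[idx]

# toy LUT that squashes mid energies (often covariate-affected) -- illustrative
lut = (np.arange(256) / 255.0) ** 2
gei = np.array([[0.0, 0.5],
                [1.0, 0.25]])
out = apply_gerf(gei, lut)
```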
11
Zou Q, Ni L, Wang Q, Li Q, Wang S. Robust gait recognition by integrating inertial and RGBD sensors. IEEE Trans Cybern 2018; 48:1136-1150. [PMID: 28368842 DOI: 10.1109/tcyb.2017.2682280]
Abstract
Gait has been considered a promising and unique biometric for person identification. Traditionally, gait data are collected using either color sensors (e.g., a CCD camera), depth sensors (e.g., a Microsoft Kinect), or inertial sensors (e.g., an accelerometer). However, a single type of sensor may capture only part of the dynamic gait features, making gait recognition sensitive to complex covariate conditions and leading to fragile gait-based person identification systems. In this paper, we propose to combine all three types of sensors for gait data collection and gait recognition, which can be used for important identification applications, such as identity recognition for access to a restricted building or area. We propose two new algorithms, namely EigenGait and TrajGait, to extract gait features from the inertial data and the RGBD (color and depth) data, respectively. Specifically, EigenGait extracts general gait dynamics from the accelerometer readings in the eigenspace, and TrajGait extracts more detailed subdynamics by analyzing 3-D dense trajectories. Finally, both extracted features are fed into a supervised classifier for gait recognition and person identification. Experiments on 50 subjects, with comparisons to several other state-of-the-art gait-recognition approaches, show that the proposed approach achieves higher recognition accuracy and robustness.
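The EigenGait idea — projecting accelerometer windows onto principal directions in the eigenspace — can be sketched as follows (synthetic data; the paper's preprocessing and classifier are omitted):

```python
import numpy as np

def eigengait(windows, n_components=3):
    """Project fixed-length accelerometer windows onto their top principal
    directions ("eigengaits"). A sketch of the idea only."""
    Xc = windows - windows.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]            # (k, window_len), orthonormal rows
    return Xc @ basis.T, basis           # per-window gait dynamics features

rng = np.random.default_rng(3)
t = np.linspace(0, 4 * np.pi, 64)
# 40 noisy copies of a periodic acceleration pattern, one window per row
windows = np.sin(t) + 0.1 * rng.normal(size=(40, 64))
feats, basis = eigengait(windows)
```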
12
Li W, Kuo CCJ, Peng J. Gait recognition via GEI subspace projections and collaborative representation classification. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2017.10.049]
13
Medikonda J, Hanmandlu M, Panigrahi BK. Information set based features for the speed invariant gait recognition. IET Biometrics 2017. [DOI: 10.1049/iet-bmt.2016.0136]
Affiliation(s)
- Jeevan Medikonda
- Department of Biomedical Engineering, Manipal Institute of Technology, Manipal University, Manipal, India
14
Castro FM, Marín-Jiménez MJ, Guil Mata N, Muñoz-Salinas R. Fisher motion descriptor for multiview gait recognition. Int J Pattern Recognit Artif Intell 2017. [DOI: 10.1142/s021800141756002x]
Abstract
The goal of this paper is to identify individuals by analyzing their gait. Instead of using binary silhouettes as input data (as done in many previous works), we propose and evaluate the use of motion descriptors based on densely sampled short-term trajectories. We take advantage of state-of-the-art people detectors to define custom spatial configurations of the descriptors around the target person, obtaining a rich representation of the gait motion. The local motion features (described by the Divergence-Curl-Shear descriptor [M. Jain, H. Jegou and P. Bouthemy, Better exploiting motion for better action recognition, in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR) (2013), pp. 2555–2562]) extracted on the different spatial areas of the person are combined into a single high-level gait descriptor by using the Fisher Vector encoding [F. Perronnin, J. Sánchez and T. Mensink, Improving the Fisher kernel for large-scale image classification, in Proc. European Conf. Computer Vision (ECCV) (2010), pp. 143–156]. The proposed approach, coined Pyramidal Fisher Motion, is experimentally validated on the 'CASIA' dataset (parts B and C) [S. Yu, D. Tan and T. Tan, A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition, in Proc. Int. Conf. Pattern Recognition, Vol. 4 (2006), pp. 441–444], the 'TUM GAID' dataset [M. Hofmann, J. Geiger, S. Bachmann, B. Schuller and G. Rigoll, The TUM Gait from Audio, Image and Depth (GAID) database: Multimodal recognition of subjects and traits, J. Vis. Commun. Image Represent. 25(1) (2014) 195–206], the 'CMU MoBo' dataset [R. Gross and J. Shi, The CMU Motion of Body (MoBo) database, Technical Report CMU-RI-TR-01-18, Robotics Institute (2001)], and the recent 'AVA Multiview Gait' dataset [D. López-Fernández, F. Madrid-Cuevas, A. Carmona-Poyato, M. Marín-Jiménez and R. Muñoz-Salinas, The AVA multi-view dataset for gait recognition, in Activity Monitoring by Multiple Distributed Sensing, Lecture Notes in Computer Science (Springer, 2014), pp. 26–39]. The results show that this new approach achieves state-of-the-art results on the gait recognition problem, allowing recognition of walking people from diverse viewpoints in single- and multiple-camera setups, wearing different clothes, carrying bags, walking at diverse speeds, and not limited to straight walking paths.
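A simplified Fisher Vector encoding (means-gradient part only, with given GMM parameters and plain L2 normalization) can be sketched as follows; the full pipeline in the paper also includes descriptor extraction, the pyramidal structure, and additional normalizations:

```python
import numpy as np

def fisher_vector_mu(X, pi, mu, sigma):
    """Means-gradient part of the Fisher Vector for local descriptors X
    under a diagonal-covariance GMM (pi, mu, sigma). Simplified sketch:
    weight/variance gradients and power-normalization are omitted."""
    N, D = X.shape
    K = len(pi)
    # posterior responsibilities gamma[i, k] via log-sum-exp for stability
    log_p = np.stack([
        -0.5 * np.sum(((X - mu[k]) / sigma[k]) ** 2
                      + np.log(2 * np.pi * sigma[k] ** 2), axis=1)
        + np.log(pi[k]) for k in range(K)], axis=1)
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)
    # gradient w.r.t. the component means, stacked over components
    fv = np.concatenate([
        (gamma[:, k:k + 1] * (X - mu[k]) / sigma[k]).sum(axis=0)
        / (N * np.sqrt(pi[k])) for k in range(K)])
    return fv / (np.linalg.norm(fv) + 1e-12)   # L2 normalization

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 2))                  # 100 local 2-D descriptors
pi = np.array([0.5, 0.5])
mu = np.array([[-1.0, 0.0], [1.0, 0.0]])
sigma = np.ones((2, 2))
fv = fisher_vector_mu(X, pi, mu, sigma)
```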
Affiliation(s)
- F.M. Castro
- Department of Computer Architecture, University of Malaga, 29071, Spain
- M.J. Marín-Jiménez
- Department of Computing and Numerical Analysis, University of Cordoba, 14071, Spain
- N. Guil Mata
- Department of Computer Architecture, University of Malaga, 29071, Spain
- R. Muñoz-Salinas
- Department of Computing and Numerical Analysis, University of Cordoba, 14071, Spain
15
Muramatsu D, Makihara Y, Yagi Y. View transformation model incorporating quality measures for cross-view gait recognition. IEEE Trans Cybern 2016; 46:1602-1615. [PMID: 26259209 DOI: 10.1109/tcyb.2015.2452577]
Abstract
Cross-view gait recognition authenticates a person using a pair of gait image sequences with different observation views. View difference degrades gait recognition accuracy, so several solutions have been proposed to suppress this degradation. One useful solution is to apply a view transformation model (VTM) that encodes a joint subspace of multiview gait features trained with auxiliary data from multiple training subjects, who are different from the test subjects (recognition targets). In the VTM framework, a gait feature with a destination view is generated from one with a source view by estimating a vector on the trained joint subspace, and gait features with the same destination view are compared for recognition. Although this framework improves recognition accuracy as a whole, the fit of the VTM depends on the given gait feature pair and causes an inhomogeneously biased dissimilarity score. Because it is well known that normalization of such inhomogeneously biased scores generally improves recognition accuracy, we propose a VTM incorporating a score normalization framework with quality measures that encode the degree of the bias. From a pair of gait features, we calculate two quality measures and use them, together with the biased dissimilarity score, to calculate the posterior probability that both gait features originate from the same subject. The proposed method was evaluated on two gait datasets: a large population gait dataset of over-ground walking (course dataset) and a treadmill gait dataset. The experimental results show that incorporating the quality measures contributes to accuracy improvement in many cross-view settings.
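The idea of quality-dependent score normalization can be illustrated with a toy logistic model that maps a biased dissimilarity score plus two quality measures to a pseudo-posterior; the weights below are made up for illustration, not the paper's learned model:

```python
import numpy as np

def normalized_posterior(score, q1, q2, w=(-4.0, 1.5, 1.5), b=1.0):
    """Illustrative stand-in for quality-dependent score normalization:
    combine a dissimilarity score (lower = more similar) and two quality
    measures into a pseudo-posterior with a logistic model."""
    z = w[0] * score + w[1] * q1 + w[2] * q2 + b
    return 1.0 / (1.0 + np.exp(-z))

p_good = normalized_posterior(score=0.2, q1=0.9, q2=0.9)  # low dissimilarity, high quality
p_bad = normalized_posterior(score=0.9, q1=0.3, q2=0.3)   # high dissimilarity, low quality
```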
16
Abstract
The body of research that examines the perception of biological motion is extensive; it explores which factors are perceived from biological motion and how this information is processed. This research demonstrates that individuals are able to use relative (temporal and spatial) information from a person's movement to recognize factors including gender, age, deception, emotion, intention, and action. The research also demonstrates that movement presents idiosyncratic properties that allow individual discrimination, thus providing the basis for significant exploration in the domains of biometrics and social signal processing. The domains of medical forensics, safety garments, and victim selection also have a history of applied research on the perception of biological motion; however, a number of additional domains present opportunities for application that have not been explored in depth. Therefore, the purpose of this paper is to present an overview of the current applications of biological-motion-based research and to propose a number of areas where biological motion research, specific to recognition, could be applied in the future.
17
Intra-individual gait pattern variability in specific situations: implications for forensic gait analysis. Forensic Sci Int 2016; 264:15-23. [PMID: 26990706 DOI: 10.1016/j.forsciint.2016.02.043]
Abstract
In this study, inter- and intra-individual gait pattern differences are examined in various gait situations by means of phase diagrams of the extremity angles (cyclograms). Eight test subjects each walked a 6 m distance three times under each of several conditions: barefoot, wearing sneakers, wearing combat boots, after muscular fatigue, and wearing a full-face motorcycle helmet restricting vision. The joint angles of the foot, knee, and hip were recorded in the sagittal plane. The coupling of movements was represented by time-adjusted cyclograms, and the inter- and intra-individual differences were captured by calculating the similarity between different gait patterns. Gait pattern variability was often greater between the defined test situations than between the individual test subjects. The results were interpreted in terms of neurophysiological regulation mechanisms. Footwear, masking, and fatigue were interpreted as disturbance parameters, each being a cause of gait pattern variability and complicating the inference of personal identity from video recordings.
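Comparing time-adjusted cyclograms can be sketched with a simple correlation-based similarity; the angle traces and the similarity measure below are illustrative stand-ins for the study's actual data and metric:

```python
import numpy as np

def cyclogram(knee, hip):
    """Knee-hip angle-angle curve sampled over one gait cycle."""
    return np.stack([knee, hip], axis=1)

def cyclogram_similarity(c1, c2):
    """Similarity of two time-adjusted cyclograms: mean Pearson
    correlation of the matched coordinate traces (a simple stand-in
    for the study's similarity calculation)."""
    r = [np.corrcoef(c1[:, j], c2[:, j])[0, 1] for j in range(c1.shape[1])]
    return float(np.mean(r))

phase = np.linspace(0, 2 * np.pi, 100)
# synthetic angle traces for two conditions of the same walker
barefoot = cyclogram(60 * np.sin(phase), 30 * np.cos(phase))
boots = cyclogram(55 * np.sin(phase + 0.05), 28 * np.cos(phase + 0.05))
sim = cyclogram_similarity(barefoot, boots)
```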
18
Gibelli D, Obertová Z, Ritz-Timme S, Gabriel P, Arent T, Ratnayake M, De Angelis D, Cattaneo C. The identification of living persons on images: a literature review. Leg Med (Tokyo) 2016; 19:52-60. [DOI: 10.1016/j.legalmed.2016.02.001]
19
20
21
22
Rogez G, Rihan J, Guerrero JJ, Orrite C. Monocular 3-D gait tracking in surveillance scenes. IEEE Trans Cybern 2014; 44:894-909. [PMID: 23955796 DOI: 10.1109/tcyb.2013.2275731]
Abstract
Gait recognition can potentially provide noninvasive and effective biometric authentication from a distance. However, the performance of gait recognition systems will suffer in real surveillance scenarios with multiple interacting individuals and where the camera is usually placed at a significant angle and distance from the floor. We present a methodology for view-invariant monocular 3-D human pose tracking in man-made environments in which we assume that observed people move on a known ground plane. First, we model 3-D body poses and camera viewpoints with a low-dimensional manifold and learn a generative model of the silhouette from this manifold to a reduced set of training views. During the online stage, 3-D body poses are tracked using recursive Bayesian sampling conducted jointly over the scene's ground plane and the pose-viewpoint manifold. For each sample, the homography that relates the corresponding training plane to the image points is calculated using the dominant 3-D directions of the scene, the sampled location on the ground plane, and the sampled camera view. Each regressed silhouette shape is projected using this homographic transformation and matched in the image to estimate its likelihood. Our framework successfully tracks 3-D human walking poses in a 3-D environment while exploring only a 4-D state space. In our experimental evaluation, we demonstrate the significant improvement of the homographic alignment over a commonly used similarity transformation and provide quantitative pose-tracking results for monocular sequences with a strong perspective effect from the CAVIAR dataset.
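The homographic projection step — mapping training-plane points into the image with a 3x3 homography — can be sketched as follows (the matrix below is an arbitrary example, not one estimated from a scene):

```python
import numpy as np

def apply_homography(H, pts):
    """Project 2-D points with a 3x3 homography via homogeneous coords."""
    P = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return P[:, :2] / P[:, 2:3]          # divide out the projective scale

# example homography: translation, slight shear/scale, and a mild
# perspective term in the last row
H = np.array([[1.0, 0.2, 5.0],
              [0.0, 1.1, 2.0],
              [0.0, 0.001, 1.0]])
corners = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
projected = apply_homography(H, corners)
```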
23
Nandy A, Chakraborty R, Chakraborty P, Nandi G. A novel approach to human gait recognition using possible speed invariant features. Int J Comput Intell Syst 2014. [DOI: 10.1080/18756891.2014.967004]