1. Wang Y, Yao A, Dou B, Huang C, Yang L, Liang J, Lan J, Lin S. Self-healing, environmentally stable and adhesive hydrogel sensor with conductive cellulose nanocrystals for motion monitoring and character recognition. Carbohydr Polym 2024; 332:121932. PMID: 38431422; DOI: 10.1016/j.carbpol.2024.121932.
Abstract
Conductive hydrogel-based sensors offer diverse applications in artificial intelligence, wearable electronic devices and character recognition management. However, it remains a significant challenge to maintain their satisfactory performance under extreme climatic conditions. Herein, a stretchable, self-adhesive, self-healing and environmentally stable conductive hydrogel was developed through free radical polymerization of hydroxyethyl acrylate (HEA) and poly(ethylene glycol) methacrylate (PEG) as the skeleton, followed by the incorporation of polyaniline-coated cellulose nanocrystal (CNC@PANI) as the conductive and reinforcing nanofiller. Encouragingly, the as-prepared hydrogel (CHP) exhibited decent mechanical strength, satisfactory self-adhesion, a prominent self-healing property (95.04 % after 60 s), excellent anti-freezing performance (below -60 °C) and outstanding moisture retention. The assembled sensor derived from CHP hydrogel possessed a low detection limit (0.5 % strain), high strain sensitivity (GF = 1.68) and fast response time (96 ms). Remarkably, even at harsh environmental temperatures from -60 °C to 80 °C, it reliably detected subtle and large-scale human motion over long-term operation (>10,000 cycles), demonstrating its exceptional environmental tolerance. More interestingly, this hydrogel-based sensor could be assembled into a "writing board" for accurate handwritten numeral recognition. Therefore, the as-obtained multifunctional hydrogel could be a promising material for human motion detection and character recognition platforms even in harsh surroundings.
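The strain sensitivity quoted above (GF = 1.68) is conventionally the gauge factor of a resistive sensor, the ratio of relative resistance change to applied strain. A minimal sketch of that definition (the resistance and strain values below are illustrative, not data from the paper):

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (dR/R0) / strain for a resistive strain sensor."""
    return ((r - r0) / r0) / strain

# Illustrative values only: 10% strain changing resistance from 100 ohm to 116.8 ohm
gf = gauge_factor(100.0, 116.8, 0.10)
print(round(gf, 2))  # 1.68
```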
Affiliation(s)
- Yafang Wang: National Engineering Laboratory for Clean Technology of Leather Manufacture, College of Biomass Science and Engineering, Sichuan University, Chengdu 610065, PR China; High-Tech Organic Fibers Key Laboratory of Sichuan Province, Chengdu 610036, PR China
- Anrong Yao: National Engineering Laboratory for Clean Technology of Leather Manufacture, College of Biomass Science and Engineering, Sichuan University, Chengdu 610065, PR China
- Baojie Dou: National Engineering Laboratory for Clean Technology of Leather Manufacture, College of Biomass Science and Engineering, Sichuan University, Chengdu 610065, PR China
- Cuimin Huang: National Engineering Laboratory for Clean Technology of Leather Manufacture, College of Biomass Science and Engineering, Sichuan University, Chengdu 610065, PR China
- Lin Yang: Department of Chemical and Materials Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
- Juan Liang: High-Tech Organic Fibers Key Laboratory of Sichuan Province, Chengdu 610036, PR China
- Jianwu Lan: National Engineering Laboratory for Clean Technology of Leather Manufacture, College of Biomass Science and Engineering, Sichuan University, Chengdu 610065, PR China
- Shaojian Lin: National Engineering Laboratory for Clean Technology of Leather Manufacture, College of Biomass Science and Engineering, Sichuan University, Chengdu 610065, PR China; High-Tech Organic Fibers Key Laboratory of Sichuan Province, Chengdu 610036, PR China
2. Benaissa A, Bahri A, El Allaoui A. Multilingual character recognition dataset for Moroccan official documents. Data Brief 2024; 52:109953. PMID: 38186736; PMCID: PMC10770702; DOI: 10.1016/j.dib.2023.109953.
Abstract
This article focuses on the construction of a dataset for multilingual character recognition in Moroccan official documents. The dataset covers Arabic, French, and Tamazight, and is built programmatically to ensure data diversity. It consists of sub-datasets: Uppercase alphabet (26 classes), Lowercase alphabet (26 classes), Digits (9 classes), Arabic (28 classes), Tifinagh letters (33 classes), Symbols (14 classes), and French special characters (16 classes). The dataset construction process involves collecting representative fonts and generating multiple character images using a Python script, presenting a comprehensive variety essential for robust recognition models. Moreover, this dataset contributes to the digitization of these diverse official documents and archival papers, essential for preserving cultural heritage and enabling advanced text recognition technologies. The need for this work arises from the advancements in character recognition techniques and the significance of large-scale annotated datasets. The proposed dataset contributes to the development of robust character recognition models for practical applications.
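The seven sub-datasets partition one combined label space. A sketch of how such a manifest might be laid out, using the class counts reported in the abstract (the subset names are assumptions, and the actual image rendering step, which would use a font-rendering library such as Pillow, is omitted):

```python
# Sub-dataset class counts as reported in the abstract.
SUBSETS = {
    "uppercase": 26, "lowercase": 26, "digits": 9, "arabic": 28,
    "tifinagh": 33, "symbols": 14, "french_special": 16,
}

def build_label_map(subsets):
    """Assign a unique global class id to every (subset, local_class) pair."""
    label_map, next_id = {}, 0
    for name, count in subsets.items():
        for local in range(count):
            label_map[(name, local)] = next_id
            next_id += 1
    return label_map

labels = build_label_map(SUBSETS)
print(len(labels))  # 152 classes in total
```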
Affiliation(s)
- Ali Benaissa: Data Science and Competitive Intelligence Team (DSCI), ENSAH, Abdelmalek Essaadi University (UAE), Tetouan, Morocco; Finance and Governance of Organizations team, Governance and Performance of Organizations laboratory, The National School of Management, Abdelmalek Essaadi University, Tangier, Morocco
- Abdelkhalak Bahri: Data Science and Competitive Intelligence Team (DSCI), ENSAH, Abdelmalek Essaadi University (UAE), Tetouan, Morocco
- Ahmad El Allaoui: Decisional Computing and Systems Modelling Team, Engineering Sciences and Techniques Laboratory, Faculty of Sciences and Techniques Errachidia, Moulay Ismail University of Meknes, Morocco
3. Thierfelder P, Cai ZG, Huang S, Lin H. The Chinese lexicon of deaf readers: A database of character decisions and a comparison between deaf and hearing readers. Behav Res Methods 2023. PMID: 38114882; DOI: 10.3758/s13428-023-02305-z.
Abstract
We present a psycholinguistic study investigating lexical effects on simplified Chinese character recognition by deaf readers. Prior research suggests that deaf readers exhibit efficient orthographic processing and decreased reliance on speech-based phonology in word recognition compared to hearing readers. In this large-scale character decision study (25 participants, each evaluating 2500 real characters and 2500 pseudo-characters), we analyzed various factors influencing character recognition accuracy and speed in deaf readers. Deaf participants demonstrated greater accuracy and faster recognition when characters were more frequent, were acquired earlier, had more strokes, displayed higher orthographic complexity, were more imageable in reference, or were less concrete in reference. Comparison with a previous study of hearing readers revealed that the facilitative effect of frequency on character decision accuracy was stronger for deaf readers than hearing readers. The effect of orthographic-phonological regularity differed significantly for the two groups, indicating that deaf readers rely more on orthographic structure and less on phonological information during character recognition. Notably, increased stroke counts (i.e., higher orthographic complexity) hindered hearing readers but facilitated recognition processes in deaf readers, suggesting that deaf readers excel at recognizing characters based on orthographic structure. The database generated from this large-scale character decision study offers a valuable resource for further research and practical applications in deaf education and literacy.
Affiliation(s)
- Philip Thierfelder: Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Sha Tin, N.T., Hong Kong SAR
- Zhenguang G Cai: Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Sha Tin, N.T., Hong Kong SAR
- Shuting Huang: Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Sha Tin, N.T., Hong Kong SAR
- Hao Lin: Shanghai International Studies University, 550 Dalian Road (W), Shanghai, People's Republic of China
4. Wu R, Zhou F, Li N, Liu X, Wang R. Reconstructed SqueezeNext with C-CBAM for offline handwritten Chinese character recognition. PeerJ Comput Sci 2023; 9:e1529. PMID: 37705648; PMCID: PMC10496007; DOI: 10.7717/peerj-cs.1529.
Abstract
Background: Handwritten Chinese character recognition (HCCR) is a difficult problem in character recognition. Chinese characters are diverse and many of them are very similar. HCCR models consume a large amount of computational resources at runtime, making them difficult to deploy to resource-limited platforms.

Methods: To reduce the computational consumption and improve the operational efficiency of such models, an improved lightweight HCCR model is proposed in this article. We reconstructed the basic modules of the SqueezeNext network so that the model would be compatible with the introduced attention module and model compression techniques. The proposed Cross-stage Convolutional Block Attention Module (C-CBAM) redeploys the Spatial Attention Module (SAM) and the Channel Attention Module (CAM) according to the feature map characteristics of the deep and shallow layers of the model, targeting enhanced information interaction between the deep and shallow layers. The reformulated intra-stage convolutional kernel importance assessment criterion integrates the normalization nature of the weights and allows structured pruning in equal proportions at each stage of the model. Quantization-aware training maps the 32-bit floating-point weights in the pruned model to 8-bit fixed-point weights with minor loss.

Results: Pruning with the new convolutional kernel importance evaluation criterion proposed in this article achieves a pruning rate of 50.79% with little impact on accuracy. The combined optimization methods compress the model to 1.06 MB and achieve an accuracy of 97.36% on the CASIA-HWDB dataset. Compared with the initial model, the volume is reduced by 87.15% and the accuracy is improved by 1.71%. The proposed model greatly reduces running time and storage requirements while maintaining accuracy.
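The abstract does not give the exact kernel-importance formula, so the following is only a generic sketch of the structured-pruning step it describes: rank whole output kernels of a convolution layer by weight norm and drop a fixed proportion per layer (shapes and the L2 criterion are assumptions, not the paper's criterion):

```python
import numpy as np

def prune_layer(weights, keep_ratio=0.5):
    """Structured pruning sketch: drop output kernels with the smallest L2 norms.

    weights: (out_channels, in_channels, k, k) convolution weight tensor.
    Returns the pruned tensor and the indices of the kept kernels.
    """
    # One importance score per output kernel: the L2 norm of its flattened weights.
    norms = np.linalg.norm(weights.reshape(weights.shape[0], -1), axis=1)
    n_keep = max(1, int(round(weights.shape[0] * keep_ratio)))
    kept = np.sort(np.argsort(norms)[::-1][:n_keep])  # keep the n_keep largest
    return weights[kept], kept

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32, 3, 3))
pruned, kept = prune_layer(w, keep_ratio=0.5)
print(pruned.shape)  # (32, 32, 3, 3): half the output kernels removed
```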
Affiliation(s)
- Ruiqi Wu: School of Information Technology, Yancheng Institute of Technology, Yancheng, Jiangsu, China
- Feng Zhou: School of Information Technology, Yancheng Institute of Technology, Yancheng, Jiangsu, China
- Nan Li: School of Information Technology, Yancheng Institute of Technology, Yancheng, Jiangsu, China
- Xian Liu: School of Information Technology, Yancheng Institute of Technology, Yancheng, Jiangsu, China
- Rugang Wang: School of Information Technology, Yancheng Institute of Technology, Yancheng, Jiangsu, China
5. Zhou J. The Contribution of Radical Knowledge and Character Recognition to L2 Chinese Reading Comprehension. J Psycholinguist Res 2023; 52:445-475. PMID: 35715712; DOI: 10.1007/s10936-022-09880-w.
Abstract
Although there have been extensive theoretical discussions of the various component skills needed to comprehend texts in L2 English and L1 Chinese, empirical investigations of the component skills of L2 Chinese are scarce. This study attempts to fill this gap by investigating the direct and indirect contributions of two lower-level latent component skills, radical knowledge and character recognition, to L2 Chinese passage-level reading comprehension. Radical knowledge was measured by a Receptive Semantic Radical Knowledge Test and a Semantic Radical Meaning Matching Test. Character recognition was assessed by a Lexical Decision Test and a Character Knowledge Test. Two tests, a Multiple-choice Test and a Cloze Test, were adopted to measure textual reading comprehension. The data were collected from 209 learners of Chinese as a second language (CSL). Radical knowledge was found to have a significant direct effect on character recognition and a significant indirect effect on L2 Chinese reading through the mediation of character recognition. Character recognition was found to have a significant direct effect on reading comprehension. Taken together, this study suggests the importance of lower-level character and sub-character component skills to L2 Chinese reading.
Affiliation(s)
- Jing Zhou: Department of Asian Languages and Literatures, Pomona College, 333 N College Way, Claremont, CA 91711, USA; School of English Studies, Zhejiang International Studies University, 2 Xiyuanjiu Rd, Xihu, Hangzhou, Zhejiang 310030, China
6. Abu Al-Haija Q. Leveraging ShuffleNet transfer learning to enhance handwritten character recognition. Gene Expr Patterns 2022; 45:119263. PMID: 35850482; DOI: 10.1016/j.gep.2022.119263.
Abstract
Handwritten character recognition has continually been a fascinating field of study in pattern recognition due to its numerous real-life applications, such as reading tools for blind people and automated reading of handwritten bank cheques. The proper and accurate conversion of handwriting into organized digital files that can be easily recognized and processed by computer algorithms is therefore required for various applications and systems. This paper proposes an accurate and precise autonomous structure for handwriting recognition using a ShuffleNet convolutional neural network to produce multi-class recognition of offline handwritten characters and numbers. The developed system utilizes transfer learning of the powerful ShuffleNet CNN to train, validate, recognize, and categorize the handwritten character/digit image dataset into 26 classes for the English characters and ten categories for the digits. The experimental outcomes show that the proposed recognition system achieves extraordinary overall recognition accuracy, peaking at 99.50% and outperforming other character recognition systems reported in the state of the art. Besides, a low computational cost has been observed for the proposed model, averaging 2.7 ms per single-sample inference.
Affiliation(s)
- Qasem Abu Al-Haija: Department of Computer Science/Cybersecurity, Princess Sumaya University for Technology, Amman, Jordan
7. Salemdeeb M, Ertürk S. Full depth CNN classifier for handwritten and license plate characters recognition. PeerJ Comput Sci 2021; 7:e576. PMID: 34239971; PMCID: PMC8237323; DOI: 10.7717/peerj-cs.576.
Abstract
Character recognition is an important research field of interest for many applications. In recent years, deep learning has made breakthroughs in image classification, especially for character recognition, and convolutional neural networks (CNNs) in particular deliver state-of-the-art results in this area. Motivated by the success of CNNs, this paper proposes a simple, novel, full-depth stacked CNN architecture for Latin and Arabic handwritten alphanumeric characters that is also utilized for license plate (LP) character recognition. The proposed architecture is constructed from four convolutional layers, two max-pooling layers, and one fully connected layer. This architecture is low-complexity, fast, and reliable, and achieves very promising classification accuracy that may move the field forward in terms of low complexity, high accuracy and full feature extraction. The proposed approach is tested on four benchmark handwritten character datasets, the Fashion-MNIST dataset, public LP character datasets and a newly introduced real LP isolated character dataset. In testing, the proposed approach reports an error of only 0.28% for MNIST, 0.34% for MAHDB, 1.45% for AHCD, 3.81% for AIA9K, 5.00% for Fashion-MNIST, 0.26% for the Saudi license plate character dataset and 0.97% for the Latin license plate character dataset. The license plate characters include license plates from Turkey (TR), Europe (EU), USA, United Arab Emirates (UAE) and Kingdom of Saudi Arabia (KSA).
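The abstract fixes the layer counts (four conv, two max-pool, one fully connected) but not the kernel sizes or input resolution, so the following sketch only shows how the flattened feature count feeding the final fully connected layer would be derived under assumed hyperparameters: 3x3 'same' convolutions, 2x2 max-pooling, a 28x28 input (MNIST-sized), and 64 channels in the last conv layer:

```python
def feature_map_size(size, layers):
    """Track spatial side length through conv ('same' padding) and 2x2 pool layers."""
    for kind in layers:
        if kind == "conv":
            pass          # 3x3 kernel, stride 1, 'same' padding: size unchanged
        elif kind == "pool":
            size //= 2    # 2x2 max-pool, stride 2: size halved
    return size

# Assumed ordering: conv-conv-pool-conv-conv-pool (4 conv, 2 pool in total).
side = feature_map_size(28, ["conv", "conv", "pool", "conv", "conv", "pool"])
channels = 64  # assumed channel count of the last conv layer
print(side, side * side * channels)  # 7 3136: inputs to the fully connected layer
```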
Affiliation(s)
- Mohammed Salemdeeb: Department of Electrical-Electronics Engineering, Bartin University, Bartin, Turkey
- Sarp Ertürk: Department of Electronics and Communication Engineering, Kocaeli University, Izmit, Kocaeli, Turkey
8. Zhao J, Chen S, Tong X, Yi L. Advantage in Character Recognition Among Chinese Preschool Children with Autism Spectrum Disorder. J Autism Dev Disord 2019; 49:4929-4940. PMID: 31493156; DOI: 10.1007/s10803-019-04202-x.
Abstract
This study examined Chinese character recognition and its cognitive and linguistic correlates in preschool children with autism spectrum disorder (ASD). Forty-seven children with ASD and 51 IQ-matched typically developing (TD) children were tested on Chinese character recognition, rapid automatized naming, inhibitory control, digit span, IQ, vocabulary, phonological awareness, morphological awareness, and listening comprehension. Chinese children with ASD showed strong character recognition skills. Unlike TD children's character recognition, which was correlated with all the measured cognitive and linguistic skills, character recognition of children with ASD was only significantly correlated with rapid automatized naming, inhibitory control, and phonological awareness. Our findings suggest that phonological awareness and rapid automatized naming may serve as important predictors for possible advantage in emergent literacy acquisition in Chinese children with ASD.
9. Large DR, Burnett G, Crundall E, Lawson G, Skrypchuk L, Mouzakitis A. Evaluating secondary input devices to support an automotive touchscreen HMI: A cross-cultural simulator study conducted in the UK and China. Appl Ergon 2019; 78:184-196. PMID: 31046950; DOI: 10.1016/j.apergo.2019.03.005.
Abstract
Touchscreen Human-Machine Interfaces (HMIs) are a well-established and popular choice to provide the primary control interface between driver and vehicle, yet inherently demand some visual attention. Employing a secondary device with the touchscreen may reduce the demand but there is some debate about which device is most suitable, with current manufacturers favouring different solutions and applying these internationally. We present an empirical driving simulator study, conducted in the UK and China, in which 48 participants undertook typical in-vehicle tasks utilising either a touchscreen, rotary-controller, steering-wheel-controls or touchpad. In both the UK and China, the touchscreen was the most preferred/least demanding to use, and the touchpad least preferred/most demanding, whereas the rotary-controller was generally favoured by UK drivers and steering-wheel-controls were more popular in China. Chinese drivers were more excited by the novelty of the technology, and spent more time attending to the devices while driving, leading to an increase in off-road glance time and a corresponding detriment to vehicle control. Even so, Chinese drivers rated devices as easier-to-use while driving, and felt that they interfered less with their driving performance, compared to their UK counterparts. Results suggest that the most effective solution (to maximise performance/acceptance, while minimising visual demand) is to maintain the touchscreen as the primary control interface (e.g. for top-level tasks), and supplement this with a secondary device that is only enabled for certain actions; moreover, different devices may be employed in different cultural markets. Further work is required to explore these recommendations in greater depth (e.g. during extended or real-world testing), and to validate the findings and approach in other cultural contexts.
Affiliation(s)
- David R Large: Human Factors Research Group, Faculty of Engineering, University of Nottingham, UK
- Gary Burnett: Human Factors Research Group, Faculty of Engineering, University of Nottingham, UK
- Elizabeth Crundall: Human Factors Research Group, Faculty of Engineering, University of Nottingham, UK
- Glyn Lawson: Human Factors Research Group, Faculty of Engineering, University of Nottingham, UK
- Lee Skrypchuk: Jaguar Land Rover Research, International Digital Laboratory, Coventry, UK
- Alex Mouzakitis: Jaguar Land Rover Research, International Digital Laboratory, Coventry, UK
10. Zhou W, Gao Y, Chang Y, Su M. Hemispheric processing of lexical information in Chinese character recognition and its relationship to reading performance. J Gen Psychol 2019; 146:34-49. PMID: 30632925; DOI: 10.1080/00221309.2018.1535483.
Abstract
Hemispheric predominance has been well documented in the visual perception of alphabetic words. However, the hemispheric processing of lexical information in Chinese character recognition and its relationship to reading performance are far from clear. In the divided visual field paradigm, participants were required to judge the orthography, phonology, or semantics of Chinese characters, which were presented randomly in the left or right visual field. The results showed a right visual field/left hemispheric superiority in the phonological judgment task, but no hemispheric advantage in the orthographic or semantic task was found. In addition, reaction times in the right visual field for phonological and semantic tasks were significantly correlated with the reading test score. These results suggest that both hemispheres are involved in the orthographic and semantic processing of Chinese characters, and that left-lateralized phonological processing is important for fluent Chinese reading.
11. Lagorce X, Orchard G, Galluppi F, Shi BE, Benosman RB. HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition. IEEE Trans Pattern Anal Mach Intell 2017; 39:1346-1359. PMID: 27411216; DOI: 10.1109/tpami.2016.2574707.
Abstract
This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time-oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First-layer feature units operate on groups of pixels, while subsequent-layer feature units operate on the output of lower-level feature units. We report results on a previously published 36-class character recognition task and a four-class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven-class moving face recognition task, achieving 79 percent accuracy.
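The time-surface idea described above can be sketched as follows: keep a per-pixel map of last-event timestamps, and for each incoming event take an exponentially decayed view of that map in a local neighborhood. This is only an illustration of the concept (the radius and time constant are arbitrary, and event polarity is omitted for brevity):

```python
import numpy as np

def time_surfaces(events, shape, radius=2, tau=50.0):
    """Per-event time-surfaces: exp(-(t_i - t_last)/tau) around (x_i, y_i).

    events: list of (x, y, t) tuples sorted by timestamp t.
    Returns one (2r+1, 2r+1) array per event; pixels never activated decay to 0.
    """
    last = np.full(shape, -np.inf)  # last event time per pixel (rows = y)
    surfaces = []
    for x, y, t in events:
        last[y, x] = t
        # Copy the neighborhood of (x, y) into a fixed-size patch.
        patch = np.full((2 * radius + 1, 2 * radius + 1), -np.inf)
        y0, y1 = max(0, y - radius), min(shape[0], y + radius + 1)
        x0, x1 = max(0, x - radius), min(shape[1], x + radius + 1)
        patch[y0 - y + radius:y1 - y + radius,
              x0 - x + radius:x1 - x + radius] = last[y0:y1, x0:x1]
        surfaces.append(np.exp(-(t - patch) / tau))
    return surfaces

s = time_surfaces([(5, 5, 0.0), (6, 5, 10.0)], shape=(16, 16))
print(s[1][2, 2], s[1][2, 1])  # 1.0 at the event pixel; exp(-0.2) at its neighbor
```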
Affiliation(s)
- Xavier Lagorce: Vision and Natural Computation Group, Institut National de la Santé et de la Recherche Médicale, Sorbonne Universités, Institut de la Vision, Université Paris 06, Paris, France
- Garrick Orchard: Singapore Institute for Neurotechnology (SINAPSE), National University of Singapore, Singapore
- Francesco Galluppi: Vision and Natural Computation Group, Institut National de la Santé et de la Recherche Médicale, Sorbonne Universités, Institut de la Vision, Université Paris 06, Paris, France
- Ryad B Benosman: Vision and Natural Computation Group, Institut National de la Santé et de la Recherche Médicale, Sorbonne Universités, Institut de la Vision, Université Paris 06, Paris, France
12.
Abstract
Computer-based pattern recognition is a process that involves several sub-processes, including pre-processing, feature extraction, feature selection, and classification. Feature extraction is the estimation of certain attributes of the target patterns. Selection of the right set of features is the most crucial and complex part of building a pattern recognition system. In this work, we combined multiple features extracted using seven different approaches. The novelty of this approach is to achieve better accuracy and reduced computational time for recognition of handwritten characters using a Genetic Algorithm, which optimizes the number of features, along with a simple and adaptive Multi-Layer Perceptron classifier. Experiments were performed using the standard CEDAR (Centre of Excellence for Document Analysis and Recognition) database for the English alphabet. The experimental results obtained on this database demonstrate the effectiveness of this system.
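GA-based feature selection of the kind described above can be sketched as binary chromosomes over the feature set, evolved under a fitness that rewards classification accuracy and penalizes subset size. Since the CEDAR data and the MLP classifier are not reproduced here, the fitness below is a stand-in function, and the GA operators (truncation selection, one-point crossover, bit-flip mutation) are generic choices, not necessarily the paper's:

```python
import random

def ga_select(n_features, fitness, pop_size=20, generations=30, p_mut=0.02, seed=1):
    """Evolve binary masks over features; return the best-scoring subset found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in fitness: features 0-9 are "useful", selecting others costs a penalty.
fit = lambda mask: sum(mask[:10]) - 0.3 * sum(mask[10:])
best = ga_select(30, fit)
print(sum(best[:10]))  # converges toward selecting all 10 useful features
```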
Affiliation(s)
- Gauri Katiyar: Department of Electrical Engineering, Jamia Millia Islamia, New Delhi, India; ITS Engineering College, 46 Knowledge Park, Greater Noida, Uttar Pradesh 201308, India
- Shabana Mehfuz: Department of Electrical Engineering, Jamia Millia Islamia, New Delhi, India
13. Chen HY, Chang EC, Chen SHY, Lin YC, Wu DH. Functional and anatomical dissociation between the orthographic lexicon and the orthographic buffer revealed in reading and writing Chinese characters by fMRI. Neuroimage 2016; 129:105-116. PMID: 26777478; DOI: 10.1016/j.neuroimage.2016.01.009.
Abstract
The contribution of orthographic representations to reading and writing has been intensively investigated in the literature. However, the distinction between neuronal correlates of the orthographic lexicon and the orthographic (graphemic) buffer has rarely been examined in alphabetic languages and never been explored in non-alphabetic languages. To determine whether the neural networks associated with the orthographic lexicon and buffer of logographic materials are comparable to those reported in the literature, the present fMRI experiment manipulated frequency and the stroke number of Chinese characters in the tasks of form judgment and stroke judgment, which emphasized the processing of character recognition and writing, respectively. It was found that the left fusiform gyrus exhibited higher activation when encountering low-frequency than high-frequency characters in both tasks, which suggested this region to be the locus of the orthographic lexicon that represents the knowledge of character forms. On the other hand, the activations in the posterior part of the left middle frontal gyrus and in the left angular gyrus were parametrically modulated by the stroke number of target characters only in the stroke judgment task, which suggested these regions to be the locus of the orthographic buffer that represents the processing of stroke sequence in writing. These results provide the first evidence for the functional and anatomical dissociation between the orthographic lexicon and buffer in reading and writing Chinese characters. They also demonstrate the critical roles of the left fusiform area and the frontoparietal network to the long-term and short-term representations of orthographic knowledge, respectively, across different orthographies.
Affiliation(s)
- Hsiang-Yu Chen: Institute of Cognitive Neuroscience, National Central University, No. 300, Zhongda Rd., Zhongli Dist., Taoyuan City 32001, Taiwan; Laboratories for Cognitive Neuroscience, National Yang-Ming University, No. 155, Sec. 2, Linong St., Beitou Dist., Taipei City 11221, Taiwan
- Erik C Chang: Institute of Cognitive Neuroscience, National Central University, No. 300, Zhongda Rd., Zhongli Dist., Taoyuan City 32001, Taiwan; Laboratories for Cognitive Neuroscience, National Yang-Ming University, No. 155, Sec. 2, Linong St., Beitou Dist., Taipei City 11221, Taiwan
- Sinead H Y Chen: Institute of Cognitive Neuroscience, National Central University, No. 300, Zhongda Rd., Zhongli Dist., Taoyuan City 32001, Taiwan; Laboratories for Cognitive Neuroscience, National Yang-Ming University, No. 155, Sec. 2, Linong St., Beitou Dist., Taipei City 11221, Taiwan
- Yi-Chen Lin: Laboratories for Cognitive Neuroscience, National Yang-Ming University, No. 155, Sec. 2, Linong St., Beitou Dist., Taipei City 11221, Taiwan; Institute of Neuroscience, National Yang-Ming University, No. 155, Sec. 2, Linong St., Beitou Dist., Taipei City 11221, Taiwan
- Denise H Wu: Institute of Cognitive Neuroscience, National Central University, No. 300, Zhongda Rd., Zhongli Dist., Taoyuan City 32001, Taiwan; Laboratories for Cognitive Neuroscience, National Yang-Ming University, No. 155, Sec. 2, Linong St., Beitou Dist., Taipei City 11221, Taiwan