1
Chen L, Xia C, Zhao Z, Fu H, Chen Y. AI-Driven Sensing Technology: Review. Sensors (Basel) 2024; 24:2958. [PMID: 38793814] [PMCID: PMC11125233] [DOI: 10.3390/s24102958]
Abstract
Machine learning and deep learning technologies are rapidly advancing the capabilities of sensing technologies, bringing about significant improvements in accuracy, sensitivity, and adaptability. These advancements are making a notable impact across a broad spectrum of fields, including industrial automation, robotics, biomedical engineering, and civil infrastructure monitoring. The core of this transformative shift lies in the integration of artificial intelligence (AI) with sensor technology, focusing on the development of efficient algorithms that drive both device performance enhancements and novel applications in various biomedical and engineering fields. This review delves into the fusion of ML/DL algorithms with sensor technologies, shedding light on their profound impact on sensor design, calibration and compensation, object recognition, and behavior prediction. Through a series of exemplary applications, the review showcases the potential of AI algorithms to significantly upgrade sensor functionalities and widen their application range. Moreover, it addresses the challenges encountered in exploiting these technologies for sensing applications and offers insights into future trends and potential advancements.
Affiliation(s)
- Haoran Fu
- Department of Civil Engineering, Zhejiang University, Hangzhou 310058, China; (L.C.); (C.X.); (Z.Z.)
2
Mahanta LB, Mahanta DR, Rahman T, Chakraborty C. Handloomed fabrics recognition with deep learning. Sci Rep 2024; 14:7974. [PMID: 38575749] [PMCID: PMC10994934] [DOI: 10.1038/s41598-024-58750-z]
Abstract
Every nation treasures its handloom heritage, and in India, the handloom industry safeguards cultural traditions, sustains millions of artisans, and preserves ancient weaving techniques. To protect this legacy, a critical need arises to distinguish genuine handloom products, exemplified by the renowned "gamucha" from India's northeast, from counterfeit powerloom imitations. Our study's objective is to create an AI tool for effortless detection of authentic handloom items amidst a sea of fakes. Six deep learning architectures-VGG16, VGG19, ResNet50, InceptionV3, InceptionResNetV2, and DenseNet201-were trained on annotated image repositories of handloom and powerloom towels (17,484 images in total, with 14,020 for training and 3464 for validation). A novel deep learning model was also proposed. Despite respectable training accuracies, the pre-trained models exhibited lower performance on the validation dataset compared to our novel model. The proposed model outperformed pre-trained models, demonstrating superior validation accuracy, lower validation loss, computational efficiency, and adaptability to the specific classification problem. Notably, the existing models showed challenges in generalizing to unseen data and raised concerns about practical deployment due to computational expenses. This study pioneers a computer-assisted approach for automated differentiation between authentic handwoven "gamucha"s and counterfeit powerloom imitations-a groundbreaking recognition method. The methodology presented not only holds scalability potential and opportunities for accuracy improvement but also suggests broader applications across diverse fabric products.
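The evaluation protocol described above (train several candidate classifiers on a labelled image set, then compare validation accuracy to gauge generalization) can be sketched as follows. Synthetic 16x16 "fabric" images and small scikit-learn models stand in for the study's photographs and VGG/ResNet-scale networks, so every name and number here is illustrative rather than taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, side = 400, 16
# Class 0 ("handloom" stand-in): irregular weave, higher pixel variance.
x0 = rng.normal(0.50, 0.25, size=(n // 2, side * side))
# Class 1 ("powerloom" stand-in): regular weave, tighter pixel distribution.
x1 = rng.normal(0.60, 0.10, size=(n // 2, side * side))
X = np.vstack([x0, x1]).clip(0.0, 1.0)
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Hold out a validation split, mirroring the train/validation protocol above.
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
}
val_acc = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    val_acc[name] = model.score(X_va, y_va)  # validation accuracy per model
best = max(val_acc, key=val_acc.get)  # the architecture that generalizes best
```

The point of the sketch is the comparison loop, not the models: validation accuracy, rather than training accuracy, is what separated the proposed model from the pre-trained networks in the study.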
Affiliation(s)
- Lipi B Mahanta
- Mathematical and Computational Sciences Division, Institute of Advanced Study in Science & Technology (IASST) (An Autonomous R&D Institute Under Department of Science & Technology), Vigyan Path, Paschim Boragaon, P.O. Garchuk, Guwahati, Assam, 781035, India.
- Deva Raj Mahanta
- Mathematical and Computational Sciences Division, Institute of Advanced Study in Science & Technology (IASST) (An Autonomous R&D Institute Under Department of Science & Technology), Vigyan Path, Paschim Boragaon, P.O. Garchuk, Guwahati, Assam, 781035, India
- Taibur Rahman
- Mathematical and Computational Sciences Division, Institute of Advanced Study in Science & Technology (IASST) (An Autonomous R&D Institute Under Department of Science & Technology), Vigyan Path, Paschim Boragaon, P.O. Garchuk, Guwahati, Assam, 781035, India
- Chandan Chakraborty
- Department of Computer Science and Engineering, NITTTR, Kolkata, 700106, West Bengal, India
3
Ba F, Peng P, Zhang Y, Zhao Y. Classification and Identification of Contaminants in Recyclable Containers Based on a Recursive Feature Elimination-Light Gradient Boosting Machine Algorithm Using an Electronic Nose. Micromachines 2023; 14:2047. [PMID: 38004904] [PMCID: PMC10673532] [DOI: 10.3390/mi14112047]
Abstract
Establishing an effective recycling mechanism for containers is of great importance for environmental protection, so the technical approaches applied throughout the recycling process have become popular research topics. Among them, classification is considered a key step, but in practice this work is mostly performed manually. Because of human subjectivity, the classification accuracy often varies significantly. To overcome this shortcoming, this paper proposes an identification method based on a Recursive Feature Elimination-Light Gradient Boosting Machine (RFE-LightGBM) algorithm using an electronic nose. First, odor features were extracted and feature datasets were constructed from the electronic nose's responses to the detected gases. Then, principal component analysis (PCA) and the RFE-LightGBM algorithm were applied to reduce the dimensionality of the feature datasets, and the differences between the two methods were analyzed. Finally, the classification accuracies on the three datasets (the original feature dataset, the PCA dimensionality-reduction dataset, and the RFE-LightGBM dimensionality-reduction dataset) were compared. The results showed that the RFE-LightGBM algorithm achieved the highest classification accuracy, 95%, in the classification stage of recyclable containers, compared to 88.38% on the original feature dataset and 92.02% on the PCA dimensionality-reduction dataset.
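The recursive feature elimination step can be sketched with scikit-learn's `RFE`. A gradient-boosted classifier stands in for LightGBM (both expose feature importances, which is what RFE ranks on), and the synthetic "odor feature" dataset is an illustrative assumption, not the paper's e-nose data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split

# Synthetic stand-in for e-nose odor features: 40 features,
# 8 informative, 4 "contaminant" classes.
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

base = GradientBoostingClassifier(n_estimators=30, random_state=0)
# Recursively drop the least important features (4 at a time) until 10 remain.
selector = RFE(base, n_features_to_select=10, step=4).fit(X_tr, y_tr)

clf_full = GradientBoostingClassifier(n_estimators=30, random_state=0).fit(X_tr, y_tr)
clf_rfe = GradientBoostingClassifier(n_estimators=30, random_state=0).fit(
    selector.transform(X_tr), y_tr)

acc_full = clf_full.score(X_te, y_te)                      # all 40 features
acc_rfe = clf_rfe.score(selector.transform(X_te), y_te)    # 10 selected features
```

Comparing `acc_full` with `acc_rfe` mirrors the paper's comparison between the original feature dataset and the RFE-reduced one; on well-chosen features the reduced model typically matches or beats the full one at lower cost.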
Affiliation(s)
- Yongli Zhao
- School of Mechanical and Automotive Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
4
Wang Y, Adam ML, Zhao Y, Zheng W, Gao L, Yin Z, Zhao H. Machine Learning-Enhanced Flexible Mechanical Sensing. Nano-Micro Lett 2023; 15:55. [PMID: 36800133] [PMCID: PMC9936950] [DOI: 10.1007/s40820-023-01013-9]
Abstract
To realize a hyperconnected smart society with high productivity, advances in flexible sensing technology are much needed. Flexible sensing technology has seen improvements in both the hardware performance of sensor devices and the data processing capabilities of their software. Significant research efforts have been devoted to improving the materials, sensing mechanisms, and configurations of flexible sensing systems to fulfill the requirements of future technology. Meanwhile, advanced data analysis methods are being developed to extract useful information from the increasingly complicated data collected by a single sensor or a network of sensors. Machine learning (ML), an important branch of artificial intelligence, can efficiently handle such complex data, which can be multi-dimensional and multi-faceted, and thus provides a powerful tool for interpreting sensing data. In this review, the fundamental working mechanisms and common types of flexible mechanical sensors are first presented. The review then elaborates on how ML-assisted data interpretation improves the applications of flexible mechanical sensors and closely related sensors in various areas, including health monitoring, human-machine interfaces, object/surface recognition, pressure prediction, and human posture/motion identification. Finally, the advantages, challenges, and future perspectives associated with the fusion of flexible mechanical sensing technology and ML algorithms are discussed. These insights will help enable the advancement of next-generation artificial flexible mechanical sensing.
Affiliation(s)
- Yuejiao Wang
- Applied Mechanics Laboratory, Department of Engineering Mechanics, Tsinghua University, Beijing, 100084, People's Republic of China
- Mukhtar Lawan Adam
- Materials Interfaces Center, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, People's Republic of China
- Yunlong Zhao
- Department of Mechanical and Electrical Engineering, Xiamen University, Xiamen, 361102, People's Republic of China
- Weihao Zheng
- School of Mechano-Electronic Engineering, Xidian University, Xi'an, 710071, People's Republic of China
- Libo Gao
- Department of Mechanical and Electrical Engineering, Xiamen University, Xiamen, 361102, People's Republic of China
- Zongyou Yin
- Research School of Chemistry, Australian National University, Canberra, ACT, 2601, Australia
- Haitao Zhao
- Materials Interfaces Center, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, People's Republic of China
5
Bruni G, Marinelli A, Bucchieri A, Boccardo N, Caserta G, Di Domenico D, Barresi G, Florio A, Canepa M, Tessari F, Laffranchi M, De Michieli L. Object stiffness recognition and vibratory feedback without ad-hoc sensing on the Hannes prosthesis: A machine learning approach. Front Neurosci 2023; 17:1078846. [PMID: 36875662] [PMCID: PMC9978002] [DOI: 10.3389/fnins.2023.1078846]
Abstract
Introduction: In recent years, hand prostheses have achieved relevant improvements in terms of both motor and functional recovery. However, the rate of device abandonment, partly due to poor embodiment, is still high. Embodiment is the integration of an external object, in this case a prosthetic device, into the body schema of an individual. One limiting factor causing lack of embodiment is the absence of a direct interaction between user and environment. Many studies have focused on extracting tactile information via custom electronic skin technologies coupled with dedicated haptic feedback, though this increases the complexity of the prosthetic system. Conversely, this paper stems from the authors' preliminary works on multi-body prosthetic hand modeling and the identification of intrinsic information for assessing object stiffness during interaction. Methods: Based on these initial findings, this work presents the design, implementation, and clinical validation of a novel real-time stiffness detection strategy, without ad-hoc sensing, based on a Non-linear Logistic Regression (NLR) classifier. The approach exploits the minimal grasp information available from an under-sensorized and under-actuated myoelectric prosthetic hand, Hannes. The NLR algorithm takes as input the motor-side current, encoder position, and reference position of the hand, and outputs a classification of the grasped object (no object, rigid object, or soft object). This information is then transmitted to the user via vibratory feedback to close the loop between user control and prosthesis interaction. The implementation was validated through a user study conducted on both able-bodied subjects and amputees. Results: The classifier achieved excellent performance in terms of F1 score (94.93%). Using the proposed feedback strategy, the able-bodied subjects and amputees successfully detected object stiffness with F1 scores of 94.08% and 86.41%, respectively. The strategy allowed amputees to quickly recognize object stiffness (response time of 2.82 s), indicating high intuitiveness, and it was appreciated overall, as demonstrated by the questionnaire. Furthermore, an embodiment improvement was obtained, as highlighted by the proprioceptive drift toward the prosthesis (0.7 cm).
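The stiffness-classification idea, a non-linear logistic regression over motor-side current, encoder position, and reference position predicting no-object / rigid / soft, can be sketched as logistic regression on polynomial features. The signal model below (encoder stalls and current rises on rigid objects) is a plausibility assumption for illustration, not the Hannes data or the study's exact NLR formulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(1)
n = 300
ref = rng.uniform(0.0, 1.0, n)          # commanded (reference) closure
labels = rng.integers(0, 3, n)          # 0 = no object, 1 = rigid, 2 = soft
# Assumed behavior: no object -> encoder tracks the reference, low current;
# rigid -> encoder stalls early, high current; soft -> encoder lags, mid current.
enc = np.where(labels == 0, ref,
      np.where(labels == 1, 0.3 * ref, 0.7 * ref)) + rng.normal(0, 0.02, n)
cur = np.where(labels == 0, 0.1,
      np.where(labels == 1, 0.9, 0.5)) + rng.normal(0, 0.05, n)
X = np.column_stack([cur, enc, ref])    # the three intrinsic signals

# "Non-linear" logistic regression: degree-2 polynomial expansion
# of the inputs feeding a multinomial logistic model.
clf = make_pipeline(StandardScaler(), PolynomialFeatures(degree=2),
                    LogisticRegression(max_iter=2000))
clf.fit(X, labels)
train_acc = clf.score(X, labels)
```

The appeal of the approach, as the abstract notes, is that all three inputs are already available in the hand's control loop, so no extra sensing hardware is needed.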
Affiliation(s)
- Giulia Bruni
- Rehab Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
- Andrea Marinelli
- Rehab Technologies, Istituto Italiano di Tecnologia, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genova, Genoa, Italy
- Anna Bucchieri
- Rehab Technologies, Istituto Italiano di Tecnologia, Genoa, Italy; Department of Electronics, Information and Bioengineering (NearLab), Politecnico di Milano, Milan, Italy
- Nicolò Boccardo
- Rehab Technologies, Istituto Italiano di Tecnologia, Genoa, Italy; The Open University Affiliated Research Centre at Istituto Italiano di Tecnologia (ARC@IIT), Genoa, Italy
- Giulia Caserta
- Rehab Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
- Dario Di Domenico
- Rehab Technologies, Istituto Italiano di Tecnologia, Genoa, Italy; Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Giacinto Barresi
- Rehab Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
- Astrid Florio
- Rehab Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
- Michele Canepa
- Rehab Technologies, Istituto Italiano di Tecnologia, Genoa, Italy; The Open University Affiliated Research Centre at Istituto Italiano di Tecnologia (ARC@IIT), Genoa, Italy
- Federico Tessari
- Newman Laboratory, Massachusetts Institute of Technology, Boston, MA, United States
6
Halepoto H, Gong T, Noor S, Memon H. Bibliometric Analysis of Artificial Intelligence in Textiles. Materials (Basel) 2022; 15:2910. [PMID: 35454603] [PMCID: PMC9027006] [DOI: 10.3390/ma15082910]
Abstract
Generally, comprehensive documents are needed to provide the research community with relevant details of any research direction. This study conducted the first descriptive bibliometric analysis to examine the most influential journals, institutions, and countries in the field of artificial intelligence in textiles. Bibliometric mapping analysis was also used to examine the diverse research topics of artificial intelligence in textiles. VOSviewer was used to process 996 articles retrieved from the Web of Science Core Collection from 2007 to 2020. The results show that China and the United States have the largest numbers of publications, while Donghua University and Jiangnan University have the highest output. Three recurring themes appeared in textile artificial intelligence publications, playing a significant role in textile structure, textile inspection, and textile clothing production. The authors believe that this research will open new research domains for researchers in computer science, electronics, materials science, imaging science, and optics, and will benefit academic and industrial circles.
Affiliation(s)
- Habiba Halepoto
- Engineering Research Center of Digitized Textile and Fashion Technology, Donghua University, Shanghai 201620, China
- Tao Gong
- Engineering Research Center of Digitized Textile and Fashion Technology, Donghua University, Shanghai 201620, China
- College of Information Science and Technology, Donghua University, Shanghai 201620, China
- Saleha Noor
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Hafeezullah Memon
- College of Textile Science and Engineering, Zhejiang Sci-Tech University, Hangzhou 310018, China
7
Texture Identification and Object Recognition Using a Soft Robotic Hand Innervated Bio-Inspired Proprioception. Machines 2022. [DOI: 10.3390/machines10030173]
Abstract
In this study, we innervated bio-inspired proprioception into a soft hand, facilitating robust perception of textures and object shapes. A tendon-driven soft finger with three joints, inspired by the human finger, is detailed. With tension sensors embedded in the tendons to simulate the Golgi tendon organ of the human body, 17 types of textures can be identified under uncertain rotation angles and actuator displacements. Four classifiers were used, and the highest identification accuracy was 98.3%. A three-fingered soft hand based on the bionic finger was developed, and its basic grasp capability was tested experimentally. The soft hand can distinguish 10 types of objects that vary in shape with a top grasp and a side grasp, with highest accuracies of 96.33% and 96.00%, respectively. Additionally, for six objects with similar shapes, the soft hand achieved an identification accuracy of 97.69% with a scan-grasp method. This study offers a novel bionic solution for texture identification and object recognition with soft manipulators.
8
Abstract
While most processes in most industries are automated, and human workers have either been replaced by robots or work alongside them, industries that handle limp materials, such as fabrics, clothes, and garments, have seen fewer changes than might be expected given today's technological evolution. Integrating robots into these industries is a relatively demanding and challenging task, mostly because of the natural and mechanical properties of limp materials. In this review, information on sensors that have been used in fabric-handling applications is gathered, analyzed, and organized based on criteria such as their working principle and the task they are designed to support. The categorization and related works are presented in tables and figures so that anyone interested in developing automated fabric-handling applications can easily find useful information and ideas, at least regarding the necessary sensors for the most common handling tasks. Finally, we hope this work will inspire researchers to design new sensor concepts that could promote automation in the industry and boost the robotization of domestic chores involving flexible materials.
9
Qin L, Zhang Y. A reference spike train-based neurocomputing method for enhanced tactile discrimination of surface roughness. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06119-y]
10
Huang S, Wu H. Texture Recognition Based on Perception Data from a Bionic Tactile Sensor. Sensors (Basel) 2021; 21:5224. [PMID: 34372461] [PMCID: PMC8347799] [DOI: 10.3390/s21155224]
Abstract
Texture recognition is important for robots to discern the characteristics of an object's surface and adjust grasping and manipulation strategies accordingly. It remains challenging to develop texture classification approaches that are accurate and do not require high computational costs. In this work, we adopt a bionic tactile sensor to collect vibration data while sliding against materials of interest. Under a fixed contact pressure and speed, a total of 1000 sets of vibration data from ten different materials were collected. With the tactile perception data, four types of texture recognition algorithms are proposed. Three machine learning algorithms, namely support vector machine, random forest, and K-nearest neighbor, are established for texture recognition, with test accuracies of 95%, 94%, and 94%, respectively. With the machine learning algorithms, the asamoto and polyester textures are easily confused with each other. A convolutional neural network is established to further increase the test accuracy to 98.5%. The three machine learning models and the convolutional neural network demonstrate high accuracy and excellent robustness.
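The three classical classifiers compared above can be sketched end to end: extract simple features (RMS, dominant frequency, spectral centroid) from vibration traces, then score SVM, random forest, and k-NN models. The synthetic sinusoid-plus-noise traces below stand in for the bionic sensor's sliding data; the paper's acquisition setup and feature set are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_per, n_cls, T = 40, 5, 256
t = np.arange(T)
X_feat, y = [], []
for c in range(n_cls):
    freq = 0.02 + 0.015 * c      # each "material" vibrates at its own frequency
    for _ in range(n_per):
        sig = np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.3, T)
        spec = np.abs(np.fft.rfft(sig))
        # Features: RMS energy, dominant-frequency bin (DC excluded),
        # and spectral centroid.
        X_feat.append([np.sqrt(np.mean(sig ** 2)),
                       float(np.argmax(spec[1:]) + 1),
                       float(np.sum(np.arange(spec.size) * spec) / np.sum(spec))])
        y.append(c)
X_feat, y = np.array(X_feat), np.array(y)
X_tr, X_te, y_tr, y_te = train_test_split(
    X_feat, y, test_size=0.25, random_state=0, stratify=y)

scores = {}
for name, clf in [("svm", SVC()), ("rf", RandomForestClassifier(random_state=0)),
                  ("knn", KNeighborsClassifier())]:
    scores[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
```

On this toy data all three models do well because the dominant-frequency feature separates the classes; real sliding data is noisier, which is where the CNN's learned features gave the study its extra 3.5 points of accuracy.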
11
Electrotactile Feedback for the Discrimination of Different Surface Textures Using a Microphone. Sensors (Basel) 2021; 21:3384. [PMID: 34066279] [PMCID: PMC8152043] [DOI: 10.3390/s21103384]
Abstract
Most commercial prosthetic hands lack closed-loop feedback, so much research has focused on implementing sensory feedback systems that provide the user with sensory information during activities of daily living. This study evaluates the possibility of using a microphone and electrotactile feedback to identify different textures. A condenser microphone was used as a sensor to detect the friction sound generated by contact between different textures and the microphone. The generated signal was processed to provide a characteristic electrical stimulation presented to the participants. The main goal of the processing was to derive a continuous and intuitive transfer function between the microphone signal and the stimulation frequency. Twelve able-bodied volunteers participated in the study, in which they were asked to identify the stroked texture (among the four used in this study: felt, sponge, silicone rubber, and string mesh) using only electrotactile feedback. The experiments were done in three phases: (1) training, (2) with feedback, and (3) without feedback. Each texture was stroked 20 times in each of the three phases. The results show that the participants were able to differentiate between the textures, with a median accuracy of 85%, using only electrotactile feedback with the stimulation frequency as the only variable parameter.
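A continuous signal-to-stimulation transfer function of the kind described above can be sketched as a mapping from the microphone frame's RMS level to a stimulation frequency clipped to a comfortable range. The study's actual mapping is not reproduced; the logarithmic form and the 5 to 100 Hz range used here are illustrative assumptions.

```python
import numpy as np

F_MIN, F_MAX = 5.0, 100.0      # assumed stimulation-frequency range (Hz)
RMS_MIN, RMS_MAX = 1e-3, 1.0   # assumed microphone RMS operating range

def stimulation_frequency(frame: np.ndarray) -> float:
    """Map one frame of microphone samples to a stimulation frequency."""
    rms = float(np.sqrt(np.mean(np.square(frame))))
    rms = min(max(rms, RMS_MIN), RMS_MAX)  # clamp to the operating range
    # Logarithmic mapping: equal ratios of loudness give equal steps in
    # frequency, which tends to feel perceptually smooth.
    frac = (np.log10(rms) - np.log10(RMS_MIN)) / (np.log10(RMS_MAX) - np.log10(RMS_MIN))
    return F_MIN + frac * (F_MAX - F_MIN)

quiet = stimulation_frequency(np.full(256, 0.001))  # near-silent friction sound
loud = stimulation_frequency(np.full(256, 0.9))     # strong friction sound
```

A monotonic, continuous mapping like this is what lets a rough texture (loud friction) feel like a fast buzz and a smooth one like a slow pulse, which is the intuition the training phase builds.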
12
Abstract
Background: Bioluminescence is a unique and significant phenomenon in nature. Bioluminescence is important for the lifecycle of some organisms and is valuable in biomedical research, including gene expression analysis and bioluminescence imaging technology. In recent years, researchers have identified a number of methods for predicting bioluminescent proteins (BLPs), which have increased in accuracy but could be further improved.
Method: In this study, a new bioluminescent protein prediction method based on a voting algorithm is proposed. Four feature-extraction methods based on the amino acid sequence were used. A total of 314 dimensions of features were extracted from amino acid composition, physicochemical properties, and k-spaced amino acid pair composition. A voting algorithm was then used to build the model that achieved the highest MCC value. The selection of base classifiers and vote-counting rules for the best-performing model is discussed.
Results: The proposed model achieved 93.4% accuracy, 93.4% sensitivity, and 91.7% specificity on the test set, outperforming the other methods. A previous prediction of bioluminescent proteins in three lineages was also improved using this model-building method, resulting in greatly improved accuracy.
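The voting step described above can be sketched with scikit-learn's `VotingClassifier`, which combines several base classifiers by majority (hard) vote. The synthetic feature vectors stand in for the 314-dimensional sequence features, and the three base classifiers below are illustrative choices, not the ones selected in the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for sequence-derived features (BLP vs non-BLP).
X, y = make_classification(n_samples=400, n_features=50, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB())],
    voting="hard")  # majority vote over the base classifiers' predictions
vote.fit(X_tr, y_tr)
acc = vote.score(X_te, y_te)
```

Swapping base estimators in and out of the `estimators` list, and comparing `voting="hard"` against `voting="soft"` (probability averaging), mirrors the base-classifier and vote-counting-rule selection the abstract mentions.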
Affiliation(s)
- Shulin Zhao
- Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China
- Ying Ju
- School of Informatics, Xiamen University, Xiamen, China
- Xiucai Ye
- Department of Computer Science, University of Tsukuba, Tsukuba Science City, Japan
- Jun Zhang
- Rehabilitation Department, Heilongjiang Province Land Reclamation Headquarters General Hospital, Harbin, China
- Shuguang Han
- Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China
13
Pastor F, Garcia-Gonzalez J, Gandarias JM, Medina D, Closas P, Garcia-Cerezo AJ, Gomez-de-Gabriel JM. Bayesian and Neural Inference on LSTM-Based Object Recognition From Tactile and Kinesthetic Information. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2020.3038377]
14
Masteller A, Sankar S, Kim HB, Ding K, Liu X, All AH. Recent Developments in Prosthesis Sensors, Texture Recognition, and Sensory Stimulation for Upper Limb Prostheses. Ann Biomed Eng 2020; 49:57-74. [PMID: 33140242] [DOI: 10.1007/s10439-020-02678-8]
Abstract
Current developments in upper limb prostheses focus on restoring lost sensory information to amputees. Providing sensory stimulation from the prosthesis can directly improve control over the prosthesis and provide a sense of body ownership. This review article focuses on recent developments while including foundational knowledge for some of the critical concepts in neural prostheses. The concepts reported follow the flow of information from sensors to signal processing, with an emphasis on texture recognition, and then to sensory stimulation strategies that reestablish the lost sensory feedback loop. Prosthetic sensors detect the physical environment, converting pressure, force, and position into electrical signals. These electrical signals can then be processed to identify the surrounding environment using distinctive characteristics such as stiffness and texture. For the amputee to use this information naturally, there must be real-time sensory stimulation, perception, and motor control of the prosthesis. Although truly complete sensory replacement has not yet been realized, some basic percepts can be partially restored, allowing progress toward a more realistic prosthesis with natural sensations.
Affiliation(s)
- Andrew Masteller
- Department of Biomedical Engineering, School of Medicine, Johns Hopkins University, Traylor Building, 720 Rutland Ave, Baltimore, MD, 21205, USA
- Sriramana Sankar
- Department of Biomedical Engineering, School of Medicine, Johns Hopkins University, Traylor Building, 720 Rutland Ave, Baltimore, MD, 21205, USA
- Han Biehn Kim
- Department of Biomedical Engineering, School of Medicine, Johns Hopkins University, Traylor Building, 720 Rutland Ave, Baltimore, MD, 21205, USA
- Keqin Ding
- Department of Biomedical Engineering, School of Medicine, Johns Hopkins University, Traylor Building, 720 Rutland Ave, Baltimore, MD, 21205, USA
- Xiaogang Liu
- Department of Chemistry, Faculty of Science, National University of Singapore, Building 3 Science Drive 3, 117543, Singapore; The N.1 Institute for Health, National University of Singapore, Singapore
- Angelo H All
- Department of Chemistry, Faculty of Science, Hong Kong Baptist University, #844, RRS Building, Ho Sin Hang Campus, Hong Kong
15
Sankar S, Balamurugan D, Brown A, Ding K, Xu X, Low JH, Yeow CH, Thakor N. Texture Discrimination with a Soft Biomimetic Finger Using a Flexible Neuromorphic Tactile Sensor Array That Provides Sensory Feedback. Soft Robot 2020; 8:577-587. [PMID: 32976080] [DOI: 10.1089/soro.2020.0016]
Abstract
The compliant nature of soft fingers allows humans to safely and dexterously manipulate objects in unstructured environments. A soft prosthetic finger with tactile sensing for texture discrimination, and subsequent sensory stimulation, has the potential to create a more natural experience for an amputee. In this work, a pneumatically actuated soft biomimetic finger is integrated with a textile neuromorphic tactile sensor array for a texture discrimination task. The tactile sensor outputs were converted into neuromorphic spike trains, which emulate the firing patterns of biological mechanoreceptors. Spike-based features from each taxel compressed the information and were used as inputs to a support vector machine classifier to differentiate the textures. Our soft biomimetic finger with neuromorphic encoding achieved an average overall classification accuracy of 99.57% across 16 independent parameter combinations (four flexion angles of the soft finger combined with four palpation speeds) when tested on 13 standardized textured surfaces. To aid the perception of more natural objects and their manipulation, subjects were provided with transcutaneous electrical nerve stimulation to convey a subset of four textures with varied textural information. Three able-bodied subjects successfully distinguished two or three textures with the applied stimuli. This work paves the way for a more human-like prosthesis through a soft biomimetic finger with texture discrimination capabilities using neuromorphic techniques that provide sensory feedback; furthermore, texture feedback has the potential to enhance user experience when interacting with the surroundings.
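The neuromorphic encoding step described above, converting an analog taxel trace into a spike train and then into compact spike-based features, can be sketched with a simple leaky integrate-and-fire model. The threshold, leak, and feature choices (spike count, mean inter-spike interval) below are illustrative assumptions, not the study's encoder or feature set.

```python
import numpy as np

def encode_spikes(trace, threshold=1.0, leak=0.95):
    """Leaky integrate-and-fire: accumulate the signal, emit a spike
    and reset whenever the membrane value crosses the threshold."""
    v, spikes = 0.0, []
    for i, x in enumerate(trace):
        v = leak * v + x
        if v >= threshold:
            spikes.append(i)
            v = 0.0
    return np.array(spikes)

def spike_features(spikes, length):
    """Compress a spike train into [spike count, mean inter-spike interval]."""
    isi = np.diff(spikes)
    mean_isi = float(isi.mean()) if isi.size else float(length)
    return np.array([spikes.size, mean_isi])

# Two synthetic taxel traces standing in for sliding over a coarse
# and a fine texture (different dominant vibration frequencies).
rng = np.random.default_rng(3)
t = np.arange(512)
coarse = 0.2 + 0.2 * np.sin(2 * np.pi * 0.01 * t) + rng.normal(0, 0.02, 512)
fine = 0.2 + 0.2 * np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 0.02, 512)
f_coarse = spike_features(encode_spikes(coarse), 512)
f_fine = spike_features(encode_spikes(fine), 512)
```

Feature vectors like `f_coarse` and `f_fine`, one per taxel, are the kind of compressed input a downstream SVM classifier would consume in place of raw sensor traces.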
Affiliation(s)
- Sriramana Sankar
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Darshini Balamurugan
- Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, Maryland, USA
- Alisa Brown
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Keqin Ding
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Xingyuan Xu
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Jin Huat Low
- Department of Biomedical Engineering, National University of Singapore, Singapore
- Chen Hua Yeow
- Department of Biomedical Engineering, National University of Singapore, Singapore
- Nitish Thakor
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA; Department of Biomedical Engineering, National University of Singapore, Singapore; Singapore Institute for Neurotechnology (SINAPSE) Laboratory, National University of Singapore, Singapore
| |
Collapse
|
16
|
Liu H, Guo D, Sun F, Yang W, Furber S, Sun T. Embodied tactile perception and learning. BRAIN SCIENCE ADVANCES 2020. [DOI: 10.26599/bsa.2020.9050012] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
Various living creatures exhibit embodiment intelligence, which is reflected in a collaborative interaction of the brain, body, and environment. The actual behavior of embodiment intelligence is generated by a continuous, dynamic interaction between a subject and the environment through information perception and physical manipulation. The physical interaction between a robot and the environment is the basis for realizing embodied perception and learning. Tactile information plays a critical role in this physical interaction process: it can be used to ensure safety, stability, and compliance, and it provides unique information that is difficult to capture through other perception modalities. However, due to the limitations of existing sensors and of perception and learning methods, robotic tactile research lags significantly behind other sensing modalities, such as vision and hearing, seriously restricting the development of robotic embodiment intelligence. This paper presents the current challenges related to robotic tactile embodiment intelligence and reviews the theory and methods of robotic embodied tactile intelligence. Tactile perception and learning methods for embodiment intelligence can be designed based on the development of new large-scale tactile array sensing devices, with the aim of making breakthroughs in neuromorphic computing technology for tactile intelligence.
Collapse
Affiliation(s)
- Huaping Liu
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
| | - Di Guo
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
| | - Fuchun Sun
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
| | - Wuqiang Yang
- Department of Electrical and Electronic Engineering, The University of Manchester, Manchester M13 9PL, U.K
| | - Steve Furber
- Department of Computer Science, The University of Manchester, Manchester M13 9PL, U.K
| | - Tengchen Sun
- Beijing Tashan Technology Co., Ltd., Beijing 102300, China
| |
Collapse
|
17
|
Shih B, Shah D, Li J, Thuruthel TG, Park YL, Iida F, Bao Z, Kramer-Bottiglio R, Tolley MT. Electronic skins and machine learning for intelligent soft robots. Sci Robot 2020; 5:5/41/eaaz9239. [PMID: 33022628 DOI: 10.1126/scirobotics.aaz9239] [Citation(s) in RCA: 162] [Impact Index Per Article: 40.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2019] [Accepted: 03/24/2020] [Indexed: 01/14/2023]
Abstract
Soft robots have garnered interest for real-world applications because of their intrinsic safety embedded at the material level. These robots use deformable materials capable of shape and behavioral changes and allow conformable physical contact for manipulation. Yet, with the introduction of soft and stretchable materials to robotic systems comes a myriad of challenges for sensor integration, including multimodal sensing capable of stretching, embedment of high-resolution but large-area sensor arrays, and sensor fusion with an increasing volume of data. This Review explores the emerging confluence of e-skins and machine learning, with a focus on how roboticists can combine recent developments from the two fields to build autonomous, deployable soft robots, integrated with capabilities for informative touch and proprioception to stand up to the challenges of real-world environments.
Collapse
Affiliation(s)
- Benjamin Shih
- Department of Mechanical and Aerospace Engineering, University of California, San Diego, CA, USA
| | - Dylan Shah
- Department of Mechanical Engineering and Materials Science, Yale University, CT, USA
| | - Jinxing Li
- Departments of Chemical Engineering and Material Science and Engineering, Stanford University, CA, USA
| | | | - Yong-Lae Park
- Department of Mechanical and Aerospace Engineering, Seoul National University, South Korea
| | - Fumiya Iida
- Department of Engineering, University of Cambridge, UK
| | - Zhenan Bao
- Departments of Chemical Engineering and Material Science and Engineering, Stanford University, CA, USA
| | | | - Michael T Tolley
- Department of Mechanical and Aerospace Engineering, University of California, San Diego, CA, USA.
| |
Collapse
|
18
|
Strese M, Brudermueller L, Kirsch J, Steinbach E. Haptic Material Analysis and Classification Inspired by Human Exploratory Procedures. IEEE TRANSACTIONS ON HAPTICS 2020; 13:404-424. [PMID: 31715573 DOI: 10.1109/toh.2019.2952118] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
We present a framework for the acquisition and parametrization of object material properties. The introduced acquisition device, denoted as Texplorer2, is able to extract surface material properties while a human operator is performing exploratory procedures. Using the Texplorer2, we scanned 184 material classes which we labeled according to biological, chemical, and geological naming conventions. Based on these real material recordings, we introduce a novel set of mathematical features which align with corresponding material properties defined in perceptual studies from related work and classify the materials using common machine learning techniques. Validation results of the proposed multi-modal features lead to an overall classification accuracy of 90.2% ± 1.2% and an F[Formula: see text] score of 0.90 ± 0.01 using the random forest classifier. For the sake of comparison, a deep neural network is trained and tested on images of the material surfaces; it outperforms (90.7% ± 1.0%) the hand-crafted feature-based approach yet leads to more critical misclassifications in terms of the proposed taxonomy.
Collapse
|
19
|
Using 3D Convolutional Neural Networks for Tactile Object Recognition with Robotic Palpation. SENSORS 2019; 19:s19245356. [PMID: 31817320 PMCID: PMC6960774 DOI: 10.3390/s19245356] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Revised: 11/29/2019] [Accepted: 12/02/2019] [Indexed: 01/08/2023]
Abstract
In this paper, a novel method of active tactile perception based on 3D neural networks and a high-resolution tactile sensor installed on a robot gripper is presented. A haptic exploratory procedure based on robotic palpation is performed to obtain pressure images at different grasping forces that provide information not only about the external shape of the object, but also about its internal features. The gripper consists of two underactuated fingers with a tactile sensor array in the thumb. A new representation of tactile information as 3D tactile tensors is described. During a squeeze-and-release process, the pressure images read from the tactile sensor are concatenated, forming a tensor that captures the variation of the pressure matrices with the grasping force. These tensors feed a 3D Convolutional Neural Network (3D CNN) called 3D TactNet, which is able to classify the grasped object through active interaction. Results show that the 3D CNN performs better, providing higher recognition rates with fewer training data.
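The 3-D tactile tensor described above can be sketched as follows: pressure images captured at successive grasping forces during a squeeze-and-release cycle are stacked along a depth axis, giving a (frames, rows, cols) tensor of the kind a 3D CNN such as the paper's 3D TactNet could consume. Shapes and values are illustrative assumptions.

```python
def build_tactile_tensor(frames):
    """Concatenate 2-D pressure images (lists of rows) into a 3-D tensor,
    validating that every frame has the same resolution."""
    rows, cols = len(frames[0]), len(frames[0][0])
    for f in frames:
        # every frame must match the sensor array resolution
        assert len(f) == rows and all(len(r) == cols for r in f)
    return frames  # shape: (len(frames), rows, cols)

def tensor_shape(tensor):
    return (len(tensor), len(tensor[0]), len(tensor[0][0]))

# Three 2x2 pressure images taken at increasing grasp force (illustrative):
frames = [[[0, 1], [1, 0]],
          [[2, 3], [3, 2]],
          [[5, 6], [6, 5]]]
tensor = build_tactile_tensor(frames)  # shape (3, 2, 2)
```

A real pipeline would hand this tensor to a 3-D convolution layer, whose kernels slide over both the spatial axes and the force axis.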
Collapse
|
20
|
Zhu Y, Hao J, Li W, Yang J, Dong E. A new robotic tactile sensor with bio-mimetic structural colour inspired by Morpho butterflies. BIOINSPIRATION & BIOMIMETICS 2019; 14:056010. [PMID: 31284276 DOI: 10.1088/1748-3190/ab3014] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Since tactile perception and robotic manipulation play important roles in human survival, we propose a new method for developing robotic tactile sensors based on the structural colours of Morpho menelaus (a species of Morpho butterfly). The first task is to fabricate a flexible bioinspired grating, with a microstructure similar to the wings of Morpho menelaus, on the surface of polydimethylsiloxane (PDMS) films using a transfer technique. The second task is to integrate the flexible diffraction grating with a polychromatic light source and a CCD camera, and then predict the position and magnitude of the contact force from the change in the diffraction pattern, which depends on the angle of the diffracted light. The final task is to set up an experimental calibration platform and a marker-point array with a 1 mm interval, so that an image processing algorithm and a deep learning method can establish the relationship between the contact position and force magnitude and the diffraction pattern. The results showed that this tactile sensor has high sensitivity and resolution, localising the contact force to within 1 mm. A practical application on a UR-5 manipulator verifies the feasibility of the prototype as a tactile sensor. With a miniaturised design, this tactile sensing method may be widely used in robotics.
Collapse
Affiliation(s)
- Yin Zhu
- CAS Key Laboratory of Mechanical Behavior and Design of Materials, Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui, China
| | | | | | | | | |
Collapse
|
21
|
Jiang H, Yan Y, Zhu X, Zhang C. A 3-D Surface Reconstruction with Shadow Processing for Optical Tactile Sensors. SENSORS 2018; 18:s18092785. [PMID: 30149551 PMCID: PMC6163909 DOI: 10.3390/s18092785] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/01/2018] [Revised: 08/13/2018] [Accepted: 08/21/2018] [Indexed: 11/16/2022]
Abstract
An optical tactile sensor technique with three-dimensional (3-D) surface reconstruction is proposed for robotic fingers. The hardware of the tactile sensor consists of a surface deformation sensing layer, an image sensor, and four individually controlled flashing light-emitting diodes (LEDs). The image sensor records deformation images when the robotic finger touches an object. For each object, four deformation images are taken, with the LEDs providing different illumination directions. Before the 3-D reconstruction, look-up tables are built to map the intensity distribution to image gradient data. Any image shadow is detected and amended. The 3-D depth distribution of the object surface is then reconstructed from the 2-D gradients obtained via the look-up tables. The architecture of the tactile sensor and the proposed signal processing flow are presented in detail. A prototype tactile sensor has been built. Both simulation and experimental results validate the effectiveness of the proposed 3-D surface reconstruction method for optical tactile sensors. The proposed method has the unique feature of image shadow detection and compensation, which differentiates it from those in the literature.
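The final reconstruction step above, recovering depth from the 2-D gradients produced by the look-up tables, can be sketched with a simple cumulative-sum integration. This is a minimal illustrative scheme under the assumption of gradient fields gx = dz/dx and gy = dz/dy; the paper's actual integration method may differ.

```python
def integrate_gradients(gx, gy):
    """Reconstruct depth z(x, y) from gradient fields gx = dz/dx (along
    columns) and gy = dz/dy (along rows) by cumulative summation from
    the origin, which is anchored at z = 0."""
    rows, cols = len(gx), len(gx[0])
    z = [[0.0] * cols for _ in range(rows)]
    for x in range(1, cols):          # integrate the first row along x
        z[0][x] = z[0][x - 1] + gx[0][x]
    for y in range(1, rows):          # integrate each column along y
        for x in range(cols):
            z[y][x] = z[y - 1][x] + gy[y][x]
    return z

# A plane tilted along x: dz/dx = 1 everywhere, dz/dy = 0 (illustrative).
gx = [[1.0] * 3 for _ in range(3)]
gy = [[0.0] * 3 for _ in range(3)]
z = integrate_gradients(gx, gy)  # each row is [0.0, 1.0, 2.0]
```

Practical systems usually replace this path-dependent sum with a least-squares (e.g. Poisson) integration, which is less sensitive to noise in the gradient estimates.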
Collapse
Affiliation(s)
- Hanjun Jiang
- Institute of Microelectronics, Tsinghua University, Beijing 100084, China.
| | - Yan Yan
- Institute of Microelectronics, Tsinghua University, Beijing 100084, China.
| | - Xiyang Zhu
- Institute of Microelectronics, Tsinghua University, Beijing 100084, China.
| | - Chun Zhang
- Institute of Microelectronics, Tsinghua University, Beijing 100084, China.
| |
Collapse
|
22
|
Robust Tactile Descriptors for Discriminating Objects From Textural Properties via Artificial Robotic Skin. IEEE T ROBOT 2018. [DOI: 10.1109/tro.2018.2830364] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
23
|
Sun T, Back J, Liu H. Combining Contact Forces and Geometry to Recognize Objects During Surface Haptic Exploration. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2814083] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
24
|
Kaboli M, Feng D, Cheng G. Active Tactile Transfer Learning for Object Discrimination in an Unstructured Environment Using Multimodal Robotic Skin. INT J HUM ROBOT 2018. [DOI: 10.1142/s0219843618500019] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
In this paper, we propose a probabilistic active tactile transfer learning (ATTL) method to enable robotic systems to exploit their prior tactile knowledge while discriminating among objects via their physical properties (surface texture, stiffness, and thermal conductivity). Using the proposed method, the robot autonomously selects and exploits its most relevant prior tactile knowledge to efficiently learn about new, unknown objects with few training samples, or even one. The experimental results show that, using our proposed method, the robot successfully discriminated among new objects with [Formula: see text] discrimination accuracy using only one training sample (one-shot tactile learning). Furthermore, the results demonstrate that our method is robust against the transfer of irrelevant prior tactile knowledge (negative tactile knowledge transfer).
Collapse
Affiliation(s)
- Mohsen Kaboli
- The Institute for Cognitive Systems, Technical University of Munich, Arcisstrasse 21 80333, Munich, Germany
| | - Di Feng
- The Institute for Cognitive Systems, Technical University of Munich, Arcisstrasse 21 80333, Munich, Germany
| | - Gordon Cheng
- The Institute for Cognitive Systems, Technical University of Munich, Arcisstrasse 21 80333, Munich, Germany
| |
Collapse
|
25
|
Gandarias JM, Gómez-de-Gabriel JM, García-Cerezo AJ. Enhancing Perception with Tactile Object Recognition in Adaptive Grippers for Human-Robot Interaction. SENSORS 2018; 18:s18030692. [PMID: 29495409 PMCID: PMC5876667 DOI: 10.3390/s18030692] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/28/2017] [Revised: 02/17/2018] [Accepted: 02/19/2018] [Indexed: 11/16/2022]
Abstract
The use of tactile perception can help first-response robotic teams in disaster scenarios, where visibility is often reduced by dust, mud, or smoke, by distinguishing human limbs from other objects with similar shapes. Here, the integration of a tactile sensor in adaptive grippers is evaluated by measuring the performance of an object recognition task based on deep convolutional neural networks (DCNNs) using a flexible sensor mounted in adaptive grippers. A total of 15 classes with 50 tactile images each were used for training, including human body parts and common environment objects, in semi-rigid and flexible adaptive grippers based on the fin ray effect. The classifier was compared against the rigid configuration and a support vector machine (SVM) classifier. Finally, a two-level output network is proposed to provide both object-type recognition and human/non-human classification. Sensors in adaptive grippers register a higher number of non-null tactels (up to 37% more), with a lower mean pressure value (up to 72% less), than a rigid sensor, giving the softer grip needed in physical human–robot interaction (pHRI). A semi-rigid implementation with a 95.13% object recognition rate was chosen, even though human/non-human classification performed better (98.78%) with a rigid sensor.
Collapse
Affiliation(s)
- Juan M Gandarias
- System Engineering and Automation Department, University of Málaga, 29071 Málaga, Spain.
| | | | | |
Collapse
|
26
|
Kaboli M, Yao K, Feng D, Cheng G. Tactile-based active object discrimination and target object search in an unknown workspace. Auton Robots 2018. [DOI: 10.1007/s10514-018-9707-8] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
27
|
Croitoru C, Spirchez C, Cristea D, Lunguleasa A, Pop MA, Bedo T, Roata IC, Luca MA. Calcium carbonate and wood reinforced hybrid PVC composites. J Appl Polym Sci 2018. [DOI: 10.1002/app.46317] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Affiliation(s)
- Catalin Croitoru
- Materials Engineering and Welding Department; Transilvania University of Brasov, Eroilor 29 Str; Brasov 500036 Romania
| | - Cosmin Spirchez
- Wood Processing and Design of Wooden Products Department; Transilvania University of Brasov, Eroilor 29 Str; Brasov 500036 Romania
| | - Daniel Cristea
- Materials Science Department; Transilvania University of Brasov, Eroilor 29 Str; Brasov 500036 Romania
| | - Aurel Lunguleasa
- Wood Processing and Design of Wooden Products Department; Transilvania University of Brasov, Eroilor 29 Str; Brasov 500036 Romania
| | - Mihai Alin Pop
- Materials Science Department; Transilvania University of Brasov, Eroilor 29 Str; Brasov 500036 Romania
| | - Tibor Bedo
- Materials Science Department; Transilvania University of Brasov, Eroilor 29 Str; Brasov 500036 Romania
| | - Ionut Claudiu Roata
- Materials Engineering and Welding Department; Transilvania University of Brasov, Eroilor 29 Str; Brasov 500036 Romania
| | - Mihai Alexandru Luca
- Materials Engineering and Welding Department; Transilvania University of Brasov, Eroilor 29 Str; Brasov 500036 Romania
| |
Collapse
|
28
|
A Novel Inverse Solution of Contact Force Based on a Sparse Tactile Sensor Array. SENSORS 2018; 18:s18020351. [PMID: 29373489 PMCID: PMC5854967 DOI: 10.3390/s18020351] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/27/2017] [Revised: 01/23/2018] [Accepted: 01/23/2018] [Indexed: 11/17/2022]
Abstract
High-density tactile sensing has been pursued for humanoid robotic hands to obtain contact force information, while the elastomer skin cover is traditionally considered to impair force discrimination. In this work, we exploit the diffusion effect of the elastomer cover to identify an arbitrary contact force load using only a sparse tactile sensor array. By numerical analysis, we prove the monotonic relation between the Pearson correlation coefficient and the relative distance of two single-force loads. We then mesh the elastomer surface and perform a calibration loading process to establish a calibration database of sensing outputs. Afterwards, we apply the correlation method to the database and the sensing output of an unknown load to determine its location and intensity. To validate the proposed method, we designed and fabricated a 3 × 3 sparse tactile sensor array with a flat elastomer cover and established an automatic three-axis loading system. The validation tests covered 100 random points with force intensities ranging from 0.1 to 1 N. The results show that the method detects force loads accurately, with a mean location error of 0.46 mm and a mean intensity error of 0.043 N, which meets the basic requirements of tactile sensing. It is therefore feasible for a sparse tactile sensor array to realize high-density load detection.
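The correlation method described above can be sketched in a few lines: compute the Pearson correlation coefficient between the unknown load's 9-element array output and each stored calibration pattern, and take the calibration location with the highest correlation as the contact position. The grid locations and sensor values below are illustrative assumptions, not the paper's data.

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def locate(unknown, calibration):
    """Return the calibration grid location whose stored array output
    correlates best with the unknown load's output."""
    return max(calibration, key=lambda loc: pearson(unknown, calibration[loc]))

# Calibration database: location -> flattened 3x3 array response (illustrative).
calibration = {
    (0, 0): [9, 4, 1, 4, 2, 1, 1, 1, 0],   # load near a corner taxel
    (1, 1): [1, 2, 1, 2, 9, 2, 1, 2, 1],   # load over the centre taxel
}
best = locate([2, 3, 1, 3, 8, 3, 1, 3, 2], calibration)  # -> (1, 1)
```

Intensity could then be recovered by scaling against the matched calibration pattern; a denser calibration mesh gives finer localisation than the physical 3 × 3 taxel pitch.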
Collapse
|
29
|
Luo S, Zhu L, Althoefer K, Liu H. Knock-Knock: Acoustic object recognition by using stacked denoising autoencoders. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.03.014] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
30
|
Abstract
Conventional visual perception technology is subject to many restrictions, such as illumination, background clutter, and occlusion. Many intrinsic properties of objects, like stiffness, hardness, and internal state, cannot be effectively perceived by visual sensors. For robots, tactile perception is a key approach to obtaining environmental and object information. Unlike vision sensors, tactile sensors can directly measure certain physical properties of objects and the environment. At the same time, humans also rely on touch sensory receptors as an important means of perceiving and interacting with the environment. In this article, we present a detailed discussion of the tactile object recognition problem. We divide the current studies on tactile object recognition into three subcategories and analyze each in detail. In addition, we discuss advanced topics such as visual–tactile fusion, exploratory procedures, and data sets.
Collapse
Affiliation(s)
- Huaping Liu
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- State Key Laboratory of Intelligent Technology and Systems, Beijing, China
- Tsinghua National Laboratory for Information Science and Technology, Beijing, China
| | - Yupei Wu
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- State Key Laboratory of Intelligent Technology and Systems, Beijing, China
- Tsinghua National Laboratory for Information Science and Technology, Beijing, China
| | - Fuchun Sun
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- State Key Laboratory of Intelligent Technology and Systems, Beijing, China
- Tsinghua National Laboratory for Information Science and Technology, Beijing, China
| | - Di Guo
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- State Key Laboratory of Intelligent Technology and Systems, Beijing, China
- Tsinghua National Laboratory for Information Science and Technology, Beijing, China
| |
Collapse
|
31
|
Corradi T, Hall P, Iravani P. Object recognition combining vision and touch. ACTA ACUST UNITED AC 2017; 4:2. [PMID: 28480157 PMCID: PMC5395591 DOI: 10.1186/s40638-017-0058-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2016] [Accepted: 04/05/2017] [Indexed: 11/15/2022]
Abstract
This paper explores ways of combining vision and touch for the purpose of object recognition. In particular, it focuses on scenarios where there are few tactile training samples (as these are usually costly to obtain) and where vision is artificially impaired. Whilst machine vision is a widely studied field, and machine touch has received some attention recently, the fusion of both modalities remains a relatively unexplored area. It has been suggested that, in the human brain, there exist shared multi-sensorial representations of objects, which provide robustness when one or more senses are absent or unreliable. Modern robotics systems can benefit from multi-sensorial input, in particular in contexts where one or more of the sensors perform poorly. In this paper, a recently proposed tactile recognition model is extended by integrating a simple vision system in three different ways: vector concatenation (vision feature vector and tactile feature vector), object label posterior averaging, and object label posterior product. A comparison is drawn in terms of overall recognition accuracy and in terms of how quickly (in number of training samples) learning occurs. The conclusions reached are: (1) the most accurate system is “posterior product”; (2) multi-modal recognition is more accurate than either modality alone when all visual and tactile training data are pooled together; and (3) in the case of visual impairment, multi-modal recognition “learns faster”, i.e. requires fewer training samples to achieve the same accuracy as either other modality.
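Two of the fusion strategies compared above, posterior averaging and posterior product, can be sketched directly: each classifier outputs a class posterior, and the product rule multiplies per-class probabilities and renormalises. The class names and probabilities below are illustrative assumptions.

```python
def fuse_average(p_vision, p_touch):
    """Object label posterior averaging: mean of the two posteriors."""
    return {c: (p_vision[c] + p_touch[c]) / 2 for c in p_vision}

def fuse_product(p_vision, p_touch):
    """Object label posterior product: multiply per class, renormalise."""
    raw = {c: p_vision[c] * p_touch[c] for c in p_vision}
    z = sum(raw.values())
    return {c: v / z for c, v in raw.items()}

# Illustrative posteriors from a vision and a tactile classifier:
p_vision = {"mug": 0.6, "ball": 0.3, "box": 0.1}
p_touch = {"mug": 0.5, "ball": 0.1, "box": 0.4}
avg = fuse_average(p_vision, p_touch)    # mug: 0.55
prod = fuse_product(p_vision, p_touch)   # mug dominates after renormalising
```

The product rule tends to sharpen agreement between modalities: a class must score well in both posteriors to survive the multiplication, which is consistent with "posterior product" being the most accurate system in the paper's comparison.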
Collapse
Affiliation(s)
- Tadeo Corradi
- Department of Mechanical Engineering, University of Bath, Claverton Down, Bath, BA2 7AY, UK
| | - Peter Hall
- Department of Mechanical Engineering, University of Bath, Claverton Down, Bath, BA2 7AY, UK
| | - Pejman Iravani
- Department of Mechanical Engineering, University of Bath, Claverton Down, Bath, BA2 7AY, UK
| |
Collapse
|
32
|
Strese M, Schuwerk C, Iepure A, Steinbach E. Multimodal Feature-Based Surface Material Classification. IEEE TRANSACTIONS ON HAPTICS 2017; 10:226-239. [PMID: 27845677 DOI: 10.1109/toh.2016.2625787] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
When a tool is tapped on or dragged over an object surface, vibrations are induced in the tool, which can be captured using acceleration sensors. The tool-surface interaction additionally creates audible sound waves, which can be recorded using microphones. Features extracted from camera images provide additional information about the surfaces. We present an approach for tool-mediated surface classification that combines these signals and demonstrate that the proposed method is robust against variable scan-time parameters. We examine freehand recordings of 69 textured surfaces recorded by different users and propose a classification system that uses perception-related features, such as hardness, roughness, and friction; selected features adapted from speech recognition, such as modified cepstral coefficients applied to our acceleration signals; and surface texture-related image features. We focus on mitigating the effect of variable contact force and exploration velocity conditions on these features as a prerequisite for a robust machine-learning-based approach for surface classification. The proposed system works without explicit scan force and velocity measurements. Experimental results show that our proposed approach allows for successful classification of textured surfaces under variable freehand movement conditions, exerted by different human operators. The proposed subset of six features, selected from the described sound, image, friction force, and acceleration features, leads to a classification accuracy of 74 percent in our experiments when combined with a Naive Bayes classifier.
Collapse
|
33
|
Khasnobish A, Pal M, Tibarewala DN, Konar A, Pal K. Texture- and deformability-based surface recognition by tactile image analysis. Med Biol Eng Comput 2016; 54:1269-83. [PMID: 27008211 DOI: 10.1007/s11517-016-1464-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2014] [Accepted: 01/30/2016] [Indexed: 10/22/2022]
Abstract
Deformability and texture are two unique object characteristics which are essential for appropriate surface recognition by tactile exploration. Tactile sensation needs to be incorporated into artificial arms for rehabilitative and other human–computer interface applications to achieve efficient, human-like manoeuvring. To accomplish this, surface recognition by tactile data analysis is one of the prerequisites. The aim of this work is to develop an effective technique for identifying various surfaces based on deformability and texture by analysing tactile images obtained while an artificial arm, whose gripper is fitted with tactile sensors, dynamically explores the item. Tactile data were acquired while human beings, as well as a robot hand fitted with tactile sensors, explored the objects. The tactile images are pre-processed and relevant features are extracted from them. These features are provided as input to variants of the support vector machine (SVM), linear discriminant analysis, and k-nearest neighbour (kNN) classifiers. Based on deformability, six household surfaces are recognized from their corresponding tactile images; based on texture, five surfaces of daily use are classified. The method adopted in these two cases has also been applied to the deformability- and texture-based recognition of four biomembranes, i.e. membranes prepared from biomaterials which can be used for applications such as drug delivery and implants. Linear SVM performed best in recognizing surface deformability, with an accuracy of 83% in 82.60 ms, whereas the kNN classifier recognizes surfaces of daily use having different textures with an accuracy of 89% in 54.25 ms, and SVM with a radial basis function kernel recognizes biomembranes with an accuracy of 78% in 53.35 ms. The classifiers are observed to generalize well on unseen test datasets, with very high performance, achieving efficient material recognition based on deformability and texture.
Collapse
Affiliation(s)
- Anwesha Khasnobish
- School of Bioscience and Engineering, Jadavpur University, Raja S.C. Mullick Road, Kolkata, West Bengal, 700032, India.
| | - Monalisa Pal
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Raja S.C. Mullick Road, Kolkata, West Bengal, 700032, India
| | - D N Tibarewala
- School of Bioscience and Engineering, Jadavpur University, Raja S.C. Mullick Road, Kolkata, West Bengal, 700032, India
| | - Amit Konar
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Raja S.C. Mullick Road, Kolkata, West Bengal, 700032, India
| | - Kunal Pal
- Department of Biotechnology and Medical Engineering, National Institute of Technology Rourkela, Rourkela, India
| |
Collapse
|
34
|
Xin Y, Tian H, Guo C, Li X, Sun H, Wang P, Qian C, Wang S, Wang C. A biomimetic tactile sensing system based on polyvinylidene fluoride film. THE REVIEW OF SCIENTIFIC INSTRUMENTS 2016; 87:025002. [PMID: 26931883 DOI: 10.1063/1.4941736] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Polyvinylidene fluoride (PVDF) film has been widely investigated as a sensing material due to outstanding properties such as biocompatibility, high thermal stability, good chemical resistance, and high piezo-, pyro-, and ferro-electric properties. This paper reports on the design, testing, and analysis of a biomimetic tactile sensor based on PVDF film. The sensor consists of a PVDF film with aluminum electrodes, a pair of insulating layers, and a "handprint" friction layer with a copper foil. It is designed for easy fabrication and reliable signal output. In bionics, the fingerprint of the glabrous skin plays an important role during object handling; therefore, to enhance friction and provide better manipulation, the ridges of the fingertips were introduced into the design of the proposed tactile sensor. A basic experimental study was performed to select the fingerprint pattern giving the highest sensitivity for the biomimetic sensor. In addition, we propose a texture-discrimination experiment to verify the sensor's sensitivity. The experimental results show that the novel biomimetic sensor is effective in discriminating object surface characteristics. Furthermore, an efficient visual application program (in LabVIEW) and a quantitative evaluation method are proposed for verification of the biomimetic sensor. The proposed tactile sensor shows great potential for contact force and slip measurements.
Affiliation(s)
- Yi Xin, Hongying Tian, Chao Guo, Xiang Li, Hongshuai Sun, Peiyuan Wang, Chenghui Qian: College of Instrumentation and Electrical Engineering, Jilin University, Changchun 130061, China
- Shuhong Wang, Cheng Wang: Key Laboratory of Functional Inorganic Material Chemistry, Ministry of Education, Heilongjiang University, Harbin 150080, China
35
Friedl KE, Voelker AR, Peer A, Eliasmith C. Human-Inspired Neurorobotic System for Classifying Surface Textures by Touch. IEEE Robot Autom Lett 2016. [DOI: 10.1109/lra.2016.2517213]
36
Yogeswaran N, Dang W, Navaraj W, Shakthivel D, Khan S, Polat E, Gupta S, Heidari H, Kaboli M, Lorenzelli L, Cheng G, Dahiya R. New materials and advances in making electronic skin for interactive robots. Adv Robot 2015. [DOI: 10.1080/01691864.2015.1095653]
37
Abstract
The manipulation of objects held in a robotic hand or gripper is accompanied by events such as making and breaking contact, and by slippage between the fingertips and the grasped object or between the grasped object and external surfaces. Humans can distinguish among such events in part because the events excite the various mechanoreceptors in the hands differently. As part of an effort to provide robots with a similar capability, we propose two features that can be extracted from dynamic tactile array data and used to discriminate between hand/object and object/world slips. Both features rely on examining how slippage affects the array of dynamic tactile sensors as a whole compared with how it affects individual elements of the array. In contrast to approaches that require extensive training with particular combinations of objects and skin, the features work for a wide range of frequencies and grasp conditions. Their performance and generalizability are verified through testing on three different kinds of sensors and for a range of object textures, grasp forces, and slip conditions. Both features achieve greater than 85% accuracy in identifying the location of slip.
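One way to illustrate the array-versus-element idea behind such features is a coherence measure across taxels: during object/world slip the whole array sees a common vibration, whereas during hand/object slip each taxel rubs over different local texture. The signal model, noise levels, and the correlation feature below are illustrative assumptions for a sketch, not the paper's actual features:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 16  # time steps, taxels in the array

def mean_pairwise_corr(x):
    """Mean off-diagonal correlation across taxels (rows = taxels)."""
    c = np.corrcoef(x)
    return (c.sum() - N) / (N * (N - 1))

# Object/world slip: the grasped object vibrates as a whole, so all
# taxels see a common signal plus independent noise -> high coherence.
common = np.sin(2 * np.pi * 30 * np.linspace(0, 1, T))
object_world = common + 0.3 * rng.standard_normal((N, T))

# Hand/object slip: each taxel sees largely independent texture-driven
# vibration -> low coherence.
hand_object = rng.standard_normal((N, T))

f_ow = mean_pairwise_corr(object_world)
f_ho = mean_pairwise_corr(hand_object)
print(f"object/world coherence: {f_ow:.2f}, hand/object: {f_ho:.2f}")
```

A single threshold on this coherence feature would then separate the two slip types in this toy model, independent of the vibration frequency.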
38
Liu H, Nguyen KC, Perdereau V, Bimbo J, Back J, Godden M, Seneviratne LD, Althoefer K. Finger contact sensing and the application in dexterous hand manipulation. Auton Robots 2015. [DOI: 10.1007/s10514-015-9425-4]
39
Gastaldo P, Pinna L, Seminara L, Valle M, Zunino R. Computational intelligence techniques for tactile sensing systems. Sensors 2014; 14:10952-76. [PMID: 24949646] [PMCID: PMC4118344] [DOI: 10.3390/s140610952]
Abstract
Tactile sensing helps robots interact effectively with humans and objects in real environments. Piezoelectric polymer sensors provide the functional building blocks of robotic electronic skin, mainly thanks to their flexibility and their suitability for detecting dynamic contact events and recognizing touch modality. The paper focuses on the ability of tactile sensing systems to support the challenging recognition of certain qualities/modalities of touch. The research applies novel computational intelligence techniques and a tensor-based approach to the classification of touch modalities; its main results are a procedure for enhancing the system's generalization ability and an architecture for multi-class recognition applications. An experimental campaign in which 70 participants touched the upper surface of the sensor array using three different modalities confirmed the validity of the approach.
Affiliation(s)
- Paolo Gastaldo, Luigi Pinna, Lucia Seminara, Maurizio Valle, Rodolfo Zunino: Department of Electric, Electronic, Telecommunication Engineering and Naval Architecture (DITEN), University of Genoa, Via Opera Pia 11a, 16145 Genova, Italy
40
A finger-shaped tactile sensor for fabric surfaces evaluation by 2-dimensional active sliding touch. Sensors 2014; 14:4899-913. [PMID: 24618775] [PMCID: PMC4003973] [DOI: 10.3390/s140304899]
Abstract
Sliding tactile perception is a basic function that allows human beings to determine the mechanical properties of object surfaces and recognize materials. Imitating this process, this paper proposes a novel finger-shaped tactile sensor based on a thin piezoelectric polyvinylidene fluoride (PVDF) film for surface texture measurement. A parallelogram mechanism ensures that the sensor applies a constant contact force perpendicular to the object surface, and a 2-dimensional movable mechanical structure generates relative motion at a set speed between the sensor and the object surface. As the finger-shaped sensor moves along the object surface in two dimensions, small height/depth variations in the surface texture change the output charge of the PVDF film, allowing the texture to be measured. In this paper, the finger-shaped tactile sensor is used to evaluate and classify five different kinds of linen. The fast Fourier transform (FFT) is used to obtain raw attribute data of the surface in the frequency domain, principal component analysis (PCA) compresses the attribute data and extracts feature information, and the resulting low-dimensional features are classified by a support vector machine (SVM). The experimental results show that this finger-shaped tactile sensor is effective and highly accurate in discriminating the five textures.
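The measurement pipeline described in this abstract (FFT features, PCA compression, SVM classification) can be sketched as follows. The synthetic "texture" signals and all parameter choices (frequencies, noise level, number of components) are illustrative assumptions, not the authors' data or settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(42)
T = 256  # samples per sliding-touch recording

# Synthetic stand-in for PVDF charge signals: each "linen" class is a
# vibration at a characteristic spatial frequency plus noise.
freqs = [5, 9, 14, 20, 27]
X, y = [], []
for label, f in enumerate(freqs):
    for _ in range(40):
        t = np.arange(T)
        sig = np.sin(2 * np.pi * f * t / T + rng.uniform(0, 2 * np.pi))
        X.append(sig + 0.3 * rng.standard_normal(T))
        y.append(label)

# FFT: magnitude spectrum as the raw frequency-domain attributes.
X = np.abs(np.fft.rfft(np.array(X), axis=1))
y = np.array(y)

Xtr, Xte, ytr, yte = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# PCA compresses the spectra; a linear SVM classifies the low-dim features.
clf = make_pipeline(PCA(n_components=8), SVC(kernel="linear"))
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print(f"texture classification accuracy: {acc:.2f}")
```

Using the magnitude spectrum makes the features invariant to the random phase of each stroke, which is why the PCA/SVM stage separates the classes cleanly in this toy setup.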
41
Loeb GE, Fishel JA. Bayesian action&perception: representing the world in the brain. Front Neurosci 2014; 8:341. [PMID: 25400542] [PMCID: PMC4214374] [DOI: 10.3389/fnins.2014.00341]
Abstract
Theories of perception seek to explain how sensory data are processed to identify previously experienced objects, but they usually do not consider the decisions and effort that go into acquiring those data. Identifying objects by their tactile properties requires active exploratory movements, and the sensory data thereby obtained depend on the details of those movements, which human subjects change rapidly and seemingly capriciously. Bayesian Exploration is an algorithm that uses prior experience to decide which next exploratory movement should provide the most useful data for disambiguating the most likely possibilities. In previous studies, a simple robot equipped with a biomimetic tactile sensor and operated according to Bayesian Exploration performed similarly to, and actually better than, humans on a texture identification task. Expanding on this, "Bayesian Action&Perception" refers to the construction and querying of an associative memory of previously experienced entities that contains both sensory data and the motor programs that elicited them. We hypothesize that this memory can be queried (i) to identify useful next exploratory movements during identification of an unknown entity ("action for perception"), (ii) to characterize whether an unknown entity is fit for purpose ("perception for action"), or (iii) to recall what actions might be feasible for a known entity (Gibsonian affordance). The biomimetic design of this mechatronic system may provide insights into the neuronal basis of biological action and perception.
Affiliation(s)
- Gerald E. Loeb: SynTouch LLC, Los Angeles, CA, USA; Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
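The core loop of Bayesian Exploration, choosing whichever exploratory movement is expected to best disambiguate the remaining candidates and then updating beliefs from the observation, can be sketched as a minimal discrete model. The textures, movements, and likelihood tables below are made-up illustrations, not SynTouch's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
textures = ["silk", "denim", "cork"]        # hypotheses (hypothetical)

# Prior experience: P(observation | texture, movement) over 3 discrete
# sensor outcomes, as if learned from past exploration (values made up).
# "light stroke" is highly diagnostic; "firm press" is nearly useless.
lik = {
    "light stroke": np.array([[0.90, 0.05, 0.05],
                              [0.05, 0.90, 0.05],
                              [0.05, 0.05, 0.90]]),
    "firm press":   np.array([[0.40, 0.30, 0.30],
                              [0.30, 0.40, 0.30],
                              [0.30, 0.30, 0.40]]),
}

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def best_movement(posterior):
    """Pick the movement with the lowest expected posterior entropy."""
    scores = {}
    for m, L in lik.items():
        p_obs = posterior @ L                               # predictive P(obs)
        post_given_obs = (posterior[:, None] * L) / p_obs   # P(texture | obs)
        scores[m] = sum(p_obs[o] * entropy(post_given_obs[:, o])
                        for o in range(L.shape[1]))
    return min(scores, key=scores.get)

true_texture = 1                                 # "denim"
posterior = np.ones(3) / 3                       # uniform prior
for _ in range(10):
    m = best_movement(posterior)                 # action for perception
    obs = rng.choice(3, p=lik[m][true_texture])  # simulate sensor outcome
    posterior = posterior * lik[m][:, obs]       # Bayesian update
    posterior /= posterior.sum()

print(textures[int(posterior.argmax())], round(float(posterior.max()), 3))
```

With a uniform prior the expected-entropy criterion always selects the diagnostic "light stroke", and the posterior concentrates on the true texture within a few movements, mirroring the way the algorithm trades exploratory effort for disambiguation.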
|