1. Xu J, Song S, Ciocarlie M. TANDEM: Learning Joint Exploration and Decision Making with Tactile Sensors. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3193466
Affiliation(s)
- Jingxi Xu
- Department of Computer Science, Columbia University, New York, NY, USA
- Shuran Song
- Department of Computer Science, Columbia University, New York, NY, USA
- Matei Ciocarlie
- Department of Mechanical Engineering, Columbia University, New York, NY, USA
2. Xiao C, Madapana N, Wachs J. Fingers See Things Differently (FIST-D): An Object Aware Visualization and Manipulation Framework Based on Tactile Observations. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3064211
3. Turlapati SH, Accoto D, Campolo D. Haptic Manipulation of 3D Scans for Geometric Feature Enhancement. Sensors 2021; 21:2716. PMID: 33921508; PMCID: PMC8070226; DOI: 10.3390/s21082716. Received 02/17/2021; revised 03/17/2021; accepted 04/05/2021.
Abstract
Localisation of geometric features such as holes, edges and slots is vital to robotic planning in industrial automation settings. Low-cost 3D scanners improve accessibility, but their poorer resolution poses a practical challenge to feature localisation and, consequently, to robotic planning. In this work, we address the possibility of enhancing the quality of a 3D scan by a manual 'touch-up' of task-relevant features, to ensure their automatic detection prior to automation. We propose a framework whereby the operator (i) has access to both the actual work-piece and its 3D scan; (ii) evaluates which salient features are missing from the scan; (iii) uses a haptic stylus to physically interact with the actual work-piece around such specific features; and (iv) interactively updates the scan using the position and force information from the haptic stylus. The contribution of this work is the use of haptic mismatch for geometric update. Specifically, the geometry from the 3D scan is used to predict the haptic feedback at a point on the work-piece surface. The haptic mismatch is derived as a measure of error between this prediction and the real interaction forces from physical contact at that point on the work-piece. The geometric update is driven until the haptic mismatch is minimised. Convergence of the proposed algorithm is first numerically verified on an analytical surface with simulated physical interaction, and error analyses of the surface position and orientation are plotted. Experiments were conducted using a motion capture system providing sub-mm accuracy in position and a six-axis F/T sensor; in these experiments, missing features are successfully detected after the scan is updated using the proposed method.
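The haptic-mismatch update described above can be sketched in one dimension. This is a minimal sketch under an assumed linear-spring contact model; the stiffness `k`, step size `lr` and force law are illustrative assumptions, not the paper's implementation.

```python
def haptic_mismatch_update(h_scan, z_tip, f_measured, k=1000.0, lr=1e-4, iters=300):
    """Drive a scanned surface height toward the real surface by minimising
    the mismatch between predicted and measured contact force.

    Toy model (assumed, not from the paper): a stylus tip held at height
    z_tip below the surface feels force k * penetration depth."""
    for _ in range(iters):
        f_predicted = k * max(h_scan - z_tip, 0.0)   # force predicted from the scan geometry
        h_scan += lr * (f_measured - f_predicted)    # gradient step on the haptic mismatch
    return h_scan
```

With a measured force of 10 N and k = 1000 N/m, the update converges to the true surface height 10 mm above the tip regardless of the scan's initial error, which is the convergence behaviour the abstract verifies numerically.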
4. Li Q, Kroemer O, Su Z, Veiga FF, Kaboli M, Ritter HJ. A Review of Tactile Information: Perception and Action Through Touch. IEEE Trans Robot 2020. DOI: 10.1109/tro.2020.3003230
6. Falco P, Lu S, Natale C, Pirozzi S, Lee D. A Transfer Learning Approach to Cross-Modal Object Recognition: From Visual Observation to Robotic Haptic Exploration. IEEE Trans Robot 2019. DOI: 10.1109/tro.2019.2914772
8. Rosales C, Spinelli F, Gabiccini M, Zito C, Wyatt JL. GPAtlasRRT: A Local Tactile Exploration Planner for Recovering the Shape of Novel Objects. Int J Hum Robot 2018. DOI: 10.1142/s0219843618500147
Abstract
Touch is an important modality for recovering object shape. We present a method for a robot to complete a partial shape model by local tactile exploration, in which the finger is constrained to follow the local surface. This is useful for recovering information about a contiguous portion of the object and is frequently employed by humans. There are three contributions. First, we show how to segment an initial point cloud of a grasped, unknown object into hand and object. Second, we present a local tactile exploration planner that combines a Gaussian Process (GP) model of the object surface with an AtlasRRT planner: the GP predicts the unexplored surface and the uncertainty of that prediction, and the AtlasRRT creates a tactile exploration path across this predicted surface, driving it towards the region of greatest uncertainty. Third, we experimentally compare the planner with alternatives in simulation and demonstrate the complete approach on a real robot, showing that our planner successfully traverses the object and that the full object shape can be recovered with a good degree of accuracy.
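The uncertainty-driven probe selection at the heart of this planner can be sketched with a minimal 1-D GP regression. The RBF kernel, length scale and probe grid are illustrative assumptions; the paper operates on 3-D surfaces and plans continuous paths with an AtlasRRT rather than picking isolated points.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # squared-exponential kernel between two 1-D point sets
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def next_probe(touched, grid, ls=0.2, noise=1e-6):
    """Return the candidate point with the largest GP posterior standard
    deviation, i.e. where the predicted surface is least constrained by
    past contacts (unit prior variance assumed)."""
    X = np.asarray(touched, dtype=float)
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    Ks = rbf(X, grid, ls)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    std = np.sqrt(np.maximum(var, 0.0))
    return float(grid[int(np.argmax(std))]), float(std.max())
```

Each new contact shrinks the posterior uncertainty near it, so repeated calls steer exploration toward the least-explored part of the surface, which is the behaviour the planner exploits.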
Affiliation(s)
- Carlos Rosales
- Centro di Ricerca E. Piaggio, Università di Pisa, Pisa, Italy
- Marco Gabiccini
- Dipartimento di Ingegneria Civile e Industriale, Largo Lucio Lazzarino 1, Università di Pisa, 56122 Pisa, Italy
- Claudio Zito
- IRLab, CN–CR, School of Computer Science, University of Birmingham, Birmingham B15 2TT, UK
- Jeremy L. Wyatt
- IRLab, CN–CR, School of Computer Science, University of Birmingham, Birmingham B15 2TT, UK
9. Abraham I, Prabhakar A, Hartmann MJZ, Murphey TD. Ergodic Exploration Using Binary Sensing for Nonparametric Shape Estimation. IEEE Robot Autom Lett 2017; 2:827-834. PMID: 30234157; PMCID: PMC6140341; DOI: 10.1109/lra.2017.2654542
Abstract
Current methods to estimate object shape, using either vision or touch, generally depend on high-resolution sensing. Here, we exploit ergodic exploration to demonstrate successful shape estimation using a low-resolution binary contact sensor. The measurement model is posed as a collision-based tactile measurement, and classification methods are used to discriminate between shape boundary regions in the search space. Posterior likelihood estimates of the measurement model help the system actively seek out regions where the binary sensor is most likely to return informative measurements. Results show successful shape estimation of various objects as well as the ability to identify multiple objects in an environment. Interestingly, ergodic exploration is shown to utilize non-contact motion to gather significant information about shape. The algorithm is extended to three dimensions in simulation, and we present two-dimensional experimental results using the Rethink Baxter robot.
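A grid-based caricature of shape estimation from a binary contact sensor: log-odds occupancy updates with greedy probing of the most uncertain cell. The noise-free sensor and the entropy-greedy rule are simplifying assumptions for reproducibility; the paper drives the sensor with ergodic trajectory optimization rather than greedy point selection.

```python
import numpy as np

def estimate_shape(true_shape, steps, p_hit=0.95):
    """Estimate which grid cells belong to an object using only binary
    contact readings. Each probe targets the cell whose occupancy is most
    uncertain (maximum Bernoulli entropy); p_hit is the assumed sensor
    reliability used in the log-odds update."""
    logodds = np.zeros(len(true_shape))
    step_up = np.log(p_hit / (1.0 - p_hit))
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-logodds))
        entropy = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
        i = int(np.argmax(entropy))        # probe the most uncertain cell
        contact = bool(true_shape[i])      # binary, noise-free reading here
        logodds[i] += step_up if contact else -step_up
    return (logodds > 0).astype(int)
```

Even this crude active rule recovers the occupancy map cell by cell, illustrating how informative a low-resolution binary sensor can be when probes are placed deliberately.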
Affiliation(s)
- Ian Abraham
- Neuroscience and Robotics Laboratory (NxR), Department of Mechanical Engineering, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
- Ahalya Prabhakar
- Neuroscience and Robotics Laboratory (NxR), Department of Mechanical Engineering, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
- Mitra J Z Hartmann
- Neuroscience and Robotics Laboratory (NxR), Department of Mechanical Engineering, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
- Neuroscience and Robotics Laboratory (NxR), Biomedical Engineering, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
- Todd D Murphey
- Neuroscience and Robotics Laboratory (NxR), Department of Mechanical Engineering, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
10. Kim H, Choi S, Chung WK. Contact Force Decomposition Using Contact Pressure Distribution. IEEE Robot Autom Lett 2017. DOI: 10.1109/lra.2016.2598554
11. Gu H, Fan S, Zong H, Jin M, Liu H. Haptic Perception of Unknown Object by Robot Hand: Exploration Strategy and Recognition Approach. Int J Hum Robot 2016. DOI: 10.1142/s0219843616500080
Abstract
In this paper, exploration and recognition for unknown-object perception by a robot hand are discussed. Inspired by the touch and exploration behaviour of the human hand, a haptic exploration strategy for a multi-fingered robot hand is proposed. Based on observations from human experiments, the proposed strategy guides the robot hand to plan a series of movements that gather tactile information from different unknown objects while avoiding unexpected collisions with them. A recognition approach is then presented that recognizes object shapes from the tactile point data collected by the strategy. Geometric feature vectors are extracted from tactile point locations and normal vectors after clustering, and object shapes are recognized by a random forests classifier. Simulation and experiment results show that the exploration strategy guides the robot to gather tactile information from unknown objects automatically, and that the recognition approach is effective and robust in object-shape recognition. This framework provides a sensible solution to the unknown-object perception problem, and is suitable for multi-fingered robot hands with low-resolution tactile sensors.
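The kind of geometric feature vector built from tactile point locations and normals can be illustrated with two toy descriptors. The specific features (sorted bounding-box extents and normal-vector dispersion) and the sample data are illustrative assumptions, not the paper's feature set.

```python
import numpy as np

def shape_features(points, normals):
    """Build a small descriptor from tactile contacts.

    points : (N, 3) contact locations; normals : (N, 3) unit surface normals.
    Sorted bounding-box extents capture overall size; normal dispersion is
    0 for a plane (all normals agree) and approaches 1 for a sphere."""
    extents = np.sort(points.max(axis=0) - points.min(axis=0))
    dispersion = 1.0 - np.linalg.norm(normals.mean(axis=0))
    return np.append(extents, dispersion)
```

A downstream classifier (the paper uses random forests) then separates shape classes in this feature space; the dispersion term alone already distinguishes flat from strongly curved contact patches.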
Affiliation(s)
- Haiwei Gu
- State Key Laboratory of Robotics and Systems, Harbin Institute of Technology (HIT), Harbin 150001, P. R. China
- Shaowei Fan
- State Key Laboratory of Robotics and Systems, Harbin Institute of Technology (HIT), Harbin 150001, P. R. China
- Hua Zong
- School of Electronics and Information Engineering, Harbin Institute of Technology (HIT), Harbin 150001, P. R. China
- Minghe Jin
- State Key Laboratory of Robotics and Systems, Harbin Institute of Technology (HIT), Harbin 150001, P. R. China
- Hong Liu
- State Key Laboratory of Robotics and Systems, Harbin Institute of Technology (HIT), Harbin 150001, P. R. China
12. Soh H, Demiris Y. Incrementally Learning Objects by Touch: Online Discriminative and Generative Models for Tactile-Based Recognition. IEEE Trans Haptics 2014; 7:512-525. PMID: 25532151; DOI: 10.1109/toh.2014.2326159
Abstract
Human beings not only possess the remarkable ability to distinguish objects through tactile feedback but are also able to improve their recognition competence through experience. In this work, we explore tactile-based object recognition with learners capable of incremental learning. Using the sparse online infinite echo-state Gaussian process (OIESGP), we propose and compare two novel discriminative and generative tactile learners that produce probability distributions over objects during object grasping/palpation. To enable iterative improvement, our online methods incorporate training samples as they become available. We also describe incremental unsupervised learning mechanisms, based on novelty scores and extreme value theory, for when teacher labels are not available. We present experimental results for both supervised and unsupervised learning tasks using the iCub humanoid, with tactile sensors on its five-fingered anthropomorphic hand, and 10 different object classes. Our classifiers perform comparably to state-of-the-art methods (C4.5 and SVM classifiers), and our findings indicate that tactile signals are highly relevant for making accurate object classifications. We also show that accurate "early" classifications are possible using only 20-30 percent of the grasp sequence. For unsupervised learning, our methods generate high-quality clusterings relative to the widely used sequential k-means and self-organising map (SOM), and we present analyses of the differences between the approaches.
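The flavour of such incremental, probability-producing learners can be sketched with a running class-conditional Gaussian updated one sample at a time. This is a deliberately simpler stand-in for the paper's OIESGP, with made-up scalar features and class names.

```python
import math

class OnlineGaussianClassifier:
    """Incremental 1-D generative model: one running Gaussian per class,
    updated sample-by-sample with Welford's algorithm, so no training data
    is ever stored or revisited."""
    def __init__(self):
        self.n, self.mean, self.m2 = {}, {}, {}

    def update(self, label, x):
        n = self.n.get(label, 0) + 1
        mean = self.mean.get(label, 0.0)
        delta = x - mean
        mean += delta / n
        self.n[label] = n
        self.mean[label] = mean
        self.m2[label] = self.m2.get(label, 0.0) + delta * (x - mean)

    def predict(self, x):
        # pick the class with the highest Gaussian log-likelihood
        def loglik(label):
            var = max(self.m2[label] / self.n[label], 1e-9)
            return -0.5 * ((x - self.mean[label]) ** 2 / var + math.log(var))
        return max(self.n, key=loglik)
```

Because every `update` is O(1), the model can refine its class boundaries during grasping, which is the "early classification" property the abstract highlights.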
13. Aggarwal A, Kampmann P, Lemburg J, Kirchner F. Haptic Object Recognition in Underwater and Deep-sea Environments. J Field Robot 2014. DOI: 10.1002/rob.21538
Affiliation(s)
- Peter Kampmann
- DFKI GmbH Robotics Innovation Center (RIC), Bremen, Germany
- Frank Kirchner
- DFKI GmbH Robotics Innovation Center (RIC), Bremen, Germany
- Robotics Group, University of Bremen, Bremen, Germany
14. Object-Shape Recognition and 3D Reconstruction from Tactile Sensor Images. Med Biol Eng Comput 2014; 52:353-362. PMID: 24469960; DOI: 10.1007/s11517-014-1142-1. Received 04/02/2013; accepted 01/17/2014.
Abstract
This article presents a novel approach to recognition and 3D reconstruction of edged and edgeless object shapes from gradient-based analysis of tactile images. Humans recognize an object's shape by mentally visualizing its surface topology while grasping it, drawing on past experience of exploring similar objects. The proposed hybrid recognition strategy works similarly, in two stages. In the first stage, conventional object-shape recognition is performed with a linear support vector machine classifier using regional-descriptor features extracted from the tactile image; a 3D shape reconstruction is also performed depending on whether the tactile images are classified as edged or edgeless. In the second stage, the hybrid scheme uses a feature set comprising both the previously obtained regional-descriptor features and gradient-related information from the reconstructed object-shape image for the final recognition into four object classes: planar, one-edged, two-edged and cylindrical. The hybrid strategy achieves 97.62% classification accuracy, while the conventional recognition scheme reaches only 92.60%. Moreover, the proposed algorithm is shown to be less noise-prone and more statistically robust.
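The gradient-based analysis can be illustrated with a single descriptor, the mean gradient magnitude of a tactile image. The threshold and the test images are invented for illustration; the paper combines such gradient cues with regional descriptors and an SVM rather than thresholding one statistic.

```python
import numpy as np

def edge_classify(tactile_img, threshold=0.05):
    """Label a tactile image as 'edged' or 'planar' from its mean gradient
    magnitude: a pressed edge produces a sharp intensity step across the
    sensor array, while a flat contact does not."""
    gy, gx = np.gradient(tactile_img.astype(float))
    score = float(np.hypot(gx, gy).mean())
    return ("edged" if score > threshold else "planar"), score
```

In the paper's pipeline this kind of gradient information, taken from the reconstructed shape image, is what lifts accuracy from 92.60% to 97.62%.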
15. Ilonen J, Bohg J, Kyrki V. Three-Dimensional Object Reconstruction of Symmetric Objects by Fusing Visual and Tactile Sensing. Int J Rob Res 2013. DOI: 10.1177/0278364913497816
Abstract
In this work, we propose to reconstruct a complete three-dimensional (3-D) model of an unknown object by fusion of visual and tactile information while the object is grasped. Assuming the object is symmetric, a first hypothesis of its complete 3-D shape is generated. A grasp is executed on the object with a robotic manipulator equipped with tactile sensors. Given the detected contacts between the fingers and the object, the initial full object model including the symmetry parameters can be refined. This refined model will then allow the planning of more complex manipulation tasks. The main contribution of this work is an optimal estimation approach for the fusion of visual and tactile data applying the constraint of object symmetry. The fusion is formulated as a state estimation problem and solved with an iterated extended Kalman filter. The approach is validated experimentally using both artificial and real data from two different robotic platforms.
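The fusion step can be sketched for a single state variable as a scalar Kalman measurement update. The paper uses an iterated extended Kalman filter over the full object model including symmetry parameters; the numbers below are illustrative.

```python
def kalman_update(x, P, z, R):
    """Fuse a prior estimate (x, variance P), e.g. from vision, with a new
    measurement (z, variance R), e.g. a detected tactile contact. The gain
    K weights the measurement by its relative confidence."""
    K = P / (P + R)
    x_new = x + K * (z - x)
    P_new = (1.0 - K) * P
    return x_new, P_new
```

A precise tactile contact (small R) pulls the fused estimate strongly toward the measurement while shrinking the variance below that of either source alone, which is why adding touch refines the vision-only model.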
Affiliation(s)
- Jarmo Ilonen
- Machine Vision and Pattern Recognition Research Group, Lappeenranta University of Technology, Finland
- Jeannette Bohg
- Autonomous Motion Department, Max-Planck-Institute for Intelligent Systems, Tübingen, Germany
- Ville Kyrki
- Department of Automation and Systems Technology, Aalto University, Finland