Grasp Stability Assessment through Unsupervised Feature Learning of Tactile Images

Published in: 2017 IEEE International Conference on Robotics and Automation (ICRA)

Date of Conference: May 29 - June 3, 2017

DOI: 10.1109/ICRA.2017.7989257

Authors: Deen Cockburn, Jean-Philippe Roberge, Thuy-Hong-Loan Le, Alexis Maslyczyk, Vincent Duchaine

Abstract:

Grasping tasks have always been challenging for robots, despite recent innovations in vision-based algorithms and object-specific training. If robots are to match human abilities and learn to pick up never-before-seen objects, they must combine vision with tactile sensing. This paper presents a novel way to improve robotic grasping: by using tactile sensors and an unsupervised feature-learning approach, a robot can find the common denominators behind successful and failed grasps, and use this knowledge to predict whether a grasp attempt will succeed or fail. This method is promising as it uses only high-level features from two tactile sensors to evaluate grasp quality, and works for the training set as well as for new objects. Using a total of 54 different objects, our system recognized grasp failure 83.70% of the time.
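To make the two-stage idea in the abstract concrete, below is a minimal sketch of an unsupervised-features-then-classifier pipeline. It assumes a sparse-coding-style feature learner (scikit-learn's MiniBatchDictionaryLearning) followed by an SVM, with made-up data shapes and hyperparameters; the paper's exact sensor resolution, feature learner, and classifier are not reproduced here.

```python
# Hypothetical sketch: unsupervised feature learning on tactile images,
# then a supervised grasp-outcome classifier. All names, shapes, and
# hyperparameters are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in data: flattened pressure images from two tactile sensors per
# grasp attempt (e.g., two small taxel arrays -> 56 values), with binary
# success/failure labels. Real data would come from recorded grasps.
n_grasps, n_taxels = 500, 56
X = rng.random((n_grasps, n_taxels))
y = rng.integers(0, 2, n_grasps)  # 1 = successful grasp, 0 = failed grasp

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Unsupervised stage: learn a dictionary of tactile "atoms" without using
# labels, then encode each grasp as sparse activation coefficients. These
# coefficients play the role of the high-level features.
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
dico.fit(X_train)
F_train = dico.transform(X_train)
F_test = dico.transform(X_test)

# Supervised stage: predict grasp success or failure from learned features.
clf = SVC(kernel="rbf").fit(F_train, y_train)
print(f"Outcome-prediction accuracy (random stand-in data): "
      f"{clf.score(F_test, y_test):.2%}")
```

The key design point this mirrors is that the feature learner never sees grasp outcomes: only the final, lightweight classifier is trained on labels, which is what lets the learned tactile features generalize to objects outside the training set.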
