
Science Robotics Journal

Special Issue on Robot Grasping and Manipulation

Guest Editorial

 

Getting a Grip on Reality

 

Ken Goldberg

William S. Floyd Jr. Distinguished Chair in Engineering, UC Berkeley

and Chief Scientist, Ambi Robotics

goldberg@berkeley.edu

May 2021

 

 

The ability to get a secure grip on a wide range of unfamiliar objects is common to babies, dogs, lobsters, birds, ants… except robots. Despite 70 years of research, robots outside of repetitive assembly lines remain clumsy. The reason is that precise placement of robot fingers or jaws matters: even a sub-millimeter change in contact points can transform a secure grip into a fumble. This is equally true for the location of the suction cups common in industrial automation. The precise placement of contact points depends on perception, control, and physics, all of which are inherently uncertain.

 

It is impossible to perfectly sense the shape and location of an object's surface in 3D space. Consider a stereo pair of digital cameras: regardless of their resolution, it is impossible to precisely match all corresponding points in the two images and thus to triangulate every point on the object's surface. Depth sensors based on structured light have evolved considerably in the last decade in resolution and frame rate, but they remain prone to errors on object surfaces that are specular or transparent.
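
To make that sensitivity concrete, consider the standard rectified stereo model, where depth Z = f·B/d for focal length f (in pixels), baseline B, and disparity d (in pixels). The following minimal sketch, using illustrative camera parameters that are assumptions rather than the specs of any particular sensor, shows how a single-pixel error in matching corresponding points already produces a depth error far larger than the sub-millimeter tolerance described above:

    # Minimal sketch: depth-from-disparity sensitivity for a rectified stereo pair.
    # All camera parameters below are illustrative assumptions.
    f_px = 600.0         # focal length in pixels
    baseline_m = 0.06    # distance between the two cameras, in meters
    true_depth_m = 0.5   # an object at roughly arm's length

    # Ideal disparity for a point at this depth: d = f * B / Z
    d_true = f_px * baseline_m / true_depth_m   # = 72 pixels

    # Suppose the correspondence between the two images is off by one pixel.
    d_noisy = d_true - 1.0
    est_depth_m = f_px * baseline_m / d_noisy

    error_mm = (est_depth_m - true_depth_m) * 1000.0
    print(f"one-pixel matching error -> {error_mm:.1f} mm depth error")
    # Roughly 7 mm at 0.5 m, an order of magnitude larger than the
    # sub-millimeter change in contact points that can spoil a grasp.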

 

Even if perception could be solved, it is not possible to precisely land each jaw of a robot gripper on a desired 3D point of the object. All robots have precision limits due to gears, motors, encoders, cables, backlash, or some combination of these factors. As the gripper gets closer to an object, it obstructs visual sensing, making it extremely difficult to predict which jaw will contact the object first. This uncertainty matters because the first contact can slightly push the object, changing its position and orientation and, with them, where the second jaw will make contact.

 

Even if control could be solved, the precise motion of the object resulting from the push by the first contact is undecidable: no algorithm can solve it. This is not a matter of developing better sensors, motors, or algorithms; it is due to the fundamental physics of contact and friction. A microscopic change in surface properties, smaller than a grain of sand, can dramatically change how an object will move when pushed. We can predict the motion of an asteroid a million miles away far better than we can predict the motion of a pencil pushed across a table.

 

Some degree of uncertainty in perception, control, and physics is unavoidable, and animals seem to manage, so what’s a robot to do?

 

For most of the past 70 years, roboticists extended and applied the rigorous mathematics of Wrench Theory to precisely specify the conditions for immobilizing grasps. This created an elegant body of research but didn’t address uncertainty. Matt Mason conjectured 40 years ago that some grasp motions are robust to these inherent uncertainties[1]. Picture a hand scooping up a rock: that grasp compensates through passive mechanics, and no sensors are needed. It’s not necessary to know the precise shape or location of the rock, or of the hand.
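
As a concrete illustration of what wrench-based analysis certifies, the sketch below implements the classical frictionless force-closure test in Python: a set of contact wrenches immobilizes an object exactly when the wrenches positively span the wrench space, i.e., they span it linearly and the origin is a strictly positive combination of them. The planar square and the four contact placements are assumptions chosen for illustration, not an example drawn from the papers in this issue:

    import numpy as np
    from scipy.optimize import linprog

    def force_closure(W):
        """Classical frictionless force-closure test.
        W is d x m: each column is the wrench (force, torque) of one contact.
        The grasp is force closure iff the columns positively span R^d,
        i.e., rank(W) == d and 0 is a strictly positive combination of them."""
        d, m = W.shape
        if np.linalg.matrix_rank(W) < d:
            return False
        # Feasibility LP: find k_i >= 1 with W k = 0.  Scale invariance lets us
        # use the bound 1 in place of "strictly positive"; the objective is moot.
        res = linprog(c=np.zeros(m), A_eq=W, b_eq=np.zeros(d),
                      bounds=[(1, None)] * m, method="highs")
        return res.success

    # Planar example (wrench = [fx, fy, torque]): a 2 x 2 square centered at the
    # origin, grasped by four frictionless point contacts, one per edge,
    # deliberately placed off-center so the torques can balance.
    contacts = [((0.5, -1.0), (0.0, 1.0)),    # bottom edge, normal pointing up
                ((-0.5, 1.0), (0.0, -1.0)),   # top edge, normal pointing down
                ((-1.0, 0.5), (1.0, 0.0)),    # left edge, normal pointing right
                ((1.0, -0.5), (-1.0, 0.0))]   # right edge, normal pointing left
    W = np.array([[nx, ny, px * ny - py * nx]
                  for (px, py), (nx, ny) in contacts]).T
    print(force_closure(W))  # True: this placement immobilizes the square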

 

 

In 2012, a breakthrough in hyper-parametric function approximation (also known as Deep Learning) offered another option:  learning from examples.  Although the use of neural networks in robotics dates back to the 1990s[2], the new wave of research based on massive multi-layered networks is yielding surprising results.  As in computer vision, it seems that many examples are needed before a learned grasping model can generalize, so researchers have tried hand labeling grasps[3], physical “arm farms” where many robots grasp objects in bins 24 hours a day for months on end[4], and self-supervised simulation[5].
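
For readers unfamiliar with how such learned models are typically structured, here is a minimal sketch in PyTorch of a grasp-quality classifier in the general spirit of this line of work: a small convolutional network that maps a depth-image crop centered on a candidate grasp to the probability that the grasp will succeed. The architecture, input size, and training setup are illustrative assumptions, not the design of any specific system cited here:

    import torch
    import torch.nn as nn

    class GraspQualityNet(nn.Module):
        """Minimal sketch of a learned grasp-quality model: a depth-image crop
        centered on a candidate grasp goes in, a grasp-success logit comes out."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 5 * 5, 64), nn.ReLU(),
                nn.Linear(64, 1),  # logit of P(grasp succeeds)
            )

        def forward(self, depth_crop):          # depth_crop: (batch, 1, 32, 32)
            return self.head(self.features(depth_crop))

    # One training step on (depth crop, success label) pairs, which could come
    # from hand labeling, physical trials, or self-supervised simulation.
    model = GraspQualityNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    depth_crops = torch.rand(8, 1, 32, 32)        # placeholder batch
    labels = torch.randint(0, 2, (8, 1)).float()  # 1 = grasp succeeded
    loss = loss_fn(model(depth_crops), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()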

 

 

This special issue opens with two Focus perspectives from leading researchers. Matei Ciocarlie and colleagues consider the mechanical design of novel robot grippers, jaws, and hands, suggesting that parameters can be optimized in parallel with the design of control algorithms. Alberto Rodriguez considers how robot grippers can make use of tactile sensors based on internal optical reflection. This new class of sensors is promising, but as Rodriguez acknowledges, there’s a Faustian tradeoff: additional sensors introduce additional noise.

 

Next are two surveys of the state of the art in robot grasping, one on rigid objects and one on deformable objects.

 

Cui and Trinkle provide a new survey of robot learning for manipulation, noting that 50 years of engineering research has not resolved the nuances of contact mechanics (where n contacts yield 3^n possible contact modes) nor the variability that is inherent in unstructured environments such as homes, offices, schools, hospitals, and the natural world. They cite 120 papers on grasping using dense object descriptors, domain adaptation, meta-learning, and other new methods, concluding with a list of open research problems.
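
The combinatorics behind that count are easy to make concrete: if each contact is classified as separating, sticking, or sliding, then n contacts admit 3^n joint mode assignments, which is one reason contact-rich manipulation planning scales so badly. A small illustrative snippet:

    from itertools import product

    MODES = ("separating", "sticking", "sliding")

    def contact_modes(n):
        """Enumerate all joint contact modes for n contacts: each contact is
        independently separating, sticking, or sliding, giving 3**n combinations."""
        return list(product(MODES, repeat=n))

    print(len(contact_modes(3)))   # 27 = 3**3
    print(len(contact_modes(10)))  # 59049 modes for just ten contacts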

 

Yin et al. provide a survey of research on robot manipulation of deformable materials, emphasizing recent results that use learning. They also address perception and control, classifying over 130 papers into concise tables and reviewing research on methods that use simulation.

 

Yang et al. propose a lightweight soft gripper design based on recent work using kirigami, the Japanese art of cutting slits in paper to create highly morphable structures, for grasping delicate objects such as hydrogel spheres.

 

Bircher et al. consider “whole hand” manipulation involving surfaces in addition to endpoint contacts and present a novel two-fingered dexterous robot hand design inspired by caging grasps. They consider 6,250 designs in a systematic parametric evaluation of a potential-energy grasp metric to design their “Model W” hand, which is capable of a range of dexterous in-hand manipulations, and they make the design available to researchers as open source.

 

Kieliba et al. study how human subjects adapt to using a mechanical “Third Thumb” for manipulation. They find that most subjects readily adapt and begin to use it for grasping, suggesting exciting future possibilities for “superhuman” prosthetics.

 

Mitrano et al. consider robot manipulation of a rope and address the issue of quantifying how confident a robot is in its learned model so it can trigger recovery strategies when confidence falters.

 

Acknowledging that robot grippers change over time due to wear or breakage, Hang et al. explore how a robot can use an exploration policy with a camera to identify its own system parameters, such as link lengths. This creates a virtual linkage-based representation that can be used to plan subsequent manipulation tasks.

 

Robin Murphy reflects on the long history of science fiction depicting robots wielding grippers and tentacles, ideas that resulted in research prototypes decades later. She notes, however, that not all science fiction has been prescient and asks whether science fiction can keep up with new innovations in robot grasping and manipulation.

 

One outcome of the Covid-19 pandemic is a massive global adoption of e-commerce. The demand for handling packages far exceeds the supply of available workers. A new category of robots and automation systems is starting to help by learning to grasp a diverse range of package shapes and materials.

 

The results reported in this Special Issue are encouraging, yet robust grasping and manipulation of novel objects remains a Grand Challenge for researchers in Robotics and Automation.

 

 

 

 

 


[1] Robot Hands and the Mechanics of Manipulation.  Matt Mason and Ken Salisbury. MIT Press, 1985.

 

[2] Neural Networks in Robotics. George Bekey and Ken Goldberg, Eds. Kluwer Academic, 1993.

[3] Deep Learning for Detecting Robotic Grasps. Ian Lenz, Honglak Lee, and Ashutosh Saxena. Robotics: Science and Systems (RSS), 2013.

 

[4] Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection. Sergey Levine, Peter Pastor, Alex Krizhevsky, Julian Ibarz, and Deirdre Quillen. The International Journal of Robotics Research, 37(4-5):421-436, 2018.

 

[5] Learning Ambidextrous Robot Grasping Policies. Jeffrey Mahler, Matthew Matl, Vishal Satish, Mike Danielczuk, Bill DeRose, Stephen McKinley, and Ken Goldberg. Science Robotics, 4(26), January 2019.