Improving Remote Robot Teleoperation Interfaces for General Object Manipulation


Motivation

Robust remote teleoperation of high-DOF manipulators is critical across a wide range of robotics applications. These applications often involve environments with high latency, due either to distance or to degraded network conditions, and as such, manipulators are commanded using supervisory control. Contemporary robot manipulation interfaces primarily use a free positioning approach to pose specification, in which the operator independently controls each axis of translation and orientation in free space. This can be cumbersome in practice, particularly for non-expert users.

Teleoperation Approaches

We developed two novel approaches that leverage depth data for pose specification, which we evaluated against the Free Positioning approach:
  • Constrained Positioning: pose specification based on selecting a point-of-interest and an approach angle
  • Point-and-Click: supervisory selection of poses produced by autonomous grasp calculation (using a novel extension of antipodal grasp sampling algorithms)
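To make the Point-and-Click pipeline concrete, the sketch below shows a basic antipodal grasp sampler over an oriented point cloud: pairs of surface points are accepted when the line connecting them lies within the friction cone at both contact normals. This is a minimal illustration of the standard antipodal condition, not the novel extension used in our system; all function and parameter names are hypothetical.

```python
import numpy as np

def sample_antipodal_grasps(points, normals, max_width=0.1,
                            friction_angle_deg=15.0, n_samples=500, seed=0):
    """Sample antipodal grasp candidates from an oriented point cloud.

    A point pair (p_i, p_j) is antipodal if the grasp axis p_j - p_i lies
    inside the friction cone at both contact normals. Basic sketch only;
    parameters (gripper width, friction angle) are illustrative.
    """
    rng = np.random.default_rng(seed)
    cos_thresh = np.cos(np.deg2rad(friction_angle_deg))
    grasps = []
    for _ in range(n_samples):
        i, j = rng.integers(0, len(points), size=2)
        if i == j:
            continue
        axis = points[j] - points[i]
        width = np.linalg.norm(axis)
        # Reject degenerate pairs and pairs wider than the gripper opening.
        if width < 1e-6 or width > max_width:
            continue
        axis /= width
        # Friction-cone check: the closing direction at each contact must
        # oppose the outward surface normal to within the friction angle.
        if np.dot(normals[i], axis) > -cos_thresh:
            continue
        if np.dot(normals[j], -axis) > -cos_thresh:
            continue
        grasps.append((points[i], points[j]))
    return grasps
```

Each accepted pair defines a grasp axis; a supervisory interface can then render these candidates over the camera feed for the operator to select with a single click.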

Results

Evaluating the new approaches against the Free Positioning approach in a 90-participant user study, we found that:

  • The Point-and-Click approach significantly reduced manipulation errors and increased the number of manipulation tasks participants could complete, regardless of robotics experience, while requiring fewer interactions
  • Rendering interfaces as overlays on 2D camera feeds, regardless of approach (even for Free Positioning, which was designed for use in a 3D rendered scene), resulted in significantly fewer manipulation errors, shorter task completion times, fewer required interactions, and lower operator workload, while increasing the number of manipulation tasks participants could complete

This is particularly promising for high-latency applications, as the 2D overlay interfaces do not require point cloud streaming, greatly reducing bandwidth requirements. Notably, providing more information, in the form of a 3D scene with rendered point clouds, robot models, and a freely controllable camera, negatively impacted performance.


Ongoing Work

We are currently adapting the Point-and-Click approach for NASA’s Astrobee platform.


Resources and Links