Object recognition and manipulation are critical for enabling robots to operate within household environments. Many grasp planners can estimate grasps from object shape alone, but these approaches often perform poorly because they miss non-visual object characteristics such as weight distribution, material fragility, and usability. Object model databases can capture this information, but existing methods for constructing 3D object recognition databases are time- and resource-intensive, often require specialized equipment, and are therefore difficult to apply to robots in the field.
We have developed an easy-to-use system, made possible by advances in web robotics, for constructing object models for 3D object recognition and manipulation. The database consists of point clouds generated with a novel graph-based iterative point cloud registration algorithm, and each model encodes manipulation data and usability characteristics. The system requires no equipment beyond the robot itself, and non-expert users can demonstrate grasps through an intuitive web interface with virtually no training. We have validated the system both with grasps demonstrated remotely in a crowdsourcing user study and with expert-demonstrated grasps in our own lab. The non-expert crowdsourced grasps produce successful autonomous grasps, and the demonstration approach outperforms purely vision-based grasp planning across a wide variety of object classes.
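To give a feel for the registration step, the sketch below shows the classic pairwise core that graph-based multi-view registration builds on: iterative closest point (ICP) with an SVD-based (Kabsch) rigid alignment. This is a minimal illustration, not the system's actual graph-based algorithm; the function names and parameters are our own, and a real pipeline would use a KD-tree for correspondences rather than the brute-force search used here.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst.

    src, dst: (N, 3) arrays with one-to-one correspondence.
    Returns rotation R (3x3) and translation t (3,)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # correct an improper (reflective) solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=50, tol=1e-7):
    """Align src to dst by iterating nearest-neighbor matching and Kabsch fits."""
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # Brute-force nearest neighbors; fine for small demo clouds only.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev_err - err) < tol:      # converged: error no longer improving
            break
        prev_err = err
    # Overall rigid transform from the original src to its final pose.
    return best_fit_transform(src, cur)
```

A graph-based approach extends this idea by running such pairwise registrations between many view pairs and choosing a registration order over the resulting graph, rather than naively merging views in sequence.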
The system provides a quick and easy way to add new models to a database, and it is designed to work with any ROS-enabled robot. Data can be collected over the web using RMS browser-based interfaces like the one shown above, or offline through an RViz plugin. The RAIL lab uses it as the primary recognition and manipulation system in ongoing research on robots operating in human environments. Documentation and tutorials can be found on the RAIL pick and place page on the ROS wiki, and the latest code can be found here.