Tool Macgyvering: Autonomous Tool Construction Using Geometric and Attachment Reasoning
This research focuses on tool construction, contributing a computational framework that enables a robot to construct, or Macgyver, tools out of parts available in its environment, improving the resourcefulness and adaptability of robots.
Improving Remote Robot Teleoperation Interfaces for General Object Manipulation
We developed two novel supervisory control interfaces that leverage depth data to significantly improve performance on a set of teleoperated object manipulation tasks, as verified in an extensive user study. Both interfaces were designed as overlays on a 2D camera feed, which demonstrably improved performance, reduced operator workload, and reduced bandwidth requirements for high-latency applications.
Interruptibility Classification of Collocated People
We apply the latest advances in computer vision to accurately classify the interruptibility of collocated humans, so that a robot can identify appropriate times to interrupt a person.
Robot Commonsense Reasoning
We apply a knowledge-base framework to enable a mobile robot to perform a series of real-world tasks using commonsense reasoning.
Crowdsourced Interactive Task Learning
We apply crowdsourcing to interactive hierarchical task learning from demonstration to allow remote users to teach tasks to a real robot.
Improving Navigation with Learning from Demonstration
We apply learning from demonstration techniques to further improve the performance of already-effective autonomous mobile robots.
Plan Recovery through Object Substitution
We present object substitution as a solution to repairing plans in open-world robotic applications.
Reinforcement Learning from Demonstration
In a series of projects, we have explored the intersection of reinforcement learning (RL) and learning from demonstration, with the goal of leveraging human input as prior knowledge of the task, while also enabling the agent to learn to surpass the (typically suboptimal) performance of the teacher.
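One common way to combine the two paradigms is to bias an RL agent's initial value estimates toward a teacher's demonstrated actions, then let standard RL refine (and potentially surpass) that behavior. The following is a minimal illustrative sketch of this idea on a toy chain world; the environment, bonus scheme, and all names are hypothetical, not the specific algorithms developed in these projects.

```python
import random

# Toy chain MDP: states 0..4, goal at state 4, actions move left/right.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def q_learning_from_demo(demo, episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    # Seed the value table with a small bonus for demonstrated (state, action)
    # pairs, so early exploration follows the (possibly suboptimal) teacher.
    for s, a in demo:
        Q[(s, a)] += 0.1
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda a: Q[(s, a)])
            s2, r, done = step(s, a)
            # Standard Q-learning update; RL can now improve on the teacher.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2
    return Q

demo = [(0, +1), (1, +1), (2, +1), (3, +1)]  # teacher walks toward the goal
Q = q_learning_from_demo(demo)
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
```

After training, the greedy policy moves right toward the goal from every non-goal state, even though the demonstration bonus alone would not guarantee this.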
Analogical Reasoning
We leverage noisy semantic networks to answer and explain a wide spectrum of analogy questions.
Object Recognition and Grasping
We have developed an easy-to-use system for constructing an object recognition and manipulation database from crowdsourced grasp demonstrations. The method requires no additional equipment other than the robot itself, and non-expert users can demonstrate grasps through an intuitive web interface, with virtually no training required.
Goal State Learning from Demonstration
This project introduces an algorithm for learning item placements and templates from a corpus of noisy human demonstrations.
Robot Web Tools
Robot Web Tools is an ongoing collaborative effort to realize seamless, general, and implementation-independent application-layer network communication for robotics through the design of open-source network protocols and client libraries.
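To make the protocol idea concrete, the rosbridge v2.0 protocol (the transport behind Robot Web Tools clients such as roslibjs) exchanges JSON messages tagged with an "op" field over a WebSocket. The helper functions below are a hypothetical sketch of composing such messages; only the JSON field names follow the published protocol.

```python
import json

# Minimal sketch: build rosbridge-v2.0-style JSON operations. A client would
# send these strings over a WebSocket to a rosbridge server; the helper
# function names here are illustrative, not part of any official library.

def advertise(topic, msg_type):
    # Declare intent to publish on a topic.
    return json.dumps({"op": "advertise", "topic": topic, "type": msg_type})

def publish(topic, msg):
    # Publish a message (a plain dict mirroring the ROS message fields).
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

def subscribe(topic, msg_type):
    # Ask the server to forward messages from a topic.
    return json.dumps({"op": "subscribe", "topic": topic, "type": msg_type})

# Example: a velocity command as it would appear on the wire.
cmd = publish("/cmd_vel", {"linear": {"x": 0.5, "y": 0.0, "z": 0.0},
                           "angular": {"x": 0.0, "y": 0.0, "z": 0.0}})
```

Because the wire format is plain JSON, any language with a WebSocket client can participate, which is what makes the communication layer implementation-independent.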
The Robot Management System
The Robot Management System is a novel framework for bringing robotic experiments to the web.
Hierarchical Task Learning from Demonstration
We have developed learning and interaction algorithms to support a human teaching hierarchical task models to a robot using a single demonstration in the context of a mixed-initiative interaction with bidirectional communication. In particular, we have identified and implemented two important heuristics for suggesting task groupings based on the physical structure of the manipulated artifact and on the data flow between tasks. We have evaluated our algorithms with users in a simulated environment and shown both that the overall approach is usable and that the grouping suggestions significantly improve the learning and interaction.
DARPA Robotics Challenge
The DARPA Robotics Challenge seeks to develop ground robots capable of executing complex tasks in dangerous, degraded, human-engineered environments, particularly in the area of disaster response. The RAIL research group is part of the 10-university Track A DRC-HUBO team led by Drexel University. The project is a collaboration with Dmitry Berenson and Rob Lindeman at WPI. Our work focuses on a user-guided manipulation framework for high-degree-of-freedom robots operating in environments with limited communication.