As robots become increasingly prevalent in human environments, situations will arise in which a robot needs to interrupt a human to learn a task, signal task completion, report a problem, or offer a service. Interruptions are always distracting and often carry costs that manifest as task performance penalties, antipathy, and even catastrophe. These costs can be mitigated by interrupting at an "appropriate" moment, a notion captured by the idea of interruptibility. The goal of this project is to use the onboard sensors of a mobile robot to accurately classify the moment-to-moment interruptibility of collocated humans that the robot might sense in its environment.
We have developed a system that uses the latest advances in computer vision to derive features from Kinect data, which are fed through a Latent-Dynamic Conditional Random Field (LDCRF) to accurately classify the interruptibility of people. Our contributions so far are:
- Development of an online system that can be deployed on a mobile robot to derive the interruptibility of humans from visual data
- Verification of the LDCRF as the most accurate and consistent model when classifying interruptibility on a continuous stream of data
- An evaluation of the task performance penalties (or the lack thereof in some cases) on humans when a robot is equipped with our classification system
(Most of the code for this project can be found in our GitHub repositories)
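To give a flavor of the pipeline described above, here is a minimal, self-contained sketch of sequence-based interruptibility classification. It is not the project's actual code: the feature names (`head_xy`, `head_yaw`), the heuristic per-frame scores, and the two-state label set are all illustrative assumptions, and a hand-tuned Viterbi smoother stands in for a trained LDCRF. The point it demonstrates is why a sequence model helps on a continuous stream: the transition penalty suppresses frame-to-frame label flicker that a per-frame classifier would produce.

```python
# Illustrative sketch only (hypothetical features and scores); Viterbi
# smoothing over per-frame scores stands in for a trained LDCRF.
import math

STATES = ("interruptible", "busy")  # assumed two-state label set


def frame_scores(skeleton):
    """Heuristic per-frame log-scores for each state (illustrative only)."""
    x, y = skeleton["head_xy"]               # head position rel. to robot (m)
    dist = math.hypot(x, y)
    facing = math.cos(skeleton["head_yaw"])  # 1.0 = facing the robot
    interruptible = facing - 0.3 * dist      # nearby and facing -> interruptible
    return {"interruptible": interruptible, "busy": -interruptible}


def viterbi(frames, switch_penalty=1.0):
    """Most likely state sequence; the penalty discourages label flicker."""
    prev = frame_scores(frames[0])
    back = []
    for f in frames[1:]:
        scores = frame_scores(f)
        cur, ptr = {}, {}
        for s in STATES:
            # Best predecessor state, paying a cost for switching labels.
            best = max(STATES,
                       key=lambda p: prev[p] - (switch_penalty if p != s else 0.0))
            cur[s] = scores[s] + prev[best] - (switch_penalty if best != s else 0.0)
            ptr[s] = best
        back.append(ptr)
        prev = cur
    # Backtrack from the best final state.
    state = max(STATES, key=prev.get)
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]


# Toy stream: three frames facing the robot, then three turned away.
frames = ([{"head_xy": (1.0, 0.0), "head_yaw": 0.0}] * 3
          + [{"head_xy": (1.0, 0.0), "head_yaw": math.pi}] * 3)
print(viterbi(frames))
# → ['interruptible', 'interruptible', 'interruptible', 'busy', 'busy', 'busy']
```

In the real system the per-frame scores come from learned potentials over vision-derived features rather than hand-set heuristics, and the LDCRF additionally models hidden sub-structure within each label, but the temporal-smoothing intuition is the same.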