Reinforcement learning (RL) has long been recognized as an approach animals use to learn and adapt, by associating positive and negative stimuli with situations and responses. When RL was introduced into artificial intelligence, simple algorithms were developed that operate with little prior knowledge and carry strong theoretical guarantees of convergence to optimal behavior. Yet precisely because of this lack of prior knowledge, these algorithms often require far too many experiences to learn, preventing them from being practical in most real-world situations. Much progress has been made toward mitigating this sample-complexity problem by incorporating prior knowledge of various forms into the learning process.
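As an illustration of the kind of simple, prior-knowledge-free algorithm described above, here is a minimal tabular Q-learning sketch. The `ChainEnv` toy environment, the hyperparameters, and all names are hypothetical, chosen only to make the example self-contained; they are not from the source.

```python
import random
from collections import defaultdict


class ChainEnv:
    """Hypothetical toy environment: a 4-state chain; reaching position 3 gives reward 1."""
    actions = [0, 1]  # 0 = move left, 1 = move right

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        self.pos = max(0, min(3, self.pos + (1 if action == 1 else -1)))
        reward = 1.0 if self.pos == 3 else 0.0
        return self.pos, reward, self.pos == 3  # (next_state, reward, done)


def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Learn action values from reward alone, with no prior knowledge of the task."""
    Q = defaultdict(float)  # Q[(state, action)] -> estimated return
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Temporal-difference update toward the bootstrapped target.
            best_next = max(Q[(next_state, a)] for a in env.actions)
            target = reward if done else reward + gamma * best_next
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = next_state
    return Q
```

With enough episodes the greedy policy reaches the goal, but the early episodes wander for hundreds of steps before the first reward is ever seen, which is the sample-complexity problem in miniature.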
In a series of projects, we have explored the intersection of RL and learning from demonstration, with the goal of leveraging human input as prior knowledge about the task while also enabling the agent to learn to surpass the (typically suboptimal) performance of the teacher.
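One common way to combine the two ideas, sketched here under our own assumptions rather than as the specific method used in these projects, is to seed a value function from demonstrated trajectories so that subsequent RL starts from the teacher's behavior instead of from scratch. The trajectory format and function name are hypothetical.

```python
from collections import defaultdict


def q_from_demonstrations(demos, gamma=0.99):
    """Seed a Q-table with discounted Monte Carlo returns along demonstrated trajectories.

    demos: list of trajectories, each a list of (state, action, reward) tuples.
    """
    Q = defaultdict(float)
    for trajectory in demos:
        G = 0.0
        # Walk each demonstration backwards, accumulating the discounted return.
        for state, action, reward in reversed(trajectory):
            G = reward + gamma * G
            # Keep the best return observed so far for each state-action pair.
            Q[(state, action)] = max(Q[(state, action)], G)
    return Q
```

A Q-table initialized this way can then be handed to an ordinary RL update rule, which refines the demonstrated values from further experience; this is what allows the agent to eventually surpass a suboptimal teacher.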