Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann
This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is known variously as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, when and whom to imitate. We emphasize the importance of carefully choosing the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to modeling skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.
Learning from failure II
Author Aude Billard
Video ID : 477
This video illustrates, in a second example, how learning from demonstration can benefit from failed demonstrations (as opposed to learning from successful demonstrations only). Here, the robot Robota must learn to coordinate its two arms in a timely manner, so that one arm hits the ball with the racket right on time after the other arm has sent the ball flying by striking the catapult. More details on this work are available in: A. Rai, G. de Chambrier, A. Billard: Learning from failed demonstrations in unreliable systems, Proc. IEEE-RAS Int. Conf. Humanoid Robots (Humanoids), Atlanta (2013), pp. 410–416; doi: 10.1109/HUMANOIDS.2013.7030007