
Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is known variously as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning, and imitation learning. We start with a brief historical overview of the field. We then summarize the approaches taken to answer four main questions: what, how, when, and who to imitate. We emphasize the importance of carefully choosing the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to modeling skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Demonstrations and reproduction of the task of juicing an orange

Author  Florent D'Halluin, Aude Billard

Video ID : 29

Human demonstrations of the task of juicing an orange, and reproductions by the robot in new situations where the objects are located in positions not seen in the demonstrations.

Demonstrations and reproduction of moving a chessman

Author  Sylvain Calinon, Florent Guenter, Aude Billard

Video ID : 97

A robot learns how to make a chess move from multiple demonstrations and to reproduce the skill in a new situation (a different position of the chessman) by finding a controller that satisfies both the task constraints (what-to-imitate) and constraints related to its body limitations (how-to-imitate). Reference: S. Calinon, F. Guenter, A. Billard: On learning, representing and generalizing a task in a humanoid robot, IEEE Trans. Syst. Man Cybernet. B 37(2), 286–298 (2007).

Full-body motion transfer under kinematic/dynamic disparity

Author  Sovannara Hak, Nicolas Mansard, Oscar Ramos, Layale Saab, Olivier Stasse

Video ID : 98

Offline full-body motion transfer by taking into account the kinematic and dynamic disparity between the human and the humanoid. Reference: S. Hak, N. Mansard, O. Ramos, L. Saab, O. Stasse: Capture, recognition and imitation of anthropomorphic motion, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), St. Paul (2012), pp. 3539–3540.

Demonstration by visual tracking of gestures

Author  Ales Ude

Video ID : 99

Demonstration by visual tracking of gestures. Reference: A. Ude: Trajectory generation from noisy positions of object features for teaching robot paths, Robot. Auton. Syst. 11(2), 113–127 (1993).

Demonstration by kinesthetic teaching

Author  Baris Akgun, Maya Cakmak, Karl Jiang, Andrea Thomaz

Video ID : 100

Demonstration by kinesthetic teaching with the Simon humanoid robot. Reference: B. Akgun, M. Cakmak, K. Jiang, A.L. Thomaz: Keyframe-based learning from demonstration, Int. J. Social Robot. 4(4), 343–355 (2012).

Demonstration by teleoperation of humanoid HRP-2

Author  Sylvain Calinon, Paul Evrard, Elena Gribovskaya, Aude Billard, Abderrahmane Kheddar

Video ID : 101

Demonstration by teleoperation of the HRP-2 humanoid robot. Reference: S. Calinon, P. Evrard, E. Gribovskaya, A.G. Billard, A. Kheddar: Learning collaborative manipulation tasks by demonstration using a haptic interface, Proc. Int. Conf. Adv. Robot. (ICAR) (2009), pp. 1–6.

Probabilistic encoding of motion in a subspace of reduced dimensionality

Author  Keith Grochow, Steven Martin, Aaron Hertzmann, Zoran Popovic

Video ID : 102

Probabilistic encoding of motion in a subspace of reduced dimensionality. Reference: K. Grochow, S.L. Martin, A. Hertzmann, Z. Popovic: Style-based inverse kinematics, Proc. ACM Int. Conf. Comput. Graphics Interact. Tech. (SIGGRAPH), 522–531 (2004).
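The core idea behind such encodings is that natural full-body motion occupies a much lower-dimensional manifold than the raw joint space. As a minimal sketch (not the method of the cited work, which uses a probabilistic style-based model), the following projects joint-angle trajectories onto a low-dimensional linear subspace with PCA and maps latent poses back; all data and dimensions here are synthetic:

```python
import numpy as np

def pca_encode(X, d):
    """Project motion data X (T x D joint angles) onto a d-dim subspace.

    Returns the mean pose, the basis (D x d), and the latent trajectory (T x d).
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    # The right singular vectors of the centered data are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:d].T            # D x d basis of the reduced subspace
    Z = Xc @ W              # latent coordinates of each pose
    return mu, W, Z

def pca_decode(mu, W, Z):
    """Map latent poses back to the full joint-angle space."""
    return mu + Z @ W.T

# Synthetic example: 100 poses of a 20-DOF figure that truly lie on a
# 3-dim manifold, mimicking the low intrinsic dimensionality of motion.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 3))
mixing = rng.normal(size=(3, 20))
X = latent @ mixing

mu, W, Z = pca_encode(X, d=3)
X_rec = pca_decode(mu, W, Z)
err = np.abs(X - X_rec).max()   # near zero: the subspace captures the data
```

Because the synthetic data are exactly rank 3, the 3-dim reconstruction is exact up to floating-point error; on real motion capture one would choose d by the fraction of variance retained.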

Reproduction of dishwasher-unloading task based on task-precedence graph

Author  Michael Pardowitz, Raoul Zöllner, Steffen Knoop, Tamim Asfour, Kristian Regenstein, Pedram Azad, Joachim Schröder, Rüdiger Dillmann

Video ID : 103

ARMAR-III humanoid robot reproducing the task of unloading a dishwasher, based on a task-precedence graph learned from demonstrations. References: 1) T. Asfour, K. Regenstein, P. Azad, J. Schröder, R. Dillmann: ARMAR-III: A humanoid platform for perception-action integration, Int. Workshop Human-Centered Robotic Systems (HCRS) (2006); 2) M. Pardowitz, R. Zöllner, S. Knoop, R. Dillmann: Incremental learning of tasks from user demonstrations, past experiences and vocal comments, IEEE Trans. Syst. Man Cybernet. B 37(2), 322–332 (2007).
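A task-precedence graph orders subtasks by which must be completed before which, and any valid reproduction is a topological ordering of that graph. As an illustrative sketch (the subtask names are hypothetical, not taken from the cited work), Kahn's algorithm extracts one consistent execution order:

```python
from collections import defaultdict, deque

def topological_order(tasks, precedences):
    """Return one execution order consistent with a task-precedence graph.

    precedences is a list of (before, after) pairs: 'before' must be
    executed before 'after'. Raises ValueError on a cyclic graph.
    """
    indeg = {t: 0 for t in tasks}
    succ = defaultdict(list)
    for a, b in precedences:
        succ[a].append(b)
        indeg[b] += 1
    ready = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for n in succ[t]:       # releasing t unblocks its successors
            indeg[n] -= 1
            if indeg[n] == 0:
                ready.append(n)
    if len(order) != len(tasks):
        raise ValueError("precedence graph contains a cycle")
    return order

# Hypothetical subtasks for unloading a dishwasher:
tasks = ["open_door", "pull_out_rack", "grasp_cup", "place_cup", "push_in_rack"]
precedences = [
    ("open_door", "pull_out_rack"),
    ("pull_out_rack", "grasp_cup"),
    ("grasp_cup", "place_cup"),
    ("place_cup", "push_in_rack"),
]
order = topological_order(tasks, precedences)
```

When several orderings are consistent with the learned constraints, the robot is free to choose among them, which is precisely the flexibility a precedence graph offers over a fixed demonstrated sequence.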

Incremental learning of finger manipulation with tactile capability

Author  Eric Sauser, Brenna Argall, Aude Billard

Video ID : 104

Incremental learning of a finger-manipulation skill, first demonstrated through a dataglove and then refined through kinesthetic teaching exploiting the tactile capabilities of the iCub humanoid robot. Reference: E.L. Sauser, B.D. Argall, G. Metta, A.G. Billard: Iterative learning of grasp adaptation through human corrections, Robot. Auton. Syst. 60(1), 55–71 (2012).
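The key mechanism in such incremental refinement is that each physical correction from the teacher nudges the stored model rather than replacing it. A minimal sketch (assuming a simple exponential blend of the reference posture toward the corrected posture; the cited work uses a richer tactile-driven model):

```python
import numpy as np

def apply_correction(reference_grasp, corrected_grasp, alpha=0.3):
    """Blend a kinesthetic correction into the current reference grasp.

    The stored finger configuration is shifted a fraction alpha toward
    the posture the teacher physically imposed on the hand, so repeated
    corrections gradually converge on the teacher's intended grasp.
    """
    reference_grasp = np.asarray(reference_grasp, dtype=float)
    corrected_grasp = np.asarray(corrected_grasp, dtype=float)
    return (1 - alpha) * reference_grasp + alpha * corrected_grasp

# Hypothetical 3-joint finger posture (radians), corrected twice
# toward the same teacher-imposed posture.
grasp = np.array([0.2, 0.4, 0.1])
for correction in [np.array([0.3, 0.5, 0.2]), np.array([0.3, 0.5, 0.2])]:
    grasp = apply_correction(grasp, correction)
```

The learning rate alpha trades responsiveness to corrections against robustness to teacher noise, the same trade-off any incremental teaching scheme must resolve.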

Policy refinement after demonstration

Author  Sylvain Calinon, Petar Kormushev, Darwin Caldwell

Video ID : 105

Use of stochastic optimization in the policy-parameter space to refine a skill initially learned from demonstration. Reference: S. Calinon, P. Kormushev, D.G. Caldwell: Compliant skills acquisition and multi-optima policy search with EM-based reinforcement learning, Robot. Auton. Syst. 61(4), 369–379 (2013).
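The general recipe behind such refinement is to initialize policy parameters from the demonstration, perturb them stochastically, and pull them toward the perturbations that earned high reward. The following is a minimal EM-style sketch of that loop with a toy quadratic reward, not the exact algorithm of the cited work:

```python
import numpy as np

def refine_policy(theta0, reward, n_iters=100, n_samples=20, sigma=0.2, seed=0):
    """Refine policy parameters by reward-weighted averaging of
    Gaussian perturbations (an EM-style stochastic parameter search)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iters):
        # Explore: sample perturbations around the current parameters.
        eps = rng.normal(scale=sigma, size=(n_samples, theta.size))
        rewards = np.array([reward(theta + e) for e in eps])
        # Exploit: softmax weights favor the high-reward perturbations.
        w = np.exp(rewards - rewards.max())
        theta = theta + (w[:, None] * eps).sum(axis=0) / w.sum()
    return theta

# Toy skill: parameters initialized from a "demonstration" at [0.5, -0.5]
# are refined toward the optimum of a quadratic reward at [1.0, 2.0].
target = np.array([1.0, 2.0])
reward = lambda th: -np.sum((th - target) ** 2)
theta = refine_policy([0.5, -0.5], reward)
```

Starting the search from the demonstrated parameters is what makes the combination attractive: exploration stays local to a reasonable solution instead of searching the parameter space from scratch.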