Jan Peters, Daniel D. Lee, Jens Kober, Duy Nguyen-Tuong, J. Andrew Bagnell and Stefan Schaal
Machine learning offers robotics a framework and a set of tools for designing sophisticated, hard-to-engineer behaviors; conversely, the challenges of robotic problems provide inspiration, impact, and validation for developments in robot learning. The relationship between the disciplines has sufficient promise to be likened to that between physics and mathematics. In this chapter, we attempt to strengthen the links between the two research communities by providing a survey of work on learning control and behavior generation in robots. We highlight both key challenges in robot learning and notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our chapter lies on model learning for control and on robot reinforcement learning. We demonstrate how machine learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.
Inverse reinforcement learning
Author: Pieter Abbeel
Video ID: 353
This video shows a successful example of inverse reinforcement learning for acrobatic helicopter maneuvers. It illustrates apprenticeship learning algorithms, which leverage expert demonstrations to efficiently learn good controllers for tasks as demonstrated by an expert. The experimental results captured here include the first autonomous execution of a wide range of maneuvers and a complete airshow. The controllers perform as well as, and often even better than, the human expert pilot.
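The core idea behind such apprenticeship learning algorithms is that the learner never needs the true reward: it only needs to match the expert's feature expectations, since any reward linear in those features then scores the learner as well as the expert. The following is a minimal sketch of this feature-matching loop on a toy deterministic chain MDP, not the helicopter setup of the video; the one-hot state features, the fixed start state, and the simple projection-style weight update w = mu_E - mu are illustrative assumptions.

```python
import numpy as np

# Toy deterministic chain MDP: states 0..4, actions 0 (left) / 1 (right).
N_STATES, GAMMA, HORIZON = 5, 0.9, 20

def step(s, a):
    """Deterministic transition: move along the chain, clipped at the ends."""
    return min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)

def feature(s):
    """One-hot state-indicator features (an illustrative choice)."""
    phi = np.zeros(N_STATES)
    phi[s] = 1.0
    return phi

def feature_expectations(policy, s0=0):
    """Discounted feature counts mu of rolling out a deterministic policy."""
    mu, s = np.zeros(N_STATES), s0
    for t in range(HORIZON):
        mu += GAMMA**t * feature(s)
        s = step(s, policy[s])
    return mu

def best_policy(w):
    """Value iteration under the candidate reward r(s) = w . phi(s)."""
    V = np.zeros(N_STATES)
    for _ in range(100):
        V = np.array([w[s] + GAMMA * max(V[step(s, 0)], V[step(s, 1)])
                      for s in range(N_STATES)])
    return [int(V[step(s, 1)] >= V[step(s, 0)]) for s in range(N_STATES)]

# "Expert demonstration": a policy that always moves right.
mu_E = feature_expectations([1] * N_STATES)

# Apprenticeship loop: pick reward weights from the feature mismatch,
# solve the resulting MDP, and re-measure the learner's features.
policy = [0] * N_STATES
mu = feature_expectations(policy)
for _ in range(10):
    w = mu_E - mu          # reward weights that favor under-visited features
    policy = best_policy(w)
    mu = feature_expectations(policy)

print(np.linalg.norm(mu_E - mu))  # mismatch shrinks to ~0: expert matched
```

Once the learner's feature expectations match the expert's, its return under *any* reward linear in these features equals the expert's, which is why demonstrations can stand in for an explicit reward specification.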
The video illustrates a solution to the curse of goal specification discussed in Sect. 15.3.6, Challenges in Robot Reinforcement Learning.
Reference: P. Abbeel, A. Coates, A.Y. Ng: Autonomous helicopter aerobatics through apprenticeship learning, Int. J. Robot. Res. 29(13), 1608–1639 (2010)