
Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art of snake-like (backbones comprised of many small links) and continuum (continuous backbone) robots. The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for kinematics for both types of robot is presented, followed by a summary of the modeling of locomotion for snake-like and continuum mechanisms.

Binary-manipulator object recovery

Author  Greg Chirikjian

Video ID : 164

Video of Greg Chirikjian's binary manipulator performing an object retrieval task for satellite-recovery applications.

Chapter 78 — Perceptual Robotics

Heinrich Bülthoff, Christian Wallraven and Martin A. Giese

Robots that share their environment with humans need to be able to recognize and manipulate objects and users, perform complex navigation tasks, and interpret and react to human emotional and communicative gestures. In all of these perceptual capabilities, the human brain, however, is still far ahead of robotic systems. Hence, taking cues from the way the human brain solves such complex perceptual tasks will help to design better robots. Similarly, once a robot interacts with humans, its behaviors and reactions will be judged by humans – movements of the robot, for example, should be fluid and graceful, and it should not evoke an eerie feeling when interacting with a user. In this chapter, we present Perceptual Robotics as the field of robotics that takes inspiration from perception research and neuroscience to, first, build better perceptual capabilities into robotic systems and, second, to validate the perceptual impact of robotic systems on the user.

Active in-hand object recognition

Author  Christian Wallraven

Video ID : 569

This video showcases an implementation of active object learning and recognition using the framework proposed by Browatzki et al. [1, 2]. In the first phase, the robot learns a visual representation of several paper cups that differ only in a few key features. The robot executes a pre-programmed exploration routine to look at each cup from all sides; the (very low-resolution) visual input is tracked and so-called key-frames are extracted that represent the (visual) exploration. After learning, the robot tries to recognize cups placed into its hands using a similar exploration routine based on visual information alone; because of the low-resolution input and the highly similar objects, however, the robot fails to make the correct decision. The video then shows the second, advanced exploration strategy, which actively seeks the view that is expected to provide maximum information about the object. For this, the robot embeds the learned visual information into a proprioceptive map indexed by the two joint angles of the hand. In this map, the robot predicts the joint-angle combination that provides the most information about the object, given the current state of exploration. The implementation uses particle filtering to track a large number of object (view) hypotheses at the same time. Since the robot now uses a multisensory representation, the subsequent object-recognition trials are all correct, despite the poor visual input and highly similar objects. References: [1] B. Browatzki, V. Tikhanoff, G. Metta, H.H. Bülthoff, C. Wallraven: Active in-hand object recognition on a humanoid robot, IEEE Trans. Robot. 30(5), 1260-1269 (2014); [2] B. Browatzki, V. Tikhanoff, G. Metta, H.H. Bülthoff, C. Wallraven: Active object recognition on a humanoid robot, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), St. Paul (2012), pp. 2021-2028.
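
The active exploration step described above can be summarized as a belief update over object hypotheses followed by an expected-information-gain criterion over candidate hand poses. The sketch below is a simplified, discrete-belief version of that idea, assuming a small set of object hypotheses, a coarse joint-angle grid, and a made-up appearance table; the published system instead uses particle filtering over a much larger hypothesis space with real visual and proprioceptive data, so every name and model here is an illustrative assumption rather than the authors' implementation.

```python
# Minimal sketch: active view selection by minimizing expected posterior entropy.
# The appearance table, pose grid, and object set are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_OBJECTS = 3                                            # candidate identities (e.g., similar cups)
POSES = [(a, b) for a in range(4) for b in range(4)]     # discretized joint-angle pairs of the hand

# Hypothetical appearance model: P(feature f | object o, pose p), shape (obj, pose, feature).
APPEARANCE = rng.dirichlet(np.ones(5), size=(N_OBJECTS, len(POSES)))

def update_belief(belief, pose_idx, feature):
    """Bayes update of the belief over object identity after seeing `feature` at `pose_idx`."""
    posterior = belief * APPEARANCE[:, pose_idx, feature]
    return posterior / posterior.sum()

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_entropy(belief, pose_idx):
    """Expected posterior entropy if the hand moves to `pose_idx` next."""
    exp_h = 0.0
    for f in range(APPEARANCE.shape[2]):
        p_f = (belief * APPEARANCE[:, pose_idx, f]).sum()   # predictive probability of feature f
        if p_f > 0:
            exp_h += p_f * entropy(update_belief(belief, pose_idx, f))
    return exp_h

def next_best_pose(belief):
    """Pick the joint-angle pair expected to reduce uncertainty the most."""
    return min(range(len(POSES)), key=lambda i: expected_entropy(belief, i))

# Toy recognition run: the true object is 1; observations are drawn from its appearance model.
belief = np.full(N_OBJECTS, 1.0 / N_OBJECTS)
true_obj = 1
for step in range(6):
    pose = next_best_pose(belief)
    obs = rng.choice(APPEARANCE.shape[2], p=APPEARANCE[true_obj, pose])
    belief = update_belief(belief, pose, obs)
    print(f"step {step}: pose={POSES[pose]}, belief={np.round(belief, 2)}")
```

Minimizing the expected posterior entropy is what drives the hand toward the most discriminative views; coupling the visual evidence to the proprioceptive pose is what lets even highly similar objects become separable after a few moves.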

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output by several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop and, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion on specific improvements to current technology and paths to commercialization.

Ladybird: An intelligent farm robot for the vegetable industry

Authors  James Underwood, Calvin Hung, Suchet Bargoti, Mark Calleija, Robert Fitch, Juan Nieto, Salah Sukkarieh

Video ID : 305

This video showcases the Ladybird, an intelligent robot for the vegetable industry. Ladybird provides a flexible platform for sensing and automating commercial vegetable farms. The solar-electric powered vehicle has a flexible drive system that allows precise motion in potentially tight environments, and the platform geometry can be configured to suit different crop configurations. The vehicle autonomously traverses the farm, gathering data from a variety of sensors, including stereo vision, hyperspectral, thermal, and LIDAR. The data is processed to provide useful information for the management and optimization of the crop, including yield mapping, phenotyping, and disease and stress detection. Ladybird is also equipped with a manipulator arm for a variety of mechanical tasks, including thinning, weeding (especially of herbicide-resistant weeds), spot spraying, and foreign-body removal, and to support research towards automated harvesting.

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, when and who to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Learning from failure I

Author  Aude Billard

Video ID : 476

This video illustrates how learning from demonstration can be bootstrapped using failed demonstrations only (in place of traditional approaches that use successful demonstrations). The algorithm is described in detail in two publications: 1) D.-H. Grollman, A. Billard: Donut as I do: Learning from failed demonstrations, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Shanghai (2011), Best Paper Award (Cognitive Robotics); 2) D.-H. Grollman, A. Billard: Robot learning from failed demonstrations, Int. J. Social Robot. 4(4), 331-342 (2012).

Chapter 15 — Robot Learning

Jan Peters, Daniel D. Lee, Jens Kober, Duy Nguyen-Tuong, J. Andrew Bagnell and Stefan Schaal

Machine learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors; conversely, the challenges of robotic problems provide both inspiration, impact, and validation for developments in robot learning. The relationship between disciplines has sufficient promise to be likened to that between physics and mathematics. In this chapter, we attempt to strengthen the links between the two research communities by providing a survey of work in robot learning for learning control and behavior generation in robots. We highlight both key challenges in robot learning as well as notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our chapter lies on model learning for control and robot reinforcement learning. We demonstrate how machine learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.

Inverse reinforcement

Author  Pieter Abbeel

Video ID : 353

This video shows a successful example of inverse reinforcement learning for acrobatic helicopter maneuvers. It illustrates apprenticeship learning algorithms, which leverage expert demonstrations to efficiently learn good controllers for the tasks demonstrated by the expert. The experimental results captured here include the first autonomous execution of a wide range of maneuvers and a complete airshow. The controllers perform as well as, and often even better than, the human expert pilot. The video illustrates a solution to the "Curse of Goal Specification" in Sect. 15.3.6, Challenges in Robot Reinforcement Learning. Reference: P. Abbeel, A. Coates, A.Y. Ng: Autonomous helicopter aerobatics through apprenticeship learning, Int. J. Robot. Res. 29(13), 1608–1639 (2010)
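
As a rough illustration of the apprenticeship-learning idea mentioned above, the following toy sketch matches feature expectations between an "expert" and a learner on a tiny chain MDP, in the spirit of projection-style apprenticeship learning. The chain MDP, the one-hot state features, the hidden "true" reward, and all function names are illustrative assumptions; the helicopter controllers in the video are obtained with far richer dynamics models, features, and trajectory-tracking control, not this toy loop.

```python
# Toy apprenticeship learning by feature-expectation matching on a 1-D chain MDP.
import numpy as np

N, GAMMA = 5, 0.9                      # states 0..4, discount factor
ACTIONS = (-1, +1)                     # move left / right on the chain
PHI = np.eye(N)                        # features: one-hot state indicators

def step(s, a):
    return min(max(s + a, 0), N - 1)

def value_iteration(reward, iters=200):
    """Greedy policy for a given per-state reward vector."""
    V = np.zeros(N)
    for _ in range(iters):
        V = np.array([max(reward[s] + GAMMA * V[step(s, a)] for a in ACTIONS)
                      for s in range(N)])
    return [max(ACTIONS, key=lambda a: reward[s] + GAMMA * V[step(s, a)])
            for s in range(N)]

def feature_expectations(policy, start=0, horizon=60):
    """Discounted feature expectations mu(pi) = sum_t gamma^t phi(s_t)."""
    mu, s = np.zeros(N), start
    for t in range(horizon):
        mu += (GAMMA ** t) * PHI[s]
        s = step(s, policy[s])
    return mu

# "Expert" demonstrations: generated from a hidden true reward (rightmost state is good).
true_reward = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
mu_expert = feature_expectations(value_iteration(true_reward))

# Projection-style loop: fit reward weights separating expert from current policies,
# run an RL (planning) step on that reward, then move the reference point.
mu_bar = feature_expectations(value_iteration(np.zeros(N)))   # some initial policy
for it in range(10):
    w = mu_expert - mu_bar                                    # current reward weights
    if np.linalg.norm(w) < 1e-3:
        break
    mu = feature_expectations(value_iteration(w @ PHI))       # RL step on learned reward
    d = mu - mu_bar
    if d @ d < 1e-12:
        break
    mu_bar = mu_bar + (d @ (mu_expert - mu_bar)) / (d @ d) * d   # projection update
    print(f"iter {it}: gap to expert = {np.linalg.norm(mu_expert - mu_bar):.4f}")
```

The loop stops once the learner's feature expectations are close to the expert's, at which point any reward consistent with the features makes the learned policy perform nearly as well as the demonstrations.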

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Policy learning

Author  Peter Pastor

Video ID : 668

The video explains and demonstrates the basics of policy learning using two tasks: pool strokes and chopstick manipulation.

Chapter 6 — Model Identification

John Hollerbach, Wisama Khalil and Maxime Gautier

This chapter discusses how to determine the kinematic parameters and the inertial parameters of robot manipulators. Both instances of model identification are cast into a common framework of least-squares parameter estimation, and are shown to have common numerical issues relating to the identifiability of parameters, adequacy of the measurement sets, and numerical robustness. These discussions are generic to any parameter estimation problem, and can be applied in other contexts.

For kinematic calibration, the main aim is to identify the geometric Denavit–Hartenberg (DH) parameters, although joint-based parameters relating to the sensing and transmission elements can also be identified. Endpoint sensing or endpoint constraints can provide equivalent calibration equations. By casting all calibration methods as closed-loop calibration, the calibration index categorizes methods in terms of how many equations per pose are generated.

Inertial parameters may be estimated through the execution of a trajectory while sensing one or more components of force/torque at a joint. Load estimation of a handheld object is simplest because of full mobility and full wrist force-torque sensing. For link inertial parameter estimation, restricted mobility of links nearer the base as well as sensing only the joint torque means that not all inertial parameters can be identified. Those that can be identified are those that affect joint torque, although they may appear in complicated linear combinations.
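
As a concrete illustration of the linear-in-parameters, least-squares formulation described above, the following sketch identifies the grouped parameters of a single revolute link from simulated torque measurements and checks the conditioning of the regressor (the numerical issue the chapter raises regarding identifiability and adequacy of the measurement set). The 1-DOF model, trajectory, noise level, and parameter grouping are illustrative assumptions, not the chapter's worked example.

```python
# Minimal sketch: joint torque is linear in the grouped inertial parameters,
# so stacking samples along an exciting trajectory gives an overdetermined
# linear system solvable by ordinary least squares.
import numpy as np

rng = np.random.default_rng(1)
g = 9.81

# "True" grouped parameters of a 1-DOF link: theta = [I_zz, m*l] (about the joint axis).
theta_true = np.array([0.05, 0.30])

# Simulated trajectory samples (in practice: measured q, qdd and joint torque).
t = np.linspace(0.0, 5.0, 500)
q = 1.2 * np.sin(2.0 * t) + 0.5 * np.sin(5.0 * t)            # an "exciting" trajectory
qdd = -1.2 * 4.0 * np.sin(2.0 * t) - 0.5 * 25.0 * np.sin(5.0 * t)

# Regressor for this toy model: tau = I_zz * qdd + (m*l) * g * cos(q)
Y = np.column_stack([qdd, g * np.cos(q)])
tau = Y @ theta_true + 0.02 * rng.standard_normal(len(t))    # noisy torque "measurements"

# Ordinary least squares, plus a conditioning check for identifiability.
theta_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
print("estimated parameters:", np.round(theta_hat, 4))
print("regressor condition number:", round(np.linalg.cond(Y), 1))
```

A poorly exciting trajectory (or an unidentifiable parameter combination) shows up directly as a large condition number of the regressor, which is why trajectory design and parameter grouping matter as much as the estimator itself.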

Dynamic identification of a parallel robot: Trajectory without load

Author  Maxime Gautier

Video ID : 488

This video shows a trajectory without payload used to identify the dynamic parameters and joint drive gains of the Orthoglide parallel prototype robot. Details and results are given in the paper: S. Briot, M. Gautier: Global identification of joint drive gains and dynamic parameters of parallel robots, Multibody Syst. Dyn. 33(1), 3-26 (2015); doi 10.1007/s11044-013-9403-6

Chapter 15 — Robot Learning

Jan Peters, Daniel D. Lee, Jens Kober, Duy Nguyen-Tuong, J. Andrew Bagnell and Stefan Schaal

Machine learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors; conversely, the challenges of robotic problems provide both inspiration, impact, and validation for developments in robot learning. The relationship between disciplines has sufficient promise to be likened to that between physics and mathematics. In this chapter, we attempt to strengthen the links between the two research communities by providing a survey of work in robot learning for learning control and behavior generation in robots. We highlight both key challenges in robot learning as well as notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our chapter lies on model learning for control and robot reinforcement learning. We demonstrate how machine learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.

Inverted helicopter hovering

Author  Pieter Abbeel

Video ID : 352

An example of simulation-based optimization using a learned forward model. This brief video shows a successful application of reinforcement learning to the design of a controller for sustained inverted flight of an autonomous helicopter. The authors began by learning a stochastic, nonlinear forward model of the helicopter’s dynamics. Then, a reinforcement learning algorithm was applied to automatically learn a controller for autonomous inverted hovering. The video illustrates Sect. 15.2.5, Applications of Model Learning, Springer Handbook of Robotics, 2nd edn. (2016). Reference: A.Y. Ng, A. Coates, M. Diel, V. Ganapathi, J. Schulte, B. Tse, E. Berger, E. Liang: Autonomous inverted helicopter flight via reinforcement learning, IX Int. Symp. Exp. Robot. 2004, Springer Tract. Adv. Robot. 21, 363-372 (2006)
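
The two-stage recipe described above (fit a forward model from flight data, then optimize a controller against that model) can be sketched on a much simpler system. The toy example below assumes a 1-D double integrator, a linear model class, and a crude random search over feedback gains; it is only meant to show the structure of the approach, whereas the original work fits a stochastic, nonlinear helicopter model and uses a far stronger policy-optimization step.

```python
# Minimal sketch of model-based reinforcement learning:
# 1) collect data and fit a forward model, 2) optimize a controller on the model,
# 3) run the controller on the "real" system.
import numpy as np

rng = np.random.default_rng(2)
DT = 0.1

def real_step(x, u):
    """Dynamics unknown to the learner: noisy double integrator (pos, vel)."""
    pos, vel = x
    return np.array([pos + DT * vel, vel + DT * u]) + 0.01 * rng.standard_normal(2)

# 1) Collect data with random controls and fit a linear forward model x' ~= A x + B u.
X, U, Xn = [], [], []
x = np.zeros(2)
for _ in range(500):
    u = rng.uniform(-1, 1)
    xn = real_step(x, u)
    X.append(x); U.append([u]); Xn.append(xn)
    x = xn if np.all(np.abs(xn) < 5) else np.zeros(2)    # reset if the state drifts away
Z = np.hstack([np.array(X), np.array(U)])                # regressors [x, u]
W, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)     # least-squares model fit
A_hat, B_hat = W[:2].T, W[2:].T

def simulate_cost(K, x0=np.array([1.0, 0.0]), steps=100):
    """Cost of linear feedback u = -K x, rolled out on the *learned* model."""
    x, cost = x0.copy(), 0.0
    for _ in range(steps):
        u = -K @ x
        cost += x @ x + 0.1 * u * u
        x = A_hat @ x + (B_hat * u).ravel()
    return cost

# 2) Simple policy search on the learned model: keep the best random feedback gain.
best_K, best_c = None, np.inf
for _ in range(300):
    K = rng.uniform(0, 5, size=2)
    c = simulate_cost(K)
    if c < best_c:
        best_K, best_c = K, c

# 3) Check the learned controller on the "real" system.
x = np.array([1.0, 0.0])
for _ in range(100):
    x = real_step(x, float(-best_K @ x))
print("learned gains:", np.round(best_K, 2), "final state:", np.round(x, 3))
```

The key point is that all controller evaluations happen on the learned model, so the (possibly expensive or dangerous) real system is touched only for data collection and the final test.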

Chapter 43 — Telerobotics

Günter Niemeyer, Carsten Preusche, Stefano Stramigioli and Dongjun Lee

In this chapter we present an overview of the field of telerobotics with a focus on control aspects. To acknowledge some of the earliest contributions and motivations the field has provided to robotics in general, we begin with a brief historical perspective and discuss some of the challenging applications. Then, after introducing and classifying the various system architectures and control strategies, we emphasize bilateral control and force feedback. This particular area has seen intense research work in the pursuit of telepresence. We also examine some of the emerging efforts, extending telerobotic concepts to unconventional systems and applications. Finally, we suggest some further reading for a closer engagement with the field.

Teleoperated humanoid robot - HRP: Tele-driving of lifting vehicle

Authors  Masami Kobayashi, Hisashi Moriyama, Toshiyuki Itoko, Yoshitaka Yanagihara, Takao Ueno, Kazuhisa Ohya, Kazuhito Yokoi

Video ID : 319

This video shows the teleoperation of the humanoid robot HRP using a whole-body, multimodal tele-existence system. The human operator teleoperates the humanoid robot to drive a lifting vehicle in a warehouse. Presented at ICRA 2002.

Chapter 7 — Motion Planning

Lydia E. Kavraki and Steven M. LaValle

This chapter first provides a formulation of the geometric path planning problem in Sect. 7.2 and then introduces sampling-based planning in Sect. 7.3. Sampling-based planners are general techniques applicable to a wide set of problems and have been successful in dealing with hard planning instances. For specific, often simpler, planning instances, alternative approaches exist and are presented in Sect. 7.4. These approaches provide theoretical guarantees and for simple planning instances they outperform sampling-based planners. Section 7.5 considers problems that involve differential constraints, while Sect. 7.6 overviews several other extensions of the basic problem formulation and proposed solutions. Finally, Sect. 7.8 addresses some important and more advanced topics related to motion planning.

Simulation of a large crowd

Author  Dinesh Manocha

Video ID : 21

Motion-planning methods can be used to simulate a large crowd, which is a system with a very large number of degrees of freedom. This video illustrates an approach that uses an optimization method to compute a biomechanically energy-efficient, collision-free trajectory for each agent. Many emergent phenomena, such as lane formation, arise.