
Chapter 6 — Model Identification

John Hollerbach, Wisama Khalil and Maxime Gautier

This chapter discusses how to determine the kinematic parameters and the inertial parameters of robot manipulators. Both instances of model identification are cast into a common framework of least-squares parameter estimation, and are shown to have common numerical issues relating to the identifiability of parameters, adequacy of the measurement sets, and numerical robustness. These discussions are generic to any parameter estimation problem, and can be applied in other contexts.
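To make the common framework concrete, the sketch below (an illustration, not code from the chapter) assumes a linear-in-parameters model y = W φ, where W is a regressor matrix stacked over the measurement set. The rank of W indicates how many parameters are identifiable, and its condition number flags an inadequate (poorly exciting) measurement set:

```python
import numpy as np

def estimate_parameters(W, y):
    """Least-squares estimate of phi in the linear model y = W @ phi.

    W : (m, n) regressor matrix stacked over the measurement set
    y : (m,) stacked measurements
    Returns the estimate, the numerical rank of W (number of
    identifiable parameters), and the condition number of W
    (a measure of the adequacy of the measurement set).
    """
    phi_hat, _, rank, svals = np.linalg.lstsq(W, y, rcond=None)
    cond = svals[0] / svals[-1] if svals[-1] > 0 else np.inf
    return phi_hat, rank, cond

# Toy example: recover two parameters from noisy measurements.
rng = np.random.default_rng(0)
W = rng.standard_normal((50, 2))
phi_true = np.array([1.5, -0.3])
y = W @ phi_true + 0.01 * rng.standard_normal(50)
phi_hat, rank, cond = estimate_parameters(W, y)
```

A well-conditioned W (condition number near 1) corresponds to a well-chosen set of poses or trajectories; a large condition number warns that some parameter directions are barely excited and will be estimated poorly.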

For kinematic calibration, the main aim is to identify the geometric Denavit–Hartenberg (DH) parameters, although joint-based parameters relating to the sensing and transmission elements can also be identified. Endpoint sensing or endpoint constraints can provide equivalent calibration equations. By casting all calibration methods as closed-loop calibration, the calibration index categorizes methods in terms of how many equations per pose are generated.

Inertial parameters may be estimated through the execution of a trajectory while sensing one or more components of force/torque at a joint. Load estimation of a handheld object is simplest because of full mobility and full wrist force-torque sensing. For link inertial parameter estimation, restricted mobility of links nearer the base as well as sensing only the joint torque means that not all inertial parameters can be identified. Those that can be identified are those that affect joint torque, although they may appear in complicated linear combinations.
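Because the joint-torque model is linear in the inertial parameters, τ = W(q, q̇, q̈) φ, the number of identifiable linear combinations (the base parameters) equals the numerical rank of the stacked regressor. A minimal sketch with a made-up regressor (illustrative only, not code from the chapter):

```python
import numpy as np

def base_parameter_count(W, tol=None):
    """Number of identifiable (base) parameter combinations:
    the numerical rank of the stacked regressor W in tau = W @ phi."""
    s = np.linalg.svd(W, compute_uv=False)
    if tol is None:
        tol = max(W.shape) * np.finfo(float).eps * s[0]
    return int(np.sum(s > tol))

# Toy regressor with 4 parameter columns: column 2 is zero (a parameter
# that never affects joint torque, hence unidentifiable), and column 3
# equals column 0 + column 1 (identifiable only as a linear combination).
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 2))
W = np.column_stack([A[:, 0], A[:, 1], np.zeros(100), A[:, 0] + A[:, 1]])
print(base_parameter_count(W))  # 2 identifiable combinations
```

In practice the same rank analysis (typically via QR or SVD of the regressor) is what reveals which inertial parameters drop out and which merge into the complicated linear combinations mentioned above.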

Robot calibration using a touch probe

Author  Ilian Bonev

Video ID : 425

The video shows a kinematic calibration experiment using a touch probe. The system realizes point-plane contact with several different planes; the calibration is thus based on position constraints only, without orientation.

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning, and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, when, and who to imitate. We emphasize the importance of carefully choosing the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and in combination, as well as algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Reproduction of dishwasher-unloading task based on task-precedence graph

Author  Michael Pardowitz, Raoul Zöllner, Steffen Knoop, Tamim Asfour, Kristian Regenstein, Pedram Azad, Joachim Schröder, Rüdiger Dillmann

Video ID : 103

ARMAR-III humanoid robot reproducing the task of unloading a dishwasher, based on a task-precedence graph learned from demonstrations. References: 1) T. Asfour, K. Regenstein, P. Azad, J. Schroeder, R. Dillmann: ARMAR-III: A humanoid platform for perception-action integration, Int. Workshop Human-Centered Robotic Systems (HCRS) (2006); 2) M. Pardowitz, R. Zöllner, S. Knoop, R. Dillmann: Incremental learning of tasks from user demonstrations, past experiences and vocal comments, IEEE Trans. Syst. Man Cybernet. B 37(2), 322–332 (2007).

Learning compliant motion from human demonstration

Author  Aude Billard

Video ID : 478

This video illustrates how one can teach a robot to display the right amount of stiffness to perform a task successfully. A decrease in stiffness is demonstrated by shaking the robot, while an increase in stiffness is conveyed by pressing on the robot's arm (pressure being measured through tactile sensors along the robot's arm). Reference: K. Kronander, A. Billard: Learning compliant manipulation through kinesthetic and tactile human-robot interaction, IEEE Trans. Haptics 7(3), 367–380 (2013); doi: 10.1109/TOH.2013.54.

Chapter 68 — Human Motion Reconstruction

Katsu Yamane and Wataru Takano

This chapter presents a set of techniques for reconstructing and understanding human motions measured using current motion capture technologies. We first review modeling and computation techniques for obtaining motion and force information from human motion data (Sect. 68.2). Here we show that kinematics and dynamics algorithms for articulated rigid bodies can be applied to human motion data processing, with help from models based on knowledge of anatomy and physiology. We then describe methods for analyzing human motions so that robots can segment and categorize different behaviors and use them as the basis for human motion understanding and communication (Sect. 68.3). These methods are based on statistical techniques widely used in linguistics; the two fields share the common goal of converting continuous, noisy signals into discrete symbols, so it is natural to apply similar techniques. Finally, we introduce some application examples of human motion data and models, ranging from simulated human control to humanoid robot motion synthesis.

The Crystal Ball: Predicting future motions

Author  Katsu Yamane

Video ID : 764

This video shows a demonstration of The Crystal Ball, a system that predicts future motions based on a graphical motion model. The rightmost figure represents the current motion, while the other figures represent the predicted motions.

Chapter 51 — Modeling and Control of Underwater Robots

Gianluca Antonelli, Thor I. Fossen and Dana R. Yoerger

This chapter deals with modeling and control of underwater robots. First, a brief introduction showing the constantly expanding role of marine robotics in oceanic engineering is given; this section also contains some historical background. Most of the following sections strongly overlap with the corresponding chapters presented in this handbook; hence, to avoid useless repetition, only those aspects peculiar to the underwater environment are discussed, assuming that the reader is already familiar with concepts such as fault detection systems when discussing the corresponding underwater implementation. The modeling section is presented by focusing on a coefficient-based approach capturing the most relevant underwater dynamic effects. Two sections describing the sensing and actuating systems are then given. Autonomous underwater vehicles require the implementation of mission control systems as well as guidance and control algorithms. Underwater localization is also discussed. Underwater manipulation is then briefly approached. Fault detection and fault tolerance, together with the coordinated control of multiple underwater vehicles, conclude the theoretical part of the chapter. Two final sections, reporting some successful applications and discussing future perspectives, conclude the chapter. The reader is referred to Chap. 25 for design issues.

Neptus command and control infrastructure

Author  Laboratario de Sistemas e Tecnologias Subaquaticas - Porto University

Video ID : 324

See how Neptus is used to plan, simulate, monitor, and review missions performed by autonomous vehicles. Neptus, originally developed at the Underwater Systems and Technology Laboratory, is open-source software. The NOPTILUS project is funded by the European Community's Seventh Framework Programme (ICT-FP).

Chapter 24 — Wheeled Robots

Woojin Chung and Karl Iagnemma

The purpose of this chapter is to introduce, analyze, and compare various wheeled mobile robots (WMRs) and to present several realizations and commonly encountered designs. The mobility of WMRs is discussed on the basis of the kinematic constraints resulting from the pure-rolling conditions at the contact points between the wheels and the ground. Practical robot structures are classified according to the number of wheels, and features are introduced focusing on commonly adopted designs. Realizations of omnimobile robots and articulated robots are described. Wheel–terrain interaction models are presented in order to compute forces at the contact interface. Four possible wheel–terrain interaction cases are shown on the basis of the relative stiffness of the wheel and terrain. A suspension system is required to move on uneven surfaces. Structures, dynamics, and important features of commonly used suspensions are explained.
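For a unicycle-type robot with pose (x, y, θ), the pure-rolling condition mentioned above reduces to the nonholonomic constraint ẋ sin θ − ẏ cos θ = 0 (no sideways slip). A small illustrative check, not taken from the chapter:

```python
import numpy as np

def unicycle_velocity(v, omega, theta):
    """Pose rates (xdot, ydot, thetadot) of a unicycle-type WMR under the
    pure-rolling, no-slip condition: translation only along the heading."""
    return np.array([v * np.cos(theta), v * np.sin(theta), omega])

def violates_rolling_constraint(xdot, ydot, theta, tol=1e-9):
    """Check the nonholonomic constraint xdot*sin(theta) - ydot*cos(theta) = 0,
    which forbids sideways slip."""
    return abs(xdot * np.sin(theta) - ydot * np.cos(theta)) > tol

# Velocities generated by the kinematic model satisfy the constraint...
qdot = unicycle_velocity(1.0, 0.2, np.pi / 4)
print(violates_rolling_constraint(qdot[0], qdot[1], np.pi / 4))  # False
# ...while a purely sideways velocity violates it.
print(violates_rolling_constraint(0.0, 1.0, 0.0))  # True
```

Each additional conventional wheel contributes one such constraint, which is the basis on which the chapter classifies WMR mobility.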

An omnidirectional mobile robot with active caster wheels

Author  Woojin Chung

Video ID : 325

This video shows a holonomic omnidirectional mobile robot with two active and two passive caster wheels. Each active caster is composed of two actuators: the first drives the wheel, and the second steers the wheel's orientation. Although the mechanical structure of the driving mechanism becomes somewhat complicated, conventional tires can be used for omnidirectional motion. Since the robot is overactuated, the four actuators must be carefully coordinated.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses about biological mechanisms and processes. In this chapter we provide an overview of the methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operational features. We consider both simulated and physical robots, with special consideration given to the transfer between the two worlds.

Online learning to adapt to fast environmental variations

Author  Dario Floreano

Video ID : 40

A Khepera mobile robot, equipped with a vision module, can gain fitness points by staying on the gray area only when the light is on. The light is normally off, but it can be switched on if the robot passes over the black area positioned on the other side of the arena. The robot can detect ambient light and wall color, but not the color of the floor.

Chapter 75 — Biologically Inspired Robotics

Fumiya Iida and Auke Jan Ijspeert

Throughout the history of robotics research, nature has provided numerous ideas and inspirations to robotics engineers. Small insect-like robots, for example, usually make use of reflexive behaviors to avoid obstacles during locomotion, whereas large bipedal robots are designed to control complex human-like legs for climbing up and down stairs. While providing an overview of bio-inspired robotics, this chapter particularly focuses on research which aims to employ robotic systems and technologies for a deeper understanding of biological systems. Unlike most other robotics research, where researchers attempt to develop robotic applications, these types of bio-inspired robots are generally developed to test unsolved hypotheses in the biological sciences. Through close collaboration between biologists and roboticists, bio-inspired robotics research contributes not only to elucidating challenging questions in nature but also to developing novel technologies for robotics applications. In this chapter, we first provide a brief historical background of this research area and then an overview of ongoing research methodologies. A few representative case studies detail successful instances in which robotics technologies help identify biological hypotheses. Finally, we discuss challenges and perspectives in the field.

Biologically inspired robotics (or bio-inspired robotics for short) is a very broad research area because almost all robotic systems are, in one way or another, inspired by biological systems. Therefore, there is no clear distinction between bio-inspired robots and others, and there is no commonly agreed definition [75.1]. For example, legged robots that walk, hop, and run are usually regarded as bio-inspired robots because many biological systems rely on legged locomotion for their survival. On the other hand, many robotics researchers implement biological models of motion control and navigation on wheeled platforms, which could also be regarded as bio-inspired robots [75.2].

Dynamic-rolling locomotion of GoQBot

Author  Fumiya Iida, Auke Ijspeert

Video ID : 109

This video presents the dynamic-rolling locomotion of the worm-like robot GoQBot. Unlike other conventional soft robots, which are capable of only slow motions, this platform achieves fast locomotion by exploiting the flexible deformation of its body, as inspired by nature.

Chapter 18 — Parallel Mechanisms

Jean-Pierre Merlet, Clément Gosselin and Tian Huang

This chapter presents an introduction to the kinematics and dynamics of parallel mechanisms, also referred to as parallel robots. As opposed to classical serial manipulators, the kinematic architecture of parallel robots includes closed-loop kinematic chains. As a consequence, their analysis differs considerably from that of their serial counterparts. This chapter aims at presenting the fundamental formulations and techniques used in their analysis.

Quadrupteron robot

Author  Clément Gosselin

Video ID : 52

This video demonstrates a 4-DOF, partially decoupled, SCARA-type parallel robot (the Quadrupteron). References: 1. P.L. Richard, C. Gosselin, X. Kong: Kinematic analysis and prototyping of a partially decoupled 4-DOF 3T1R parallel manipulator, ASME J. Mech. Des. 129(6), 611–616 (2007); 2. X. Kong, C. Gosselin: Forward displacement analysis of a quadratic 4-DOF 3T1R parallel manipulator: The Quadrupteron, Meccanica 46(1), 147–154 (2011); 3. C. Gosselin: Compact dynamic models for the tripteron and quadrupteron parallel manipulators, J. Syst. Control Eng. 223(I1), 1–11 (2009).

Chapter 62 — Intelligent Vehicles

Alberto Broggi, Alex Zelinsky, Ümit Özgüner and Christian Laugier

This chapter describes the emerging robotics application field of intelligent vehicles – motor vehicles that have autonomous functions and capabilities. The chapter is organized as follows. Section 62.1 provides a motivation for why the development of intelligent vehicles is important, a brief history of the field, and the potential benefits of the technology. Section 62.2 describes the technologies that enable intelligent vehicles to sense vehicle, environment, and driver state, work with digital maps and satellite navigation, and communicate with intelligent transportation infrastructure. Section 62.3 describes the challenges and solutions associated with road scene understanding – a key capability for all intelligent vehicles. Section 62.4 describes advanced driver assistance systems, which use the robotics and sensing technologies described earlier to create new safety and convenience systems for motor vehicles, such as collision avoidance, lane keeping, and parking assistance. Section 62.5 describes driver monitoring technologies that are being developed to mitigate driver fatigue, inattention, and impairment. Section 62.6 describes fully autonomous intelligent vehicles systems that have been developed and deployed. The chapter is concluded in Sect. 62.7 with a discussion of future prospects, while Sect. 62.8 provides references to further reading and additional resources.

PROUD2013 - Inside VisLab's driverless car

Author  Alberto Broggi

Video ID : 178

This video shows internal and external views of the PROUD2013 driverless-car test in downtown Parma, Italy, on July 12, 2013. It also displays the internal status of the vehicle plus some vehicle data (speed, steering angle, and perception results such as pedestrian detection, roundabout-merging alerts, freeway-merging alerts, traffic-light sensing, etc.).