Chapter 14 — AI Reasoning Methods for Robotics

Michael Beetz, Raja Chatila, Joachim Hertzberg and Federico Pecora

Artificial intelligence (AI) reasoning technology involving, e.g., inference, planning, and learning, has a track record with a healthy number of successful applications. So can it be used as a toolbox of methods for autonomous mobile robots? Not necessarily, as reasoning on a mobile robot about its dynamic, partially known environment may differ substantially from that in knowledge-based pure software systems, where most of the named successes have been registered. Moreover, the robot's current knowledge about its environment cannot be given a priori, but needs to be updated from sensor data, which involves the challenging problems of symbol grounding and knowledge base change. This chapter sketches the main robotics-relevant topics of symbol-based AI reasoning. Basic methods of knowledge representation and inference are described in general, covering both logic- and probability-based approaches. The chapter first gives a motivation by example: to what extent does symbolic reasoning have the potential of helping robots perform in the first place? Then (Sect. 14.2), we sketch the landscape of representation languages available for the endeavor. After that (Sect. 14.3), we present approaches and results for several types of practical, robotics-related reasoning tasks, with an emphasis on temporal and spatial reasoning. Plan-based robot control is described in some more detail in Sect. 14.4. Section 14.5 concludes.

From knowledge grounding to dialogue processing

Author  Séverin Lemaignan, Rachid Alami

Video ID : 705

This 2012 video documents the entire process of perspective-aware knowledge acquisition, knowledge representation and storage, and dialogue understanding. It demonstrates several examples of the natural interaction of a human with a PR2 robot, including speech recognition and action execution.

Chapter 55 — Space Robotics

Kazuya Yoshida, Brian Wilcox, Gerd Hirzinger and Roberto Lampariello

In the space community, any unmanned spacecraft can be called a robotic spacecraft. However, space robots are considered to be more capable devices that can facilitate manipulation, assembly, or servicing functions in orbit as assistants to astronauts, or that can extend the areas and abilities of exploration on remote planets as surrogates for human explorers.

In this chapter, a concise digest of the historical overview and technical advances of two distinct types of space robotic systems, orbital robots and surface robots, is provided. In particular, Sect. 55.1 describes orbital robots, and Sect. 55.2 describes surface robots. In Sect. 55.3, the mathematical modeling of the dynamics and control, using reference equations, is discussed. Finally, advanced topics for future space exploration missions are addressed in Sect. 55.4.

DLR GETEX manipulation experiments on ETS-VII

Author  Gerd Hirzinger, Klaus Landzettel

Video ID : 332

This is a video record of the remote control of the first free-flying space robot ETS-VII from the DLR ground control station in Tsukuba, done in close cooperation with Japan’s NASDA (today’s JAXA). The video shows a visual-servoing task in which the robot moves autonomously to a reference position defined by visual markers placed on the experimental task board. In view are the true camera measurements (top left, end-effector camera; top right, side camera), the control room in the ground control station (bottom left), and the robot simulation environment (bottom right), which was used as a predictive simulation tool.

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these robots have opened up new and unforeseen application domains and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design are introduced, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the required perception abilities for interaction. Subsequently, motion-planning techniques for human environments are covered, including biomechanically safe, risk-metric-based, and human-aware planning. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

An assistive, decision-and-control architecture for force-sensitive, hand-arm systems driven via human-machine interfaces (MM1)

Author  Jörn Vogel, Sami Haddadin, John D. Simeral, Daniel Bacher, Beata Jarosiewicz, Leigh R. Hochberg, John P. Donoghue, Patrick van der Smagt

Video ID : 619

The video shows the "grasp" and "release" skills demonstrated in a 1-D control task using the Braingate2 neural-interface system. The robot is controlled through a multipriority Cartesian impedance controller and its behavior is extended with collision detection and reflex reaction. Furthermore, virtual workspaces are added to ensure safety. On top of this, a decision-and-control architecture, which uses sensory information available from the robotic system to evaluate the current state of task execution, is employed.

Human-robot interactions

Author  J.Y.S. Luh, Shuyi Hu

Video ID : 613

In human-robot cooperative tasks, such as jointly carrying a rigid object, the robot is required to memorize different trajectories for different assignments and to automatically retrieve the proper one in real time whenever an assignment is repeated. To start a task, the human leads the robot along a suitable trajectory and thereby achieves the desired goal; for every new task, the human is required to lead the robot again. During this process, the trajectories are recorded and stored in memory as "skillful trajectories" for later use. Reference: J.Y.S. Luh, S. Hu: Interactions and motions in human-robot coordination, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Detroit (1999), Vol. 4, pp. 3171–3176; doi: 10.1109/ROBOT.1999.774081.
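The record-and-retrieve scheme described above can be sketched as follows; the class and method names are hypothetical illustrations, not Luh and Hu's actual implementation.

```python
import numpy as np

class TrajectoryMemory:
    """Stores human-demonstrated 'skillful trajectories' keyed by assignment
    label, and retrieves the proper one when an assignment is repeated
    (a hypothetical sketch of the scheme, not the authors' implementation)."""

    def __init__(self):
        self._store = {}  # assignment label -> (N, dof) array of waypoints

    def record(self, assignment, waypoints):
        # During the human-led demonstration, the sampled waypoints are
        # stored under the assignment's label.
        self._store[assignment] = np.asarray(waypoints, dtype=float)

    def retrieve(self, assignment):
        # When the assignment is repeated, the stored trajectory is
        # returned for the robot to follow autonomously.
        if assignment not in self._store:
            raise KeyError(f"no demonstration recorded for {assignment!r}")
        return self._store[assignment]

# Usage: the human leads the robot once; afterwards the trajectory is replayed.
memory = TrajectoryMemory()
demo = [[0.0, 0.0], [0.1, 0.2], [0.2, 0.4]]  # toy 2-DOF waypoints
memory.record("carry-beam", demo)
replay = memory.retrieve("carry-beam")
```

In a real system the stored waypoints would feed a trajectory-following controller; the dictionary lookup here only illustrates the memorize-then-retrieve structure.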

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs, including materials and fabrication.

RoACH: a 2.4 gram, untethered, crawling hexapod robot

Author  Aaron M. Hoover, Erik Steltz, Ronald S. Fearing

Video ID : 286

The robotic autonomous crawling hexapod (RoACH) is made using lightweight composites with integrated flexural hinges. It is actuated by two shape-memory-alloy wires and controlled by a PIC microprocessor. It can communicate over IrDA and run untethered for more than nine minutes on a single charge.

Chapter 75 — Biologically Inspired Robotics

Fumiya Iida and Auke Jan Ijspeert

Throughout the history of robotics research, nature has provided numerous ideas and inspirations to robotics engineers. Small insect-like robots, for example, usually make use of reflexive behaviors to avoid obstacles during locomotion, whereas large bipedal robots are designed to control complex human-like legs for climbing up and down stairs. While providing an overview of bio-inspired robotics, this chapter particularly focuses on research which aims to employ robotic systems and technologies for a deeper understanding of biological systems. Unlike most other robotics research, where researchers attempt to develop robotic applications, these types of bio-inspired robots are generally developed to test unsolved hypotheses in the biological sciences. Through close collaborations between biologists and roboticists, bio-inspired robotics research contributes not only to elucidating challenging questions in nature but also to developing novel technologies for robotics applications. In this chapter, we first provide a brief historical background of this research area and then an overview of ongoing research methodologies. A few representative case studies detail successful instances in which robotics technologies help to identify biological hypotheses. Finally, we discuss challenges and perspectives in the field.

Biologically inspired robotics (or bio-inspired robotics for short) is a very broad research area because almost all robotic systems are, in one way or another, inspired by biological systems. Therefore, there is no clear distinction between bio-inspired robots and others, and there is no commonly agreed definition [75.1]. For example, legged robots that walk, hop, and run are usually regarded as bio-inspired robots because many biological systems rely on legged locomotion for their survival. On the other hand, many robotics researchers implement biological models of motion control and navigation on wheeled platforms, which could also be regarded as bio-inspired robots [75.2].

MIT Compass Gait Robot - Locomotion over rough terrain

Author  Fumiya Iida, Auke Ijspeert

Video ID : 111

This video shows an experiment with the MIT Compass Gait Robot for locomotion over rough terrain. The platform takes advantage of the point feet of compass-gait robots, which are usually advantageous for locomotion over challenging, rough terrain. The motion controller uses a simple oscillator, exploiting the intrinsic dynamic stability of the robot.
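The open-loop oscillator idea can be illustrated with a minimal sketch; the function name, amplitude, and frequency below are assumptions for illustration, not the actual MIT controller's parameters.

```python
import math

def hip_command(t, amplitude=0.4, frequency=1.2, phase=0.0):
    """Open-loop sinusoidal swing-leg command: no terrain feedback is used;
    the compass gait's intrinsic dynamic stability absorbs disturbances.
    (Illustrative values, not the MIT controller's actual parameters.)"""
    return amplitude * math.sin(2.0 * math.pi * frequency * t + phase)

# Sample one stride's worth of commands at 20 ms intervals.
commands = [hip_command(0.02 * k) for k in range(42)]
```

The point of the sketch is that the command depends only on time, not on the terrain: stability comes from the passive mechanics rather than from sensing.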

Chapter 26 — Flying Robots

Stefan Leutenegger, Christoph Hürzeler, Amanda K. Stowers, Kostas Alexis, Markus W. Achtelik, David Lentink, Paul Y. Oh and Roland Siegwart

Unmanned aircraft systems (UASs) have drawn increasing attention recently, owing to advancements in related research, technology, and applications. While having been deployed successfully in military scenarios for decades, civil use cases have lately been tackled by the robotics research community.

This chapter overviews the core elements of this highly interdisciplinary field; the reader is guided through the design process of aerial robots for various applications, starting with a qualitative characterization of the different types of UAS. Design and modeling are closely related, forming a typically iterative process of drafting and analyzing the related properties. We therefore give an overview of aerodynamics and dynamics, as well as their application to fixed-wing, rotary-wing, and flapping-wing UAS, including related analytical tools and practical guidelines. Respecting use-case-specific requirements and core autonomous-robot demands, we finally provide guidelines for the related system-integration challenges.

Robotic insects make first controlled flight

Author  Robert J. Wood

Video ID : 697

First flight results of the RoboBee project.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operation features. We consider both simulated and physical robots with special consideration to the transfer between the two worlds.

Evolution of cooperative and communicative behaviors

Author  Stefano Nolfi, Joachim De Greeff

Video ID : 117

A group of two e-puck robots is evolved for the capacity to reach two circular areas and to move back and forth between them. The robots are provided with infrared sensors; a camera, with which they can perceive the relative position of the other robot; a microphone, with which they can sense the sound signal produced by the other robot; two motors, which set the desired speeds of the two wheels; and a speaker to emit sound signals. The evolved robots coordinate and cooperate on the basis of an evolved communication system which includes several implicit and explicit signals, constituted, respectively, by the relative positions the robots assume in the environment, as perceived through their cameras, and by the sounds of varying frequencies they emit, as perceived through their microphones.

Chapter 6 — Model Identification

John Hollerbach, Wisama Khalil and Maxime Gautier

This chapter discusses how to determine the kinematic parameters and the inertial parameters of robot manipulators. Both instances of model identification are cast into a common framework of least-squares parameter estimation, and are shown to have common numerical issues relating to the identifiability of parameters, adequacy of the measurement sets, and numerical robustness. These discussions are generic to any parameter estimation problem, and can be applied in other contexts.

For kinematic calibration, the main aim is to identify the geometric Denavit–Hartenberg (DH) parameters, although joint-based parameters relating to the sensing and transmission elements can also be identified. Endpoint sensing or endpoint constraints can provide equivalent calibration equations. By casting all calibration methods as closed-loop calibration, the calibration index categorizes methods in terms of how many equations per pose are generated.

Inertial parameters may be estimated through the execution of a trajectory while sensing one or more components of force/torque at a joint. Load estimation of a handheld object is simplest because of full mobility and full wrist force-torque sensing. For link inertial parameter estimation, restricted mobility of links nearer the base as well as sensing only the joint torque means that not all inertial parameters can be identified. Those that can be identified are those that affect joint torque, although they may appear in complicated linear combinations.
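The common least-squares framework described above can be sketched with a toy one-link example. The model tau = I*qdd + c*sin(q) + fv*qd and all numbers below are illustrative assumptions, not the chapter's equations; the point is that torque is linear in the unknown parameters, so a regressor matrix built from the measured trajectory yields the estimate directly.

```python
import numpy as np

# Hypothetical one-link arm, linear in theta = [I, c, fv]:
#   tau = I*qdd + c*sin(q) + fv*qd
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 200)

# An exciting trajectory mixing two frequencies, with analytic derivatives.
q = 1.5 * np.sin(2 * np.pi * t) + 0.8 * np.sin(5.0 * t)
qd = 3.0 * np.pi * np.cos(2 * np.pi * t) + 4.0 * np.cos(5.0 * t)
qdd = -6.0 * np.pi**2 * np.sin(2 * np.pi * t) - 20.0 * np.sin(5.0 * t)

theta_true = np.array([0.1, 1.5, 0.3])     # inertia, gravity coeff., viscous friction
W = np.column_stack([qdd, np.sin(q), qd])  # regressor: one row per sample
tau = W @ theta_true + 0.01 * rng.standard_normal(t.size)  # noisy joint torque

# Least-squares estimate; the condition number of W flags identifiability
# problems (a poorly exciting trajectory gives a large condition number).
theta_hat, *_ = np.linalg.lstsq(W, tau, rcond=None)
print("estimate:", theta_hat, "condition number:", np.linalg.cond(W))
```

In the multi-link case the regressor columns correspond to the identifiable linear combinations of inertial parameters mentioned above, and rank-revealing decompositions of W expose which combinations those are.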

Dynamic identification of the Kuka LWR: Trajectory without load

Author  Maxime Gautier

Video ID : 482

This video shows a trajectory without load, used to identify the dynamic parameters of the links, the load, and the torque-sensor gain of the Kuka LWR manipulator. Details and results are given in the papers: A. Jubien, M. Gautier, A. Janot: Dynamic identification of the Kuka LWR robot using motor torques and joint torque sensors data, Proc. 19th IFAC World Congress, Cape Town (2014) pp. 8391–8396; M. Gautier, A. Jubien: Force calibration of the Kuka LWR-like robots including embedded joint torque sensors and robot structure, Proc. IEEE/RSJ Int. Conf. Intel. Robot. Syst. (IROS), Chicago (2014) pp. 416–421.

Chapter 13 — Behavior-Based Systems

François Michaud and Monica Nicolescu

Nature is filled with examples of autonomous creatures capable of dealing with the diversity, unpredictability, and rapidly changing conditions of the real world. Such creatures must make decisions and take actions based on incomplete perception, time constraints, limited knowledge about the world, cognition, reasoning and physical capabilities, in uncontrolled conditions and with very limited cues about the intent of others. Consequently, one way of evaluating intelligence is based on the creature’s ability to make the most of what it has available to handle the complexities of the real world. The main objective of this chapter is to explain behavior-based systems and their use in autonomous control problems and applications. The chapter is organized as follows. Section 13.1 overviews robot control, introducing behavior-based systems in relation to other established approaches to robot control. Section 13.2 follows by outlining the basic principles of behavior-based systems that make them distinct from other types of robot control architectures. The concept of basis behaviors, the means of modularizing behavior-based systems, is presented in Sect. 13.3. Section 13.4 describes how behaviors are used as building blocks for creating representations for use by behavior-based systems, enabling the robot to reason about the world and about itself in that world. Section 13.5 presents several different classes of learning methods for behavior-based systems, validated on single-robot and multirobot systems. Section 13.6 provides an overview of various robotics problems and application domains that have successfully been addressed or are currently being studied with behavior-based control. Finally, Sect. 13.7 concludes the chapter.

Toto

Author  Maja J. Mataric

Video ID : 35

This video, of work done in the early 1990s, shows Toto, which introduced the use of distributed representation into behavior-based systems. Reference: M.J. Matarić: Integration of representation into goal-driven behavior-based robots, IEEE Trans. Robot. Autom. 8(3), 304–312 (1992)