
Chapter 80 — Roboethics: Social and Ethical Implications

Gianmarco Veruggio, Fiorella Operto and George Bekey

This chapter outlines the main developments in roboethics, that is, the applied ethics concerned with the ethical, legal, and societal aspects of robotics, nine years after a worldwide debate on the subject opened up. Today, roboethics not only counts several thousand voices on the Web, but is also the subject of a substantial literature covering almost all robotics applications, and of hundreds of projects, workshops, and conferences. This increasing interest, and the sometimes fierce debate, expresses the perception among scientists, manufacturers, and users that professional guidelines and ethical indications for robotics in society are needed.

Some of the issues presented in the chapter are well known to engineers but less known, or unknown, to scholars of the humanities, and vice versa. However, because the subject cuts across many disciplines and is complex, articulated, and often misrepresented, some of the fundamental concepts relating to ethics in science and technology are recalled and clarified.

A detailed taxonomy of sensitive areas is presented. It is based on a multi-year study, referred to by scientists and scholars, whose result is the EURON Roboethics Roadmap. This taxonomy identifies the most evident, urgent, and sensitive ethical problems in the main application fields of robotics, leaving more in-depth research to further studies.

Roboethics: Introduction

Author  Fiorella Operto

Video ID : 773

Introduction to Ethical, Legal and Societal issues. This is the first time in history that humanity is nearing the achievement of replicating an intelligent and autonomous entity. This compels the scientific community to examine closely the very concept of intelligence – in humans, in animals, and in the mechanical – from a cybernetic standpoint. In fact, complex concepts like autonomy, learning, consciousness, evaluation, free will, decision making, freedom, emotions, and many others need to be analyzed, taking into account that the same concept may not have the same semantic meaning in humans, animals, and machines. From this standpoint, it can be seen as natural and necessary that robotics draws on several other disciplines, such as logic, linguistics, neuroscience, psychology, biology, physiology, philosophy, literature, natural history, anthropology, art, and design. In fact, robotics de facto combines the so-called two cultural spheres, science and humanities. The effort to design roboethics should take this specificity into account. This means that experts will consider robotics as a whole – in spite of its current early stage, which recalls a melting pot – so that they can achieve a vision of robotics’ future. “Roboethics is an applied ethics whose objective is to develop scientific/cultural/technical tools that can be shared by different social groups and beliefs. These tools aim to promote and encourage the development of robotics for the advancement of human society and individuals, and to help preventing its misuse against humankind.” (Veruggio, 2002)

Chapter 14 — AI Reasoning Methods for Robotics

Michael Beetz, Raja Chatila, Joachim Hertzberg and Federico Pecora

Artificial intelligence (AI) reasoning technology involving, e.g., inference, planning, and learning, has a track record with a healthy number of successful applications. So can it be used as a toolbox of methods for autonomous mobile robots? Not necessarily, as reasoning on a mobile robot about its dynamic, partially known environment may differ substantially from that in knowledge-based pure software systems, where most of the named successes have been registered. Moreover, recent knowledge about the robot’s environment cannot be given a priori, but needs to be updated from sensor data, involving challenging problems of symbol grounding and knowledge base change. This chapter sketches the main robotics-relevant topics of symbol-based AI reasoning. Basic methods of knowledge representation and inference are described in general, covering both logic- and probability-based approaches. The chapter first gives a motivation by example, showing to what extent symbolic reasoning has the potential of helping robots perform in the first place. Then (Sect. 14.2), we sketch the landscape of representation languages available for the endeavor. After that (Sect. 14.3), we present approaches and results for several types of practical, robotics-related reasoning tasks, with an emphasis on temporal and spatial reasoning. Plan-based robot control is described in some more detail in Sect. 14.4. Section 14.5 concludes.
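The knowledge-base-change problem mentioned above can be made concrete with a small illustration (not taken from the chapter; the door example and the sensor probabilities below are invented for this sketch): a belief about a symbolic fact is updated from a stream of noisy sensor readings with Bayes' rule, the simplest instance of the probability-based approaches the chapter covers.

# Minimal sketch: updating belief in a symbolic fact ("the door is open")
# from noisy sensor readings with Bayes' rule. The sensor model is illustrative.
def bayes_update(prior: float, reading: bool,
                 p_true_pos: float = 0.9, p_false_pos: float = 0.2) -> float:
    """Posterior probability that the fact holds, given one sensor reading."""
    if reading:                        # sensor reports "open"
        like_true, like_false = p_true_pos, p_false_pos
    else:                              # sensor reports "closed"
        like_true, like_false = 1.0 - p_true_pos, 1.0 - p_false_pos
    evidence = like_true * prior + like_false * (1.0 - prior)
    return like_true * prior / evidence

belief = 0.5                           # no prior knowledge about the door
for z in [True, True, False, True]:    # stream of sensor readings
    belief = bayes_update(belief, z)
print(f"P(door open | readings) = {belief:.3f}")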

From knowledge grounding to dialogue processing

Author  Séverin Lemaignan, Rachid Alami

Video ID : 705

This 2012 video documents the entire process of perspective-aware knowledge acquisition, knowledge representation and storage, and dialogue understanding. It demonstrates several examples of the natural interaction of a human with a PR2 robot, including speech recognition and action execution.

Chapter 47 — Motion Planning and Obstacle Avoidance

Javier Minguez, Florent Lamiraux and Jean-Paul Laumond

This chapter describes motion planning and obstacle avoidance for mobile robots. We will see how the two areas do not share the same modeling background. From the very beginning of motion planning, research has been dominated by computer science. Researchers aim at devising well-grounded algorithms with well-understood completeness and exactness properties.

The challenge of this chapter is to present both nonholonomic motion planning (Sects. 47.1–47.6) and obstacle avoidance (Sects. 47.7–47.10) issues. Section 47.11 reviews recent successful approaches that tend to embrace the whole problem of motion planning and motion control. These approaches benefit from both nonholonomic motion planning and obstacle avoidance methods.
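As a minimal illustration of what nonholonomy means here (a textbook sketch, not drawn from the chapter; the model and numbers are illustrative), a differential-drive robot cannot translate sideways: its feasible velocities are tied to its heading, which is why dedicated nonholonomic planning methods are needed.

import math

# Kinematic unicycle, the standard example of a nonholonomic system:
# dx = v cos(theta), dy = v sin(theta), dtheta = omega.
def unicycle_step(x, y, theta, v, omega, dt):
    """One Euler integration step of the unicycle kinematics."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

state = (0.0, 0.0, 0.0)
for _ in range(100):                   # drive along an arc for 5 s
    state = unicycle_step(*state, v=0.5, omega=0.3, dt=0.05)
print(state)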

Mobile-robot navigation system in outdoor pedestrian environment

Author  Chin-Kai Chang

Video ID : 711

We present a mobile-robot navigation system guided by a novel vision-based road-recognition approach. The system represents the road as a set of lines extrapolated from the detected image contour segments. These lines enable the robot to maintain its heading by centering the vanishing point in its field of view, and to correct the long-term drift from its original lateral position. We integrate odometry and our visual road-recognition system into a grid-based local map which estimates the robot pose as well as its surroundings to generate a movement path. Our road-recognition system is able to estimate the road center on a standard dataset with 25 076 images to within 11.42 cm (with respect to roads that are at least 3 m wide). It outperforms three other state-of-the-art systems. In addition, we extensively test our navigation system in four busy campus environments using a wheeled robot. Our tests cover more than 5 km of autonomous driving on a busy college campus without failure. This demonstrates the robustness of the proposed approach in handling challenges including occlusion by pedestrians, non-standard complex road markings and shapes, shadows, and miscellaneous obstacle objects.
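The heading-maintenance idea above can be sketched in a few lines (an illustration only, not the authors' code; the gain, image width, and proportional-control form are assumptions of this sketch): the robot steers so that the detected vanishing point stays centered horizontally in the image.

# Sketch: proportional heading correction from the vanishing-point offset.
def heading_correction(vp_x: float, image_width: int, gain: float = 0.002) -> float:
    """Return a yaw-rate command (rad/s) from the horizontal offset of the
    detected vanishing point relative to the image center."""
    error_px = vp_x - image_width / 2.0   # > 0: vanishing point right of center
    return -gain * error_px               # turn toward the vanishing point

# Example: vanishing point detected 40 px right of center in a 640 px image.
omega = heading_correction(vp_x=360.0, image_width=640)
print(f"commanded yaw rate: {omega:.3f} rad/s")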

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, when, and whom to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Probabilistic encoding of motion in a subspace of reduced dimensionality

Author  Keith Grochow, Steven Martin, Aaron Hertzmann, Zoran Popovic

Video ID : 102

Probabilistic encoding of motion in a subspace of reduced dimensionality. Reference: K. Grochow, S. L. Martin, A. Hertzmann, Z. Popovic: Style-based inverse kinematics, Proc. ACM Int. Conf. Comput. Graphics Interact. Tech. (SIGGRAPH), 522–531 (2004); URL: http://grail.cs.washington.edu/projects/styleik/ .
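The cited work encodes poses with a scaled Gaussian process latent variable model; as a simpler stand-in illustration of the general idea of encoding motion in a subspace of reduced dimensionality (synthetic data, not the paper's method or data), plain PCA via an SVD can be sketched as follows.

import numpy as np

# Illustration only: project high-dimensional pose vectors onto a low-
# dimensional linear subspace and reconstruct them (PCA stands in for the
# nonlinear latent-variable model used in the paper).
rng = np.random.default_rng(0)
poses = rng.standard_normal((500, 42))      # 500 frames x 42 joint angles (synthetic)

mean = poses.mean(axis=0)
centered = poses - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:3]                               # 3-D latent subspace

latent = centered @ basis.T                  # encode each frame
reconstructed = latent @ basis + mean        # decode back to full pose space
print("mean reconstruction error:", np.abs(reconstructed - poses).mean())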

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up novel and unforeseen application domains, and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

Mobile robot helper - Mr. Helper

Author   Kazuhiro Kosuge, Manabu Sato, Norihide Kazamura

Video ID : 606

In this video, a mobile robot helper referred to as Mr. Helper is proposed. Mr. Helper consists of two 7-DOF manipulators and an omnidirectional mobile base. The omnidirectional mobile base is the VUTON mechanism. In this system, a human and Mr. Helper communicate with each other through intentional force. That is, a human manipulates an object by applying intentional force/torque to the object. We design an impedance controller for each manipulator, so that the object manipulated by both arms has a specified impedance around a specified compliance center. Reference: ICRA 2000 Video Abstracts.
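The video abstract does not give the controller equations; a generic impedance relation of the kind it describes, written here with symbols chosen for illustration (a sketch, not the authors' formulation), makes the manipulated object behave like a mass-damper-spring about the compliance center under the human's intentional force:

% Sketch of a target impedance about the compliance center x_c, driven by
% the human's intentional force f_h (symbols chosen here for illustration).
\[
  M_d\,\ddot{\tilde{x}} + D_d\,\dot{\tilde{x}} + K_d\,\tilde{x} = f_h,
  \qquad \tilde{x} = x - x_c ,
\]
% where M_d, D_d, and K_d are the desired inertia, damping, and stiffness
% rendered on the shared object.

Each arm's controller then contributes to realizing this object-level behavior, as described above.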

Chapter 19 — Robot Hands

Claudio Melchiorri and Makoto Kaneko

Multifingered robot hands have a potential capability for achieving dexterous manipulation of objects by using rolling and sliding motions. This chapter addresses design, actuation, sensing and control of multifingered robot hands. From the design viewpoint, they have a strong constraint in actuator implementation due to the space limitation in each joint. After a brief overview of anthropomorphic end-effectors and their dexterity in Sect. 19.1, various approaches for actuation are provided with their advantages and disadvantages in Sect. 19.2. The key classification is (1) remote actuation or built-in actuation and (2) the relationship between the number of joints and the number of actuators. In Sect. 19.3, actuators and sensors used for multifingered hands are described. In Sect. 19.4, modeling and control are introduced by considering both dynamic effects and friction. Applications and trends are given in Sect. 19.5. Finally, the chapter closes with conclusions and further reading.

A high-speed hand

Author  Ishikawa Komuro Lab

Video ID : 755

Ishikawa Komuro Lab's high-speed robot hand performing impressive acts of dexterity and skillful manipulation.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Handle localization and grasping

Author  Robert Platt

Video ID : 652

The robot localizes and grasps appropriate handles on novel objects in real time.

Chapter 30 — Sonar Sensing

Lindsay Kleeman and Roman Kuc

Sonar or ultrasonic sensing uses the propagation of acoustic energy at higher frequencies than normal hearing to extract information from the environment. This chapter presents the fundamentals and physics of sonar sensing for object localization, landmark measurement and classification in robotics applications. The sources of sonar artifacts are explained, along with how they can be dealt with. Different ultrasonic transducer technologies are outlined with their main characteristics highlighted.

Sonar systems are described that range in sophistication from low-cost threshold-based ranging modules to multitransducer multipulse configurations with associated signal processing requirements capable of accurate range and bearing measurement, interference rejection, motion compensation, and target classification. Continuous-transmission frequency-modulated (CTFM) systems are introduced and their ability to improve target sensitivity in the presence of noise is discussed. Various sonar ring designs that provide rapid surrounding environmental coverage are described in conjunction with mapping results. Finally the chapter ends with a discussion of biomimetic sonar, which draws inspiration from animals such as bats and dolphins.
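The ranging principle behind the low-cost threshold-based modules mentioned above reduces to halving the echo's round-trip time of flight; a minimal sketch (illustrative numbers, not from the chapter):

# Sketch of threshold-based pulse-echo ranging: range = c * t_flight / 2.
SPEED_OF_SOUND = 343.0            # m/s in air at roughly 20 degrees C

def echo_range(time_of_flight_s: float) -> float:
    """Range to the reflecting object, assuming a single out-and-back path."""
    return SPEED_OF_SOUND * time_of_flight_s / 2.0

print(f"{echo_range(0.01) * 100:.1f} cm")   # a 10 ms echo delay -> ~171.5 cm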

Biological bat-ear deformation in sonar detection

Author  Rolf Mueller

Video ID : 312

Fast deformations of the outer ears (pinnae) in a female Pratt's roundleaf bat (Hipposideros pratti). The deformations are shown at a speed 67 times slower than real time and occur synchronously with the emission of the biosonar pulses and the reception of the echoes. These changes in the pinnae give the biosonar of roundleaf bats a dynamic dimension that is not found in technical sonar.

Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need to have appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and, models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields including computer science, cognitive science, linguistics, and robotics.

Gaze and gesture cues for robots

Author  Bilge Mutlu

Video ID : 128

In human-robot communication, nonverbal cues like gaze and gesture can be a source of important information for starting and maintaining interaction. Gaze, for example, can tell a person about what the robot is attending to, its mental state, and its role in a conversation. Researchers are studying and developing models of nonverbal cues in human-robot interaction to enable more successful collaboration between robots and humans in a variety of domains, including education.

Chapter 67 — Humanoids

Paul Fitzpatrick, Kensuke Harada, Charles C. Kemp, Yoshio Matsumoto, Kazuhito Yokoi and Eiichi Yoshida

Humanoid robots selectively imitate aspects of human form and behavior. Humanoids come in a variety of shapes and sizes, from complete human-size legged robots to isolated robotic heads with human-like sensing and expression. This chapter highlights significant humanoid platforms and achievements, and discusses some of the underlying goals behind this area of robotics. Humanoids tend to require the integration of many of the methods covered in detail within other chapters of this handbook, so this chapter focuses on distinctive aspects of humanoid robotics with liberal cross-referencing.

This chapter examines what motivates researchers to pursue humanoid robotics, and provides a taste of the evolution of this field over time. It summarizes work on legged humanoid locomotion, whole-body activities, and approaches to human–robot communication. It concludes with a brief discussion of factors that may influence the future of humanoid robots.

Dynamic multicontact motion

Author  Eiichi Yoshida

Video ID : 597

A method to plan optimal whole-body, dynamic motion in multicontact non-gaited transitions has been developed. Using a B-spline time parameterization for the active joints, we turn the motion-planning problem into a semi-infinite programming formulation which is solved by nonlinear optimization techniques. We address the problem of balance within the optimization and demonstrate that generating whole-body multicontact dynamic motion for complex tasks is possible.
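To make the parameterization concrete, the sketch below evaluates a single joint trajectory represented by cubic B-spline control points, which would be decision variables of the nonlinear optimization described above (an illustration only; the control-point values and the SciPy-based evaluation are assumptions of this sketch, not taken from the video).

import numpy as np
from scipy.interpolate import BSpline

# One joint's trajectory q(t) on [0, 1], parameterized by control points.
degree = 3
control_points = np.array([0.0, 0.2, 0.8, 1.0, 0.6, 0.3])   # decision variables
n = len(control_points)
# Clamped knot vector so the curve starts/ends at the first/last control point.
knots = np.concatenate(([0.0] * (degree + 1),
                        np.linspace(0.0, 1.0, n - degree + 1)[1:-1],
                        [1.0] * (degree + 1)))

q = BSpline(knots, control_points, degree)
qd = q.derivative()                                          # joint velocity

t = np.linspace(0.0, 1.0, 5)
print("q(t)    =", q(t))
print("qdot(t) =", qd(t))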