
Chapter 68 — Human Motion Reconstruction

Katsu Yamane and Wataru Takano

This chapter presents a set of techniques for reconstructing and understanding human motions measured using current motion capture technologies. We first review modeling and computation techniques for obtaining motion and force information from human motion data (Sect. 68.2). Here we show that kinematics and dynamics algorithms for articulated rigid bodies can be applied to human motion data processing, with the help of models based on knowledge from anatomy and physiology. We then describe methods for analyzing human motions so that robots can segment and categorize different behaviors and use them as the basis for human motion understanding and communication (Sect. 68.3). These methods are based on statistical techniques widely used in linguistics. The two fields share the common goal of converting continuous and noisy signals to discrete symbols, and therefore it is natural to apply similar techniques. Finally, we introduce some application examples of human motion data and models, ranging from simulated human control to humanoid robot motion synthesis.
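
The chapter's symbolization methods build on statistical models from linguistics (e.g., hidden Markov models). As a minimal, illustrative sketch of the underlying idea of turning a continuous, noisy motion signal into discrete symbols, the code below vector-quantizes motion-capture frames into a small codebook of postures and collapses them into a symbol sequence. The feature choice, cluster count, and synthetic data are assumptions for illustration, not the chapter's actual method.

```python
# Minimal sketch: quantizing a continuous motion-capture stream into
# discrete symbols (a simplified stand-in for the statistical models in
# the chapter). Frame features and cluster count are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic motion-capture data: T frames x D joint angles (radians).
T, D = 500, 20
motion = np.cumsum(0.01 * rng.standard_normal((T, D)), axis=0)

# Per-frame features: joint angles plus joint velocities.
velocity = np.gradient(motion, axis=0)
features = np.hstack([motion, velocity])

# Vector-quantize frames into a small "posture" codebook.
n_symbols = 8
codebook = KMeans(n_clusters=n_symbols, n_init=10, random_state=0).fit(features)
symbols = codebook.predict(features)          # one discrete symbol per frame

# Collapse runs of identical symbols into a segment-level "sentence".
sentence = [int(symbols[0])]
for s in symbols[1:]:
    if s != sentence[-1]:
        sentence.append(int(s))
print(sentence)
```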

Converting human motion to sentences

Author  Katsu Yamane

Video ID : 766

This video shows an example of converting human motion sequences to descriptive sentences.

Chapter 65 — Domestic Robotics

Erwin Prassler, Mario E. Munich, Paolo Pirjanian and Kazuhiro Kosuge

When the first edition of this book was published, domestic robots were spoken of as a dream that was slowly becoming reality. At that time, in 2008, we looked back on more than twenty years of research and development in domestic robotics, especially in cleaning robotics. Although everybody expected cleaning to be the killer app for domestic robotics, nothing big really happened in the first half of those twenty years. About ten years before the first edition of this book appeared, things suddenly started moving. Several small, but also some larger, enterprises announced that they would soon launch domestic cleaning robots. The robotics community was anxiously awaiting these first cleaning robots, and so were consumers. The big burst, however, was yet to come. The price tag of those cleaning robots was far beyond what people were willing to pay for a vacuum cleaner. It took another four years until, in 2002, a small and inexpensive device, which was not even called a cleaning robot, brought the first breakthrough: Roomba. Sales of the Roomba quickly passed the first million robots and increased rapidly. While for the first years after Roomba's release the big players remained on the sidelines, possibly to revise their own designs and, in particular, their business models and price tags, some other small players followed quickly and came out with their own products. We reported on these devices and their creators in the first edition. Since then, the momentum in the field of domestic robotics has steadily increased. Nowadays most big appliance manufacturers have domestic cleaning robots in their portfolio. We are not only seeing more and more domestic cleaning robots and lawn mowers on the market, but also new types of domestic robots: window cleaners, plant-watering robots, tele-presence robots, domestic surveillance robots, and robotic sports devices. Some of these new types of domestic robots are still prototypes or concept studies. Others have already crossed the threshold to becoming commercial products.

For the second edition of this chapter, we have decided not only to enumerate the devices that have emerged and survived in the past five years, but also to take a look back at how it all began, contrasting this retrospection with the burst of progress in the past five years in domestic cleaning robotics. We will not describe and discuss in detail every single cleaning robot that has seen the light of day, but select those that are representative of the evolution of the technology as well as the market. We will also reserve some space for new types of mobile domestic robots, which will be the success stories or failures for the next edition of this chapter. Further, we will look into nonmobile domestic robots, also called smart appliances, and examine their fate. Last but not least, we will look at recent developments in the area of intelligent homes that surround and, at times, also control the mobile domestic robots and smart appliances described in the preceding sections.

Windoro window-cleaning robot review

Author  Erwin Prassler

Video ID : 734

This video reviews the performance of the Windoro robotic window cleaner.

Chapter 18 — Parallel Mechanisms

Jean-Pierre Merlet, Clément Gosselin and Tian Huang

This chapter presents an introduction to the kinematics and dynamics of parallel mechanisms, also referred to as parallel robots. As opposed to classical serial manipulators, the kinematic architecture of parallel robots includes closed-loop kinematic chains. As a consequence, their analysis differs considerably from that of their serial counterparts. This chapter aims at presenting the fundamental formulations and techniques used in their analysis.
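
Because the legs of a parallel mechanism form closed kinematic loops, its inverse kinematics is typically much simpler than its forward kinematics: each leg length follows directly from the platform pose. The sketch below illustrates this for a generic Gough-Stewart platform; the anchor-point geometry is an arbitrary illustrative assumption, not taken from the chapter.

```python
# Inverse kinematics of a Gough-Stewart platform: given the platform pose,
# each leg length follows from the closed-loop constraint
#   l_i = || p + R b_i - a_i ||.
# The anchor-point layout below is an arbitrary illustrative choice.
import numpy as np

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Base anchors a_i and platform anchors b_i (each in its own frame).
angles = np.deg2rad(np.arange(0, 360, 60))
base_pts = np.stack([np.cos(angles), np.sin(angles), np.zeros(6)], axis=1)        # radius 1.0
plat_pts = 0.5 * np.stack([np.cos(angles), np.sin(angles), np.zeros(6)], axis=1)  # radius 0.5

def leg_lengths(position, yaw):
    """Leg lengths for a platform at `position`, rotated about z by `yaw`."""
    R = rot_z(yaw)
    tips = position + plat_pts @ R.T      # platform anchors in the base frame
    return np.linalg.norm(tips - base_pts, axis=1)

print(leg_lengths(np.array([0.0, 0.0, 1.2]), np.deg2rad(10.0)))
```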

6-DOF statically balanced parallel robot

Author  Clément Gosselin

Video ID : 48

This video demonstrates a 6-DOF statically balanced parallel robot. References: 1. C. Gosselin, J. Wang, T. Laliberté, I. Ebert-Uphoff: On the design of a statically balanced 6-DOF parallel manipulator, Proc. IFToMM Tenth World Congress Theory of Machines and Mechanisms, Oulu (1999), pp. 1045-1050; 2. C. Gosselin, J. Wang: On the design of statically balanced motion bases for flight simulators, Proc. AIAA Modeling and Simulation Technologies Conf., Boston (1998), pp. 272-282; 3. I. Ebert-Uphoff, C. Gosselin: Dynamic modeling of a class of spatial statically-balanced parallel platform mechanisms, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Detroit (1999), Vol. 2, pp. 881-888

Chapter 27 — Micro-/Nanorobots

Bradley J. Nelson, Lixin Dong and Fumihito Arai

The field of microrobotics covers the robotic manipulation of objects with dimensions in the millimeter to micron range as well as the design and fabrication of autonomous robotic agents that fall within this size range. Nanorobotics is defined in the same way only for dimensions smaller than a micron. With the ability to position and orient objects with micron- and nanometer-scale dimensions, manipulation at each of these scales is a promising way to enable the assembly of micro- and nanosystems, including micro- and nanorobots.

This chapter overviews the state of the art of both micro- and nanorobotics, outlines scaling effects, actuation, sensing, and fabrication at these scales, and focuses on micro- and nanorobotic manipulation systems and their application in microassembly, biotechnology, and the construction and characterization of micro- and nanoelectromechanical systems (MEMS/NEMS). Materials science, biotechnology, and micro- and nanoelectronics will also benefit from advances in these areas of robotics.
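
Scaling effects can be illustrated with a back-of-the-envelope comparison: volume-dependent forces such as weight shrink with the cube of the characteristic length, while surface-dependent forces such as adhesion or capillary attraction shrink only with the square or linearly, so surface forces dominate at micro scales. The numbers in the sketch below are rough, illustrative constants rather than values taken from the chapter.

```python
# Rough scaling-law illustration: weight scales with L^3, a capillary force
# roughly with L, so their ratio falls as L^2 when the object shrinks.
# All material constants below are illustrative round numbers.
density = 8000.0        # kg/m^3 (a metal-like density, assumed)
g = 9.81                # m/s^2
gamma = 0.073           # N/m, surface tension of water (approximate)

for L in (1e-2, 1e-3, 1e-4, 1e-5):            # characteristic length in meters
    weight = density * g * L**3                # ~ volume-dependent force
    capillary = gamma * L                      # ~ perimeter-dependent force
    print(f"L = {L:8.0e} m   weight/capillary = {weight / capillary:10.3e}")
```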

The electromagnetic control of an untethered microrobot

Author  Bradley J. Nelson

Video ID : 12

This is a video of a computer simulation showing the electromagnetic control of an untethered microrobot for ophthalmic applications, such as targeted drug delivery and epiretinal membrane peeling.

Chapter 9 — Force Control

Luigi Villani and Joris De Schutter

A fundamental requirement for the success of a manipulation task is the capability to handle the physical contact between a robot and the environment. Pure motion control turns out to be inadequate because the unavoidable modeling errors and uncertainties may cause the contact force to rise, ultimately leading to unstable behavior during the interaction, especially in the presence of rigid environments. Force feedback and force control become mandatory to achieve robust and versatile behavior of a robotic system in poorly structured environments as well as safe and dependable operation in the presence of humans. This chapter starts from the analysis of indirect force control strategies, conceived to keep the contact forces limited by ensuring a suitably compliant behavior of the end effector, without requiring an accurate model of the environment. Then the problem of interaction task modeling is analyzed, considering both the case of a rigid environment and the case of a compliant environment. For the specification of an interaction task, natural constraints set by the task geometry and artificial constraints set by the control strategy are established with respect to suitable task frames. This formulation is the essential premise to the synthesis of hybrid force/motion control schemes.
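
A common indirect force control strategy is impedance control, which imposes a target mass-damper-spring relation between the motion error and the contact force, e.g. M ë + D ė + K e = f_ext. As a minimal, hedged illustration (not a scheme from the chapter), the sketch below simulates this behavior for a single axis contacting a stiff wall; all gains and the spring-like environment model are illustrative assumptions.

```python
# 1-DOF impedance control sketch: the end effector is made to behave like a
# mass-damper-spring  M*e_dd + D*e_d + K*e = f_ext  around a desired position
# x_d. The environment is modeled as a stiff linear spring at x_wall.
# All parameter values are illustrative assumptions.
import numpy as np

M, D, K = 1.0, 40.0, 400.0       # target impedance parameters
k_env, x_wall = 1e4, 0.05        # environment stiffness (N/m) and wall location (m)
x_d = 0.10                       # desired position lies inside the wall

dt, x, xd = 1e-3, 0.0, 0.0
for _ in range(2000):
    f_ext = -k_env * (x - x_wall) if x > x_wall else 0.0   # contact force on robot
    e, ed = x - x_d, xd
    xdd = (f_ext - D * ed - K * e) / M        # target impedance dynamics
    xd += xdd * dt
    x += xd * dt

# At steady state K*(x - x_d) = f_ext, so the contact force stays bounded by
# the chosen stiffness K even though x_d penetrates the wall.
f_ext = -k_env * (x - x_wall) if x > x_wall else 0.0
print(f"final position: {x:.4f} m, contact force on robot: {f_ext:.1f} N")
```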

Recent research in impedance control

Author  Unknown, Case Western Reserve University, Cleveland

Video ID : 684

Experimental research on impedance control done in 1991 at Case Western Reserve University in Cleveland, Ohio. The demonstrations involve three scenarios: stiffness control without force sensing; impedance control based on a wrist force sensor; and impedance control based on joint torque sensing. This work was published in the ICRA 1991 video proceedings.

Chapter 45 — World Modeling

Wolfram Burgard, Martial Hebert and Maren Bennewitz

In this chapter we describe popular ways to represent the environment of a mobile robot. For indoor environments, which are often stored using two-dimensional representations, we discuss occupancy grids, line maps, topological maps, and landmark-based representations. Each of these techniques has its own advantages and disadvantages. While occupancy grid maps allow for quick access and can be updated efficiently, line maps are more compact. Landmark-based maps can also be updated and maintained efficiently; however, they do not readily support navigation tasks such as path planning in the way topological representations do.

Additionally, we discuss approaches suited for outdoor terrain modeling. In outdoor environments, the flat-surface assumption underlying many mapping techniques for indoor environments is no longer valid. A very popular approach in this context is the elevation map and its variants, which store the surface of the terrain over a regularly spaced grid. Alternatives to such maps are point clouds, meshes, or three-dimensional grids, which provide greater flexibility but have higher storage demands.
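
Occupancy grid maps such as those discussed above are commonly updated with a log-odds formulation, which reduces the Bayesian update of each cell to a simple addition. The sketch below shows this update along a single sensor ray in one dimension; the inverse-sensor-model values and cell size are illustrative assumptions.

```python
# Log-odds occupancy grid update along a single sensor ray (1-D sketch).
# Cells traversed by the beam are updated as free, the cell at the measured
# range as occupied. The inverse-sensor-model constants are illustrative.
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

L_FREE, L_OCC = logodds(0.3), logodds(0.7)    # assumed inverse sensor model
cell_size = 0.1                               # meters per cell

grid = np.zeros(50)                           # log odds, 0.0 = unknown (p = 0.5)

def update_ray(grid, measured_range):
    hit = int(round(measured_range / cell_size))
    grid[:hit] += L_FREE                      # cells traversed by the beam
    if hit < len(grid):
        grid[hit] += L_OCC                    # cell containing the obstacle

for z in (2.0, 2.0, 2.1):                     # three range measurements (m)
    update_ray(grid, z)

prob = 1.0 - 1.0 / (1.0 + np.exp(grid))       # convert log odds back to p(occupied)
print(prob[18:23].round(2))
```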

3-D textured model of urban environments

Author  Michael Maurer

Video ID : 269

In this video, a micro aerial vehicle developed by the Institute for Computer Graphics and Vision, Graz Univ. of Technology, flies to predefined points and captures images for building a 3-D textured model of an urban environment. The video contains a nice description of the different steps necessary to generate a precise model by fusing the aerial images with public geographic data.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or as a continuous incoming video, the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
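
Triangulation from two calibrated views can be posed as a small homogeneous least-squares problem: each observation contributes two rows to a system A X = 0 built from the projection matrices, and the 3-D point is recovered as the null vector of A. The sketch below implements this standard linear (DLT) triangulation with synthetic cameras; the intrinsics and camera placement are illustrative assumptions.

```python
# Linear (DLT) triangulation of a 3-D point from two views. A view with
# projection matrix P and pixel (u, v) contributes the rows
#   u*P[2] - P[0]  and  v*P[2] - P[1]
# to a homogeneous system A X = 0; X is recovered via SVD.
# The synthetic camera setup below is an illustrative assumption.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])

def projection(R, t):
    return K @ np.hstack([R, t.reshape(3, 1)])

P1 = projection(np.eye(3), np.zeros(3))                  # first camera at the origin
P2 = projection(np.eye(3), np.array([-0.2, 0.0, 0.0]))   # second camera, 0.2 m baseline

X_true = np.array([0.3, -0.1, 2.0, 1.0])                 # homogeneous 3-D point

def project(P, X):
    x = P @ X
    return x[:2] / x[2]

u1, u2 = project(P1, X_true), project(P2, X_true)

def triangulate(P1, P2, u1, u2):
    A = np.vstack([u1[0] * P1[2] - P1[0],
                   u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0],
                   u2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

print(triangulate(P1, P2, u1, u2))   # ~ [0.3, -0.1, 2.0]
```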

DTAM: Dense tracking and mapping in real-time

Author  Richard A. Newcombe, Steven J. Lovegrove, Andrew J. Davison

Video ID : 124

This video demonstrates the system described in the paper, "DTAM: Dense Tracking and Mapping in Real-Time" by Richard Newcombe, Steven Lovegrove and Andrew Davison for ICCV 2011.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operational features. We consider both simulated and physical robots, with special attention to the transfer between the two worlds.
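
The core evolutionary loop (evaluate, select, mutate) can be sketched compactly. The code below evolves the weights of a tiny feedforward controller against a toy fitness function standing in for a robot evaluation; the controller structure, fitness, and hyperparameters are illustrative placeholders, not a method from the chapter.

```python
# Minimal evolutionary loop for a tiny neural controller. The fitness
# function is a toy stand-in for evaluating the controller on a simulated
# or physical robot; sizes and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N_SENSORS, N_MOTORS = 4, 2
GENOME_LEN = (N_SENSORS + 1) * N_MOTORS       # weights + biases

def controller(genome, sensors):
    W = genome[:N_SENSORS * N_MOTORS].reshape(N_MOTORS, N_SENSORS)
    b = genome[N_SENSORS * N_MOTORS:]
    return np.tanh(W @ sensors + b)           # motor commands in [-1, 1]

def fitness(genome):
    # Toy task: drive motor 0 toward the mean sensor value and motor 1
    # toward zero, for random sensor readings.
    readings = rng.uniform(-1.0, 1.0, size=(20, N_SENSORS))
    motors = np.array([controller(genome, s) for s in readings])
    err = (motors[:, 0] - np.tanh(readings.mean(axis=1)))**2 + motors[:, 1]**2
    return -err.mean()

pop = rng.standard_normal((30, GENOME_LEN))
for gen in range(100):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]                    # keep the best 10
    children = parents[rng.integers(0, 10, size=20)] \
        + 0.1 * rng.standard_normal((20, GENOME_LEN))          # mutated copies
    pop = np.vstack([parents, children])

print("best fitness:", max(fitness(g) for g in pop))
```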

Evolved GasNet visualisation

Author  Phil Husbands

Video ID : 375

The video shows a successfully evolved GasNet controlling a simulated robot engaged in a visual-discrimination task under noisy lighting. The GasNet architecture and all node properties are evolved along with the visual sampling morphology (the parts of the visual field used as inputs to the GasNet). A minimal simulation is used, which allows transfer to the real robot (see the Sussex gantry Video 371). A highly minimal controller and visual morphology have evolved. The system is highly robust, coping with very noisy conditions. As can be seen, the GasNet employs multiple oscillator subcircuits, partly to filter out noise. Work by Tom Smith and Phil Husbands.

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning, and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: when, what, who, and how to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.
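
One widely used family of skill models encodes demonstrations as a probabilistic model, for example a Gaussian mixture over time and position, from which a smooth reproduction is regressed by conditioning on time. The sketch below fits such a mixture to a few noisy synthetic demonstrations and reproduces the motion with Gaussian mixture regression; the data, component count, and one-dimensional setup are simplifying assumptions, not the specific algorithms surveyed in the chapter.

```python
# Sketch of encoding demonstrations as a Gaussian mixture over (time, position)
# and reproducing the motion by Gaussian mixture regression (conditioning on
# time). Demonstration data and the number of components are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Three noisy 1-D demonstrations of a reaching-like profile.
t = np.linspace(0.0, 1.0, 100)
demos = [np.sin(np.pi * t / 2) + 0.02 * rng.standard_normal(t.size) for _ in range(3)]
data = np.column_stack([np.tile(t, 3), np.concatenate(demos)])   # columns: [time, pos]

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(data)

def reproduce(query_t):
    """Gaussian mixture regression: E[pos | time = query_t]."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for the query time.
    resp = np.array([w * np.exp(-0.5 * (query_t - m[0])**2 / c[0, 0]) / np.sqrt(c[0, 0])
                     for w, m, c in zip(weights, means, covs)])
    resp /= resp.sum()
    # Conditional mean of each component, blended by responsibility.
    cond = np.array([m[1] + c[1, 0] / c[0, 0] * (query_t - m[0])
                     for m, c in zip(means, covs)])
    return float(resp @ cond)

# The reproduction roughly follows sin(pi*t/2) over the demonstrated interval.
print([round(reproduce(q), 2) for q in (0.0, 0.5, 1.0)])
```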

Demonstrations and reproduction of the task of juicing an orange

Author  Florent D'Halluin, Aude Billard

Video ID : 29

Human demonstrations of the task of juicing an orange, and reproductions by the robot in new situations where the objects are located in positions not seen in the demonstrations. URL: http://www.scholarpedia.org/article/Robot_learning_by_demonstration

Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need to have appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields including computer science, cognitive science, linguistics, and robotics.

Active key-frame-based learning from demonstration

Author  Maya Cakmak, Andrea Thomaz

Video ID : 238

Simon asks different types of questions in response to demonstrations given by the teacher.