
Chapter 27 — Micro-/Nanorobots

Bradley J. Nelson, Lixin Dong and Fumihito Arai

The field of microrobotics covers the robotic manipulation of objects with dimensions in the millimeter to micron range as well as the design and fabrication of autonomous robotic agents that fall within this size range. Nanorobotics is defined in the same way only for dimensions smaller than a micron. With the ability to position and orient objects with micron- and nanometer-scale dimensions, manipulation at each of these scales is a promising way to enable the assembly of micro- and nanosystems, including micro- and nanorobots.

This chapter overviews the state of the art of both micro- and nanorobotics, outlines scaling effects, actuation, sensing, and fabrication at these scales, and focuses on micro- and nanorobotic manipulation systems and their application in microassembly, biotechnology, and the construction and characterization of micro- and nanoelectromechanical systems (MEMS/NEMS). Materials science, biotechnology, and micro- and nanoelectronics will also benefit from advances in these areas of robotics.

Linear-to-rotary motion converters for three-dimensional microscopy

Author  Lixin Dong

Video ID : 492

This video shows the application of a linear-to-rotary motion converter in 3-D imaging using a scanning electron microscope. The motion converter consists of a SiGe/Si dual-chirality helical nanobelt (DCHNB). The experiment was performed using nanorobotic manipulation. Analytical and experimental investigation shows that the motion conversion has excellent linearity for small deflections. The stiffness (0.033 N/m) is much smaller than that of bottom-up synthesized helical nanostructures, which is promising for high-resolution force measurement in nanoelectromechanical systems (NEMS). The ultracompact size also makes it possible for DCHNBs to serve as rotary stages for creating 3-D scanning probe microscopes or microgoniometers.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operational features. We consider both simulated and physical robots, with special consideration of the transfer between the two worlds.

Evolution of visually-guided behaviour on Sussex gantry robot

Author  Phil Husbands

Video ID : 371

Behaviour evolved in the real world on the Sussex gantry robot in 1994. Controllers (evolved neural networks plus visual sampling morphology) are automatically evaluated on the actual robot. The required behaviour is a shape discrimination task: to move to the triangle, while ignoring the rectangle, under very noisy lighting conditions.

Chapter 9 — Force Control

Luigi Villani and Joris De Schutter

A fundamental requirement for the success of a manipulation task is the capability to handle the physical contact between a robot and the environment. Pure motion control turns out to be inadequate because unavoidable modeling errors and uncertainties may cause a rise of the contact force, ultimately leading to unstable behavior during the interaction, especially in the presence of rigid environments. Force feedback and force control become mandatory to achieve robust and versatile behavior of a robotic system in poorly structured environments, as well as safe and dependable operation in the presence of humans. This chapter starts from the analysis of indirect force control strategies, conceived to keep the contact forces limited by ensuring a suitably compliant behavior of the end effector, without requiring an accurate model of the environment. Then the problem of modeling interaction tasks is analyzed, considering both the case of a rigid environment and the case of a compliant environment. For the specification of an interaction task, natural constraints set by the task geometry and artificial constraints set by the control strategy are established with respect to suitable task frames. This formulation is the essential premise to the synthesis of hybrid force/motion control schemes.
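The indirect force control idea described above can be illustrated with a toy simulation: by imposing a desired impedance on the end effector, the contact force against a stiff environment stays bounded without any model of that environment. The following 1-DOF sketch uses assumed, illustrative parameter values (not values from the chapter):

```python
import numpy as np

# Toy 1-DOF impedance control: impose M*xdd + D*xd + K*(x - x_d) = f_ext,
# then let the robot contact a stiff wall. All parameters are assumptions.
M, D, K = 1.0, 40.0, 100.0    # desired inertia [kg], damping [Ns/m], stiffness [N/m]
x_d = 0.10                    # desired position [m], deliberately "inside" the wall
k_env, x_wall = 1.0e4, 0.08   # stiff environment: wall of stiffness k_env at 0.08 m

def simulate(T=2.0, dt=1e-3):
    x, xd = 0.0, 0.0
    f_ext = 0.0
    for _ in range(int(T / dt)):
        # Environment pushes back only while in contact
        f_ext = -k_env * (x - x_wall) if x > x_wall else 0.0
        xdd = (f_ext - D * xd - K * (x - x_d)) / M   # impedance dynamics
        xd += xdd * dt                                # semi-implicit Euler
        x += xd * dt
    return x, f_ext

x_final, f_final = simulate()
print(x_final, f_final)
```

At equilibrium the imposed stiffness K, not the environment stiffness k_env, dictates the force: the robot rests just past the wall with a contact force of roughly K*(x_d - x_wall) ≈ 2 N, instead of the force ramp a pure motion controller would produce against a 10 kN/m surface.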

Integration of force strategies and natural-admittance control

Author  Brian B. Mathewson, Wyatt S. Newman

Video ID : 685

When mating parts are brought together, small misalignments must be accommodated by responding to contact forces. Using force feedback, a robot may sense contact forces during assembly and invoke a response to guide the parts into their correct mating positions. The proposed approach integrates force-guided strategies into Hogan's impedance control. Stability of both the geometric convergence and the contact dynamics is achieved. Geometric convergence is accomplished more reliably than through the use of impedance control alone, and such convergence is achieved more rapidly than through the use of force-guided strategies alone. This work was published in the ICRA 1995 video proceedings.

Chapter 67 — Humanoids

Paul Fitzpatrick, Kensuke Harada, Charles C. Kemp, Yoshio Matsumoto, Kazuhito Yokoi and Eiichi Yoshida

Humanoid robots selectively imitate aspects of human form and behavior. Humanoids come in a variety of shapes and sizes, from complete human-size legged robots to isolated robotic heads with human-like sensing and expression. This chapter highlights significant humanoid platforms and achievements, and discusses some of the underlying goals behind this area of robotics. Humanoids tend to require the integration of many of the methods covered in detail within other chapters of this handbook, so this chapter focuses on distinctive aspects of humanoid robotics with liberal cross-referencing.

This chapter examines what motivates researchers to pursue humanoid robotics, and provides a taste of the evolution of this field over time. It summarizes work on legged humanoid locomotion, whole-body activities, and approaches to human–robot communication. It concludes with a brief discussion of factors that may influence the future of humanoid robots.

3-D, collision-free motion combining locomotion and manipulation by humanoid robot HRP-2 (experiment)

Author  Eiichi Yoshida

Video ID : 598

In this video, the whole-body motion generation method is experimentally validated using the HRP-2 humanoid robot.

Chapter 34 — Visual Servoing

François Chaumette, Seth Hutchinson and Peter Corke

This chapter introduces visual servo control, using computer vision data in the servo loop to control the motion of a robot. We first describe the basic techniques that are by now well established in the field. We give a general overview of the formulation of the visual servo control problem, and describe the two archetypal visual servo control schemes: image-based and pose-based visual servo control. We then discuss performance and stability issues that pertain to these two schemes, motivating advanced techniques. Of the many advanced techniques that have been developed, we discuss 2.5-D, hybrid, partitioned, and switched approaches. Having covered a variety of control schemes, we deal with target tracking and controlling motion directly in the joint space and extensions to under-actuated ground and aerial robots. We conclude by describing applications of visual servoing in robotics.
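The image-based scheme summarized above reduces to the classical control law v = -λ L⁺ e, where e is the feature error and L the interaction matrix of the chosen visual features. The following sketch simulates that law for two point features, using assumed coordinates and a constant assumed depth (illustrative values only, not results from the chapter):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0/Z,    0.0, x/Z,     x*y, -(1 + x*x),  y],
        [   0.0, -1.0/Z, y/Z, 1 + y*y,       -x*y, -x],
    ])

lam, dt = 0.5, 0.05          # control gain and integration step (assumed)
Z = 1.0                      # assumed constant depth of both points
s  = np.array([0.2, 0.1, -0.1, 0.2])   # current features: two points (x1,y1,x2,y2)
sd = np.array([0.1, 0.1,  0.0, 0.1])   # desired features

for _ in range(200):
    e = s - sd
    L = np.vstack([interaction_matrix(s[0], s[1], Z),
                   interaction_matrix(s[2], s[3], Z)])
    v = -lam * np.linalg.pinv(L) @ e   # 6-component camera velocity screw
    s = s + (L @ v) * dt               # first-order feature kinematics s_dot = L v

err_final = np.linalg.norm(s - sd)
print(err_final)
```

Under this idealized first-order model the feature error decays exponentially toward zero; the stability and conditioning issues the chapter discusses arise precisely when the true depths, calibration, or feature motion deviate from this model.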

2.5-D VS on a 6-DOF robot arm (1)

Author  Francois Chaumette, Seth Hutchinson, Peter Corke

Video ID : 64

This video shows a 2.5-D VS on a 6-DOF robot arm with (x_g, log(Z_g), theta u) as visual features. It corresponds to the results depicted in Figure 34.12.

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. Creating social robots that are competent and capable partners for people is a challenging long-term goal. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well, in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Overview of Kismet's expressive behavior

Author  Cynthia Breazeal

Video ID : 557

This video presents an overview of Kismet's expressive behavior and rationale. The video presents how Kismet can express internal emotive/affective states through three modalities: facial expression, vocal affect, and body posture. The video also shows how Kismet can recognize aspects of affective intent in human speech (e.g., praising, scolding, soothing, and attentional bids). The video shows how human participants can interact in a natural and intuitive way with the robot, by reading and responding to its emotive and social cues.

Chapter 19 — Robot Hands

Claudio Melchiorri and Makoto Kaneko

Multifingered robot hands have the potential capability to achieve dexterous manipulation of objects by using rolling and sliding motions. This chapter addresses the design, actuation, sensing, and control of multifingered robot hands. From the design viewpoint, such hands face a strong constraint on actuator implementation due to the space limitation in each joint. After a brief overview of anthropomorphic end-effectors and their dexterity in Sect. 19.1, various approaches to actuation are presented, with their advantages and disadvantages, in Sect. 19.2. The key classifications are (1) remote actuation versus built-in actuation and (2) the relationship between the number of joints and the number of actuators. In Sect. 19.3, actuators and sensors used for multifingered hands are described. In Sect. 19.4, modeling and control are introduced by considering both dynamic effects and friction. Applications and trends are given in Sect. 19.5. Finally, the chapter closes with conclusions and further reading.

The PISA-IIT SoftHand (1)

Author  IIT - Pisa University

Video ID : 749

The PISA-IIT SoftHand is an anthropomorphic device with a single actuator. The video shows the hand being controlled with EMG signals.

Chapter 34 — Visual Servoing

François Chaumette, Seth Hutchinson and Peter Corke


2.5-D VS on a 6-DOF robot arm (2)

Author  Francois Chaumette, Seth Hutchinson, Peter Corke

Video ID : 65

This video shows a 2.5-D VS on a 6-DOF robot arm with (c*^t_c, x_g, theta u_z) as visual features. It corresponds to the results depicted in Figure 34.13.

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile, and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

Formation control via a distributed controller-observer

Author  Gianluca Antonelli, Filippo Arrichiello, Fabrizio Caccavale, Alessandro Marino

Video ID : 293

This video shows a formation-control experiment with a multirobot system composed of Khepera III mobile robots, using the distributed controller-observer scheme.

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: when, what, who and when to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound and algorithms that combine learning from human guidance with reinforcement learning. We close with a look on the use of language to guide teaching and a list of open issues.

Reproduction of dishwasher-unloading task based on task-precedence graph

Author  Michael Pardowitz, Raoul Zöllner, Steffen Knoop, Tamim Asfour, Kristian Regenstein, Pedram Azad, Joachim Schröder, Rüdiger Dillmann

Video ID : 103

ARMAR-III humanoid robot reproducing the task of unloading a dishwasher, based on a task-precedence graph learned from demonstrations. References: 1) T. Asfour, K. Regenstein, P. Azad, J. Schroeder, R. Dillmann: ARMAR-III: A humanoid platform for perception-action integration, Int. Workshop Human-Centered Robotic Systems (HCRS) (2006); 2) M. Pardowitz, R. Zöllner, S. Knoop, R. Dillmann: Incremental learning of tasks from user demonstrations, past experiences and vocal comments, IEEE Trans. Syst. Man Cybern. B 37(2), 322–332 (2007); URL: https://www.youtube.com/user/HumanoidRobots