
Chapter 66 — Robotics Competitions and Challenges

Daniele Nardi, Jonathan Roberts, Manuela Veloso and Luke Fletcher

This chapter explores the use of competitions to accelerate robotics research and promote science, technology, engineering, and mathematics (STEM) education. We argue that the field of robotics is particularly well suited to innovation through competitions. Two broad categories of robot competition are used to frame the discussion: human-inspired competitions and task-based challenges. Human-inspired robot competitions, of which the majority are sports contests, quickly move through platform development to focus on problem solving and testing through game play. Task-based challenges attempt to attract participants by presenting a high aim for a robotic system. The contest can then be tuned, as required, to maintain motivation and ensure that progress is made. Three case studies of robot competitions are presented, namely robot soccer, the UAV challenge, and the DARPA (Defense Advanced Research Projects Agency) grand challenges. The case studies serve to explore, from the point of view of organizers and participants, the benefits and limitations of competitions, and what makes a good robot competition.

This chapter ends with some concluding remarks on the natural convergence of human-inspired competitions and task-based challenges in the promotion of STEM education, research, and vocations.

Brief history of RoboCup robot soccer

Author  Manuela Veloso

Video ID : 385

In this five-minute video, we explain the history of the multiple RoboCup soccer leagues.

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.
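As a concrete illustration of the market-based task-allocation approaches surveyed in Sect. 53.8, the following sketch implements a minimal sequential single-item auction. It is a simplification for illustration only: robot costs are fixed rather than updated as tasks are won, and the function name and cost-matrix layout are our own, not from the chapter.

```python
def auction_allocate(costs):
    """Sequential single-item auction (simplified): repeatedly award the
    unassigned task whose best bid (lowest cost) over all robots is
    smallest. costs[r][t] is robot r's cost for task t; returns a
    mapping {task: robot}."""
    n_robots = len(costs)
    unassigned = set(range(len(costs[0])))
    assignment = {}
    while unassigned:
        # Find the (robot, task) pair with the globally lowest bid.
        r_best, t_best = min(
            ((r, t) for r in range(n_robots) for t in unassigned),
            key=lambda rt: costs[rt[0]][rt[1]],
        )
        assignment[t_best] = r_best
        unassigned.discard(t_best)
    return assignment
```

For example, with `costs = [[1, 9], [8, 2]]`, robot 0 wins task 0 and robot 1 wins task 1; richer auction schemes additionally update each robot's bids as its task load grows.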

A robotic reconnaissance and surveillance team

Author  Paul Rybski, Saifallah Benjaafar, John R. Budenske, Mark Dvorak, Maria Gini, Dean F. Hougen, Donald G. Krantz, Perry Y. Li, Fred Malver, Brad Nelson, Nikolaos Papanikolopoulos, Sascha A. Stoeter, Richard Voyles, Kemel Berk Yesin

Video ID : 203

A two-tiered system for surveillance and exploration tasks is presented. The first tier is the scout (a small mobile sensor platform); the second tier consists of rangers (larger robots that transport and deploy scouts). Scouts send data (commonly video) to other robots via an RF data link.

Synchronization and fault detection in autonomous robots

Author  Anders Lyhne Christensen, Rehan O'Grady, Marco Dorigo

Video ID : 194

This video demonstrates a group of robots detecting faults in each other and simulating repair. The technique relies on visual firefly-like synchronization. Each robot synchronizes with the others based on the detection of LED lights and flashes using on-board cameras. The robots simulate fault and repair based on the frequency of flashes. The video shows an experiment with many robots working together and simulating faults and repairs.
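The flash-based synchronization described above can be approximated by a simple phase-coupled oscillator model. The sketch below is not the authors' flash-detection controller; it is a minimal Kuramoto-style simulation (identical natural frequencies, all-to-all coupling, parameter values chosen for illustration) showing the property the scheme relies on: the robots' phases converge to synchrony, so a robot whose flashing drifts from the group becomes detectable.

```python
import cmath
import math
import random

def order_parameter(phases):
    """Kuramoto order parameter R in [0, 1]: R near 1 means the group
    is flashing in synchrony."""
    z = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(z)

def simulate_sync(n=10, steps=2000, dt=0.01, omega=2 * math.pi, k=2.0, seed=1):
    """Phase-coupled 'firefly' model: each robot nudges its flash phase
    toward the phases it observes in the others. With identical natural
    frequencies and positive coupling k, the phases converge."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        updates = []
        for i in range(n):
            # Mean attractive pull toward every observed phase.
            coupling = sum(math.sin(pj - phases[i]) for pj in phases) / n
            updates.append(phases[i] + (omega + k * coupling) * dt)
        phases = [u % (2 * math.pi) for u in updates]
    return phases
```

Starting from random phases, the order parameter rises toward 1 within a few oscillation periods; a faulty robot could be modeled as one that stops updating its phase and is then flagged by the others.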

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given either as sparse viewpoints or a continuous incoming video, then the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
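The two-view triangulation step mentioned above can be sketched with standard linear (DLT) triangulation: given two 3 × 4 projection matrices and a pixel correspondence, the 3-D point is the null vector of a small linear system. The camera parameters below are illustrative, not from the chapter.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: stack two rows per view from
    x_i cross (P_i X) = 0 and take the SVD null vector of the 4x4
    system as the homogeneous 3-D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                      # null vector = homogeneous point
    return X[:3] / X[3]             # dehomogenize

def project(P, X):
    """Perspective projection of a 3-D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Illustrative stereo pair: identical intrinsics, 1-unit baseline along x.
K = np.diag([500.0, 500.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 5.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free correspondences the reconstruction is exact; with noisy pixels the same least-squares formulation returns the algebraic best fit, which is typically refined by reprojection-error minimization.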

Finding paths through the world's photos

Author  Noah Snavely, Rahul Garg, Steven M. Seitz, Richard Szeliski

Video ID : 121

When a scene is photographed many times by different people, the viewpoints often cluster along certain paths. These paths are largely specific to the scene being photographed and follow interesting patterns and viewpoints. We seek to discover a range of such paths and turn them into controls for image-based rendering. Our approach takes as input a large set of community or personal photos, reconstructs camera viewpoints, and automatically computes orbits, panoramas, canonical views, and optimal paths between views. The scene can then be interactively browsed in 3-D using these controls or with six DOF free-viewpoint control. As the user browses the scene, nearby views are continuously selected and transformed, using control-adaptive reprojection techniques.

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite a challenging task. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i. e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Human-robot teaming in a search-and-retrieve task

Author  Cynthia Breazeal

Video ID : 555

This video shows an example from a human-participant study examining the role of nonverbal social signals in human-robot teamwork for a complex search-and-retrieve task. In a controlled experiment, we examined the role of backchanneling and task complexity on team functioning and on perceptions of the robots’ engagement and competence. Seventy-three participants interacted with autonomous humanoid robots as part of a human-robot team: one participant, one confederate (a remote operator controlling an aerial robot), and three robots (two mobile humanoids and an aerial robot). We found that, when robots used backchanneling, team functioning improved and the robots were seen as more engaged.

Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need to have appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and, models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields including computer science, cognitive science, linguistics, and robotics.

Gaze and gesture cues for robots

Author  Bilge Mutlu

Video ID : 128

In human-robot communication, nonverbal cues like gaze and gesture can be a source of important information for starting and maintaining interaction. Gaze, for example, can tell a person about what the robot is attending to, its mental state, and its role in a conversation. Researchers are studying and developing models of nonverbal cues in human-robot interaction to enable more successful collaboration between robots and humans in a variety of domains, including education.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operational features. We consider both simulated and physical robots with special consideration to the transfer between the two worlds.

More complex robots evolve in more complex environments

Author  Josh Bongard

Video ID : 772

This set of videos demonstrates that more complex environments influence the evolution of robots with more complex body plans.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Exploitation of environmental constraints in human and robotic grasping

Author  Clemens Eppner, Raphael Deimel, Jose Alvarez-Ruiz, Marianne Maertens, Oliver Brock

Video ID : 657

We investigate the premise that robust grasping performance is enabled by exploiting constraints present in the environment. Given this premise, grasping becomes a process of successive exploitation of environmental constraints, until a successful grasp has been established. We present evidence for this view by showing robust robotic grasping based on constraint-exploiting grasp strategies, and we show that it is possible to design robotic hands with inherent capabilities for the exploitation of environmental constraints.

Atlas walking and manipulation

Author  DRC Team MIT

Video ID : 662

An autonomy demonstration with the MIT Atlas robot, consisting of the execution of a sequence of autonomous sub-tasks. Walking and manipulation plans are computed online with object-fitting input from the perception system.

Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art of snake-like (backbones comprised of many small links) and continuum (continuous backbone) robots. The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for kinematics for both types of robot is presented, followed by a summary of modeling of locomotion for snake-like and continuum mechanisms.

Shoe decoration using concentric tube robot

Author  Pierre Dupont

Video ID : 251

This 2012 video illustrates bimanual robotic shoe decoration using Swarovski crystals at a charity event for Boston Children's Hospital in Stuart Weitzman's New York City showroom.