
Chapter 61 — Robot Surveillance and Security

Wendell H. Chun and Nikolaos Papanikolopoulos

This chapter introduces the foundation for surveillance and security robots for multiple military and civilian applications. The key environmental domains are mobile robots for ground, aerial, surface water, and underwater applications. Surveillance literally means to watch from above, while surveillance robots are used to monitor the behavior, activities, and other changing information that are gathered for the general purpose of managing, directing, or protecting one’s assets or position. In a practical sense, the term surveillance is taken to mean the act of observation from a distance, and security robots are commonly used to protect and safeguard a location, valuable assets, or personnel against danger, damage, loss, and crime. Surveillance is a proactive operation, while security is a defensive one. The construction of each type of robot is similar in nature, with a mobility component, sensor payload, communication system, and an operator control station.

After introducing the major robot components, this chapter focuses on the various applications. More specifically, Sect. 61.3 discusses the enabling technologies of mobile robot navigation, the various payload sensors used for surveillance or security applications, target detection and tracking algorithms, and the operator’s robot control console for the human–machine interface (HMI). Section 61.4 presents selected research activities relevant to surveillance and security, including automatic data processing of the payload sensors, automatic monitoring of human activities, facial recognition, and collaborative automatic target recognition (ATR). Finally, Sect. 61.5 discusses future directions in robot surveillance and security, draws some conclusions, and is followed by the references.

Camera control from gaze

Author  Fabien Spindler

Video ID : 702

Visual-servoing techniques consist of using the data provided by one or several cameras in order to control the motion of a robotic security or surveillance system. A large variety of positioning or target tracking tasks can be implemented by controlling from one to all degrees of freedom of the system.
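As a concrete illustration of this idea, the sketch below shows the classical image-based visual-servoing law v = -λ L⁺ (s - s*), in which an interaction matrix relates the motion of image features to the camera velocity. It is a minimal example under simplifying assumptions (point features, normalized image coordinates, known depths); the function names are illustrative and not taken from any particular library.

```python
# Minimal image-based visual-servoing (IBVS) sketch -- illustrative only.
# Feature coordinates are normalized image points; depths Z are assumed known.
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction (image Jacobian) matrix of one point feature."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x * x), y],
        [0.0,      -1.0 / Z, y / Z, 1 + y * y, -x * y,      -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) that drives
    the current features toward the desired ones."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: four point features slightly offset from their goal positions.
current = [(0.12, 0.10), (-0.10, 0.11), (-0.11, -0.09), (0.10, -0.10)]
goal    = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
print(ibvs_velocity(current, goal, depths=[1.0] * 4))
```

In practice, the computed velocity screw would be sent to the robot's velocity controller at each frame, with the feature positions re-measured by the tracker.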

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite a challenging task. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

A robot that approaches pedestrians

Author  Takayuki Kanda

Video ID : 258

This video illustrates an example of a study in which a social robot's capability for nonverbal interaction was developed. In the study, an anticipation technique was developed in which the robot observes pedestrians' motions and anticipates each pedestrian's future motion using a large accumulated dataset of pedestrian trajectories. It then plans its own motion so as to approach a pedestrian from a frontal direction and initiate a conversation with the pedestrian.
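The sketch below gives a highly simplified, hypothetical version of this kind of trajectory-based anticipation: a partially observed track is matched against a database of previously recorded pedestrian trajectories, the best match's continuation serves as the prediction, and a goal point is placed ahead of the predicted heading so that the robot can approach from the front. The function names, the nearest-neighbor matching, and the standoff heuristic are illustrative assumptions, not the method used in the study.

```python
# Hypothetical sketch of trajectory-based anticipation.
import numpy as np

def predict_future(partial_track, database, horizon=20):
    """Return the predicted continuation of a partial pedestrian track
    by nearest-neighbor matching against recorded trajectories."""
    n = len(partial_track)
    best, best_cost = None, np.inf
    for traj in database:                       # each traj: (T, 2) array
        if len(traj) < n + horizon:
            continue
        cost = np.linalg.norm(traj[:n] - partial_track)   # simple L2 match
        if cost < best_cost:
            best, best_cost = traj, cost
    return None if best is None else best[n:n + horizon]

def frontal_approach_goal(prediction, standoff=1.5):
    """Pick a point ahead of the pedestrian's predicted heading,
    from which the robot can face and approach the person frontally."""
    heading = prediction[-1] - prediction[-2]
    heading /= np.linalg.norm(heading) + 1e-9
    return prediction[-1] + standoff * heading

# Tiny synthetic example: one straight-line trajectory in the database.
db = [np.column_stack([np.linspace(0, 10, 60), np.zeros(60)])]
observed = np.column_stack([np.linspace(0, 2, 12), np.zeros(12)])
future = predict_future(observed, db, horizon=10)
print(frontal_approach_goal(future))
```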

Chapter 0 — Preface

Bruno Siciliano, Oussama Khatib and Torsten Kröger

The preface of the Second Edition of the Springer Handbook of Robotics contains three videos about the creation of the book and using its multimedia app on mobile devices.

The handbook — The story continues

Author  Bruno Siciliano

Video ID : 845

This video illustrates the joyful mood of the big team of the Springer Handbook of Robotics at the completion of the Second Edition.

Chapter 79 — Robotics for Education

David P. Miller and Illah Nourbakhsh

Educational robotics programs have become popular in most developed countries and are becoming more and more prevalent in the developing world as well. Robotics is used to teach problem solving, programming, design, physics, math, and even music and art to students at all levels of their education. This chapter provides an overview of some of the major robotics programs along with the robot platforms and the programming environments commonly used. As with robot systems used in research, the hardware and software are constantly being developed and upgraded, so this chapter provides a snapshot of the technologies being used at this time. The chapter concludes with a review of the assessment strategies that can be used to determine whether a particular robotics program is benefiting students in the intended ways.

New Mexico Elementary Botball 2014 - Teagan's first-ever run.

Author  Jtlboys3

Video ID : 635

This video shows some elementary-school students running their line-following code (written in C) on a robot at the local Junior Botball Challenge event. Details from: https://www.juniorbotballchallenge.org .
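The students' program was written in C for their robot's firmware; the following is a hypothetical minimal sketch of the same bang-bang line-following idea, written here in Python for readability. The sensor and motor helpers are placeholders for whatever robot API is actually used.

```python
# Hypothetical minimal line follower in the spirit of the students' C program.
# read_reflectance() and set_motors(left, right) stand in for the robot's API.
def follow_line(read_reflectance, set_motors, threshold=2000, base=60, turn=30):
    """Bang-bang line following: steer one way while over the line,
    the other way while off it, so the robot tracks the line's edge."""
    while True:
        on_line = read_reflectance() > threshold   # dark tape reflects less light
        if on_line:
            set_motors(base, base - turn)          # curve away from the line
        else:
            set_motors(base - turn, base)          # curve back to find it
```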

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a map thereof, and at the same time it wishes to localize itself using this map. SLAM can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot’s location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and RGB-D (red-green-blue plus depth) sensors, and close with a discussion of open research problems in robotic mapping.
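To make the graph-optimization paradigm concrete, the toy example below builds a one-dimensional pose graph from a few odometry edges and one loop closure and solves it as a weighted linear least-squares problem, which is what graph-based SLAM reduces to in this linear case. The numbers are invented for illustration.

```python
# Toy 1-D pose-graph SLAM sketch (assumed example, not from the chapter).
import numpy as np

edges = [                     # (from, to, measured displacement, information)
    (0, 1, 1.0, 1.0),         # odometry
    (1, 2, 1.1, 1.0),
    (2, 3, 0.9, 1.0),
    (0, 3, 2.8, 2.0),         # loop closure (higher confidence)
]
n = 4
A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
A[0, 0], b[0] = 1.0, 0.0      # anchor x0 = 0 to remove the gauge freedom
for k, (i, j, z, w) in enumerate(edges, start=1):
    s = np.sqrt(w)            # weight each residual by its information
    A[k, i], A[k, j], b[k] = -s, s, s * z
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)                      # optimized pose estimates blending all edges
```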

SLAM++: Simultaneous localization and mapping at the level of objects

Author  Andrew Davison

Video ID : 454

This video describes SLAM++, an object-based, 3-D SLAM system. Reference: R.F. Salas-Moreno, R.A. Newcombe, H. Strasdat, P.H.J. Kelly, A.J. Davison: SLAM++: Simultaneous localisation and mapping at the level of objects, Proc. IEEE Int. Conf. Computer Vision Pattern Recognition, Portland (2013).

Chapter 25 — Underwater Robots

Hyun-Taek Choi and Junku Yuh

Covering about two-thirds of the Earth, the ocean is an enormous system that dominates processes on the Earth and has abundant living and nonliving resources, such as fish and subsea gas and oil. Therefore, it has a great effect on our lives on land, and the importance of the ocean for the future existence of all human beings cannot be overemphasized. However, we have not been able to explore the full depths of the ocean and do not fully understand its complex processes. Having said that, underwater robots including remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs) have received much attention since they can be an effective tool to explore the ocean and efficiently utilize the ocean resources. This chapter focuses on design issues of underwater robots including major subsystems such as mechanical systems, power sources, actuators and sensors, computers and communications, software architecture, and manipulators, while Chap. 51 covers modeling and control of underwater robots.

First recorded dive of the deep-sea ROV Hamire at a depth of 5,882 m

Author  Hyun-Taek Choi

Video ID : 796

This video shows the first deep-sea trial of the ROV Hamire developed by KRISO (Korea Research Institute of Ships and Ocean Engineering) at a depth of 5,882 m.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.
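One widely used building block behind such task-consistent motion is redundancy resolution with a null-space projection: joint velocities realize the primary task through the Jacobian pseudo-inverse, while a secondary objective acts only in the task's null space. The sketch below is a generic illustration of that formula with made-up numbers, not code from the chapter.

```python
# Generic redundancy-resolution sketch: qdot = J^+ xdot + (I - J^+ J) qdot_secondary
import numpy as np

def task_consistent_qdot(J, xdot_task, qdot_secondary):
    """Joint velocities that realize the task velocity xdot_task while
    projecting a secondary motion into the task's null space."""
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ xdot_task + null_proj @ qdot_secondary

# Example: a redundant system (2 task dimensions, 4 joints, e.g. base + arm).
J = np.array([[1.0, 0.5, 0.2, 0.0],
              [0.0, 0.8, 0.4, 0.3]])
xdot = np.array([0.1, 0.0])                   # desired end-effector velocity
posture = np.array([0.0, -0.2, 0.1, 0.0])     # secondary posture preference
qdot = task_consistent_qdot(J, xdot, posture)
print(np.round(J @ qdot, 6))                  # task velocity preserved: [0.1, 0.]
```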

Development of a versatile underwater robot - GTS ROV ALPHA

Author  Georgia Tech Savannah Robotics

Video ID : 790

This underwater vehicle won the award for design elegance at the 2009 MATE International ROV competition. In November 2009, it was deployed from the R/V Savannah for an initial sea trial. In the future, it is intended to serve as a platform for underwater manipulation, mapping, and control experiments.

Chapter 26 — Flying Robots

Stefan Leutenegger, Christoph Hürzeler, Amanda K. Stowers, Kostas Alexis, Markus W. Achtelik, David Lentink, Paul Y. Oh and Roland Siegwart

Unmanned aircraft systems (UASs) have drawn increasing attention recently, owing to advancements in related research, technology, and applications. While having been deployed successfully in military scenarios for decades, civil use cases have lately been tackled by the robotics research community.

This chapter overviews the core elements of this highly interdisciplinary field; the reader is guided through the design process of aerial robots for various applications, starting with a qualitative characterization of different types of UAS. Design and modeling are closely related, forming a typically iterative process of drafting and analyzing the related properties. Therefore, we overview aerodynamics and dynamics, as well as their application to fixed-wing, rotary-wing, and flapping-wing UAS, including related analytical tools and practical guidelines. Respecting use-case-specific requirements and core autonomous robot demands, we finally provide guidance on the related system integration challenges.
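As a small taste of the modeling-and-analysis loop described above, the sketch below simulates the vertical dynamics of a rotary-wing UAS with a simple PD hover controller. The mass, gains, and time step are illustrative assumptions, not values from the chapter.

```python
# Minimal rotary-wing altitude model with a PD hover controller (assumed values).
m, g, dt = 1.2, 9.81, 0.01            # mass [kg], gravity [m/s^2], step [s]
z, vz = 0.0, 0.0                      # altitude [m] and vertical speed [m/s]
z_ref, kp, kd = 1.0, 8.0, 4.0         # hover setpoint and PD gains

for _ in range(1000):                 # simulate 10 s
    thrust = m * (g + kp * (z_ref - z) - kd * vz)   # PD around the hover thrust
    thrust = max(0.0, thrust)                       # rotors cannot pull downward
    az = thrust / m - g               # Newton's second law along the z-axis
    vz += az * dt
    z += vz * dt
print(round(z, 3))                    # settles near z_ref
```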

Towards valve turning using a dual-arm aerial manipulator

Author  Christopher Korpela, Matko Orsag, Paul Oh, Stjepan Bogdan

Video ID : 719

A framework was proposed for valve turning using an aerial vehicle endowed with dual multi-degree of freedom manipulators. A tightly integrated control scheme between the aircraft and manipulators is mandated for tasks requiring aircraft-to-environment coupling. Feature detection is well-established for both ground and aerial vehicles and facilitates valve detection and arm tracking. Force feedback upon contact with the environment provides compliant motions in the presence of position error and coupling with the valve. The video presents results validating the valve turning framework using the proposed aircraft-arm system during flight tests.
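The compliant behavior described here can be illustrated generically with an admittance law, in which the measured contact force shifts the commanded position through a virtual spring-damper. The sketch below is a one-axis, made-up example of that idea, not the controller used in the proposed framework.

```python
# Generic one-axis admittance sketch: contact force yields compliant motion.
def admittance_step(x_cmd, x_des, f_meas, k=200.0, d=20.0, dt=0.01):
    """One step of a spring-damper admittance law: the commanded position
    yields under the measured contact force f_meas [N]."""
    v = (f_meas - k * (x_cmd - x_des)) / d    # velocity of the virtual spring
    return x_cmd + v * dt

x = 0.30                                           # commanded position [m]
for _ in range(100):
    x = admittance_step(x, x_des=0.30, f_meas=-5.0)  # constant 5 N pushing back
print(round(x, 4))                                 # settles at x_des + f/k = 0.275
```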

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, when, and whom to imitate. We emphasize the importance of carefully choosing the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.
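As a deliberately simple stand-in for the probabilistic skill models surveyed in the chapter (e.g., mixture-model encodings of demonstrated trajectories), the sketch below aligns several demonstrations in time and summarizes them by a per-step mean and standard deviation, from which a nominal reproduction and its variability can be read off. It is an illustrative toy, not any specific published method.

```python
# Toy encoding of several demonstrations as a time-indexed mean and variance.
import numpy as np

def encode(demos, n_steps=100):
    """Resample each demo (T_i, D) to n_steps and compute mean/std per step."""
    resampled = []
    for d in demos:
        t_old = np.linspace(0.0, 1.0, len(d))
        t_new = np.linspace(0.0, 1.0, n_steps)
        resampled.append(np.stack([np.interp(t_new, t_old, d[:, k])
                                   for k in range(d.shape[1])], axis=1))
    stack = np.stack(resampled)                 # (n_demos, n_steps, D)
    return stack.mean(axis=0), stack.std(axis=0)

# Three noisy demonstrations of a 1-D reaching motion.
demos = [np.linspace(0, 1, 80)[:, None] + 0.02 * np.random.randn(80, 1)
         for _ in range(3)]
mean, std = encode(demos)
print(mean[-1], std[-1])   # endpoint of the learned motion and its variability
```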

Demonstration by teleoperation of humanoid HRP-2

Author  Sylvain Calinon, Paul Evrard, Elena Gribovskaya, Aude Billard, Abderrahmane Kheddar

Video ID : 101

Demonstration by teleoperation of the HRP-2 humanoid robot. Reference: S. Calinon, P. Evrard, E. Gribovskaya, A.G. Billard, A. Kheddar: Learning collaborative manipulation tasks by demonstration using a haptic interface, Proc. Int. Conf. Adv. Robot. (ICAR) (2009), pp. 1–6; URL: http://programming-by-demonstration.org/showVideo.php?video=10

Chapter 14 — AI Reasoning Methods for Robotics

Michael Beetz, Raja Chatila, Joachim Hertzberg and Federico Pecora

Artificial intelligence (AI) reasoning technology involving, e.g., inference, planning, and learning, has a track record with a healthy number of successful applications. So can it be used as a toolbox of methods for autonomous mobile robots? Not necessarily, as reasoning on a mobile robot about its dynamic, partially known environment may differ substantially from that in knowledge-based pure software systems, where most of the named successes have been registered. Moreover, recent knowledge about the robot’s environment cannot be given a priori, but needs to be updated from sensor data, involving challenging problems of symbol grounding and knowledge base change. This chapter sketches the main robotics-relevant topics of symbol-based AI reasoning. Basic methods of knowledge representation and inference are described in general, covering both logic- and probability-based approaches. The chapter first gives a motivation by example, showing to what extent symbolic reasoning has the potential of helping robots perform in the first place. Then (Sect. 14.2), we sketch the landscape of representation languages available for the endeavor. After that (Sect. 14.3), we present approaches and results for several types of practical, robotics-related reasoning tasks, with an emphasis on temporal and spatial reasoning. Plan-based robot control is described in some more detail in Sect. 14.4. Section 14.5 concludes.
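To give a flavor of the logic-based side, the toy sketch below runs forward chaining over a small set of symbolic facts until a fixed point is reached. The facts and the single rule are invented examples, not content from the chapter.

```python
# Toy forward-chaining inference over symbolic facts (invented example).
def forward_chain(facts, rules):
    """Apply rules until no new facts are derived (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_fact in rule(facts) - facts:
                facts.add(new_fact)
                changed = True
    return facts

# Rule: anything that is on a reachable surface is graspable.
def graspable_rule(facts):
    surfaces = {f[1] for f in facts if f[0] == "reachable"}
    return {("graspable", f[1]) for f in facts
            if f[0] == "on" and f[2] in surfaces}

facts = {("on", "cup", "table"), ("reachable", "table")}
print(forward_chain(facts, [graspable_rule]))   # derives ("graspable", "cup")
```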

From knowledge grounding to dialogue processing

Author  Séverin Lemaignan, Rachid Alami

Video ID : 705

This 2012 video documents the entire process of perspective-aware knowledge acquisition, knowledge representation and storage, and dialogue understanding. It demonstrates several examples of the natural interaction of a human with a PR2 robot, including speech recognition and action execution.