
Chapter 61 — Robot Surveillance and Security

Wendell H. Chun and Nikolaos Papanikolopoulos

This chapter introduces the foundation for surveillance and security robots for multiple military and civilian applications. The key environmental domains are mobile robots for ground, aerial, surface-water, and underwater applications. Surveillance literally means to watch from above, while surveillance robots are used to monitor the behavior, activities, and other changing information that are gathered for the general purpose of managing, directing, or protecting one’s assets or position. In a practical sense, the term surveillance is taken to mean the act of observation from a distance, and security robots are commonly used to protect and safeguard a location, valuable assets, or personnel against danger, damage, loss, and crime. Surveillance is a proactive operation, while security is a defensive one. The construction of each type of robot is similar in nature, with a mobility component, sensor payload, communication system, and an operator control station.

After introducing the major robot components, this chapter focuses on the various applications. More specifically, Sect. 61.3 discusses the enabling technologies of mobile robot navigation, various payload sensors used for surveillance or security applications, target detection and tracking algorithms, and the operator’s robot control console for the human–machine interface (HMI). Section 61.4 presents selected research activities relevant to surveillance and security, including automatic data processing of the payload sensors, automatic monitoring of human activities, facial recognition, and collaborative automatic target recognition (ATR). Finally, Sect. 61.5 discusses future directions in robot surveillance and security and offers some conclusions, followed by references.

Tracking people for security

Author  Nikos Papanikolopoulos

Video ID : 683

Tracking people in crowded scenes is challenging because people occlude each other as they walk around. The latest revision of the University of Minnesota's person tracker uses adaptive appearance models that explicitly account for the probability that a person may be partially occluded. All potentially occluding targets are tracked jointly, and the most likely visibility order is estimated (so we know the probability that person A is occluding person B). Target-size adaptation is performed using calibration information about the camera, and the target positions are reported in real-world coordinates.
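The joint-tracking idea can be illustrated with a toy sketch (an assumption for illustration, not the Minnesota tracker's actual model): given each target's bounding box and an estimated distance from the camera, the fraction of the box covered by nearer targets gives a visibility weight that can throttle the appearance-model update.

```python
# Toy sketch of occlusion-aware appearance weighting (illustrative only,
# not the University of Minnesota tracker). Targets are axis-aligned boxes
# with an estimated camera distance; a nearer target occludes a farther one.

def overlap_area(a, b):
    """Intersection area of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    dx = min(ax + aw, bx + bw) - max(ax, bx)
    dy = min(ay + ah, by + bh) - max(ay, by)
    return max(dx, 0) * max(dy, 0)

def visibility(targets):
    """Fraction of each target's box not covered by any nearer target.

    targets: list of dicts with keys 'box' (x, y, w, h) and 'depth'
    (distance from the camera; smaller depth = nearer = occluder).
    Returns one visibility fraction in [0, 1] per target.
    """
    vis = []
    for t in targets:
        area = t['box'][2] * t['box'][3]
        covered = 0.0
        for o in targets:
            if o is not t and o['depth'] < t['depth']:
                covered += overlap_area(t['box'], o['box'])
        vis.append(max(0.0, 1.0 - covered / area))
    return vis

# Person A stands in front of person B and half-covers B's box:
people = [
    {'box': (0, 0, 10, 20), 'depth': 3.0},   # A, nearer
    {'box': (5, 0, 10, 20), 'depth': 6.0},   # B, farther, half occluded
]
v = visibility(people)
# A is fully visible (1.0); B is half visible (0.5), so B's appearance
# model would be updated with a reduced learning rate.
```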

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: when, what, whom, and how to imitate. We emphasize the importance of carefully choosing the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Full-body motion transfer under kinematic/dynamic disparity

Author  Sovannara Hak, Nicolas Mansard, Oscar Ramos, Layale Saab, Olivier Stasse

Video ID : 98

This video shows offline full-body motion transfer that takes into account the kinematic and dynamic disparity between the human and the humanoid. Reference: S. Hak, N. Mansard, O. Ramos, L. Saab, O. Stasse: Capture, recognition and imitation of anthropomorphic motion, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), St. Paul (2012), pp. 3539–3540; URL: http://techtalks.tv/talks/capture-recognition-and-imitation-of-anthropomorphic-motion/55648/ .

Chapter 18 — Parallel Mechanisms

Jean-Pierre Merlet, Clément Gosselin and Tian Huang

This chapter presents an introduction to the kinematics and dynamics of parallel mechanisms, also referred to as parallel robots. As opposed to classical serial manipulators, the kinematic architecture of parallel robots includes closed-loop kinematic chains. As a consequence, their analysis differs considerably from that of their serial counterparts. This chapter aims at presenting the fundamental formulations and techniques used in their analysis.

Tripteron robot

Author  Clément Gosselin

Video ID : 54

This video demonstrates a 3-DOF decoupled translational parallel robot (Tripteron). References: 1. X. Kong, C.M. Gosselin: Kinematics and singularity analysis of a novel type of 3-CRR 3-DOF translational parallel manipulator, Int. J. Robot. Res. 21(9), 791-798 (2002); 2. C. Gosselin: Compact dynamic models for the tripteron and quadrupteron parallel manipulators, J. Syst. Control Eng. 223(I1), 1-11 (2009)
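The property that makes the Tripteron notable is easy to state in code: each actuated prismatic joint drives exactly one Cartesian axis of the platform, so the Jacobian is the identity matrix and both kinematic problems are trivial. A minimal sketch (function names are illustrative):

```python
# Minimal sketch of the Tripteron's decoupled kinematics (illustrative).
# Each of the three actuated prismatic joints drives one Cartesian axis of
# the platform directly, so forward and inverse kinematics are both exact
# one-line mappings and the mechanism has no kinematic singularities.

def forward_kinematics(rho):
    """Platform position (x, y, z) from actuated joint coordinates."""
    rho1, rho2, rho3 = rho
    return (rho1, rho2, rho3)          # x = rho1, y = rho2, z = rho3

def inverse_kinematics(p):
    """Actuated joint coordinates from a desired platform position."""
    x, y, z = p
    return (x, y, z)                   # unique solution

# A general parallel robot would need a numerical solution of its
# closed-loop constraint equations here; the Tripteron does not.
assert forward_kinematics(inverse_kinematics((0.1, 0.2, 0.3))) == (0.1, 0.2, 0.3)
```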

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann


Learning compliant motion from human demonstration II

Author  Aude Billard

Video ID : 479

This video shows how the right amount of stiffness at the joint level can be taught by human demonstration to allow the robot to strike a match. The robot starts with high stiffness, which causes it to break the match. By tapping gently on the joint that requires a decrease in stiffness, the teacher can convey the need for the stiffness to decrease. The tapping is recorded using the force sensors available in each joint of the KUKA Lightweight Robot 4+ used for this purpose. Reference: K. Kronander, A. Billard: Learning compliant manipulation through kinesthetic and tactile human-robot interaction, IEEE Trans. Haptics 7(3), 367-380 (2013); doi: 10.1109/TOH.2013.54 .
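The teaching idea can be caricatured in a few lines (a hypothetical update rule for illustration, not the controller of the cited paper): a torque spike sensed at a joint is interpreted as a tap and lowers that joint's stiffness.

```python
# Hypothetical sketch of tactile stiffness teaching (illustrative; the
# gain, threshold, and update rule are assumptions, not the method of
# Kronander and Billard). A tap sensed on a joint's torque sensor lowers
# that joint's stiffness, clipped at a minimum value.

def update_stiffness(stiffness, tap_torque, gain=50.0, k_min=10.0,
                     tap_threshold=0.5):
    """Return new per-joint stiffness values after sensing joint torques.

    stiffness:  list of joint stiffness values (N*m/rad)
    tap_torque: list of external torque magnitudes sensed at each joint
    A spike above tap_threshold is read as a tap and decreases that
    joint's stiffness by gain * magnitude, clipped at k_min.
    """
    return [max(k_min, k - gain * t) if t > tap_threshold else k
            for k, t in zip(stiffness, tap_torque)]

# The robot starts stiff; the teacher taps the third joint:
k = [500.0] * 3
k = update_stiffness(k, [0.0, 0.0, 2.0])
# The tapped joint softens; the others are unchanged.
```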

Chapter 47 — Motion Planning and Obstacle Avoidance

Javier Minguez, Florent Lamiraux and Jean-Paul Laumond

This chapter describes motion planning and obstacle avoidance for mobile robots. We will see how the two areas do not share the same modeling background. From the very beginning of motion planning, research has been dominated by computer science. Researchers aim at devising well-grounded algorithms with well-understood completeness and exactness properties.

The challenge of this chapter is to present both nonholonomic motion planning (Sects. 47.1–47.6) and obstacle avoidance (Sects. 47.7–47.10) issues. Section 47.11 reviews recent successful approaches that tend to embrace the whole problem of motion planning and motion control. These approaches benefit from both nonholonomic motion planning and obstacle avoidance methods.

A ride in the Google self-driving car

Author  Google Self-Driving Car Project

Video ID : 710

The maturity of the tools developed for mobile-robot navigation and explained in this chapter has enabled Google to integrate them into an experimental vehicle. This video demonstrates Google's self-driving technology on the road.

Chapter 6 — Model Identification

John Hollerbach, Wisama Khalil and Maxime Gautier

This chapter discusses how to determine the kinematic parameters and the inertial parameters of robot manipulators. Both instances of model identification are cast into a common framework of least-squares parameter estimation, and are shown to have common numerical issues relating to the identifiability of parameters, adequacy of the measurement sets, and numerical robustness. These discussions are generic to any parameter estimation problem, and can be applied in other contexts.

For kinematic calibration, the main aim is to identify the geometric Denavit–Hartenberg (DH) parameters, although joint-based parameters relating to the sensing and transmission elements can also be identified. Endpoint sensing or endpoint constraints can provide equivalent calibration equations. By casting all calibration methods as closed-loop calibration, the calibration index categorizes methods in terms of how many equations per pose are generated.

Inertial parameters may be estimated through the execution of a trajectory while sensing one or more components of force/torque at a joint. Load estimation of a handheld object is simplest because of full mobility and full wrist force-torque sensing. For link inertial parameter estimation, restricted mobility of links nearer the base as well as sensing only the joint torque means that not all inertial parameters can be identified. Those that can be identified are those that affect joint torque, although they may appear in complicated linear combinations.
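As a concrete instance of this linear-in-parameters estimation, the following sketch identifies the inertia, first moment, and friction parameters of a single revolute joint from simulated, noiseless torque data (a minimal example of the least-squares framework, not the chapter's full multi-link procedure):

```python
# Minimal sketch of least-squares inertial-parameter identification for a
# single revolute joint: the model tau = W(q, qd, qdd) @ phi is linear in
# the unknown parameters, so phi is recovered by ordinary least squares.
import numpy as np

g = 9.81
t = np.linspace(0.0, 2.0, 200)
q = np.sin(2 * np.pi * t)                    # exciting trajectory
qd = 2 * np.pi * np.cos(2 * np.pi * t)
qdd = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * t)

# "True" parameters: inertia ZZ, first moment MX, viscous and Coulomb friction
phi_true = np.array([0.05, 0.20, 0.10, 0.30])

# Regressor matrix: each column multiplies one unknown parameter linearly
W = np.column_stack([qdd, g * np.cos(q), qd, np.sign(qd)])
tau = W @ phi_true                           # simulated joint-torque readings

phi_hat, *_ = np.linalg.lstsq(W, tau, rcond=None)
print(np.allclose(phi_hat, phi_true))        # True: parameters recovered
```

With noisy measurements the same code yields the maximum-likelihood estimate under Gaussian noise; the chapter's discussion of identifiability corresponds to the rank and conditioning of W.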

Dynamic identification of the KUKA LWR: trajectory with load

Author  Maxime Gautier

Video ID : 483

This video shows a trajectory with a known payload mass of 4.6 kg used to identify the dynamic parameters and torque-sensor gains of the KUKA LWR manipulator. Details and results are given in the papers: A. Jubien, M. Gautier, A. Janot: Dynamic identification of the Kuka LWR robot using motor torques and joint torque sensors data, Preprints 19th IFAC World Congress, Cape Town (2014), pp. 8391-8396; M. Gautier, A. Jubien: Force calibration of the Kuka LWR-like robots including embedded joint torque sensors and robot structure, IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), Chicago (2014), pp. 416-421.

Chapter 50 — Modeling and Control of Robots on Rough Terrain

Keiji Nagatani, Genya Ishigami and Yoshito Okada

In this chapter, we introduce modeling and control for wheeled mobile robots and tracked vehicles. The target environment is rough terrain, which includes both deformable soil and heaps of rubble. The topics are therefore roughly divided into two categories: wheeled robots on deformable soil and tracked vehicles on heaps of rubble.

After providing an overview of this area in Sect. 50.1, a modeling method for wheeled robots on deformable terrain is introduced in Sect. 50.2. It is based on terramechanics, the study of the mechanical properties of natural rough terrain and its response to off-road vehicles, specifically the interaction between wheel or track and soil. In Sect. 50.3, the control of wheeled robots is introduced. A wheeled robot often experiences wheel slippage as well as sideslip while traversing rough terrain, so the basic approach in this section is to compensate for the slip via steering and driving maneuvers. For navigation on heaps of rubble, tracked vehicles have a significant advantage. To improve traversability in such challenging environments, some tracked vehicles are equipped with subtracks, and a kinematic modeling method for tracked vehicles on rough terrain is introduced in Sect. 50.4. In addition, stability analysis of such vehicles is introduced in Sect. 50.5. Based on this kinematic model and stability analysis, sensor-based control of tracked vehicles on rough terrain is introduced in Sect. 50.6. Section 50.7 summarizes the chapter.
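As a small illustration of the slip-compensation idea (the gain and the compensation law below are assumptions for illustration, not the chapter's controller), longitudinal slip can be computed from the wheel rate and the actual forward speed, and the commanded wheel rate scaled up accordingly:

```python
# Illustrative sketch of longitudinal slip computation and a simple
# proportional compensation of the commanded wheel rate (the gain and
# clipping values are assumptions, not the chapter's method).

def slip_ratio(r, omega, v):
    """Longitudinal slip s = (r*omega - v) / (r*omega) during driving.

    r: wheel radius (m), omega: wheel angular rate (rad/s),
    v: actual forward speed of the wheel center (m/s).
    s = 0 means pure rolling; s -> 1 means the wheel spins in place.
    """
    if omega == 0:
        return 0.0
    return (r * omega - v) / (r * omega)

def compensate(omega_cmd, s, k=1.0, s_max=0.9):
    """Scale up the commanded wheel rate to offset the measured slip."""
    s = min(max(s, 0.0), s_max)
    return omega_cmd / (1.0 - k * s)

# Wheel of radius 0.1 m commanded at 10 rad/s but advancing at only 0.8 m/s:
s = slip_ratio(0.1, 10.0, 0.8)    # 20% slip
omega_new = compensate(10.0, s)   # command raised to 12.5 rad/s
```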

Interactive, human-robot supervision test with the long-range science rover for Mars exploration

Author  Samad Hayati, Richard Volpe, Paul Backes, J. (Bob) Balaram, Richard Welch, Robert Ivlev, Gregory Tharp, Steve Peters, Tim Ohm, Richard Petras

Video ID : 187

This video records a demonstration of a long-range rover mission on the surface of Mars. The Mars-rover test bed Rocky 7 performs several demonstrations, including 3-D terrain mapping using the panoramic camera, telescience over the internet, an autonomous mobility test, and soil sampling. This demonstration was among the preliminary tests for the Mars Pathfinder mission executed in 1997.

Chapter 47 — Motion Planning and Obstacle Avoidance

Javier Minguez, Florent Lamiraux and Jean-Paul Laumond


Autonomous navigation of a mobile vehicle

Author  Visp team

Video ID : 713

This video shows the vision-based autonomous navigation of a Cycab mobile vehicle able to avoid obstacles detected by its laser range finder. The reference trajectory is provided as a sequence of previously-acquired key images. Obstacle avoidance is based on a predefined set of circular avoidance trajectories. The best trajectory is selected when an obstacle is detected by the laser scanner.
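The selection step can be sketched as follows (a toy reconstruction under assumed geometry and a made-up clearance threshold, not the Cycab's actual implementation): simulate each predefined arc, discard those passing too close to a detected obstacle, and keep the admissible arc whose curvature best matches the reference.

```python
# Toy sketch of picking an avoidance arc from a predefined set
# (illustrative; arc length, clearance, and scoring are assumptions).
import math

def simulate_arc(kappa, length=2.0, steps=20):
    """Points (x, y) along an arc of curvature kappa starting at the
    origin, heading along +x. kappa = 0 is a straight segment."""
    pts = []
    for i in range(1, steps + 1):
        s = length * i / steps
        if abs(kappa) < 1e-9:
            pts.append((s, 0.0))
        else:
            pts.append((math.sin(kappa * s) / kappa,
                        (1.0 - math.cos(kappa * s)) / kappa))
    return pts

def select_arc(candidates, obstacles, kappa_ref, clearance=0.4):
    """Return the admissible curvature closest to kappa_ref, or None."""
    def clear(kappa):
        return all(math.hypot(px - ox, py - oy) > clearance
                   for px, py in simulate_arc(kappa)
                   for ox, oy in obstacles)
    admissible = [k for k in candidates if clear(k)]
    if not admissible:
        return None
    return min(admissible, key=lambda k: abs(k - kappa_ref))

# Obstacle straight ahead: the straight arc (kappa = 0) and the gentle
# turns are rejected; one of the sharpest arcs (+-0.5) is returned.
arcs = [-0.5, -0.25, 0.0, 0.25, 0.5]
best = select_arc(arcs, obstacles=[(1.5, 0.0)], kappa_ref=0.0)
```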

Chapter 30 — Sonar Sensing

Lindsay Kleeman and Roman Kuc

Sonar or ultrasonic sensing uses the propagation of acoustic energy at higher frequencies than normal hearing to extract information from the environment. This chapter presents the fundamentals and physics of sonar sensing for object localization, landmark measurement, and classification in robotics applications. The sources of sonar artifacts are explained, along with how they can be dealt with. Different ultrasonic transducer technologies are outlined, with their main characteristics highlighted.

Sonar systems are described that range in sophistication from low-cost threshold-based ranging modules to multitransducer, multipulse configurations with associated signal-processing requirements capable of accurate range and bearing measurement, interference rejection, motion compensation, and target classification. Continuous-transmission frequency-modulated (CTFM) systems are introduced and their ability to improve target sensitivity in the presence of noise is discussed. Various sonar-ring designs that provide rapid coverage of the surrounding environment are described in conjunction with mapping results. Finally, the chapter ends with a discussion of biomimetic sonar, which draws inspiration from animals such as bats and dolphins.
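The threshold-based ranging module at the low-cost end of this spectrum reduces to a time-of-flight calculation; a minimal sketch (the linear speed-of-sound approximation is standard, the function names are ours):

```python
# Minimal sketch of threshold-based pulse-echo ranging: the target range
# is half the round-trip distance travelled by the pulse, using the
# temperature-dependent speed of sound in air.

def speed_of_sound(temp_c):
    """Speed of sound in air (m/s) at temperature temp_c (Celsius),
    using the standard linear approximation."""
    return 331.3 + 0.606 * temp_c

def echo_range(time_of_flight, temp_c=20.0):
    """Target range (m) from the round-trip echo time (s)."""
    return speed_of_sound(temp_c) * time_of_flight / 2.0

# An echo whose leading edge crosses the detection threshold 5.83 ms
# after transmission at 20 C corresponds to a target about 1 m away:
r = echo_range(5.83e-3)
```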

Antwerp biomimetic sonar system tracking two balls

Author  Herbert Peremans

Video ID : 317

The Antwerp biomimetic bat-head sonar system consists of a single emitter and two receivers. The receivers are constructed by inserting a small omnidirectional microphone in the ear canal of a plastic replica of the outer ear of the bat Phyllostomus discolor. Using head-related transfer function (HRTF) cues, the system is able to localize multiple reflectors in three dimensions based on a single emission. This video demonstrates the tracking of two balls serving as targets.

Chapter 66 — Robotics Competitions and Challenges

Daniele Nardi, Jonathan Roberts, Manuela Veloso and Luke Fletcher

This chapter explores the use of competitions to accelerate robotics research and promote science, technology, engineering, and mathematics (STEM) education. We argue that the field of robotics is particularly well suited to innovation through competitions. Two broad categories of robot competition are used to frame the discussion: human-inspired competitions and task-based challenges. Human-inspired robot competitions, of which the majority are sports contests, quickly move through platform development to focus on problem solving and testing through game play. Task-based challenges attempt to attract participants by presenting a high aim for a robotic system. The contest can then be tuned, as required, to maintain motivation and ensure that progress is made. Three case studies of robot competitions are presented, namely robot soccer, the UAV challenge, and the DARPA (Defense Advanced Research Projects Agency) grand challenges. The case studies serve to explore, from the point of view of organizers and participants, the benefits and limitations of competitions and what makes a good robot competition.

This chapter ends with some concluding remarks on the natural convergence of human-inspired competitions and task-based challenges in the promotion of STEM education, research, and vocations.

Multirobot teamwork in the CMDragons RoboCup SSL team

Author  Manuela Veloso

Video ID : 387

This video shows coordination and passing strategy in RoboCup small-size league (SSL) play by the CMDragons team of Veloso and her students at Carnegie Mellon University. The RoboCup SSL uses an overhead camera connected to an offboard computer which plans and commands the robots; the perception, planning, and actuation cycle is fully autonomous.