
Chapter 50 — Modeling and Control of Robots on Rough Terrain

Keiji Nagatani, Genya Ishigami and Yoshito Okada

In this chapter, we introduce modeling and control for wheeled mobile robots and tracked vehicles. The target environment is rough terrain, which includes both deformable soil and heaps of rubble. The topics are therefore roughly divided into two categories: wheeled robots on deformable soil and tracked vehicles on heaps of rubble.

After providing an overview of this area in Sect. 50.1, a modeling method for wheeled robots on deformable terrain is introduced in Sect. 50.2. It is based on terramechanics, the study of the mechanical properties of natural rough terrain and its response to off-road vehicles, specifically the interaction between wheels or tracks and soil. In Sect. 50.3, the control of wheeled robots is introduced. A wheeled robot often experiences wheel slippage as well as sideslip while traversing rough terrain, so the basic approach in this section is to compensate for the slip via steering and driving maneuvers. For navigation on heaps of rubble, tracked vehicles have a significant advantage. To improve traversability in such challenging environments, some tracked vehicles are equipped with subtracks, and a kinematic modeling method for tracked vehicles on rough terrain is introduced in Sect. 50.4. In addition, a stability analysis of such vehicles is introduced in Sect. 50.5. Based on the kinematic model and stability analysis, sensor-based control of tracked vehicles on rough terrain is introduced in Sect. 50.6. Section 50.7 summarizes the chapter.
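As background for the slip-compensation approach, slip is usually described in terramechanics by two quantities, the longitudinal slip ratio and the slip angle. The notation below (wheel radius r, wheel angular velocity ω, longitudinal and lateral velocities v_x and v_y of the wheel center) follows common terramechanics usage and is not necessarily the chapter's own symbols:

\[
  s = \frac{r\omega - v_x}{r\omega} \quad (\text{driving},\; r\omega \ge v_x), \qquad
  \beta = \tan^{-1}\!\left(\frac{v_y}{v_x}\right)
\]

Here s is the slip ratio (0 for pure rolling, 1 for a wheel spinning in place) and β is the slip angle characterizing sideslip; slip compensation adjusts the driving and steering commands based on estimates of these two quantities.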

Qualification testing of a tracked vehicle in the NIST Disaster City

Author  SuperDroid Robots, Inc

Video ID : 189

NIST (National Institute of Standards and Technology) developed a standard test field for the evaluation of all-terrain mobile robots, called Disaster City, in Texas, U.S.A. The field includes steps, stairs, steep slopes, and random step fields (unfixed wooden blocks), which simulate a disaster environment. This video clip shows an evaluation test in Disaster City of the tracked vehicle called LT-F, produced by SuperDroid Robots in 2011. To qualify, the vehicle had to complete each test remotely for 10 successful iterations.

Chapter 79 — Robotics for Education

David P. Miller and Illah Nourbakhsh

Educational robotics programs have become popular in most developed countries and are becoming more and more prevalent in the developing world as well. Robotics is used to teach problem solving, programming, design, physics, math and even music and art to students at all levels of their education. This chapter provides an overview of some of the major robotics programs along with the robot platforms and the programming environments commonly used. As with robot systems used in research, the hardware and software are constantly being developed and upgraded, so this chapter provides a snapshot of the technologies in use at this time. The chapter concludes with a review of the assessment strategies that can be used to determine whether a particular robotics program is benefitting students in the intended ways.

New Mexico Elementary Botball 2014 - Teagan's first-ever run.

Author  Jtlboys3

Video ID : 635

This video shows some elementary-school students running their line-following code (written in C) on a robot at the local Junior Botball Challenge event. Details from: https://www.juniorbotballchallenge.org .

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, when and whom to imitate. We emphasize the importance of carefully choosing the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Demonstration by visual tracking of gestures

Author  Ales Ude

Video ID : 99

Demonstration by visual tracking of gestures. Reference: A. Ude: Trajectory generation from noisy positions of object features for teaching robot paths, Robot. Auton. Syst. 11(2), 113–127 (1993); URL: http://www.cns.atr.jp/~aude/movies/ .

Chapter 25 — Underwater Robots

Hyun-Taek Choi and Junku Yuh

Covering about two-thirds of the Earth, the ocean is an enormous system that dominates processes on the planet and has abundant living and nonliving resources, such as fish and subsea gas and oil. It therefore has a great effect on our lives on land, and the importance of the ocean for the future existence of all human beings cannot be overemphasized. However, we have not been able to explore the full depths of the ocean and do not fully understand its complex processes. Against this background, underwater robots, including remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs), have received much attention, since they can be effective tools for exploring the ocean and efficiently utilizing its resources. This chapter focuses on design issues of underwater robots, including major subsystems such as mechanical systems, power sources, actuators and sensors, computers and communications, software architecture, and manipulators, while Chap. 51 covers modeling and control of underwater robots.

Preliminary experimental result of an AUV yShark2

Author  Hyun-Taek Choi

Video ID : 799

This video shows preliminary experimental results of an underwater robot named yShark2, developed by KRISO (Korea Research Institute of Ships and Ocean Engineering). yShark2 is a test platform designed especially for testing the intelligent algorithms we are working on. For this purpose, it carries an AHRS, an IMU, a DVL, two cameras, an LED light, a depth sensor, and an eight-channel ranging sonar as basic navigation sensors, and a DIDSON imaging sonar can be installed for obtaining images as shown in Fig. 25.2. More importantly, its system software architecture is implemented using the structure explained in Fig. 25.7. The motion in this video is controlled by autonomous algorithms.

Chapter 58 — Robotics in Hazardous Applications

James Trevelyan, William R. Hamel and Sung-Chul Kang

Robotics researchers have worked hard to realize a long-awaited vision: machines that can eliminate the need for people to work in hazardous environments. Chapter 60 is framed by the vision of disaster response: search and rescue robots carrying people from burning buildings or tunneling through collapsed rock falls to reach trapped miners. In this chapter we review tangible progress towards robots that perform routine work in places too dangerous for humans. Researchers still have many challenges ahead of them but there has been remarkable progress in some areas. Hazardous environments present special challenges for the accomplishment of desired tasks depending on the nature and magnitude of the hazards. Hazards may be present in the form of radiation, toxic contamination, falling objects or potential explosions. Technology that specialized engineering companies can develop and sell without active help from researchers marks the frontier of commercial feasibility. Just inside this border lie teleoperated robots for explosive ordnance disposal (EOD) and for underwater engineering work. Even with the typical tenfold disadvantage in manipulation performance imposed by the limits of today’s telepresence and teleoperation technology, in terms of human dexterity and speed, robots often can offer a more cost-effective solution. However, most routine applications in hazardous environments still lie far beyond the feasibility frontier. Fire fighting, remediating nuclear contamination, reactor decommissioning, tunneling, underwater engineering, underground mining and clearance of landmines and unexploded ordnance still present many unsolved problems.

Bozena 5 remotely-operated robot vehicle

Author  James P. Trevelyan

Video ID : 574

This is an example of several videos available on YouTube showing this Slovak-designed and -constructed machine. It shows the vehicle being used in different test areas with brief glimpses of other mine-resistant vehicles. BOZENA 5 was designed to support mine-clearance teams operating in Croatia, Serbia and Bosnia Herzegovina, removing mines left over from the civil war in the 1990s. In the areas affected by mines, one of the biggest challenges is the rapid growth of vegetation during the summer months. Bare ground can be submerged in vegetation over 1 m high after just two or three weeks. Military defensive positions were often set up on uneven ground with steep slopes which were then mined to deter attacks from other parties in the conflict. Mines were also removed from these defensive minefields and re-laid along routes used for smuggling goods and people. The smugglers would be able to charge higher prices because only they knew how to safely move along the routes. The smuggling routes (and their parent organizations) persisted long after the end of the formal conflict, complicating mine-clearance operations. That is why small, remote control vehicles like this proved to be so effective. They were highly manoeuvrable, easily transported, adaptable with different tools and equipment, and could be safely operated. The machine comes with an armored operator cabin and the whole system can be packed and deployed from a 40-foot shipping container weighing 16 tons. The greatest threat to the de-miners was from bounding fragmentation mines which typically had a lethal radius of several hundred meters. These vehicles provided a means to operate safely in areas affected by these mines. One of the major disadvantages of these machines is the destruction of surface vegetation that can lead to rapid erosion, if there is heavy rain in the weeks following mine clearance operations. Sudden heavy downpours are common in summer months. Therefore, they had to be used with considerable discretion and local knowledge.

Chapter 17 — Limbed Systems

Shuuji Kajita and Christian Ott

A limbed system is a mobile robot with a body, legs and arms. First, its general design process is discussed in Sect. 17.1. Then we consider issues of conceptual design and observe the designs of various existing robots in Sect. 17.2. As a detailed example, the design of the humanoid robot HRP-4C is shown in Sect. 17.3. To design a limbed system with good performance, it is important to take into account actuation and control issues such as gravity compensation, limit cycle dynamics, template models, and backdrivable actuation. These are discussed in Sect. 17.4.

In Sect. 17.5, we give an overview of the diversity of limbed systems, covering odd-legged walkers, leg–wheel hybrid robots, leg–arm hybrid robots, tethered walking robots, and wall-climbing robots. To compare limbed systems of different configurations, we can use performance indices such as the gait sensitivity norm, the Froude number, and the specific resistance, which are introduced in Sect. 17.6.
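Two of these indices have simple closed forms, reproduced here in common notation (walking speed v, gravitational acceleration g, characteristic leg length l, robot mass m, energy E consumed while traveling a distance d); the symbols are generic and not necessarily the chapter's own:

\[
  \mathit{Fr} = \frac{v^{2}}{g\,l}, \qquad
  \varepsilon = \frac{E}{m\,g\,d}
\]

Here Fr is the Froude number (sometimes defined instead as the square root of this ratio) and ε is the specific resistance, also called the cost of transport; a lower ε indicates more energy-efficient locomotion.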

Waalbot: Agile climbing with synthetic fibrillar dry adhesives

Author  Mike Murphy

Video ID : 541

Waalbot, a wall-climbing robot using synthetic fibrillar dry adhesives, developed by Dr. Murphy and Dr. Sitti.

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. Creating social robots that are competent and capable partners for people is a challenging long-term goal. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well, in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach in which the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

A learning companion robot to foster pre-K vocabulary learning

Author  Cynthia Breazeal

Video ID : 564

This video summarizes a study in which a learning-companion robot engages children in a storytelling game over repeated encounters during a two-month period. The learning objective is for pre-K children to learn targeted vocabulary words which the robot introduces in its stories. In each session, the robot first tells a story and then invites the child to tell a story. A storyscape app on a tablet computer facilitates the narration of the story. While the child tells his or her story, the robot behaves as an engaged listener. Two conditions were investigated, in which the robot either matched the complexity of its stories to the child's language level or did not. Results show that children successfully learn target vocabulary with the robot in general, and that more words are learned when the complexity of the robot's stories matches the language ability of the child.

Region-pointing gesture

Author  Takayuki Kanda

Video ID : 811

This short video explains what "region pointing" is. It is known that there are a variety of pointing gestures; in region pointing, unlike in other pointing gestures where the pointing arm is held still, the arm moves as if drawing a circle, which evokes the region being referred to.

Chapter 61 — Robot Surveillance and Security

Wendell H. Chun and Nikolaos Papanikolopoulos

This chapter introduces the foundation for surveillance and security robots for multiple military and civilian applications. The key environmental domains are mobile robots for ground, aerial, surface water, and underwater applications. Surveillance literally means to watch from above, while surveillance robots are used to monitor the behavior, activities, and other changing information gathered for the general purpose of managing, directing, or protecting one's assets or position. In a practical sense, the term surveillance is taken to mean the act of observation from a distance, and security robots are commonly used to protect and safeguard a location, valuable assets, or personnel against danger, damage, loss, and crime. Surveillance is a proactive operation, while security is a defensive operation. The construction of each type of robot is similar in nature, with a mobility component, a sensor payload, a communication system, and an operator control station.

After introducing the major robot components, this chapter focuses on the various applications. More specifically, Sect. 61.3 discusses the enabling technologies of mobile robot navigation, various payload sensors used for surveillance or security applications, target detection and tracking algorithms, and the operator's robot control console for the human–machine interface (HMI). Section 61.4 presents selected research activities relevant to surveillance and security, including automatic data processing of the payload sensors, automatic monitoring of human activities, facial recognition, and collaborative automatic target recognition (ATR). Finally, Sect. 61.5 discusses future directions in robot surveillance and security and gives some conclusions, followed by references.

Detection of abandoned objects

Author  Nikos Papanikolopoulos

Video ID : 682

Automatic detection of abandoned objects is of great importance in security and surveillance applications. This project at the Univ. of Minnesota attempts to detect such objects based on several criteria. Our approach is based on a combination of short-term and long-term blob logic, and the analysis of connected components. It is robust to many disturbances that may occur in the scene, such as the presence of moving objects and occlusions.
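The combination of short-term and long-term blob logic mentioned above can be illustrated with a minimal dual-background-model sketch in Python using OpenCV. This is only an illustrative sketch, not the University of Minnesota implementation: the choice of MOG2 background subtractors, their history lengths, the area threshold, and the way the two masks are combined are all assumptions.

```python
import cv2
import numpy as np

# Two background models with different adaptation rates (parameters are assumptions).
bg_long = cv2.createBackgroundSubtractorMOG2(history=3000, detectShadows=False)
bg_short = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)

cap = cv2.VideoCapture("scene.mp4")   # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_long = bg_long.apply(frame)    # foreground w.r.t. the slowly adapting model
    fg_short = bg_short.apply(frame)  # foreground w.r.t. the quickly adapting model

    # A static (possibly abandoned) object is still foreground for the long-term
    # model but has already been absorbed into the short-term background.
    static = cv2.bitwise_and(fg_long, cv2.bitwise_not(fg_short))
    static = cv2.morphologyEx(static, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Connected-component analysis of the candidate static blobs.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(static)
    for i in range(1, n):             # label 0 is the background
        x, y, w, h, area = stats[i]
        if area > 500:                # ignore small blobs (threshold is an assumption)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

    cv2.imshow("abandoned-object candidates", frame)
    if cv2.waitKey(1) == 27:          # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

In practice, a temporal persistence check (requiring a blob to remain static for a minimum number of frames) would be added before raising an alarm, which roughly corresponds to the robustness to moving objects and occlusions described above.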

Chapter 47 — Motion Planning and Obstacle Avoidance

Javier Minguez, Florent Lamiraux and Jean-Paul Laumond

This chapter describes motion planning and obstacle avoidance for mobile robots. We will see how the two areas do not share the same modeling background. From the very beginning of motion planning, research has been dominated by computer science. Researchers aim at devising well-grounded algorithms with well-understood completeness and exactness properties.

The challenge of this chapter is to present both nonholonomic motion planning (Sects. 47.1–47.6) and obstacle avoidance (Sects. 47.7–47.10) issues. Section 47.11 reviews recent successful approaches that tend to embrace the whole problem of motion planning and motion control. These approaches benefit from both nonholonomic motion planning and obstacle avoidance methods.

Autonomous robotic smart-wheelchair navigation in an urban environment

Author  VADERlab

Video ID : 707

This video demonstrates the reliable navigation of a smart wheelchair system (SWS) in an urban environment. Urban environments present unique challenges for service robots: they require localization accuracy at the sidewalk level, but compromise estimated GPS positions through significant multipath effects. However, they are also rich in landmarks that can be leveraged by feature-based localization approaches. To this end, the SWS employed a map-based approach. A map of South Bethlehem was acquired using a survey vehicle, synthesized a priori, and made accessible to the SWS client. The map embedded not only the locations of landmarks, but also semantic data delineating seven different landmark classes to facilitate robust data association. Landmark segmentation and tracking by the SWS was then accomplished using both 2-D and 3-D LIDAR systems. The resulting localization algorithm has demonstrated decimeter-level positioning accuracy in a global coordinate frame. The localization package was integrated into a ROS framework with a sample-based planner and a control loop running at 5 Hz. For validation, the SWS repeatedly navigated autonomously between Lehigh University's Packard Laboratory and the University bookstore, a distance of approximately 1.0 km round trip.
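As a rough illustration of the kind of 5 Hz control loop described above, the sketch below shows a minimal waypoint-following node for ROS 1 (rospy). The topic names (/localization/pose, /cmd_vel), the waypoint tolerance, the constant forward speed, and the proportional heading controller are assumptions made purely for illustration; they do not reflect the VADERlab SWS implementation, which relies on its own feature-based localization and a sample-based planner.

```python
#!/usr/bin/env python
# Minimal sketch of a 5 Hz waypoint-following loop in ROS 1 (rospy).
# Topic names, gains, and the proportional heading controller are
# illustrative assumptions, not the SWS implementation from the video.
import math
import rospy
from geometry_msgs.msg import Twist, PoseStamped
from tf.transformations import euler_from_quaternion


class WaypointFollower(object):
    def __init__(self, waypoints):
        self.waypoints = list(waypoints)   # (x, y) goals in the map frame
        self.pose = None
        rospy.Subscriber("/localization/pose", PoseStamped, self.pose_cb)
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

    def pose_cb(self, msg):
        self.pose = msg

    def run(self):
        rate = rospy.Rate(5)               # control loop at 5 Hz, as in the description
        while not rospy.is_shutdown() and self.waypoints:
            if self.pose is not None:
                gx, gy = self.waypoints[0]
                p = self.pose.pose.position
                q = self.pose.pose.orientation
                yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])[2]
                if math.hypot(gx - p.x, gy - p.y) < 0.5:   # waypoint reached (tolerance assumed)
                    self.waypoints.pop(0)
                else:
                    err = math.atan2(gy - p.y, gx - p.x) - yaw
                    err = math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]
                    cmd = Twist()
                    cmd.linear.x = 0.5         # constant forward speed (assumed)
                    cmd.angular.z = 0.8 * err  # proportional heading correction (assumed gain)
                    self.cmd_pub.publish(cmd)
            rate.sleep()


if __name__ == "__main__":
    rospy.init_node("sws_waypoint_follower")
    WaypointFollower([(10.0, 0.0), (20.0, 5.0)]).run()
```

Running the loop with rospy.Rate(5) ties the command rate to the 5 Hz figure mentioned in the description; a real system would replace the straight-line heading controller with the sample-based planner's collision-checked trajectories.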