
Chapter 47 — Motion Planning and Obstacle Avoidance

Javier Minguez, Florent Lamiraux and Jean-Paul Laumond

This chapter describes motion planning and obstacle avoidance for mobile robots. We will see that the two areas do not share the same modeling background. From the very beginning of motion planning, research has been dominated by computer science: researchers aim to devise well-grounded algorithms with well-understood completeness and exactness properties.

The challenge of this chapter is to present both nonholonomic motion planning (Sects. 47.1–47.6) and obstacle avoidance (Sects. 47.7–47.10) issues. Section 47.11 reviews recent successful approaches that tend to embrace the whole problem of motion planning and motion control. These approaches benefit from both nonholonomic motion planning and obstacle avoidance methods.

Sensor-based trajectory deformation and docking for nonholonomic mobile robots

Author  Florent Lamiraux

Video ID : 80

This video demonstrates motion planning and reactive obstacle avoidance for nonholonomic robots. A mobile robot with a trailer is asked to park into a U-shaped obstacle. Motion planning is performed by a visibility-based PRM algorithm using a flatness-based steering method built on convex combinations of canonical curves. The planned trajectory is then followed by the robot while detecting obstacles using a laser scanner. The current trajectory is locally deformed in order to avoid obstacles and to end at the detected U-shaped obstacle.
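The visibility-based PRM mentioned above keeps a sampled configuration only if it is a *guard* (seen by no existing guard) or a *connector* (seen by guards in at least two distinct roadmap components). A minimal sketch of that acceptance rule, assuming abstract `sample_free` and `sees` primitives rather than the flatness-based steering method used in the video:

```python
import random  # used only by the usage example below

def visibility_prm(sample_free, sees, max_fail=200):
    """Sketch of visibility-PRM roadmap construction.

    sample_free() -> a random collision-free configuration;
    sees(a, b)    -> True if the local steering method connects a and b.
    Stops after max_fail consecutive samples add no node.
    """
    nodes, edges, guards = [], [], []
    parent = []                        # union-find over node indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    fails = 0
    while fails < max_fail:
        q = sample_free()
        visible = [g for g in guards if sees(q, nodes[g])]
        comps = {find(g) for g in visible}
        if not visible or len(comps) > 1:
            i = len(nodes)
            nodes.append(q)
            parent.append(i)
            if not visible:
                guards.append(i)       # new guard covers unseen space
            for g in visible:          # connector merges components
                edges.append((i, g))
                parent[find(g)] = find(i)
            fails = 0
        else:
            fails += 1                 # sample adds no information
    return nodes, edges
```

For example, on the interval [0, 10] with a visibility radius of 3, `visibility_prm(lambda: random.uniform(0.0, 10.0), lambda a, b: abs(a - b) < 3.0)` builds a small roadmap whose every edge respects the visibility predicate.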

Autonomous robotic smart-wheelchair navigation in an urban environment

Author  VADERlab

Video ID : 707

This video demonstrates the reliable navigation of a smart wheelchair system (SWS) in an urban environment. Urban environments present unique challenges for service robots. They require localization accuracy at the sidewalk level, but compromise estimated GPS positions through significant multipath effects. However, they are also rich in landmarks that can be leveraged by feature-based localization approaches. To this end, the SWS employed a map-based approach. A map of South Bethlehem was acquired using a survey vehicle, synthesized a priori, and made accessible to the SWS client. The map embedded not only the locations of landmarks, but also semantic data delineating seven different landmark classes to facilitate robust data association. Landmark segmentation and tracking by the SWS was then accomplished using both 2-D and 3-D LIDAR systems. The resulting localization algorithm has demonstrated decimeter-level positioning accuracy in a global coordinate frame. The localization package was integrated into a ROS framework with a sample-based planner and control loop running at 5 Hz. For validation, the SWS repeatedly navigated autonomously between Lehigh University's Packard Laboratory and the University bookstore, a distance of approximately 1.0 km roundtrip.
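Class-annotated landmarks make data association far less ambiguous: an observation is only matched against map landmarks of the same semantic class. The following is a hedged sketch of class-gated nearest-neighbor association (the interfaces and gating distance are illustrative assumptions, not taken from the SWS software):

```python
import math

def associate(observation, landmark_map, gate=2.0):
    """Match an observed landmark to the map, gated by semantic class.

    observation:  (x, y, cls) in the global frame;
    landmark_map: list of (x, y, cls) map landmarks.
    Returns the index of the closest same-class map landmark within
    the gating distance, or None if no candidate qualifies.
    """
    ox, oy, ocls = observation
    best, best_d = None, gate
    for i, (mx, my, mcls) in enumerate(landmark_map):
        if mcls != ocls:
            continue                       # semantic class must match
        d = math.hypot(ox - mx, oy - my)
        if d < best_d:                     # nearest neighbor inside gate
            best, best_d = i, d
    return best
```

With `landmark_map = [(0, 0, "tree"), (1, 0, "pole"), (5, 5, "tree")]`, the observation `(0.5, 0.1, "tree")` associates to index 0, while `(0.5, 0.1, "sign")` returns `None` because no landmark of that class is in range.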

Sena wheelchair: Autonomous navigation at University of Malaga (2007)

Author  Jose Luis Blanco

Video ID : 708

This experiment demonstrates how a reactive navigation method successfully enables our robotic wheelchair SENA to navigate reliably in the entrance of our building at the University of Malaga (Spain). The robot navigates autonomously amidst dozens of students while avoiding collisions. The method is based on a space transformation, which simplifies finding collision-free movements in real-time despite the arbitrarily complex shape of the robot and its kinematic restrictions.

Robotic wheelchair: Autonomous navigation with Google Glass

Author  Personal Robotics Group - OSU

Video ID : 709

For people with extreme disabilities such as ALS or quadriplegia, it is often hard to move about on their own and interact with their environments due to their immobility. Our work - nicknamed "Project Chiron" - attempts to alleviate some of this immobility with a kit that can be used on any Permobil-brand wheelchair.

A ride in the Google self-driving car

Author  Google Self-Driving Car Project

Video ID : 710

The maturity of the tools developed for mobile-robot navigation and explained in this chapter has enabled Google to integrate them into an experimental vehicle. This video demonstrates Google's self-driving technology on the road.

Mobile-robot navigation system in outdoor pedestrian environment

Author  Chin-Kai Chang

Video ID : 711

We present a mobile-robot navigation system guided by a novel vision-based, road-recognition approach. The system represents the road as a set of lines extrapolated from the detected image contour segments. These lines enable the robot to maintain its heading by centering the vanishing point in its field of view, and to correct the long-term drift from its original lateral position. We integrate odometry and our visual, road-recognition system into a grid-based local map which estimates the robot pose as well as its surroundings to generate a movement path. Our road recognition system is able to estimate the road center on a standard dataset with 25 076 images to within 11.42 cm (with respect to roads that are at least 3 m wide). It outperforms three other state-of-the-art systems. In addition, we extensively test our navigation system in four busy campus environments using a wheeled robot. Our tests cover more than 5 km of autonomous driving on a busy college campus without failure. This demonstrates the robustness of the proposed approach to handle challenges including occlusion by pedestrians, non-standard complex road markings and shapes, shadows, and miscellaneous obstacle objects.
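Centering the vanishing point presupposes estimating it from the extrapolated contour lines. One common formulation (a hedged sketch, not the authors' implementation) takes the point minimizing the summed squared perpendicular distance to all detected lines, which reduces to a 2x2 linear system:

```python
import math

def vanishing_point(lines):
    """Least-squares intersection of image lines.

    Each line is ((x0, y0), (dx, dy)): a point plus a direction.
    Minimizing sum_i (n_i . p - n_i . x0_i)^2 over p, where n_i is the
    unit normal of line i, gives the normal equations (sum n n^T) p = sum n c.
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x0, y0), (dx, dy) in lines:
        n = math.hypot(dx, dy)
        nx, ny = -dy / n, dx / n          # unit normal of the line
        c = nx * x0 + ny * y0             # signed offset of the line
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        b1 += nx * c;   b2 += ny * c
    det = a11 * a22 - a12 * a12           # nonzero unless lines are parallel
    return ((a22 * b1 - a12 * b2) / det,
            (a11 * b2 - a12 * b1) / det)
```

For three lines that all pass through (2, 3), the estimate recovers that point exactly; with noisy road contours it returns the best-fit intersection instead.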

Mobile-robot, autonomous navigation in Gracia district, Barcelona

Author  Joan Perez

Video ID : 712

This video demonstrates a fully autonomous navigation solution for mobile robots operating in urban pedestrian areas. Path planning is performed by a graph search on a discretized grid of the workspace. Obstacle avoidance is performed by a slightly modified version of the dynamic-window approach.
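The dynamic-window approach restricts the search to velocities reachable within one control cycle, forward-simulates each candidate command, and scores it by goal heading, obstacle clearance, and speed. A minimal Python sketch (the unicycle model, velocity limits, and weights are illustrative assumptions, not the parameters of the modified version in the video):

```python
import math

def dynamic_window_step(pose, v, w, goal, obstacles,
                        v_max=1.0, w_max=2.0, a_v=0.5, a_w=1.5,
                        dt=0.25, horizon=1.5, robot_radius=0.3):
    """One control cycle of a simplified dynamic-window approach.

    pose: (x, y, theta); v, w: current linear/angular velocity;
    goal: (x, y); obstacles: list of (x, y) points.
    Returns the (v, w) command maximizing the weighted score,
    or (0, 0) if every sampled trajectory collides.
    """
    best, best_score = (0.0, 0.0), -math.inf
    # Dynamic window: velocities reachable within one cycle.
    v_lo, v_hi = max(0.0, v - a_v * dt), min(v_max, v + a_v * dt)
    w_lo, w_hi = max(-w_max, w - a_w * dt), min(w_max, w + a_w * dt)
    for i in range(11):
        for j in range(11):
            vc = v_lo + (v_hi - v_lo) * i / 10
            wc = w_lo + (w_hi - w_lo) * j / 10
            # Forward-simulate a short unicycle trajectory.
            x, y, th = pose
            clearance = math.inf
            t = 0.0
            while t < horizon:
                x += vc * math.cos(th) * dt
                y += vc * math.sin(th) * dt
                th += wc * dt
                for ox, oy in obstacles:
                    clearance = min(clearance, math.hypot(x - ox, y - oy))
                t += dt
            if clearance < robot_radius:
                continue                  # trajectory collides; discard
            err = math.atan2(goal[1] - y, goal[0] - x) - th
            err = math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]
            score = 2.0 * (-abs(err)) + 0.5 * min(clearance, 2.0) + 0.3 * vc
            if score > best_score:
                best_score, best = score, (vc, wc)
    return best
```

The three weights trade goal progress against safety margin and speed; tuning them (or adding terms, as the modified version in the video presumably does) changes the avoidance behavior without changing the search structure.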

Autonomous navigation of a mobile vehicle

Author  Visp team

Video ID : 713

This video shows the vision-based autonomous navigation of a Cycab mobile vehicle able to avoid obstacles detected by its laser range finder. The reference trajectory is provided as a sequence of previously-acquired key images. Obstacle avoidance is based on a predefined set of circular avoidance trajectories. The best trajectory is selected when an obstacle is detected by the laser scanner.

Autonomous robot cars drive in the DARPA Urban Challenge

Author  GovernmentTechnology

Video ID : 714

In order to foster research and development in the domain of autonomous navigation, the DARPA agency organized a challenge in 2007 in which competitors developed autonomous vehicles able to follow an itinerary through an urban environment. Navigation within unstructured areas such as parking lots made extensive use of RRT-like methods.
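RRT-like planners grow a tree of collision-free motions by repeatedly sampling the space, extending the nearest node a short step toward the sample, and stopping once the goal is reached. A minimal 2-D point-robot sketch (the step size, goal bias, and holonomic model are illustrative assumptions; Urban Challenge vehicles used kinodynamic variants respecting car dynamics):

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=0.5, iters=2000, goal_bias=0.1):
    """Basic RRT in the plane.

    start, goal: (x, y); is_free(p) -> bool tests a point for collision;
    bounds: (xmin, xmax, ymin, ymax). Returns a path as a list of points
    from start to goal, or None if none is found within the budget.
    """
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Sample a random point, occasionally biased toward the goal.
        if random.random() < goal_bias:
            sample = goal
        else:
            sample = (random.uniform(bounds[0], bounds[1]),
                      random.uniform(bounds[2], bounds[3]))
        # Find the nearest tree node and take one step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0.0:
            continue
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue                      # extension collides; resample
        nodes.append(new)
        parent[len(nodes) - 1] = i
        # Connect to the goal when close enough, then backtrack the path.
        if math.dist(new, goal) < step:
            path, k = [goal], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

The goal bias is what makes the tree make steady progress: without it, the planner still converges but wanders the free space much longer before connecting.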