
Robot Navigation from Scratch

Implementing navigation libraries for a differential drive robot from scratch, using ROS as the central platform. Currently, the project contains implementations of odometry-based waypoint following and EKF SLAM under teleoperation. The turtlebot used for testing ran custom firmware so that every layer of the stack could be written from scratch.


This project was built truly from the ground up. It started with developing low-level controller libraries to calculate the pose of the robot from a given twist. This was then extended with a conversion interface between the robot's sensor data and the low-level controller, which entailed calculating a twist from encoder readings, converting laser scan data into meaningful landmark information, and implementing a landmark-based EKF SLAM algorithm.
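
To make the low-level layer concrete, here is a minimal sketch of how a pose can be propagated from a body-frame twist for a differential drive robot. It is a generic illustration rather than the project's actual library: the function name and parameters are assumptions, and the real implementation lives in the GitHub repository.

```python
import math

def integrate_twist(x, y, theta, v, w, dt):
    """Propagate a 2D pose by a constant body-frame twist (v, w) for dt seconds.

    Generic sketch of the idea, not the project's actual API: for a pure
    translation (w == 0) the robot moves along its heading, otherwise it
    follows a circular arc of radius v / w.
    """
    if abs(w) < 1e-9:
        # Straight-line motion
        x += v * dt * math.cos(theta)
        y += v * dt * math.sin(theta)
    else:
        # Arc motion: exact integration of the constant twist
        r = v / w
        x += r * (math.sin(theta + w * dt) - math.sin(theta))
        y += -r * (math.cos(theta + w * dt) - math.cos(theta))
        theta += w * dt
    return x, y, theta

# Example: drive an arc at 0.2 m/s forward velocity and 0.5 rad/s yaw rate
pose = (0.0, 0.0, 0.0)
for _ in range(314):
    pose = integrate_twist(*pose, v=0.2, w=0.5, dt=0.01)
print(pose)
```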


A deeper dive into the details of the project, all of the source code, and the API documentation are hosted on GitHub.

What is Odometry?

Odometry is the process of calculating how far something has moved over a given time period.


For a robot, this typically means using sensors to track how far each wheel has turned. Using this information, the robot can estimate how far it has moved in the world relative to where it started. In a perfect world, tracking this information would allow us to perfectly estimate the robot's position, but in practice error accumulates over time. This error comes from a number of sources, such as the wheels spinning without actually moving the robot, which typically happens when the wheels accelerate quickly and slip, or when the robot gets stuck but the wheels can still turn. This error accumulation can be seen in the EKF SLAM video above by observing the red line: it starts out very close to the green line (the actual position of the robot), but diverges over time.
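
As an illustration of how wheel odometry works, below is a rough sketch of a dead-reckoning update from encoder tick deltas. The wheel radius, wheel separation, and ticks-per-revolution values are placeholders rather than the turtlebot's real parameters, and the function simplifies what the project actually implements.

```python
import math

# Hypothetical robot parameters -- the real values depend on the turtlebot
WHEEL_RADIUS = 0.033       # meters
WHEEL_SEPARATION = 0.16    # meters
TICKS_PER_REV = 4096

def odometry_update(x, y, theta, d_left_ticks, d_right_ticks):
    """Dead-reckoning pose update from encoder tick deltas (a sketch, not the
    project's implementation). Each wheel's tick change is converted into a
    traveled distance, then into a pose increment for a differential drive."""
    d_left = 2 * math.pi * WHEEL_RADIUS * d_left_ticks / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * d_right_ticks / TICKS_PER_REV

    d_center = (d_left + d_right) / 2.0              # forward distance
    d_theta = (d_right - d_left) / WHEEL_SEPARATION  # heading change

    # Errors from wheel slip accumulate in x, y, and theta over time
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```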


This shows that odometry is useful for easily estimating the distance traveled over a short period of time, but due to the accumulating error it is ineffective at estimating the position over the full path followed by the robot. We need a more robust method if we require more accurate position estimation; this is where SLAM comes in.

What is SLAM?

SLAM stands for Simultaneous Localization and Mapping: it allows the robot to build a map of its surroundings and then use that map to estimate where it is within it. There are many ways to do this, but in this application the map used by the robot is assembled by tracking landmarks.
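
For reference, a single measurement update of a landmark-based EKF looks roughly like the sketch below. It uses a generic range-bearing formulation with numpy; the project's actual state layout, Jacobians, and conventions may differ.

```python
import numpy as np

def ekf_landmark_update(mu, sigma, z_range, z_bearing, landmark_idx, R):
    """One EKF SLAM measurement update for a single, already-tracked landmark.

    Generic range-bearing formulation for illustration only. The state mu
    stacks the robot pose (x, y, theta) followed by 2D landmark positions;
    sigma is the covariance and R the measurement noise.
    """
    rx, ry, rtheta = mu[0], mu[1], mu[2]
    li = 3 + 2 * landmark_idx
    lx, ly = mu[li], mu[li + 1]

    # Expected measurement from the current estimate
    dx, dy = lx - rx, ly - ry
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - rtheta])

    # Jacobian of the measurement w.r.t. the full state (sparse)
    H = np.zeros((2, len(mu)))
    H[:, 0:3] = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                          [dy / q,           -dx / q,         -1.0]])
    H[:, li:li + 2] = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],
                                [-dy / q,         dx / q]])

    # Standard EKF correction
    innovation = np.array([z_range, z_bearing]) - z_hat
    innovation[1] = (innovation[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
    S = H @ sigma @ H.T + R
    K = sigma @ H.T @ np.linalg.inv(S)
    mu = mu + K @ innovation
    sigma = (np.eye(len(mu)) - K @ H) @ sigma
    return mu, sigma
```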


In this application, the robot uses a laser scanner to provide the information for this approach. The scanner spins 360 degrees and records distances to the different objects around it. Each scan is then converted into meaningful landmark information, and the robot uses these measurements to estimate where it is in the world.
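
A simplified sketch of that scan-to-landmark conversion is shown below: consecutive returns are clustered by distance, and each cluster is reduced to a single range-bearing candidate. The thresholds and the centroid reduction are illustrative assumptions; the project's real detector is more involved.

```python
import math

def extract_landmarks(ranges, angle_increment, max_gap=0.1, min_points=4):
    """Group consecutive laser returns into clusters and reduce each cluster
    to one (range, bearing) landmark candidate at its centroid.

    Simplified illustration of turning a 360-degree scan into landmark
    measurements; thresholds here are placeholders.
    """
    # Convert the scan to Cartesian points in the laser frame
    points = []
    for i, r in enumerate(ranges):
        if math.isfinite(r) and r > 0.0:
            a = i * angle_increment
            points.append((r * math.cos(a), r * math.sin(a)))

    # Split the points wherever consecutive returns are far apart
    clusters, current = [], []
    for p in points:
        if current and math.dist(p, current[-1]) > max_gap:
            clusters.append(current)
            current = []
        current.append(p)
    if current:
        clusters.append(current)

    # Reduce each sufficiently large cluster to a range-bearing candidate
    landmarks = []
    for cluster in clusters:
        if len(cluster) < min_points:
            continue  # discard sparse clusters (likely noise)
        cx = sum(x for x, _ in cluster) / len(cluster)
        cy = sum(y for _, y in cluster) / len(cluster)
        landmarks.append((math.hypot(cx, cy), math.atan2(cy, cx)))
    return landmarks
```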

Future Development

The landmark detection pipeline needs to be more robust to operate in real-world environments, as testing has shown that false-positive landmarks are being tracked by the SLAM algorithm. One improvement is to add more restrictions before a landmark is added to the SLAM tracking list. Another is a method to remove landmarks from this list, cleaning up any false positives that still get through.
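
One possible gating scheme for these improvements is sketched below (hypothetical, not part of the current codebase): a detection must be re-observed several times before it is promoted into the SLAM state, and landmarks that stop being seen are eventually dropped.

```python
class LandmarkTracker:
    """Hypothetical gating scheme for landmark admission and removal."""

    def __init__(self, promote_after=5, drop_after=50):
        self.promote_after = promote_after  # observations needed to confirm
        self.drop_after = drop_after        # scans without a sighting before removal
        self.hits = {}                      # landmark id -> observation count
        self.missed = {}                    # landmark id -> scans since last sighting
        self.confirmed = set()

    def observe(self, landmark_id):
        # Count a sighting; promote the landmark once it has been seen enough times
        self.hits[landmark_id] = self.hits.get(landmark_id, 0) + 1
        self.missed[landmark_id] = 0
        if self.hits[landmark_id] >= self.promote_after:
            self.confirmed.add(landmark_id)

    def end_of_scan(self, observed_ids):
        # Age every known landmark that was not seen in this scan
        for lid in list(self.missed):
            if lid not in observed_ids:
                self.missed[lid] += 1
                if self.missed[lid] >= self.drop_after:
                    self.hits.pop(lid, None)
                    self.missed.pop(lid, None)
                    self.confirmed.discard(lid)
```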


Currently, the robot is controlled with a keyboard in the SLAM videos; the next step is to merge SLAM with waypoint-based navigation goals. Since the robot will then be driving itself, a path planner is also needed so it can avoid obstacles while moving between waypoints.
