Tracked Mobile Robot - Executive Summary


Project aim

The aim of this thesis is to build a robot system that uses the recently introduced depth cameras (RGB-D) to perform a variety of tasks. The core tasks are to map and localise in an unknown environment without a priori knowledge, and to recognise 3D objects using only curvature information. The robot is also able to take the current map and use it to calculate a path from point A to point B while avoiding obstacles. The “robot” is a tracked mobile vehicle that was designed and custom built to serve as a platform for developing the above algorithms.
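The thesis does not name a specific planner here, but obstacle-avoiding A-to-B planning on an occupancy grid is commonly done with A* search. The sketch below is a minimal, assumed illustration (the grid, function name and 4-connected moves are not from the thesis):

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* on a 2D occupancy grid: 0 = free cell, 1 = obstacle.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(c):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    tie = count()  # tie-breaker so the heap never compares cells directly
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:  # walk the parent chain back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, next(tie), nxt, cur))
    return None

# Toy map: the middle row is blocked except on the right, so the
# path must detour along the top row and down the right column.
grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
path = astar(grid, (0, 0), (2, 0))
```

In practice the occupancy grid would be derived from the robot's map rather than hand-written, and the cost function would account for the vehicle's footprint.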




Mobile robots depend heavily on mapping and object recognition in order to operate autonomously in an unknown or partially explored environment. A great amount of research has been carried out in the field of machine vision (particularly in 2D imaging) on tasks ranging from feature extraction to 6D pose estimation. However, the recent development of affordable, advanced 3D camera sensors, such as the PrimeSense-based sensors, has sparked a new field of accessible 3D camera research. These sensors provide the same conventional camera data with the addition of a depth value per pixel, which allows a 3D space to be represented as a point cloud, just as a laser scanner would but at a fraction of the cost. This RGB-D camera is used throughout the project as the robot’s main sensor, and its capabilities are demonstrated and evaluated.
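The step from a depth image to a point cloud is the standard pinhole back-projection. A minimal sketch follows; the function name is hypothetical and the default intrinsics are only illustrative values in the style of PrimeSense-class sensors, not calibration data from the thesis:

```python
# Illustrative intrinsics (focal lengths and principal point); a real
# sensor would supply calibrated values.
FX = FY = 525.0
CX, CY = 319.5, 239.5

def depth_to_pointcloud(depth, fx=FX, fy=FY, cx=CX, cy=CY):
    """Back-project a depth image (metres, row-major list of lists)
    into a list of (X, Y, Z) points in the camera frame using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # zero depth marks pixels with no sensor return
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Tiny 2x2 depth image with toy intrinsics to keep the numbers readable.
depth = [
    [0.0, 2.0],
    [1.0, 0.0],
]
cloud = depth_to_pointcloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```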



The following points summarise the goals of this project:
• Build a tracked mobile robot (TMR) capable of carrying all the necessary components (batteries, processing units, electronics, etc.);
• Develop an algorithm that maps the environment as the robot moves using an RGB-D camera;
• Perform object recognition using depth data from an RGB-D camera;
• Develop a path planning algorithm that uses the map to provide a path from point A to B;
• Integrate all the algorithms and pair them with the robot’s on-board electronics.

Usage of existing work and libraries

Robot Operating System (ROS)
The architecture of the system is designed to be modular, so that each program node can pass information to every layer but can also be stopped and restarted at any time without affecting the overall system. ROS is used as the message-passing library between nodes due to the performance, scalability and debugging capabilities it offers.
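The decoupling that ROS topics provide can be illustrated, without ROS itself, by a minimal in-process message bus: publishers and subscribers know only the topic name, never each other. The class, topic name and message shape below are all hypothetical, not the ROS API:

```python
class MessageBus:
    """Toy stand-in for topic-based publish/subscribe message passing.
    Nodes register callbacks per topic and publish without knowing
    who (if anyone) is listening."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers.get(topic, []):
            callback(message)

received = []
bus = MessageBus()
# A hypothetical mapping node listens for depth frames...
bus.subscribe("/camera/depth/points", received.append)
# ...while the camera-driver node publishes them independently; either
# side can be stopped and restarted without the other noticing.
bus.publish("/camera/depth/points", {"seq": 1, "points": []})
```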

PointCloud Library (PCL)
PCL, roughly a 3D counterpart to OpenCV, provides many fundamental algorithms for filtering, processing and extracting information from a point-cloud scene. PCL is used for the object recognition part of this project.
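A typical first filtering step on a raw cloud is voxel-grid downsampling (PCL ships this as its VoxelGrid filter). The pure-Python sketch below only mimics the idea; the function name and leaf size are assumptions for illustration:

```python
from collections import defaultdict

def voxel_downsample(points, leaf=0.05):
    """Approximate a voxel-grid filter: bucket (x, y, z) points into
    cubic voxels of side `leaf` metres and keep one centroid per
    occupied voxel, thinning dense regions while preserving shape."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // leaf) for c in p)  # integer voxel index
        buckets[key].append(p)
    return [tuple(sum(c) / len(group) for c in zip(*group))
            for group in buckets.values()]

# Two points fall in the same 5 cm voxel and are merged; the third
# point is far away and survives untouched.
reduced = voxel_downsample(
    [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (1.0, 0.0, 0.0)], leaf=0.05)
```

Downsampling like this is usually applied before expensive steps such as normal estimation or recognition, since their cost grows with the number of points.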

ETH-ASL lab’s odometry library
This library uses the Iterative Closest Point (ICP) algorithm along with statistical filters to provide the camera’s 6D position in space by comparing consecutive scene samples n and n-1.
This stack from the ETH lab directly replaced a custom implementation that suffered from noise issues; it uses statistical algorithms to reject outliers.
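The core of ICP can be sketched in 2D (the library itself works with the full 6D pose and adds outlier rejection on top). One iteration matches each point of scan n-1 to its nearest neighbour in scan n, then solves the closed-form rigid transform for the matched pairs; the function name and the demo clouds are illustrative assumptions:

```python
import math

def icp_step(source, target):
    """One ICP iteration in 2D: nearest-neighbour matching followed by
    the closed-form best rigid transform (theta, tx, ty) from the
    matched pairs (the 2D analogue of the SVD alignment step)."""
    # 1. Nearest-neighbour correspondences (brute force for clarity)
    pairs = [(s, min(target, key=lambda t: (t[0]-s[0])**2 + (t[1]-s[1])**2))
             for s in source]
    n = len(pairs)
    # 2. Centroids of both matched point sets
    scx = sum(s[0] for s, _ in pairs) / n
    scy = sum(s[1] for s, _ in pairs) / n
    tcx = sum(t[0] for _, t in pairs) / n
    tcy = sum(t[1] for _, t in pairs) / n
    # 3. Optimal rotation from the dot/cross correlation sums
    sxx = sum((s[0]-scx)*(t[0]-tcx) + (s[1]-scy)*(t[1]-tcy) for s, t in pairs)
    sxy = sum((s[0]-scx)*(t[1]-tcy) - (s[1]-scy)*(t[0]-tcx) for s, t in pairs)
    theta = math.atan2(sxy, sxx)
    # 4. Translation mapping the rotated source centroid onto the target's
    tx = tcx - (scx * math.cos(theta) - scy * math.sin(theta))
    ty = tcy - (scx * math.sin(theta) + scy * math.cos(theta))
    return theta, tx, ty

# Scan n is scan n-1 shifted by (0.1, 0.2): one step should recover
# that translation and (here) zero rotation.
prev = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
curr = [(x + 0.1, y + 0.2) for x, y in prev]
theta, tx, ty = icp_step(prev, curr)
```

Real ICP repeats this step, applying the estimated transform to the source cloud until the alignment converges, and uses statistical filters to discard bad correspondences before the fit.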


The robot and algorithms were developed successfully and within the projected timeframe. The path planning capability was added as an extra objective, mainly to emphasise the interaction between the robot and its software. The overall performance of the system compares well with leading robotic research implementations and clearly demonstrates the promising capabilities of an RGB-D camera.

