2D Occupancy Grid and 3D Octomap from a simulated environment. 

Github: 

Overview

This project aims to create a 2D occupancy grid and a 3D octomap of a simulated environment using a custom robot and the RTAB-Map package.

RTAB-Map (Real-Time Appearance-Based Mapping) is a popular SLAM solution for developing robots that can map environments in 3D. RTAB-Map has good speed and memory management, and it provides custom-developed tools for information analysis. Most importantly, the quality of its documentation on the ROS Wiki is very high. Being able to leverage RTAB-Map with your own robots will provide a solid foundation for mapping and localization well beyond this Nanodegree program. For this project, we will be using the rtabmap_ros package, which is a ROS wrapper (API) for interacting with RTAB-Map. Keep this in mind when looking at the related documentation.
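Because rtabmap_ros is an ordinary ROS node wrapping the RTAB-Map library, a running mapping session can also be controlled through its ROS services. The short sketch below is illustrative only: it assumes the node runs under the common /rtabmap namespace and uses the reset service listed in the rtabmap_ros documentation, so adjust the names to match your own launch configuration.

```python
#!/usr/bin/env python
"""Minimal sketch: control a running rtabmap node through its ROS services.

Assumes the node was launched in the /rtabmap namespace; the service name
below (reset) follows the rtabmap_ros documentation.
"""
import rospy
from std_srvs.srv import Empty

rospy.init_node("rtabmap_session_control")

# Wait for the service to come up, then wipe the current map/database so a
# fresh mapping session starts from scratch.
rospy.wait_for_service("/rtabmap/reset")
rospy.ServiceProxy("/rtabmap/reset", Empty)()
```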

This project aims to achieve the following:

  • Develop a package to interface with the rtabmap_ros package.

  • Build upon the robot localization project, making the changes needed to interface the robot with RTAB-Map, such as adding an RGB-D camera.

  • Ensure that all files are in the appropriate places, all links are connected, naming is set up properly, and topics are mapped correctly (a quick topic check is sketched after this list). Then generate the appropriate launch files to launch the robot and map its surroundings.

  • Once the robot is launched, teleoperate it around the room to generate a proper map of the environment.
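The topics rtabmap_ros needs to be remapped to depend on how the camera and laser plugins are configured in the robot's URDF, so it is worth verifying that everything is actually being published before starting a mapping run. Below is a minimal sanity-check sketch; the topic names are assumptions for a typical Gazebo RGB-D plus laser setup and should be adjusted to match your robot.

```python
#!/usr/bin/env python
"""Sanity-check the topics RTAB-Map consumes before starting a mapping run.

The topic names below are assumptions for a typical Gazebo RGB-D + laser
setup; adjust them to your robot's URDF/Gazebo plugin configuration.
"""
import rospy
from sensor_msgs.msg import Image, CameraInfo, LaserScan
from nav_msgs.msg import Odometry

# (topic, message type) pairs rtabmap_ros typically subscribes to
EXPECTED_TOPICS = [
    ("/camera/rgb/image_raw", Image),
    ("/camera/depth/image_raw", Image),
    ("/camera/rgb/camera_info", CameraInfo),
    ("/scan", LaserScan),
    ("/odom", Odometry),
]

if __name__ == "__main__":
    rospy.init_node("rtabmap_topic_check")
    for topic, msg_type in EXPECTED_TOPICS:
        try:
            # Block until one message arrives, or warn if nothing shows up.
            rospy.wait_for_message(topic, msg_type, timeout=5.0)
            rospy.loginfo("OK       %s", topic)
        except rospy.ROSException:
            rospy.logwarn("MISSING  %s (check your remappings)", topic)
```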

Simulated World in Gazebo

Generated map

Database Analysis

The rtabmap-databaseViewer is a great tool for exploring your database once you are done generating it. It is isolated from ROS and allows a complete analysis of your mapping session. I used it to check for loop closures, generate 3D maps for viewing, extract images, and examine feature-rich zones of the map.
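Since the database RTAB-Map produces is a plain SQLite file (written to ~/.ros/rtabmap.db by default when using rtabmap_ros, unless a different database_path is set), it can also be inspected programmatically outside the viewer. A minimal sketch, assuming that default path:

```python
import os
import sqlite3

# Default rtabmap_ros output location; adjust if database_path was changed.
DB_PATH = os.path.expanduser("~/.ros/rtabmap.db")

conn = sqlite3.connect(DB_PATH)
tables = [name for (name,) in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
conn.close()

print("Database size: %.1f MB" % (os.path.getsize(DB_PATH) / 1e6))
print("Tables:", ", ".join(tables))
```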

On the left is the image of the 2D grid map in all of its updated iterations, together with the path of the robot. The middle section shows the different images from the mapping process. Here we can scrub through the images to see all of the features found by the detection algorithm; these features are shown in yellow. The pink features indicate where two images have features in common, and this information is used to create the neighboring links and loop closures. On the right is the constraint view, where we can identify where and how the neighboring links and loop closures were created. We can also see the number of loop closures in the bottom left. The codes stand for the following: Neighbour, Neighbour Merged, Global Loop Closure, Local Loop Closure by Space, Local Loop Closure by Time, User Loop Closure, and Prior Link.
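The same link bookkeeping can be pulled out of the database with a short query. This is only a sketch: it assumes the database schema exposes a Link table with a type column (as recent RTAB-Map versions do), and it prints the raw type codes rather than guessing their names, since the code-to-name mapping is best read off the legend in rtabmap-databaseViewer.

```python
import os
import sqlite3

DB_PATH = os.path.expanduser("~/.ros/rtabmap.db")  # adjust to your database

conn = sqlite3.connect(DB_PATH)
# Each row in Link is an edge of the pose graph; its type code distinguishes
# neighbour links from the various kinds of loop closures.
rows = conn.execute(
    "SELECT type, COUNT(*) FROM Link GROUP BY type ORDER BY type").fetchall()
conn.close()

for link_type, count in rows:
    print("link type %d: %d edges" % (link_type, count))
```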

The instructions for setting up the project, as well as the source code, can be found on my GitHub page.
