RoboRacer Offroad Perception, Planning and Navigation

This package introduces the following elements for the RoboRacer platform and utilities:

  • A ready-to-build ROS 2 package called f1tenth_offroad with the following elements:
    • config.py, containing all relevant configuration parameters for the ROS 2 package
    • mag_map.py, containing the code for magnetometer calibration
    • monolithic.py, the overarching ROS 2 node for in-the-field operation
    • navigation.py, the ROS 2 node and relevant code of the navigation system
    • perception.py, the ROS 2 node and code facilitating RGB image segmentation
    • transform.py, the ROS 2 node and code for RGB, depth image and LiDAR fusion and reprojection
  • A custom k-means implementation, modified from its GitHub source to work on the RoboRacer system
  • A variation of the OFFSEG semantic segmentation suite, taken and slightly modified from its GitHub source
  • Masking image data required for the perception and transform nodes, in the resources directory
  • Various utility scripts in the utils subdirectory, including:
    • colormap.py, which generates the Colormap.JPG file displaying the color coding of the RUGD training image data set
    • The gimp_processImages.py GIMP 2 plugin, a script which sequentially loads RGB and segmentation data into GIMP for editing
    • make_lst.py, a list generator creating a text file for neural network model retraining
    • rosbag_sampler.py, a utility to take images out of a ROS 2 image data stream at periodic intervals for diagnostics and retraining

Setup, Preliminaries and Installation

This software package is designed to work on the RoboRacer platform, and assumes the following hardware to be present:

  • Traxxas Slash 4x4 chassis, with motors and power delivery for all other components
  • NVIDIA Jetson Nano Orin
  • ZED 2 Stereo Camera running at USB 2.0
  • Hokuyo UST-10LX 2D LiDAR
  • Whappa WPI 430 GPS module
  • VESC Motor controller

Additionally, the following software is necessary to run this package; detailed installation instructions can be found at the respective sources or in the bullet points below:

  • Ubuntu 20.04 LTS
  • The appropriate NVIDIA Jetson Nano Orin Drivers via the NVIDIA SDK
  • ROS 2 Foxy
  • The f1tenth_system ROS 2 software package, downloaded and compiled from its source; it contains the drivers for the VESC and the LiDAR and is assumed to be present, initialized, calibrated and running whenever nodes of this package are started
  • The ZED camera ROS 2 wrapper, downloaded and compiled from its source, configured according to [1] (7.5 FPS capture framerate, VGA resolution, Neural depth mode), and running before any of this package's nodes are used
  • The nmea_navsat_driver ROS 2 package, installable via sudo apt-get install ros-foxy-nmea-navsat-driver; it is optional, but without the GPS data it provides, the navigation system cannot perform global pathfinding

With these preliminaries satisfied, the package can then be installed in the following way:

  1. The following current directory is assumed: ~/ros_ws/src/
  2. Download the code via git clone git@github.com:TUM-AVS/f1tenth_offroad.git
  3. From the newly created directory ~/ros_ws/src/f1tenth_offroad, install the required Python libraries with python3 -m pip install -r requirements.txt
  4. From the ~/ros_ws directory, run colcon build to build the package
  5. Load the newly built package with source install/setup.bash
  6. Additionally, if the second-stage segmentation algorithm is required, the code from the kmeans_custom subdirectory must also be installed
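The installation steps above can be summarized as the following shell session (assuming the workspace layout ~/ros_ws/src/ from step 1):

```shell
# Clone the package into the ROS 2 workspace (steps 1 and 2)
cd ~/ros_ws/src/
git clone git@github.com:TUM-AVS/f1tenth_offroad.git

# Install the required Python libraries (step 3)
cd ~/ros_ws/src/f1tenth_offroad
python3 -m pip install -r requirements.txt

# Build and load the package from the workspace root (steps 4 and 5)
cd ~/ros_ws
colcon build
source install/setup.bash
```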

Running the Code

After completing the installation, the main program nodes can be started via ros2 run f1tenth_offroad (node name)
The available nodes are perception, transform, navigation and full_stack
For debugging purposes, or to run only part of the algorithms, the first three nodes can be used separately or in conjunction; their separate functionalities are, however, combined more efficiently in the full_stack node, which encompasses perception, transform and navigation
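For example, the combined stack can be started as follows (assuming the workspace has been built and sourced as described above):

```shell
source ~/ros_ws/install/setup.bash

# Start the combined perception, transform and navigation stack
ros2 run f1tenth_offroad full_stack

# Or, for debugging, start the individual stages in separate terminals:
ros2 run f1tenth_offroad perception
ros2 run f1tenth_offroad transform
ros2 run f1tenth_offroad navigation
```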

perception

The perception node provides the OFFSEG-segmented input images, as well as overlaid images and, if needed, additional segmentation results from the second-stage algorithm or auxiliary segmentation models obtained from retraining. Feature selection is done via the config.py file, which contains switches and paths for all used models.

transform

The transform node performs the top-down transformation of the perception result image and its fusion with the LiDAR data. Optionally, if selected in config.py, it can create a ROS 2 PointCloud message or a top-down map of the overlaid segmentation result, which is then supplied in addition to the normal segmentation top-down result.

navigation

The navigation node combines data from the transform node with odometry, GPS and magnetometer data to evaluate paths and control the vehicle. It provides both the driving command fed back to the VESC motor controller and the final top-down map with navigation overlay. Additionally, only this node provides pose messages from various steps of the heading data fusion, which can be visualized.

full_stack

The full_stack node performs the tasks of all previous nodes, but cannot provide any of the optional capabilities. It publishes only the overlaid segmented image, the final navigation map, the current target point towards which the vehicle navigates, and the drive command.

Utilities

  • The rviz2 command may be used to visualize the output data graphically.
  • colormap.py can simply be run with python3 colormap.py and generates the output image color map.
  • make_lst.py generates the training image list for retraining the OFFSEG model (see the OFFSEG section below). The input and output paths for which the list is generated must be adjusted in the file.
  • gimp_processImages.py is a plugin for GIMP, allowing the graphical design software to be used to sequentially edit segmentation results for retraining. First, the paths in the file must be adapted to match the locations of the RGB and segmented images as well as an output path; then the plugin can be installed and launched like any other GIMP plugin.
  • rosbag_sampler.py is a standalone ROS 2 node which samples camera image data streams. Called via python3 rosbag_sampler.py (topic_to_subscribe_to) (interval_between_saved_msgs) (output_directory), it saves the first message of each interval on the specified topic to the specified hard drive location.
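An example invocation of rosbag_sampler.py might look as follows; note that the topic name and interval value here are placeholders and must be replaced with the actual camera topic and desired interval on your system:

```shell
# Save the first image of every interval of 10 (units as defined by the
# script) from the hypothetical topic /camera/rgb/image to ~/samples/
python3 rosbag_sampler.py /camera/rgb/image 10 ~/samples/
```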

OFFSEG

All relevant information about using the OFFSEG code standalone or for retraining is available in its README.

Known Issues

  • Sometimes the code crashes on startup if the LiDAR cannot provide a message in time. If this happens, simply restart.
  • If run at more than 1.5 Hz in operation, the code is prone to crashing.

Acknowledgements

This package was created by Moritz Wagner as part of his bachelor's thesis [1] under the supervision of Felix Jahncke, who leads the TUM F1TENTH/RoboRacer project at the Chair of Autonomous Vehicles of Professor Johannes Betz.

[1] M. Wagner, “Autonomous Offroad Mobility using the F1TENTH-Platform”, B.Sc. thesis, Technical Univ. of Munich, Munich, Germany, 2024.
