|
|
|
|
## Package contents

This package provides the following elements for the RoboRacer platform, along with utilities:
- A ready-to-build ROS 2 package called `f1tenth_offroad` with the following elements:
  - `config.py`: all relevant configuration parameters for the ROS 2 package
  - `mag_map.py`: code for magnetometer calibration
  - `monolithic.py`: overarching ROS 2 node for in-the-field operation
  - `navigation.py`: ROS 2 node and relevant code of the navigational system
  - `perception.py`: ROS 2 node and code facilitating RGB image segmentation
  - `transform.py`: ROS 2 node and code for RGB, depth image and LiDAR fusion and reprojection
- A custom `kmeans` implementation, modified from its GitHub source in order to work on the RoboRacer system
- A variation of the OFFSEG semantic segmentation suite, taken and slightly modified from its GitHub source
- Masking image data required for the perception and transform nodes, in the `resources` directory
- Various utility scripts in the `utils` subdirectory, including:
  - `colormap.py`, which generates the `Colormap.JPG` file displaying the color coding of the RUGD training image data set
  - The `gimp_processImages.py` GIMP 2 plugin, a script which allows for the sequential loading of RGB and segmentation data into GIMP for editing
  - `make_lst.py`, a list generator creating a text file for neural network model retraining
  - `rosbag_sampler.py`, a utility to take images out of a ROS 2 image data stream at periodic intervals for diagnostics and retraining
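The second-stage segmentation builds on the custom `kmeans` implementation listed above. As a rough, self-contained illustration of the underlying algorithm (a toy sketch, not the package's modified code):

```python
def kmeans(points, k, iters=20):
    """Toy k-means: seed centroids from the first k points, then alternate
    nearest-centroid assignment and centroid re-averaging."""
    centroids = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to the centroid with the smallest squared distance.
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster (keep empty ones).
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids
```

The `kmeans_custom` variant adapts this general idea to run on the RoboRacer system; see its subdirectory for the actual code.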
## Hardware requirements

This software package is designed to work on the RoboRacer platform and assumes the following hardware to be present:
- Traxxas Slash 4x4 chassis, with motors and power delivery for all other components
- NVIDIA Jetson Orin Nano
- ZED 2 Stereo Camera, connected via USB 2.0
- Hokuyo UST-10LX 2D LiDAR
- Whappa WPI 430 GPS module
- VESC Motor controller
## Software prerequisites

Additionally, the following software is necessary to run this package; detailed installation instructions are available at the respective sources or in the bullet points below:
- Ubuntu 20.04 LTS
- The appropriate NVIDIA Jetson Orin Nano drivers, installed via the NVIDIA SDK Manager
- ROS 2 Foxy
- The `f1tenth_system` ROS 2 software package, downloaded and compiled from its source, containing drivers for the VESC and LiDAR; it is assumed to be present, initialized, calibrated and running when starting nodes of this package
- The ZED camera ROS 2 wrapper, downloaded and compiled from its source, configured according to [1] (7.5 FPS capture framerate, VGA resolution, Neural depth mode), and running for any of this package's nodes
- The `nmea_navsat_driver` ROS 2 package, available via `sudo apt-get install ros-foxy-nmea-navsat-driver`. Setting it up is optional, but navigation cannot perform global pathfinding without the GPS data this package provides
## Installation

With these prerequisites satisfied, the package can be installed as follows:
- The following current directory is assumed: `~/ros_ws/src/`
- The code can be downloaded via `git clone git@github.com:TUM-AVS/f1tenth_offroad.git`
- From the newly created directory `~/ros_ws/src/f1tenth_offroad`, the required Python libraries are installed with `python3 -m pip install -r requirements.txt`
- Then, from the `~/ros_ws` directory, `colcon build` is run to build the package
- With `source install/setup.bash`, the newly built package is loaded
- Additionally, if the second-stage segmentation algorithm is required, the code from the `kmeans_custom` subdirectory must also be installed
## Usage

After completing the installation, the main program nodes can be started via `ros2 run f1tenth_offroad <node name>`.
Available nodes are `perception`, `transform`, `navigation` and `full_stack`.
For debugging purposes, or to run only part of the algorithms, the first three nodes can be used separately or in conjunction. Their separate functionalities are, however, combined more efficiently in the `full_stack` node, which encompasses perception, transform and navigation.
The `perception` node provides the OFFSEG-segmented input images, as well as overlaid images and, if needed, additional segmentation results from the second-stage algorithm or auxiliary segmentation models from retraining. Feature selection is done via the `config.py` file, which contains switches and paths for all used models.
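As a purely hypothetical illustration of what such a switch-and-path configuration can look like (every name below is invented for this sketch, not one of the package's actual identifiers):

```python
# Hypothetical config.py-style feature switches and model paths.
# All names and values here are illustrative assumptions.
USE_SECOND_STAGE = False                 # run the k-means second-stage segmentation
USE_AUX_MODEL = False                    # run an auxiliary retrained segmentation model
PUBLISH_OVERLAY = True                   # publish the overlaid RGB/segmentation image
OFFSEG_MODEL_PATH = "models/offseg.pth"  # primary OFFSEG model weights
AUX_MODEL_PATH = "models/aux.pth"        # auxiliary model weights (if enabled)
```

Consult the actual `config.py` in the repository for the real switch names and default values.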
The `transform` node performs the top-down transformation of the perception result image and its fusion with LiDAR data. Optionally, if chosen in `config.py`, it can create a ROS 2 PointCloud message or a top-down map of the overlaid segmentation result, which is then supplied in addition to the normal top-down segmentation result.
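As a minimal sketch of the top-down rasterization idea (the frame convention, cell layout and class encoding here are assumptions made for illustration, not the node's actual implementation):

```python
def topdown_grid(points, cell_size=0.5, depth=5.0, width=5.0):
    """Rasterize labeled (x, y, class_id) points, given in metres in an
    assumed vehicle frame (x forward, y left), into a top-down class grid.
    Row 0 is the far edge ahead of the car; column 0 is the far left."""
    rows = int(depth / cell_size)
    cols = int(width / cell_size)
    grid = [[0] * cols for _ in range(rows)]    # 0 = unknown / empty cell
    for x, y, cls in points:
        row = rows - 1 - int(x / cell_size)     # farther ahead -> smaller row index
        col = int((width / 2 - y) / cell_size)  # positive y (left) -> smaller column
        if 0 <= row < rows and 0 <= col < cols:
            grid[row][col] = cls                # points outside the map are dropped
    return grid
```

The real node additionally fuses LiDAR ranges and camera depth before projecting; this sketch only shows the grid-mapping step.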
The `navigation` node combines data from the transform node with odometry, GPS and magnetometer data to evaluate paths and control the vehicle. It provides both the driving command fed back to the VESC motor controller and the final top-down map with navigation overlay. Additionally, only this node provides pose messages for the heading data fusion from various steps within the process, which can be visualized.
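The node's exact fusion scheme is not reproduced here, but a generic way to blend a drift-prone relative heading (e.g. from odometry) with an absolute but noisy magnetometer heading is a complementary blend with proper angle wrapping. The following is an illustrative sketch of that common pattern, not necessarily the package's method:

```python
import math

def fuse_heading(odom_yaw, mag_yaw, alpha=0.98):
    """Complementary blend of a relative heading with a magnetometer
    heading, both in radians; alpha weights the relative source."""
    # Wrap the disagreement into [-pi, pi) so the blend takes the short
    # way around the circle instead of sweeping through +/-pi.
    diff = (mag_yaw - odom_yaw + math.pi) % (2 * math.pi) - math.pi
    fused = odom_yaw + (1 - alpha) * diff
    # Wrap the result back into [-pi, pi).
    return (fused + math.pi) % (2 * math.pi) - math.pi
```

The wrapping step matters near +/-pi: a heading of 3.1 rad nudged toward -3.1 rad should move across pi, not back through zero.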
The `full_stack` node performs the tasks of all previous nodes, but cannot provide any of the optional capabilities. It publishes only the overlaid segmented image, the final navigation map, the current target point towards which the vehicle navigates, and the drive command.
## Utilities

- The `rviz2` command may be used to create a graphical overlay allowing for the visualization of the output data.
- The `colormap.py` script can simply be run with `python3 colormap.py` and will generate the desired output color map image.
- `make_lst.py` can generate the training image list for retraining the OFFSEG model (see the next heading). It requires the input and output paths for which to generate the list, which must be adjusted in the file.
- `gimp_processImages.py` is a plugin for GIMP, allowing the graphical design software to be used to sequentially edit segmentation results for retraining. First, the paths in the file must be adapted to match the locations of the RGB and segmented images as well as an output path; then the plugin can be installed and launched according to this article.
- `rosbag_sampler.py` is another ROS node which allows for the sampling of camera image data streams. Called via `python3 rosbag_sampler.py <topic_to_subscribe_to> <interval_between_saved_msgs> <output_directory>`, it saves the first message of each interval of the specified topic to the specified hard drive location.
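The sampling rule ("save the first message of each interval") can be sketched independently of ROS. The class below illustrates the logic only; it is not the script's actual code:

```python
class IntervalSampler:
    """Keep the first message whose timestamp falls into each
    fixed-length interval; drop everything else."""

    def __init__(self, interval_s):
        self.interval_s = interval_s
        self._next_due = None  # start of the next interval to sample from

    def should_keep(self, stamp_s):
        if self._next_due is None or stamp_s >= self._next_due:
            # Keep this message, then wait for the interval after its own.
            self._next_due = (stamp_s // self.interval_s + 1) * self.interval_s
            return True
        return False
```

With a 2-second interval, timestamps 0.0, 0.5, 1.9, 2.1, 3.0 and 4.0 would keep only the messages at 0.0, 2.1 and 4.0.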
## Retraining

All relevant information about using the OFFSEG code standalone or for retraining is available in its README.
## Known issues

- Sometimes the code will crash on startup if the LiDAR cannot provide a message in time. If this happens, simply restart.
- If run at more than 1.5 Hz in operation, the code is prone to crashing.
## Credits

This package was created by Moritz Wagner as part of his bachelor's thesis [1] under the supervision of Felix Jahncke, who leads the TUM F1TENTH/RoboRacer project at the Chair of Autonomous Vehicles of Professor Johannes Betz.
## References

[1] M. Wagner, "Autonomous Offroad Mobility using the F1TENTH-Platform", B.Sc. thesis, Technical Univ. of Munich, Munich, Germany, 2024.