TUM-AVS/RoboRacer-Dynamics

Requirements and Installation

The requirements.txt file contains a pip freeze of the venv used during development. However, installing a venv directly from such a freeze often results in errors. requirements_reduced.txt therefore lists the essential libraries for the final version, excluding PyTorch, which should be installed according to the official PyTorch website with respect to the used machine and installed CUDA version. The code has been tested with two PyTorch configurations:

  1. torch 2.8.0 with CUDA 12.8 and Python 3.10.12
  2. torch 2.6.0 with CUDA 12.6 and Python 3.10.18

Usage

This section explains the intended usage of the pipeline as is; for deeper modification of the pipeline see Modifying the Pipeline.

Data Pipeline

First, transform the ROS2 bags to CSV files following the instructions of DataAcquisition. Copy the resulting directories into the data directory. As the existing examples in the directory show, each ROS2 bag should yield a directory with 4 CSV files: one with the controls (its filename must include "controls"), one with the IMU data (its filename must include "imu"), one with the Odom data (its filename must include "odom"), and one with the MoCap data (its filename must include "pose"). The filenames must include these words because Data Preparation searches for these keywords (case sensitive) to assign the data accordingly.
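The keyword-based file assignment could be sketched as follows (this is an illustrative reimplementation, not the repository's actual Data Preparation code; the function name is an assumption):

```python
import os

def assign_csv_files(bag_dir):
    """Illustrative sketch: assign the four CSV files of one ROS2 bag
    directory by case-sensitive filename keywords."""
    keywords = ["controls", "imu", "odom", "pose"]
    assignment = {}
    for fname in os.listdir(bag_dir):
        if not fname.endswith(".csv"):
            continue
        for kw in keywords:
            if kw in fname:  # case sensitive, as the pipeline expects
                assignment[kw] = os.path.join(bag_dir, fname)
    return assignment
```

Note that a file named, e.g., "Controls.csv" would not be matched because of the case-sensitive comparison.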

Data Preparation preprocesses the raw data and generates one CSV file containing the synchronized, filtered data. By default the resulting CSV file is saved to the processed_data directory. Define in the main function which directories you want to process, or uncomment the provided code to process all directories in data. When setting plotting=True, plots visualize the resulting processed data and compare them to the original raw data.

Lastly, if you want to cut off data at the beginning of a CSV file, you can use cutoff_data. For example, we had cases where the recording was started before the vehicle was placed on the track, resulting in accelerations without any control commands given.
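One way such a leading idle segment could be trimmed with pandas is sketched below (this is not the repository's cutoff_data implementation; the column names and threshold are assumptions):

```python
import pandas as pd

def trim_leading_idle(df, control_cols, eps=1e-6):
    """Illustrative sketch: drop all rows before the first row where any
    control column is non-zero (hypothetical helper, column names assumed)."""
    active = df[control_cols].abs().gt(eps).any(axis=1)
    if not active.any():
        return df
    first = active.idxmax()  # index label of the first active row
    return df.loc[first:].reset_index(drop=True)
```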

Network Training

If you want to implement a new network, define it in RNN_architectures. Your network should expect input sequences with the length of history (default 10) and N features per timestep, where N equals the number of input columns. Dataset_Classes contains the classes for two datasets; both return inputs and targets in the same manner, with the only difference that one version normalizes them with z-score standardization based on the provided mean and standard deviation.
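The expected input/target shapes and the z-score variant could look roughly like this (a minimal sketch, not the actual classes from Dataset_Classes; the class name and constructor signature are assumptions):

```python
import torch
from torch.utils.data import Dataset

class NormalizedSequenceDataset(Dataset):
    """Sketch of a z-score normalized sequence dataset: each item is a
    window of `history` timesteps of the input columns plus the target
    at the following timestep."""
    def __init__(self, features, targets, history, mean, std):
        # features: (T, N) inputs, targets: (T, M) outputs
        self.x = (torch.as_tensor(features, dtype=torch.float32) - mean) / std
        self.y = torch.as_tensor(targets, dtype=torch.float32)
        self.history = history

    def __len__(self):
        return len(self.x) - self.history

    def __getitem__(self, idx):
        # input window of `history` steps, target at the next step
        return self.x[idx : idx + self.history], self.y[idx + self.history]
```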

Before running training, ensure you have logged in to wandb and changed the wandb.init call in main.py accordingly.
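A typical adjustment might look like the following (the project and entity names here are placeholders, not the repository's actual values):

```python
import wandb

# Log in once beforehand, e.g. via `wandb login` in the shell.
# Replace project/entity with your own wandb account details in main.py.
run = wandb.init(
    project="roboracer-dynamics",  # placeholder project name
    entity="your-team",            # placeholder entity
    config={"learning_rate": 1e-3, "batch_size": 64},
)
```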

The actual training is defined in training.py; to run training and define hyperparameters, use main.py. An exemplary dictionary with the necessary parameters is given and can be changed as required. A few notes:

  1. The main function is only used when conducting hyperparameter sweeps with a wandb agent.
  2. Ensure that the model_name matches the class name in RNN_architectures.
  3. For the training files list, there are 4 sets predefined inside training.py which can be chosen with integer values from 1 to 4; otherwise lists can be given directly. A few exemplary options are given in main.py.
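Such a parameter dictionary might look roughly like this (the key names and values are illustrative assumptions; check the exemplary dictionary in main.py for the exact keys):

```python
# Hypothetical hyperparameter dictionary following the notes above.
params = {
    "model_name": "GRUNet",   # must match a class name in RNN_architectures
    "history": 10,            # input sequence length per sample
    "learning_rate": 1e-3,
    "weight_decay": 1e-5,
    "batch_size": 64,
    "normed": True,           # use the z-score normalized dataset variant
    "training_files": 1,      # predefined set 1-4, or an explicit list of paths
}
```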

The model with the lowest validation loss during training is saved to "trained_models/{run.name}___{datetime.now().strftime('%Y%m%d%H%M%S%f')}.pth". The datetime suffix is a fail-safe to prevent overwriting previously trained networks, but when using unique run names it is easy to find the corresponding trained model. In addition to the best model in the form of the model_state_dict, additional parameters are saved to a config dictionary, which allows simply reloading them during separate testing:

best_model_data = {
        'model_state_dict': best_model_state,
        'config': {
            'output_columns': outputs_columns,
            'input_columns': inputs_columns,
            'history': history,
            'model_class': model.__class__.__name__,
            'input_size': input_size,
            'output_size': output_size,
            'learning_rate': learning_rate,
            'weight_decay': weight_decay,
            'batch_size': batch_size,
            'best_val_loss': best_val_loss,
            'dataset_class': train_dataset.__class__.__name__,
            'normed': normed,
            'data_mean': data_mean,
            'data_std': safe_std,  # persist the std actually used
        }
    }
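The save/reload round trip enabled by this checkpoint layout can be sketched as follows (self-contained; TinyNet stands in for a class from RNN_architectures, and its constructor signature is an assumption):

```python
import os
import tempfile
import torch
import torch.nn as nn

# Minimal stand-in for a network class from RNN_architectures.
class TinyNet(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, x):
        return self.fc(x)

model = TinyNet(4, 2)
best_model_data = {
    "model_state_dict": model.state_dict(),
    "config": {
        "model_class": model.__class__.__name__,
        "input_size": 4,
        "output_size": 2,
    },
}
path = os.path.join(tempfile.mkdtemp(), "run___20250101000000000000.pth")
torch.save(best_model_data, path)

# Separate testing: rebuild the model purely from the saved config.
ckpt = torch.load(path, map_location="cpu")
cfg = ckpt["config"]
reloaded = TinyNet(cfg["input_size"], cfg["output_size"])
reloaded.load_state_dict(ckpt["model_state_dict"])
```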

After training, the saved model is tested on the test dataset, and then autoregressive inference is performed for the testing and validation files. All visualizations are available in wandb.

Network Testing

To test networks again without training, use testing.py. After defining whether you want a wandb run logged for the testing and setting the paths for the desired files and model, the code will run the testing, show you matplotlib.pyplot plots, and log the results to wandb if enabled.

Modifying the Pipeline

This section explains the necessary changes for different modifications.

Changing Preprocessing Parameters

The relevant sections of the code for changing the preprocessing parameters are linked here. When no order is provided, the Butterworth filters default to order=4.

  • Removal of offsets for IMU data happens in Line 147ff.
  • The intermediate sampling frequency for MoCap is set in Line 171; adjust dt_intermediate accordingly.
  • Butterworth filter cutoff for IMU: Line 184
  • Butterworth filter cutoff for Odom: Line 203
  • Savgol filter parameters for MoCap data: Line 213f
  • Butterworth filter parameters for MoCap velocities, accelerations, and yaw rate (angular_velocity): Line 292ff

Using different preprocessed data

Using already preprocessed data, even from other sources, is very easy. Simply ensure that you have one CSV file containing all your information and that your columns have names. The only necessary changes are adjusting your params in the main file accordingly. Give:

  1. The list of input columns to the "inputs_columns" key
  2. The list of output columns to the "outputs_columns" key
  3. The lists of paths to your files for training, validation, and testing to the respective files keys

As an example, preprocessed data from the racecar dataset can be used. Simply use the respective column names and file paths you want (results, however, are not good, since the dataset lacks the control inputs, steering angle and desired speed).

Using different data origin but our preprocessing

The easiest way to use our preprocessing pipeline is to create 4 CSV files with the same columns as ours for your data. However, if your data origin has different sensors, the preprocessing might be suboptimal. For reasonable preprocessing, extensive modification is required: changing the topic names, adjusting filter parameters, and possibly changing the number of files and pandas DataFrames per dataset, which essentially means recoding DataPreparation to fit the new platform/data source.

Authors and acknowledgment

The main content of this software package has been developed by Jonathan Mohr and Felix Jahncke during the Master's Thesis "DiffDynamicModel: Learning-based state estimation of small-scale vehicle dynamics" by Jonathan Mohr, supervised by Felix Jahncke.

Felix Jahncke (Website) leads the TUM F1Tenth/RoboRacer project at the Professorship of Autonomous Vehicle Systems under the supervision of Professor Johannes Betz (Website).
