This repository contains a suite of tools for generating and evaluating tasks for the DD2419 course. It allows for the creation of randomized environments with objects, boxes, and obstacles, and provides a framework for evaluating user solutions against a ground truth.
Try the Web Version: DD2419 Tools
- Randomized Task Generation: Create maps with varying numbers of known and unknown objects/boxes, including dynamically placed obstacles.
- Global Transformations: Apply random translations and rotations to the entire environment to find bugs and test robustness.
- Interactive Updates: Selectively update an existing task (change start pose, re-sample items, or apply new transforms) while maintaining spatial alignment.
- Automated Evaluation: Compare a solution map against the ground truth using the Hungarian algorithm for optimal matching.
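The Hungarian matching step can be sketched with SciPy's `linear_sum_assignment`; the helper name `match_items` and the distance threshold below are illustrative, not the repository's actual API:

```python
# Sketch of matching solution items to ground-truth items with the
# Hungarian algorithm. match_items and max_dist are hypothetical names.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_items(truth_xy, solution_xy, max_dist=0.3):
    """Return (truth_idx, solution_idx) pairs closer than max_dist."""
    truth = np.asarray(truth_xy, dtype=float)
    sol = np.asarray(solution_xy, dtype=float)
    # Pairwise Euclidean distances: rows = truth items, cols = solution items.
    cost = np.linalg.norm(truth[:, None, :] - sol[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # minimises total matched distance
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

pairs = match_items([(0.0, 0.0), (1.0, 1.0)], [(1.05, 0.95), (0.1, 0.0)])
```

In such a scheme, unmatched ground-truth items would count as missing and unmatched solution items as penalties.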
- Visual Feedback: Generate top-down visualizations of the known map, the complete map, and a placement guide.
- Web Interface: A unified single-page tool for Generation and Evaluation with a mode switcher, interactive maps, and 4-file analysis.
This project uses Pixi for dependency management.
    # Install dependencies
    pixi install

To generate a new task, run the generate task:

    pixi run generate <output_folder>

The task will prompt you for:
- Number of known/unknown objects and boxes.
- Number of obstacles.
- Whether to apply a global transformation.
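A global transformation is a rigid 2D motion: every point in the map is rotated about the origin and then translated. A minimal sketch, assuming this standard form (not the repository's exact code, and the sampling ranges are made up):

```python
import math
import random

def random_transform(rng):
    # Random rotation angle and translation; the ranges here are assumed.
    return (rng.uniform(-math.pi, math.pi),
            rng.uniform(-1.0, 1.0),
            rng.uniform(-1.0, 1.0))

def apply_transform(point, transform):
    theta, tx, ty = transform
    x, y = point
    # Rotate about the origin, then translate.
    return (math.cos(theta) * x - math.sin(theta) * y + tx,
            math.sin(theta) * x + math.cos(theta) * y + ty)

t = random_transform(random.Random(0))  # e.g. one sampled transform
# Rotating (1, 0) by 90 degrees lands (numerically) on (0, 1).
q = apply_transform((1.0, 0.0), (math.pi / 2, 0.0, 0.0))
```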
Updating an existing task:
If the <output_folder> already exists, you will be asked whether to Overwrite or Update.
- Update allows you to selectively change the start pose, re-sample known items, or apply a new global transform.
- Sampling Logic: During an update, the tool only prompts for the number of known items, keeping the total set (from map_complete.csv) fixed.
- Transform Handling: The tool automatically handles "reverse and re-apply" transformation logic to ensure everything stays aligned.
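The "reverse and re-apply" logic can be pictured as inverting the old rigid transform to recover base coordinates, then applying the new one. This is an illustrative sketch, not the tool's actual code, and the transform values are made up:

```python
import math

def apply_transform(point, transform):
    theta, tx, ty = transform
    x, y = point
    return (math.cos(theta) * x - math.sin(theta) * y + tx,
            math.sin(theta) * x + math.cos(theta) * y + ty)

def invert_transform(transform):
    # Inverse of p -> R p + t is q -> R^T q - R^T t (i.e. rotate by -theta).
    theta, tx, ty = transform
    c, s = math.cos(theta), math.sin(theta)
    return (-theta, -(c * tx + s * ty), s * tx - c * ty)

old, new = (0.7, 1.5, -0.3), (-1.2, 0.4, 0.9)  # (theta, tx, ty), made-up values
p_old_frame = (2.0, 3.0)
p_base = apply_transform(p_old_frame, invert_transform(old))  # back to base coords
p_new_frame = apply_transform(p_base, new)                    # re-apply new transform
```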
Configuration:
At the top of generate.py, you can configure the following parameters for obstacle sampling:
- OBSTACLE_X_RANGE: The (min, max) x-coordinates for random obstacle placement.
- OBSTACLE_Y_RANGE: The (min, max) y-coordinates for random obstacle placement.
- OBSTACLE_DISTANCE_THRESHOLD: The minimum required distance (cm) between obstacles and any other map items (pose, objects, boxes).
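Obstacle placement under these parameters can be pictured as rejection sampling: draw a point inside the configured ranges and keep it only if it clears the distance threshold to every existing item. The parameter names mirror generate.py, but the values and the `sample_obstacle` helper below are assumptions:

```python
import math
import random

# Assumed example values; the real ones live at the top of generate.py.
OBSTACLE_X_RANGE = (0.0, 100.0)
OBSTACLE_Y_RANGE = (0.0, 100.0)
OBSTACLE_DISTANCE_THRESHOLD = 20.0  # cm

def sample_obstacle(existing_items, rng, max_tries=1000):
    """Rejection-sample an (x, y) at least the threshold away from all items."""
    for _ in range(max_tries):
        x = rng.uniform(*OBSTACLE_X_RANGE)
        y = rng.uniform(*OBSTACLE_Y_RANGE)
        if all(math.hypot(x - ix, y - iy) >= OBSTACLE_DISTANCE_THRESHOLD
               for ix, iy in existing_items):
            return (x, y)
    raise RuntimeError("no valid obstacle position found")

obstacle = sample_obstacle([(50.0, 50.0)], random.Random(42))
```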
To evaluate a solution, place your map_solution.csv in the task folder and run:
    pixi run evaluate <task_folder>

This will generate an evaluation.png visualization and print a detailed report including:
- Discovered Items: Correctly placed unknown items.
- Maintained Items: Correctly placed known items.
- Penalties: Extra items that do not exist in the ground truth.
- Positional Error: Statistics on matching accuracy.
When you generate a task, the following files are created in the output folder:
- workspace.csv: Contains the (x, y) coordinates of the transformed workspace boundary.
- map.csv: The "robot view" of the map. Contains:
  - S: Starting Pose (x, y, angle).
  - O: Known objects (x, y).
  - B: Known boxes (x, y, angle).
- map_complete.csv: The "ground truth view". Contains all items from map.csv PLUS:
  - O: Unknown objects (x, y).
  - B: Unknown boxes (x, y, angle).
  - P: All obstacles (x, y).
- visualization.png: A top-down plot focusing only on the known items in map.csv.
- visualization_complete.png: A top-down plot displaying the full environment (known + unknown items + obstacles).
- visualization_placement.png: A simplified placement guide in base coordinates (no transformations). It categorizes all items from the dataset as:
  - Used Object/Box: Items sampled for the current task (both known and unknown).
  - Unused Object/Box: All other possible positions from the dataset that remain vacant.
  - Obstacles: All obstacles in the environment.
  - Start Pose: The initial position and orientation of the robot.
- evaluation.png: (Generated by evaluate.py) A side-by-side comparison of the ground truth vs. a solution map, highlighting correctly identified items and matches.
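Assuming each CSV row is a type tag followed by its numeric columns (as listed above), these files can be read with a short parser; `parse_map` is a hypothetical helper, not part of the repository:

```python
import csv
import io

def parse_map(text):
    """Group rows of a map CSV by their type tag (S, O, B, P)."""
    items = {"S": [], "O": [], "B": [], "P": []}
    for row in csv.reader(io.StringIO(text)):
        if not row:
            continue
        tag = row[0].strip()
        items[tag].append(tuple(float(v) for v in row[1:]))
    return items

sample = "S,0.0,0.0,90\nO,1.5,2.0\nB,3.0,1.0,45\nP,4.0,4.0\n"
parsed = parse_map(sample)
```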
A unified web version is available in the web/ directory and is deployed at DD2419 Tools. It features a high-performance interactive map with coordinate inspection and rich visual feedback.
- Configure the number of known/unknown objects, boxes, and obstacles.
- Optionally apply a global transform (random rotation + translation).
- View three tabs: Ground Truth, Known, and Placement Guide.
- Export workspace.csv, map.csv, and map_gt.csv, or download an SVG snapshot of the current view.
Two input methods are supported:
- Single File: Upload a solution_<seed>.csv file — the seed is auto-extracted from the filename and used to regenerate the ground truth internally. No other files are required.
- 4-File: Manually upload workspace.csv, map.csv, map_gt.csv, and your solution file for a full comparison.
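The seed auto-extraction in single-file mode amounts to a filename pattern match; `extract_seed` below is a hypothetical sketch of that step, assuming the seed is the digits in solution_<seed>.csv:

```python
import re

def extract_seed(filename):
    """Pull the integer seed out of a name like solution_12345.csv, else None."""
    match = re.fullmatch(r"solution_(\d+)\.csv", filename)
    return int(match.group(1)) if match else None

seed = extract_seed("solution_12345.csv")
```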
Results are shown in three tabs:
- Side-by-Side Compare: Ground truth on the left, your solution on the right, with dashed match lines connecting paired items.
- Ground Truth: Items colored by outcome — Maintained (known items found), Discovered (unknown items found), Missing (items not found).
- Your Solution: Items colored by outcome — Maintained, Discovered, Penalty (extra items with no ground truth match).
A results panel shows match statistics and a score verdict. The current view can be exported as an SVG.
Developed for the DD2419 course at KTH.