LMU (Le Mans Ultimate) is a PC sim racing game that records extensive telemetry data as DuckDB files.
Build a local-first telemetry analysis tool for Le Mans Ultimate that enables interactive inspection and comparison of driving data recorded in LMU's DuckDB telemetry files.
This is a hobby project, but built with sound software engineering practices so it remains maintainable, testable, and extensible without unnecessary complexity.
The project is proposed in four steps:
- Access — discover and read telemetry safely
- Inspect — visualize and compare raw signals
- Interpret — segment and quantify driving behavior
- Stabilize — harden and prepare for future ideas
Establish reliable access to LMU telemetry data and define the project's core data boundaries, without assuming a fixed schema.
- Locate LMU telemetry files in a configured directory (auto-discovery with YAML config)
- Open and inspect DuckDB telemetry files in read-only mode
- Discover available sessions, laps, and signals dynamically
- Expose what exists, not interpretations of it
- Session: a single telemetry recording with metadata (track, car, driver, weather)
- Lap: a unit of driving within a session (start/end times, lap time, validity)
- Signal: a time-varying measurement with frequency and unit
- List available sessions
- Retrieve basic session metadata including available channels/events
- List laps belonging to a session
The system can reliably:
- See telemetry recordings
- Identify sessions and laps
- Act as a read-only explorer of LMU telemetry data
Enable interactive inspection of telemetry signals and basic lap comparison.
- Serve requested signals for a given lap with optional downsampling
- Support retrieving multiple laps for comparison
- Perform time-based alignment with normalized timestamps
- Support distance-based X-axis using Lap Dist normalization
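Distance-based alignment can be sketched by resampling each lap's signals onto a shared distance grid. The dict-of-arrays lap representation and channel names ("lap_dist", "speed") are illustrative assumptions, not LMU's actual format.

```python
# Sketch of distance-based lap alignment for overlay plots.
import numpy as np

def resample_on_distance(lap: dict[str, np.ndarray], signal: str,
                         grid: np.ndarray) -> np.ndarray:
    """Interpolate one signal onto a common distance grid so laps overlay."""
    return np.interp(grid, lap["lap_dist"], lap[signal])

# Usage: build a shared grid, then compare speed traces point by point.
ref = {"lap_dist": np.array([0.0, 100.0, 200.0]), "speed": np.array([50.0, 80.0, 60.0])}
tgt = {"lap_dist": np.array([0.0, 120.0, 200.0]), "speed": np.array([52.0, 85.0, 58.0])}
grid = np.linspace(0.0, 200.0, 5)
delta = resample_on_distance(tgt, "speed", grid) - resample_on_distance(ref, "speed", grid)
```

Downsampling falls out of the same mechanism: a coarser grid yields fewer points to ship to the frontend.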
- Session and lap selection
- Interactive signal plots (overlaid laps)
- Fast iteration and visual feedback
- Signal slice: a portion of a signal for a lap (timestamps, normalized time, distance)
- Lap comparison: target vs reference lap with normalized X-axis
The user can:
- Plot key signals
- Overlay laps using time or distance alignment
- Visually identify differences and inconsistencies
Introduce domain interpretation by structuring telemetry into meaningful driving segments with two-tier caching and distance-based coordinates.
- Auto-select reference lap using heuristics (fastest valid lap with clean steering/braking)
- Detect track layout automatically from steering curvature:
- Corners (high curvature zones with entry/apex/exit points)
- Straights (gaps between corners)
- Complexes (adjacent corners merged)
- Normalize Lap Dist to monotonic 0..track_length coordinates (handles wrap-around)
- Compute derived metrics per segment:
- Speed: entry, mid, exit, min, max, average
- Time: segment duration, delta to reference lap
- Technique: braking distance, throttle application, steering smoothness
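A minimal sketch of the corner-detection idea: flag samples where steering (as a curvature proxy) exceeds a threshold, then merge contiguous runs into distance-based corner candidates. The threshold value and channel names are assumptions; the real detector would also merge adjacent corners into complexes and locate entry/apex/exit points.

```python
# Minimal corner-detection sketch from a steering trace.
import numpy as np

def detect_corners(lap_dist: np.ndarray, steering: np.ndarray,
                   threshold: float = 0.1) -> list[tuple[float, float]]:
    """Return (start_dist, end_dist) pairs where |steering| stays above threshold."""
    in_corner = np.abs(steering) > threshold
    corners, start = [], None
    for i, flag in enumerate(in_corner):
        if flag and start is None:
            start = i                      # corner entry
        elif not flag and start is not None:
            corners.append((float(lap_dist[start]), float(lap_dist[i - 1])))
            start = None                   # corner exit
    if start is not None:                  # corner still open at lap end
        corners.append((float(lap_dist[start]), float(lap_dist[-1])))
    return corners
```

Straights then fall out as the gaps between detected corners.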
- Two-tier caching (Parquet files in ./cache/):
  - Tier 1: Track layouts per track (versioned, persistent)
  - Tier 2: Lap metrics per session/lap (invalidated on layout version change)
- Display segment lists/tables
- Link segments to plots (click → zoom)
- Compare segment metrics between laps
- Segment: corner, straight, or complex with distance-based boundaries (start_dist, end_dist, entry/apex/exit points)
- TrackLayout: versioned track definition with segment list and reference lap info
- SegmentMetrics: per-segment derived measurements with time deltas
- Distance-based segments: All segment boundaries use track distance (meters from S/F) rather than time, enabling consistent comparison across laps
- Read-only source: Never write to DuckDB files; cache derived data separately in Parquet
- Auto-reference: Best lap selected automatically but user can override
- Versioned layouts: Layout version controls cache invalidation
The tool answers:
- Where time is gained or lost per segment
- Which driving sections differ most between laps
- How driving technique differs (braking points, throttle application)
Stabilize the backend with API docs and health endpoints before building the frontend.
- API documentation: Auto-generated Swagger/OpenAPI docs with FastAPI, including request/response examples
- Health endpoints: /health (liveness), /ready (readiness), and /metrics (basic telemetry stats)
A reliable backend with auto-generated API documentation and health monitoring endpoints.
Docs available at /openapi.json, /docs and /redoc
Establish the frontend project foundation with type-safe client code consuming the documented API from Step 4a.
- Validation: Ensure endpoints have Pydantic schemas and error responses documented
- Project setup: Initialize React + TypeScript + ECharts project with Vite, configure strict TypeScript
- Type generation: Auto-generate TypeScript types from the OpenAPI spec (consuming /openapi.json from 4a)
- API client: Build typed API client layer (React Query or similar) with error handling
- Stub components: Create placeholder components matching the target UI structure
A frontend project with auto-generated TypeScript types, typed API client, and component stubs ready for UI implementation.
Build the complete user interface on top of the stable backend and established client foundation.
- Session browser: Search, filter, and select sessions with lap lists
- Signal visualization: Interactive Apache ECharts plots with time/distance axis switching, lap overlay
- Segment analysis: Sortable segment table with metrics, time deltas, linking to plots
- UX polish: Loading states, progress indicators, error boundaries, responsive layout
- Reduce coupling: Basic state management, reusable components
- New signal channels (add to channel lists, no schema changes)
- New metrics (extend SegmentMetrics, recalculate)
- New segment types (extend segment detection algorithm)
- Multi-session comparison (current design is per-session)
A complete telemetry analysis tool:
- Reliable backend with caching and error handling
- Working frontend for interactive analysis
- Clean separation between raw telemetry and derived data
- Extensible architecture for future analyses
- Time: Session timestamps (seconds from session start) — for raw signal display
- Normalized Time: Seconds from lap start — for lap-to-lap time comparison
- Distance: Meters from start/finish line — for segment boundaries and consistent comparison
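The distance coordinate above requires normalizing the raw "Lap Dist" channel, which can wrap past the start/finish line or briefly decrease, into monotonic 0..track_length values. A sketch of that normalization, assuming meters and a known track length:

```python
# Sketch: normalize raw Lap Dist into monotonic meters-from-lap-start.
import numpy as np

def normalize_lap_dist(raw: np.ndarray, track_length: float) -> np.ndarray:
    dist = raw - raw[0]                          # start the lap at 0 m
    steps = np.diff(dist)
    # A large negative jump means we crossed S/F; undo the wrap-around.
    steps[steps < -track_length / 2] += track_length
    # Clamp small negative jitter so the result is strictly non-decreasing.
    return np.concatenate(([0.0], np.cumsum(np.clip(steps, 0.0, None))))
```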
- Source data: Read-only DuckDB files (never cached, always current)
- Tier 1 (Track Layout): Per track, versioned, persistent across sessions
- Tier 2 (Lap Metrics): Per session/lap, invalidated when layout version changes
- Storage: Parquet files in local ./cache/ directory (configurable)
API Routes (thin) → Core Services (business logic) → DuckDB Service (I/O)
↓
Segment Cache (persistence)
- TelemetryManager: Session discovery and caching
- SignalService: Signal slicing and lap comparison
- SegmentService: Layout detection, metrics calculation, cache orchestration
- TrackLayoutService: Automatic segment detection from telemetry
- MetricsCalculator: Per-segment metric computation
- ReferenceLapSelector: Heuristic-based best lap selection
- SegmentCache: Two-tier Parquet persistence
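The layering can be sketched with stubs: a thin route delegates to a core service, which is the only layer that talks to the DuckDB I/O service. Class names mirror the service list, but the bodies and return values here are illustrative.

```python
# Layering sketch: route (thin) -> core service -> DuckDB I/O.
class DuckDBService:
    def query_laps(self, session_id: str) -> list[dict]:
        # A real implementation would run a read-only DuckDB query here.
        return [{"lap": 1, "time_s": 92.4}]

class TelemetryManager:
    def __init__(self, db: DuckDBService) -> None:
        self.db = db

    def list_laps(self, session_id: str) -> list[dict]:
        # Business logic (validation, caching) would live at this layer.
        return self.db.query_laps(session_id)

# API route: parse input, call the service, return the result. Nothing else.
manager = TelemetryManager(DuckDBService())

def get_laps_route(session_id: str) -> list[dict]:
    return manager.list_laps(session_id)
```

Keeping routes thin means the services stay testable without an HTTP server.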
Read-Only Contract: The system never modifies source DuckDB files.
Cache-Only Writes: All derived data (layouts, metrics) written to separate cache directory.
Dynamic Schema: No assumptions about available signals — channels discovered at runtime.
Local-First: No cloud services, no external APIs, works entirely offline.