ARGUS is a web application for structured documentation and analysis of drone images in rescue operations. It creates orthophotos from UAV mapping flights (RGB and thermal/IR), presents flight data in a structured manner, evaluates infrared imagery, and offers object detection using custom-trained neural networks. An integrated local LLM (Ollama) can automatically generate scene descriptions.
ARGUS runs as a multi-container Docker application and is accessible from any device on the same network. It is recommended to use a Chromium-based browser.
Note: ARGUS is developed at Westphalian University of Applied Sciences as part of the E-DRZ research project. It is intended for scientific use and does not offer the reliability of commercial software.
360 Video Support: The previous version of ARGUS supported 360 video processing (path reconstruction, partial point clouds, panoramic tours). This feature is being ported to the new architecture and will be available soon. If you need 360 support now, use the previous version.
- Features
- Prerequisites
- Installation
- Configuration
- Usage
- WebODM Integration (Optional)
- Architecture Overview
- Known Issues
- Example Data
- Papers
## Features

- Orthophoto Generation — Built-in fast mapping pipeline that handles both nadir and angled camera orientations using perspective-correct projection
- Thermal/IR Analysis — Temperature matrix extraction, IR overlays, and hotspot detection
- Object Detection — Two detection backends: a custom Transformer-based model trained on rescue scenarios and a YOLO-based backend with models still in development
- AI Scene Descriptions — Local LLM (Ollama with LLaVA) generates automatic image descriptions
- WebODM Integration — Optional high-quality orthophoto generation via OpenDroneMap
- Weather Data — Automatic weather context via OpenWeatherMap API
## Prerequisites

- Docker & Docker Compose — Installation guide
- Git (with submodule support)
- (Optional) NVIDIA Container Toolkit — Installation guide — enables GPU-accelerated detection and LLM inference. Without it, everything runs on CPU.
Supported platforms: Linux (primary), Windows (via `start-win-argus.cmd` / PowerShell wrapper)
## Installation

1. Clone the repository with submodules:

   ```bash
   git clone --recursive https://github.com/RoblabWh/argus.git
   cd argus
   ```

   If you already cloned without `--recursive`:

   ```bash
   git submodule update --init --recursive
   ```

2. Start ARGUS:

   ```bash
   ./argus.sh up --build
   ```

   The first build takes several minutes. On the first launch a `.env` file is created automatically from `.env.example`.

3. Open in browser:

   ```
   http://<your-ip>:5173
   ```
The startup script (argus.sh) automatically detects your local IP, checks for NVIDIA GPU support, and selects the appropriate Docker Compose configuration.
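A common way to detect the outward-facing local IP is the UDP-socket trick; the following is a generic sketch of that technique, not necessarily what `argus.sh` does internally:

```python
import socket

def detect_local_ip() -> str:
    """Best-effort local IP detection via a UDP socket.

    No packets are actually sent: connect() on a datagram socket
    only selects the outgoing interface, so this works offline-safe.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # any routable address works
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # fallback when no network route exists
    finally:
        s.close()

print(detect_local_ip())
```

On a typical LAN this returns the address other devices should use to reach the frontend port.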
## Configuration

The `.env` file in the project root controls all settings. It is created automatically on first launch. Key variables:
| Variable | Default | Description |
|---|---|---|
| `PORT_API` | `8008` | API backend port |
| `PORT_FRONTEND` | `5173` | Frontend port (open this in your browser) |
| `PORT_DB` | `5433` | PostgreSQL port on host |
| `OPEN_WEATHER_API_KEY` | (empty) | OpenWeatherMap API key for weather data |
| `ENABLE_WEBODM` | `false` | Enable WebODM integration (see below) |
| `VITE_API_URL` | (auto-detected) | Backend URL — set automatically by `argus.sh` |
The IP address is auto-detected on every start. Use `--keep-ip` to preserve a manually set `VITE_API_URL`:

```bash
./argus.sh up --build --keep-ip
```

After editing `.env` manually, restart the containers for changes to take effect.
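For reference, a minimal hand-edited `.env` could look like this (values are illustrative; the variable names are those listed in the table above):

```bash
# .env — restart the containers after editing
PORT_API=8008
PORT_FRONTEND=5173
PORT_DB=5433
OPEN_WEATHER_API_KEY=your_openweathermap_key
ENABLE_WEBODM=false
```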
## Usage

```bash
# Start all services (builds if needed)
./argus.sh up --build

# Start without rebuilding
./argus.sh up

# Stop all services
./argus.sh down
# or press Ctrl+C in the terminal running ARGUS

# Windows
start-win-argus.cmd up --build
```

Workflow:
- Create a group and a report in the web UI
- Upload drone images (RGB and/or thermal)
- Click Process — images are preprocessed, then an orthophoto is generated
- Optionally run Detection to identify objects in the images
- Optionally run Auto Description for AI-generated scene summaries
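Before starting the workflow, you can check that the frontend and API containers are reachable. A small sketch using only the standard library; the port numbers assume the defaults from `.env`:

```python
import urllib.request
import urllib.error

def http_status(url: str, timeout: float = 5.0):
    """Return the HTTP status code for url, or None if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # server answered, just not with 2xx
    except (urllib.error.URLError, OSError):
        return None    # connection refused or timed out

for name, url in [("frontend", "http://localhost:5173"),
                  ("api docs", "http://localhost:8008/docs")]:
    print(name, http_status(url))
```

A status of 200 for both URLs means the stack is up; `None` usually means the containers are still starting or the ports were changed.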
## WebODM Integration (Optional)

For high-quality orthophoto generation via OpenDroneMap:
- Clone WebODM separately: github.com/OpenDroneMap/WebODM
- Set the following in your `.env`:

  ```bash
  ENABLE_WEBODM=true
  WEBODM_PATH=/path/to/WebODM
  WEBODM_USERNAME=your_username
  WEBODM_PASSWORD=your_password
  ```

`argus.sh` will start WebODM automatically alongside ARGUS.
## Architecture Overview

ARGUS consists of the following Docker services:
| Service | Description |
|---|---|
| `api` | FastAPI backend (Python 3.12) with documentation under `http://<your-ip>:8008/docs` |
| `frontend` | React frontend (Vite) |
| `db` | PostgreSQL 16 |
| `redis` | Task queue broker & progress tracking |
| `argus_mapping_worker` | Celery worker for orthophoto generation |
| `argus_detection_worker` | Celery worker for Transformer-based detection |
| `argus_yolo_worker` | Celery worker for experimental YOLO detection |
| `argus_ollama_worker` | Celery worker for LLM image descriptions |
| `ollama` | Local LLM server (LLaVA, Llama 3.2) |
Database migrations are handled automatically via Alembic on startup.
## Known Issues

- Firefox may have problems uploading large files (e.g., high-resolution panoramic photos). Use a Chromium-based browser.
- Running multiple processing tasks simultaneously can lead to unexpected behavior.
- Primarily tested with DJI drones. Other manufacturers may require adding camera model definitions to `api/app/cameramodels.json`.
- Some older Linux distributions use `docker-compose` (hyphenated) instead of `docker compose`. ARGUS requires the modern `docker compose` plugin syntax.
ARGUS can display images from various cameras/drones. For orthophoto generation, the following EXIF metadata is used:
| Required | Field | Notes |
|---|---|---|
| Yes | GPS latitude & longitude | |
| Yes | Image width & height | Extracted automatically |
| Yes | Creation date/time | |
| Recommended | Relative altitude (AGL) | If missing, a default can be set on upload |
| Recommended | Field of view (FOV) | |
| Recommended | Gimbal yaw, pitch, roll | Camera/gimbal orientation |
| Recommended | UAV yaw, pitch, roll | Drone body orientation |
| Optional | Camera model name | Used to look up per-model EXIF key mappings in `api/app/cameramodels.json` |
| Optional | Projection type | Used to filter out panoramic images |
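EXIF stores GPS coordinates as degree/minute/second rationals plus a hemisphere reference. A sketch of the standard conversion to signed decimal degrees; the tuple layout mirrors the standard `GPSLatitude`/`GPSLatitudeRef` EXIF tags, not necessarily ARGUS's internal keys:

```python
from fractions import Fraction

def dms_to_decimal(dms, ref: str) -> float:
    """Convert EXIF-style (deg, min, sec) rationals to signed decimal degrees.

    dms: three (numerator, denominator) pairs, as stored in GPSLatitude.
    ref: hemisphere letter from GPSLatitudeRef / GPSLongitudeRef.
    """
    deg, minutes, seconds = (Fraction(n, d) for n, d in dms)
    value = float(deg + minutes / 60 + seconds / 3600)
    return -value if ref in ("S", "W") else value

# 51° 34' 12.3" N  ->  ~51.570083
lat = dms_to_decimal([(51, 1), (34, 1), (123, 10)], "N")
```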
Thermal/IR images are identified by:
- An `ImageSource` EXIF tag containing "thermal" or "infrared" (preferred)
- Alternatively: image dimensions or filename pattern (configurable per camera model)
Currently tested with DJI drones (M30T, Mavic Enterprise, Mavic 2, Mavic 3). Other drones may work if they provide the required metadata.
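The identification rules above can be sketched as a small heuristic. The tag name follows the README; the filename pattern is illustrative only (DJI-style `_T` suffix), since the real per-model rules live in `api/app/cameramodels.json`:

```python
import re

# Illustrative pattern: DJI thermal captures often carry a "_T" suffix
THERMAL_NAME_RE = re.compile(r"_T\.(jpe?g|tiff?)$", re.IGNORECASE)

def is_thermal(exif: dict, filename: str) -> bool:
    """Heuristic thermal/IR classification following the rules above."""
    source = str(exif.get("ImageSource", "")).lower()
    if "thermal" in source or "infrared" in source:  # preferred signal
        return True
    return bool(THERMAL_NAME_RE.search(filename))    # filename fallback

is_thermal({"ImageSource": "InfraredCamera"}, "DJI_0001.JPG")  # True
```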
## Example Data

- Processed example project: coming soon
- Demo video: ARGUS on YouTube (German)
## Papers

- Redefining Recon: Bridging Gaps with UAVs, 360 Cameras, and Neural Radiance Fields — Surmann et al., IEEE SSRR 2023, Fukushima
- UAVs and Neural Networks for search and rescue missions — Surmann et al., ISR Europe 2023
- Lessons from Robot-Assisted Disaster Response Deployments — Surmann et al., Journal of Field Robotics, 2023
- Deployment of Aerial Robots during the Flood Disaster in Erftstadt/Blessem — Surmann et al., ICARA 2022
- Deployment of Aerial Robots after a major fire of an industrial hall — Surmann et al., IEEE SSRR 2021
Developed at Westphalian University of Applied Sciences — RobLab | Funded by the German Federal Ministry of Research, Technology and Space (BMFTR)