
ARGUS - Aerial Rescue and Geospatial Utility System

ARGUS is a web application for structured documentation and analysis of drone images in rescue operations. It creates orthophotos from UAV mapping flights (RGB and thermal/IR), presents flight data in a structured manner, evaluates infrared imagery, and offers object detection using custom-trained neural networks. An integrated local LLM (Ollama) can automatically generate scene descriptions.

ARGUS runs as a multi-container Docker application and is accessible from any device on the same network. It is recommended to use a Chromium-based browser.

Note: ARGUS is developed at Westphalian University of Applied Sciences as part of the E-DRZ research project. It is intended for scientific use and does not offer the reliability of commercial software.

360 Video Support: The previous version of ARGUS supported 360 video processing (path reconstruction, partial point clouds, panoramic tours). This feature is being ported to the new architecture and will be available soon. If you need 360 support now, use the previous version.


Features

  • Orthophoto Generation — Built-in fast mapping pipeline that handles both nadir and angled camera orientations using perspective-correct projection
  • Thermal/IR Analysis — Temperature matrix extraction, IR overlays, and hotspot detection
  • Object Detection — Two detection backends: a custom Transformer-based model trained on rescue scenarios, and a YOLO-based backend with models still in development
  • AI Scene Descriptions — Local LLM (Ollama with LLaVA) generates automatic image descriptions
  • WebODM Integration — Optional high-quality orthophoto generation via OpenDroneMap
  • Weather Data — Automatic weather context via OpenWeatherMap API

Prerequisites

  • Docker & Docker Compose (Installation guide)
  • Git (with submodule support)
  • (Optional) NVIDIA Container Toolkit (Installation guide) — enables GPU-accelerated detection and LLM inference. Without it, everything runs on CPU.

Supported platforms: Linux (primary), Windows (via start-win-argus.cmd / PowerShell wrapper)

Installation

  1. Clone the repository with submodules:

    git clone --recursive https://github.com/RoblabWh/argus.git
    cd argus

    If you already cloned without --recursive:

    git submodule update --init --recursive
  2. Start ARGUS:

    ./argus.sh up --build

    The first build takes several minutes. On the first launch a .env file is created automatically from .env.example.

  3. Open in browser: http://<your-ip>:5173

The startup script (argus.sh) automatically detects your local IP, checks for NVIDIA GPU support, and selects the appropriate Docker Compose configuration.
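The addresses the script wires up follow directly from the default ports. A small sketch (the IP below is a placeholder; argus.sh detects the real one for you):

```shell
# Assemble the URLs ARGUS exposes from the documented default ports.
# HOST_IP is whatever argus.sh detects on your machine; 192.168.1.10 is
# only a placeholder here, not a recommendation.
HOST_IP=192.168.1.10
PORT_API=${PORT_API:-8008}
PORT_FRONTEND=${PORT_FRONTEND:-5173}

echo "frontend: http://$HOST_IP:$PORT_FRONTEND"
echo "API docs: http://$HOST_IP:$PORT_API/docs"
```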

Configuration

The .env file in the project root controls all settings. It is created automatically on first launch. Key variables:

Variable Default Description
PORT_API 8008 API backend port
PORT_FRONTEND 5173 Frontend port (open this in your browser)
PORT_DB 5433 PostgreSQL port on host
OPEN_WEATHER_API_KEY (empty) OpenWeatherMap API key for weather data
ENABLE_WEBODM false Enable WebODM integration (see below)
VITE_API_URL (auto-detected) Backend URL — set automatically by argus.sh

The IP address is auto-detected on every start. Use --keep-ip to preserve a manually set VITE_API_URL:

./argus.sh up --build --keep-ip

After editing .env manually, restart the containers for changes to take effect.
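Putting the table together, a minimal .env might look like this (the values shown are the documented defaults; the API key is left empty, which disables weather lookups):

```shell
# .env (project root), created automatically from .env.example on first launch.
# Values are illustrative defaults, not recommendations.
PORT_API=8008
PORT_FRONTEND=5173
PORT_DB=5433
# Leave empty to disable weather data
OPEN_WEATHER_API_KEY=
ENABLE_WEBODM=false
```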

Usage

# Start all services (builds if needed)
./argus.sh up --build

# Start without rebuilding
./argus.sh up

# Stop all services
./argus.sh down
# or simply, in the terminal where ARGUS is running:
Ctrl + C

# Windows
start-win-argus.cmd up --build

Workflow:

  1. Create a group and a report in the web UI
  2. Upload drone images (RGB and/or thermal)
  3. Click Process — images are preprocessed, then an orthophoto is generated
  4. Optionally run Detection to identify objects in the images
  5. Optionally run Auto Description for AI-generated scene summaries

WebODM Integration (Optional)

For high-quality orthophoto generation via OpenDroneMap:

  1. Clone WebODM separately: github.com/OpenDroneMap/WebODM
  2. Set the following in your .env:
    ENABLE_WEBODM=true
    WEBODM_PATH=/path/to/WebODM
    WEBODM_USERNAME=your_username
    WEBODM_PASSWORD=your_password
    
  3. argus.sh will start WebODM automatically alongside ARGUS.

Architecture Overview

ARGUS consists of the following Docker services:

Service Description
api FastAPI backend (Python 3.12) with documentation under http://<your-ip>:8008/docs
frontend React frontend (Vite)
db PostgreSQL 16
redis Task queue broker & progress tracking
argus_mapping_worker Celery worker for orthophoto generation
argus_detection_worker Celery worker for Transformer-based detection
argus_yolo_worker Celery worker for experimental YOLO detection
argus_ollama_worker Celery worker for LLM image descriptions
ollama Local LLM server (LLaVA, Llama 3.2)

Database migrations are handled automatically via Alembic on startup.

Known Issues

  • Firefox may have problems uploading large files (e.g., high-resolution panoramic photos). Use a Chromium-based browser.
  • Running multiple processing tasks simultaneously can lead to unexpected behavior.
  • Primarily tested with DJI drones. Other manufacturers may require adding camera model definitions to api/app/cameramodels.json.
  • Some older Linux distributions use docker-compose (hyphenated) instead of docker compose. ARGUS requires the modern docker compose plugin syntax.

Image & Metadata Requirements

ARGUS can display images from various cameras/drones. For orthophoto generation, the following EXIF metadata is used:

Field Required Notes
GPS latitude & longitude Yes
Image width & height Yes Extracted automatically
Creation date/time Yes
Relative altitude (AGL) Recommended If missing, a default can be set on upload
Field of view (FOV) Recommended
Gimbal yaw, pitch, roll Recommended Camera/gimbal orientation
UAV yaw, pitch, roll Recommended Drone body orientation
Camera model name Optional Used to look up per-model EXIF key mappings in api/app/cameramodels.json
Projection type Optional Used to filter out panoramic images
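The mandatory fields can be checked before upload. A minimal sketch (not part of ARGUS) that scans an exiftool-style "Tag : value" listing for the required entries; the exact tag names printed by exiftool for your camera may differ:

```shell
# Hypothetical pre-upload check: verify that an exiftool-style listing
# contains the fields ARGUS requires for orthophoto generation.
has_required() {
  listing="$1"
  for tag in "GPS Latitude" "GPS Longitude" "Date/Time Original"; do
    printf '%s\n' "$listing" | grep -q "^$tag" || { echo "missing: $tag"; return 1; }
  done
  echo ok
}

# Example listing, as produced by `exiftool image.JPG` (values illustrative)
meta="GPS Latitude : 51 deg 34' N
GPS Longitude : 7 deg 1' E
Date/Time Original : 2024:05:01 10:00:00"

has_required "$meta"   # → ok
```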

Thermal/IR images are identified by:

  • An ImageSource EXIF tag containing "thermal" or "infrared" (preferred)
  • Alternatively: image dimensions or filename pattern (configurable per camera model)
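The heuristic above can be sketched as a small shell function. The "_T" filename suffix used in the fallback is an assumption (DJI-style naming), not a confirmed ARGUS rule:

```shell
# Sketch of the thermal-image heuristic: prefer the ImageSource EXIF tag,
# fall back to a filename pattern. The "_t" suffix is an assumed convention.
is_thermal() {
  tag=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  filename=$(printf '%s' "$2" | tr '[:upper:]' '[:lower:]')
  case "$tag" in
    *thermal*|*infrared*) echo yes; return ;;
  esac
  case "$filename" in
    *_t.jpg|*_t.jpeg) echo yes ;;
    *) echo no ;;
  esac
}

is_thermal "InfraredCamera" "DJI_0042_T.JPG"   # → yes
is_thermal "WideCamera" "DJI_0042_W.JPG"       # → no
```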

Currently tested with DJI drones (M30T, Mavic Enterprise, Mavic 2, Mavic 3). Other drones may work if they provide the required metadata.

Example Data

Papers

  1. Redefining Recon: Bridging Gaps with UAVs, 360 Cameras, and Neural Radiance Fields — Surmann et al., IEEE SSRR 2023, Fukushima
  2. UAVs and Neural Networks for search and rescue missions — Surmann et al., ISR Europe 2023
  3. Lessons from Robot-Assisted Disaster Response Deployments — Surmann et al., Journal of Field Robotics, 2023
  4. Deployment of Aerial Robots during the Flood Disaster in Erftstadt/Blessem — Surmann et al., ICARA 2022
  5. Deployment of Aerial Robots after a major fire of an industrial hall — Surmann et al., IEEE SSRR 2021

Developed at the RobLab, Westphalian University of Applied Sciences | Funded by the German Federal Ministry of Research, Technology and Space (BMFTR)
